SSD storage has this air of being fragile though: you are
not supposed to wear it out with too many writes. It is not like
in the good old days, when you would write to your
hard disk without any extra thought.
Before I switched to SSD, I was already mounting /tmp as tmpfs
to keep it in RAM, mostly for performance reasons.
Since RAM is relatively cheap these days, I have lots
of it in my home server.
When I switched to SSD, I extended this concept by
also mounting /var/cache/apt/archives as tmpfs,
which saves disk I/O on package updates.
And for a machine running sid, which is updated more or less
regularly, this avoids quite a few useless writes to disk:
roughly 100MB per update.
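For reference, the corresponding /etc/fstab entries could look roughly like this (the size limits are made-up examples; pick values that fit your RAM):

```
# keep /tmp and the apt package cache in RAM (sizes are examples)
tmpfs  /tmp                     tmpfs  defaults,size=4G  0  0
tmpfs  /var/cache/apt/archives  tmpfs  defaults,size=1G  0  0
```

Without an explicit size option, tmpfs defaults to half of physical RAM per mount.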
Nowadays, I have almost all of my source code checkouts and build-trees in
/tmp/src. myrepos makes this very easy to handle.
Simply clone all repositories you are interested in, and remember
their details with mr register. After a reboot, all you need
to do is:
$ mkdir /tmp/src
$ cd !$
$ mr up
Your repository locations have (likely) been recorded in ~/.mrconfig.
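As a sketch, a repository recorded with mr register ends up as an entry along these lines in ~/.mrconfig (the path and URL here are hypothetical):

```
[src/dotfiles]
checkout = git clone 'git@example.com:me/dotfiles.git' 'dotfiles'
```

Section names are paths relative to the file, so after a reboot mr up recreates and updates the checkouts under /tmp/src in one go.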
Of course there is the danger of accidentally losing data due
to a power outage. However, I feel this leads to a rather
clean workflow: changes need to be pushed anyway. Since you
cannot trust your local repository to really persist, you
are forced to push your changes regularly, which is a good thing!
Besides, I have a UPS at home which gives me roughly one hour of
backup power. This was enough to sustain every power
outage I have witnessed in the past years. Typically, outages
in my area are rather short-lived, somewhere between a few seconds
and half an hour; it is rather rare that electricity goes missing
for longer than that.
The only downside of this approach is that from time to time, you will
use more bandwidth by re-cloning repositories after a reboot.
But really, who needs reboots on Linux? Last time I had
half a year of uptime, and only rebooted because I wanted to jump
from linux 3.8 to 3.13.
This approach only works on long-running machines though.
It is probably not very useful for a laptop.