I have many very large disks spread across several machines at my home office: currently 14TB on my desktop and 13TB on my main file server. I also have a few others knocking about for backups, which I put into a hot-swap drive slot on my desktop case. The environment I'm in is extremely hostile to disks: I churn through a lot of data running compilations, tests, VMs and animations, not to mention the data processing associated with logs from large numbers of servers. Add in the heat in Thailand and disks don't last very long. Recently I've been getting better at replacing them before they fail completely.
Back in the days when I was using only Windows I used to stripe smaller disks into single large logical volumes. This makes using the disks far easier (and in many cases improves performance), but it comes at a great cost: if any disk fails then the entire logical volume dies with it. I've also tried RAID using mdadm. This works well until anything at all goes wrong. I don't think I've ever come across a tool quite so user hostile as mdadm. Every time I've needed to use it, it's taken me half a day to re-learn how to do even the simplest of tasks. LVM also has some RAID abilities, but I've never tried them.
I first started to use LVM because it has great tools for resizing disk partitions, but these days I don't use any of those. The main reason I use it now is that it is the only way I know of to correctly handle very large disks.
There are three layers that need to be initialised:

- physical volumes, listed with pvscan and pvdisplay
- volume groups, listed with vgscan and vgdisplay
- logical volumes, listed with lvscan and lvdisplay
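Each layer also has a short tabular command that I find easier to read for a quick overview (this assumes the lvm2 tools are installed, and needs root):

```shell
# Physical volumes: one per disk in this set-up
sudo pvs
# Volume groups: again one per disk
sudo vgs
# Logical volumes: a single volume filling each group
sudo lvs
```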
The configuration I'm always after is rather simple: a single volume group per disk, containing a single logical volume that takes up the whole disk.
First create the physical volume directly on the disk, with no partition table at all. The standard disk partitioning programs don't work properly with disks larger than a couple of TB (MBR partition tables can only address the first 2TiB), and you may not notice until you're writing data above that mark.
sudo pvcreate /dev/sdf
We want a volume group containing just this single disk. It's tempting to do other things here and try to be clever; that's pretty much always a bad idea (see above).
sudo vgcreate miro2 /dev/sdf
Create the logical volume:
sudo lvcreate --name files6 --extents 100%VG miro2
Finally make a file system:
sudo mkfs.ext4 /dev/mapper/miro2-files6
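Before mounting anything I like to check that the new volume actually looks right; the device names here follow on from the steps above:

```shell
# The volume group should show a single PV with all extents allocated
sudo vgdisplay miro2
# The new file system should be visible on the device-mapper node
lsblk -f /dev/mapper/miro2-files6
```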
I use a naming convention for the volume groups that includes the machine name. This makes it easier to move a disk between machines without suffering a name clash. I also just use an increasing number for the logical volume names, for simplicity's sake. These disks get mounted through fstab, each into its own folder under /mnt:
/dev/mapper/miro1-files5 /mnt/files5 ext4 noatime,errors=remount-ro 0 1
/dev/mapper/miro2-files6 /mnt/files6 ext4 noatime,errors=remount-ro 0 1
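To bring a new entry up without rebooting, create the mount point and let mount read the rest from fstab (the path here follows the entries above):

```shell
sudo mkdir -p /mnt/files6
sudo mount /mnt/files6   # picks up the device and options from fstab
```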
I always use noatime simply because I think it does save some writes, and I never have reason to care about access times. I use bind mounts to put individual folders from the disks where I want them:
/mnt/files4/projects /home/kirit/Projects none bind 0 0
My next task is to work out how to use smartctl better and add it to my internal monitoring, in the hope of getting more advance notice that disks are failing.
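As a starting point, something like this is what I have in mind; the device name is just an example, and the attributes I'm grepping for are the ones most often cited as predictors of failure:

```shell
# Overall health verdict from the drive's own self-assessment
sudo smartctl --health /dev/sdf
# The attributes most correlated with impending failure
sudo smartctl --attributes /dev/sdf | \
    grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'
# Kick off a long self-test (runs in the background on the drive itself)
sudo smartctl --test=long /dev/sdf
```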