It seems people are starting to look at putting some of their older hardware to use as simple network storage servers. Since posting a quick little screen shot a few days ago showing a 32-bit Ubuntu installation with a 7.5TB partition I've had two people ask me how they could do this with a netbook and a bunch of external drives. While I would usually not suggest that people do RAID over USB unless there's a dedicated box for the hard drives, there's nothing wrong with looking at this from a strictly academic standpoint.
What We'll Need:
- one netbook (Atom processors welcome)
- a bunch of same-sized USB hard disks (for this example, I will be using five 2TB Seagate hard drives)
- a powered USB hub (whether you need one, and how many ports, depends on how many hard drives you want to connect to the netbook)
- Ubuntu on a bootable USB stick
- one of your favorite beverages
First, let's install the package. From the terminal, type:
sudo apt-get install mdadm
This will install all of the required packages for the software RAID tool, and will configure the system appropriately. Next, if you're like me, you'd probably prefer to use a file system like XFS for larger partitions spread across disks. I won't get into the details of why I would choose XFS over other file systems like Ext3, Ext4, or ZFS, but you can install the XFS user-space tools with a simple
sudo apt-get install xfsprogs
Again, because this is strictly an academic exercise, this won't be necessary but it also won't hurt anything if we go this way.
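Before creating the array, it's worth double-checking which device names the kernel gave the external disks, since mdadm will happily overwrite whatever devices you hand it. A quick way to list them (lsblk ships with util-linux on stock Ubuntu; on recent versions the TRAN column shows the transport, which makes USB disks easy to spot):

```shell
# List whole disks (no partitions) with their size, transport, and model
lsblk -d -o NAME,SIZE,TRAN,MODEL
```

Anything showing usb in the TRAN column is a candidate; just make sure your boot drive is not among the devices you pass to mdadm.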
Now comes the fun part. Let's create a definition with mdadm. First, identify the locations of your external hard drives. Ideally, the external disks would not be mounted, as that would make this process quite difficult. In my case, the five hard drives range from /dev/sdb to /dev/sdf. I would like to put these together in a RAID5 configuration, and I would like to use a 16K chunk size. With this in mind, I would type:
sudo mdadm --create /dev/md0 --chunk=16 --level=5 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
Let's break this down.
--create /dev/md0 creates the array device (I typically use md0, md1, md2, etc.)
--chunk=16 this is optional, but sets the chunk size for the RAID array, in kilobytes. Caution: bigger is not always better!
--level=5 signifies what kind of RAID array we would like (0, 1, 5, 6, etc.)
--raid-devices=5 states the number of drives (or partitions) that will be connected to the array. This is immediately followed by the locations of each drive (or partition) to be attached.
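One thing worth spelling out about --level=5: one drive's worth of capacity goes to parity, so the usable space is (number of drives - 1) times the drive size, and the array survives the loss of any single drive. A quick sanity check in the shell for the five-drive example above:

```shell
# RAID5 usable capacity: (drives - 1) * size; one drive's worth holds parity
drives=5
size_tb=2
echo "usable: $(( (drives - 1) * size_tb )) TB"   # prints: usable: 8 TB
```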
With the array created, format it with the file system of your choice:
sudo mkfs.xfs /dev/md0
Depending on the total size of the RAID array and the speed of the system, this could take a while. When it's all done, you'll see something like the screenshot on the right. So far so good? I bet you're not even half-way through that favorite beverage!
Next, let's confirm that everything is running as it should. If you've configured RAID5 or 6, you've probably noticed that the drives are still busy even though we're not using them yet. This is normal: the system is building the initial parity data so the array can recover should anything go wrong with one of the drives later. You can check on the status of the array by typing
cat /proc/mdstat. You should see something like this:
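While that initial sync runs, you can keep an eye on its progress. Two handy read-only commands (assuming the array is /dev/md0 as above):

```shell
# Re-read the status every 5 seconds; Ctrl-C exits without affecting the sync
watch -n 5 cat /proc/mdstat

# One-shot detailed report on the array, including rebuild progress
sudo mdadm --detail /dev/md0
```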
Once you're satisfied, save the array definition so it is reassembled automatically at boot. Note that a plain > redirect would fail here, because the redirection runs without sudo's privileges, so pipe through tee instead:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Next we need a mount point for the RAID array. This can be thought of as a very large directory. I like to use names like "ebs", "volA" and "volB" to keep things simple, but you can call it anything you'd like. Let's make that mount point now:
sudo mkdir /volA
Then we add a line to the end of the /etc/fstab file. This file controls how drives are mounted automatically after a reboot. Open it with sudo in your favorite editor and append:
/dev/md0 /volA xfs defaults 0 2
(Note that ext-style options such as errors=remount-ro are not valid for XFS, so stick with defaults here.)
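For reference, an fstab entry has six whitespace-separated fields:

```
# <device>  <mount point>  <type>  <options>  <dump>  <pass>
```

The dump field is almost always 0 these days, and the fsck pass number is 1 for the root file system and 2 (or 0 to skip checking) for everything else.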
Please remember to change the device, mount point, and file system to match what you chose earlier, otherwise none of this will work when you reboot. Now you should be able to mount the RAID array by typing:
sudo mount -a. This command will mount anything listed in the fstab file that isn't already mounted.
If you've made it this far, congratulations. You're all done. Just to make sure that everything is running properly, do a quick
cat /proc/mdstat and
df -h. You should see something like this: