Adding a ZFS pool on my home server (running Debian unstable)

by Yuji Yokoo


ZFS is great. It comes with many features like built-in snapshots and spanning multiple drives. It sounds like a good fit for my home machine, which stores a lot of media data: I can get hardware redundancy just by setting it up with extra drives. Fortunately, these days, adding ZFS support to a Linux box is extremely simple. I used the exact commands from http://zfsonlinux.org/debian.html and had no issues.

Pools are much like partitions you can mount, and they can be mirrored and striped like RAID arrays. vdevs are the raw disks that make up a pool (like the HDDs in a RAID array). I hear you can also use a file or a partition as a vdev, but the common practice is to use the entire disk.
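
You can even try this out without spare hardware, since file-backed vdevs work for experiments. A quick sketch (the paths and pool name here are just examples, and the mirror keyword groups the two files into one redundant vdev):

$ truncate -s 1G /tmp/vdev0 /tmp/vdev1
$ sudo zpool create testpool mirror /tmp/vdev0 /tmp/vdev1
$ sudo zpool destroy testpool   # throw the experiment away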

Okay. Let's create a pool:

$ sudo zpool create zpool0 sdd sde
invalid vdev specification
use '-f' to override the following errors:
/dev/sdd does not contain an EFI label but it may contain partition information in the MBR.

Okay, one problem. I'm not sure exactly what it means, but it seems these disks may still carry old partition information in the MBR (http://blog.td-online.co.uk/?p=317). Using '-f' overrides the warning.
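
Alternatively, instead of forcing past the warning, I believe you can wipe the stale labels first with something like this (assuming, of course, that the disks hold nothing you want to keep):

$ sudo wipefs -a /dev/sdd
$ sudo wipefs -a /dev/sde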

$ sudo zpool create -f zpool0 sdd sde

$ sudo zpool list
NAME      SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
zpool0   5.44T   580K  5.44T   0%  1.00x  ONLINE  -

$ sudo zpool status
  pool: zpool0
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zpool0      ONLINE       0     0     0
          sdd       ONLINE       0     0     0
          sde       ONLINE       0     0     0
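
Note that sdd and sde show up as two separate top-level vdevs, so this pool stripes across both disks with no redundancy; that is also why SIZE is the sum of the two drives. If I wanted a mirrored setup instead, the create command would presumably look like this:

$ sudo zpool create -f zpool0 mirror sdd sde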

Success! It is automatically mounted too:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs           19G  8.6G  8.7G  50% /
udev             10M     0   10M   0% /dev
tmpfs           789M  1.8M  787M   1% /run
/dev/sda1        19G  8.6G  8.7G  50% /
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.4G     0  2.4G   0% /run/shm
/dev/sda6        88G   72G   12G  87% /home
zpool0          5.4T  128K  5.4T   1% /zpool0

But I want to change the mount point, so I run this command: 

$ sudo zfs set mountpoint=/opt/zdata0 zpool0
$ df -h /opt/zdata0
Filesystem      Size  Used Avail Use% Mounted on
zpool0          5.4T  128K  5.4T   1% /opt/zdata0

Now I have a 5.4TB filesystem to store all my media files.
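
A pool can also be split into child datasets, each with its own properties and a mount point inherited from the parent. A minimal sketch (the dataset names are just examples):

$ sudo zfs create zpool0/movies                    # mounts at /opt/zdata0/movies
$ sudo zfs create -o compression=on zpool0/photos  # properties can be set per dataset
$ sudo zfs list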

Reboot issues: I found that this setup requires me to run zfs mount -a after every reboot. This is not ideal. The solution is to modify /etc/default/zfs:

# Run `zfs mount -a` during system start?
# This should be 'no' if zfs-mountall or a systemd generator is available.
ZFS_MOUNT='yes' # <- change this line from 'no' to 'yes'

Now my ZFS filesystems should be mounted automatically at boot.
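
The comment in /etc/default/zfs also mentions systemd; on a systemd-based install, enabling the packaged mount unit may be the cleaner fix (the unit name below assumes current ZFS on Linux packaging, so check what your version actually ships):

$ systemctl list-unit-files | grep zfs
$ sudo systemctl enable zfs-mount.service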

UPDATE: To create a raidz2 pool (which tolerates up to two failed disks), use zpool create -f zpool0 raidz2 sdd sde sdf sdg sdh

UPDATE: It's better to use the disk IDs you can get from /dev/disk/by-id rather than sdd, sde, etc., because the sdX names can change between reboots.
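
For example (the ID strings below are made up; list the directory to find yours):

$ ls -l /dev/disk/by-id/
$ sudo zpool create -f zpool0 ata-WDC_WD30EZRX-00D8PB0_WD-WCC4AAAAAAAA ata-WDC_WD30EZRX-00D8PB0_WD-WCC4BBBBBBBB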