I decided to use the ZFS file system for my NAS. Although licensing issues prevent it from being ported into the Linux kernel, the ZFS-FUSE project has ported ZFS to run in userspace via FUSE.
ZFS is a mature file system (and tool set) which does device pooling, redundant storage, checksumming, snapshots, and copy-on-write clones. It also has a very cool deduplication feature: you can configure the file system to look for identical chunks of data and store them only once. Nice!
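To give a taste of how terse those features are to use, here is a sketch based on the zfs(8) man page. The dataset and snapshot names are just examples (the "nas-pool" pool is created later in this post):

```shell
# enable block-level deduplication on a dataset
zfs set dedup=on nas-pool/bulk
# verify the property took effect
zfs get dedup nas-pool/bulk

# snapshots are equally simple: create one, then list all snapshots
zfs snapshot nas-pool/bulk@before-cleanup
zfs list -t snapshot
```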
Getting ZFS-FUSE on Debian Lenny
We'll install some tools, compile and manually start the zfs-fuse daemon. Note that I use the latest source from the "official" repository here, not the last stable release.
aptitude install git-core libaio-dev libattr1-dev libacl1-dev libz-dev libfuse-dev libfuse2 scons libssl-dev

At the time of writing, the "scons install" command doesn't seem to install the Debian init script. Also, the Debian init script that ships with the source has a small error. We'll take care of both manually:
git clone http://git.zfs-fuse.net/official zfs-official
cd zfs-official/src
scons
scons install
cd ../debian
nano zfs-fuse.init
# fix the line "DAEMON=/usr/sbin/zfs-fuse"
# it should be "DAEMON=/usr/local/sbin/zfs-fuse"
su
cp zfs-fuse.default /etc/default/zfs-fuse
cp zfs-fuse.init /etc/init.d/zfs-fuse
chmod +x /etc/init.d/zfs-fuse
aptitude install sysv-rc-conf
sysv-rc-conf
# use arrows to scroll down to zfs-fuse
# use arrows and space to enable run levels 2,3,4,5
# use q to quit
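If you'd rather not install sysv-rc-conf just for this, Debian's own update-rc.d can register the script instead (a sketch; run as root):

```shell
# register the init script with the default policy:
# start in runlevels 2-5, stop in 0, 1 and 6
update-rc.d zfs-fuse defaults
```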
Setting up a ZFS storage pool and file systems
Currently I have two new 500GB disks available for storage. My first plan was to split each disk into two partitions to build a "safe" storage pool (mirrored over two partitions) and a "bulk" storage pool (no redundancy, striped over two partitions). However, a recurring theme in the ZFS Best Practices Guide is that you should not slice up your disks if you can avoid it. Therefore, I'll keep things simple and just create one big 500GB pool of mirrored storage.
I will, however, still create two separate file systems in this pool for "archive" and "bulk" storage. This makes it easy to have different backup policies for each data set.

# start the zfs daemon
zfs-fuse
zpool create nas-pool mirror \
/dev/disk/by-id/scsi-SATA_ST3500418AS_9VM7RWGV \
/dev/disk/by-id/scsi-SATA_ST3500418AS_9VM7SHA5
zpool status
zfs create nas-pool/archive
zfs create nas-pool/bulk
zfs list
zfs mount -a

Because ZFS is designed to handle storage pools with potentially thousands or more file systems, you don't have to manually edit /etc/fstab to set up mount points. The "zfs mount -a" command automatically mounts all available ZFS file systems as /pool-name/file-system-name. This is also what the init script does.
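Per-dataset properties are what make the archive/bulk split pay off: each file system can get its own quota or reservation. A hedged sketch, with sizes picked purely as examples:

```shell
# cap the bulk dataset so it can't squeeze out the archive
zfs set quota=300G nas-pool/bulk
# guarantee the archive dataset a minimum amount of pool space
zfs set reservation=100G nas-pool/archive

# inspect the properties we just set
zfs get quota,reservation nas-pool/archive nas-pool/bulk
```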
Exposing the ZFS file systems on the network via samba
First we'll set up a "nasusers" group which has read/write access to the ZFS file system:
# create nasusers group and add a user to it
groupadd nasusers
usermod -a -G nasusers wim
# give nasusers read/write access
cd /nas-pool
chmod 2770 archive
chmod 2770 bulk
chgrp nasusers archive
chgrp nasusers bulk

Now give those users a samba password:

smbpasswd -a wim

Add a section like this to /etc/samba/smb.conf for each folder to expose:
[archive]
path=/nas-pool/archive
browsable=yes
writable=yes
valid users = @nasusers

Finally, restart samba:

/etc/init.d/samba restart

Now the ZFS file systems should be available on the network, and users can start copying their stuff in there. In a future post we'll explore how to leverage some of those advanced ZFS features.
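If the shares don't show up, Samba ships two handy tools for checking your work (not from the original setup, just a debugging sketch):

```shell
# validate smb.conf syntax and dump the parsed share definitions
testparm -s

# list the shares the running server actually exposes,
# authenticating as the user we added above
smbclient -L localhost -U wim
```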
6 comments:
There is an error in your commands to do this.
The line usermod wim -a -G nasusers
should in fact be
usermod nasusers -a -G wim
Otherwise you would get an error!!
@Anonymous: thanks for the feedback, I have fixed it in the post.
Have you found any stability issues? I used zfs on freebsd recently and it was horrendous. I moved to linux raid + ext3 and have been in a dream world.. but I am hunting compression. Compression + rsnapshot = dream come true for me.
@Anonymous: no, ZFS-fuse appears to be stable for me.
I've converted to zfs. Looks like it's working. Now I have rsnapshot with compression. Thanks!
cool. now how to set ZFS ARC min/max size and other ZFS kernel options? In FreeBSD I had edited a rc or loader .conf file. In Linux, I dunno which file to look for.