FreeBSD ZFS
Background
I had some ideas regarding using ZFS for some NSM-related duties, mainly employing the 'compression' features that it natively offers. My tests were done using FreeBSD 8.0-RC1/amd64 on a Dell 1950.
Implementation
I have 2*500GB WD SATA 7200RPM drives behind a Dell PERC5/i (mfi(4)) controller keeping them in a RAID configuration.
My testing machine has both disks in RAID1; in production they'll run RAID0 to maximize space, since
data integrity isn't really important (it's a best-effort application).
I have one spare partition, /dev/mfid0s1g, of a bit over 400GB to play with.
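If you want to double-check how mfi(4) has the array laid out, FreeBSD 8 ships mfiutil(8); as I recall, 'show volumes' lists the RAID volumes (hedged from memory, check the man page):
root@viola:~# mfiutil show volumes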
I'm testing 3 different compression algorithms: "lzjb", "gzip-6" and "gzip-9".
Create the tank:
root@viola:~# zpool create tank /dev/mfid0s1g
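A quick sanity check that the pool came up as expected (standard zpool subcommands, output omitted):
root@viola:~# zpool list tank
root@viola:~# zpool status tank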
Create 3 filesystems using the varying algorithms:
root@viola:~# zfs create -o compression=lzjb -o mountpoint=/tank/lzjb -o atime=off -o quota=20G tank/lzjb
root@viola:~# zfs create -o compression=gzip-6 -o mountpoint=/tank/gzip-6 -o atime=off -o quota=20G tank/gzip-6
root@viola:~# zfs create -o compression=gzip-9 -o mountpoint=/tank/gzip-9 -o atime=off -o quota=20G tank/gzip-9
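To verify the settings each filesystem actually took, 'zfs get' accepts a comma-separated property list and -r to walk the whole tree:
root@viola:~# zfs get -r compression,quota,atime,mountpoint tank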
"Benchmarks" using 'dd'
lzjb
root@viola:~# zfs create -o compression=lzjb -o mountpoint=/tank/lzjb -o atime=off -o quota=20G tank/lzjb
root@viola:/tank/lzjb# dd if=/dev/zero of=test.20G bs=1G count=20
20+0 records in
20+0 records out
21474836480 bytes transferred in 40.658224 secs (528179405 bytes/sec)
root@viola:/tank/lzjb# dd if=/dev/urandom of=test.16M bs=16M count=1
1+0 records in
1+0 records out
16777216 bytes transferred in 0.300089 secs (55907489 bytes/sec)
gzip-6
root@viola:/# zfs create -o compression=gzip -o mountpoint=/tank/gzip-6 -o atime=off -o quota=20G tank/gzip-6
root@viola:/tank/gzip-6# dd if=/dev/zero of=test.20G bs=1G count=20
20+0 records in
20+0 records out
21474836480 bytes transferred in 40.611659 secs (528785009 bytes/sec)
root@viola:/tank/gzip-6# dd if=/dev/urandom of=test.16M bs=16M count=1
1+0 records in
1+0 records out
16777216 bytes transferred in 0.299824 secs (55956881 bytes/sec)
gzip-9
root@viola:/tank/lzjb# zfs create -o compression=gzip-9 -o mountpoint=/tank/gzip-9 -o atime=off -o quota=20G tank/gzip-9
root@viola:/tank/gzip-9# dd if=/dev/zero of=test.20G bs=1G count=20
20+0 records in
20+0 records out
21474836480 bytes transferred in 40.541599 secs (529698803 bytes/sec)
root@viola:/tank/gzip-9# dd if=/dev/urandom of=test.16M bs=16M count=1
1+0 records in
1+0 records out
16777216 bytes transferred in 0.299950 secs (55933397 bytes/sec)
Summary
So, it doesn't seem from this brief bit of testing that there are any appreciable speed/time
differences between the algorithms. Bear in mind that a stream from /dev/zero compresses to almost
nothing under any of them, so these numbers mostly reflect throughput on trivially compressible data
rather than compression cost on realistic traffic. It's also not a very comprehensive bit of testing;
maybe some bonnie testing at a later date?
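One quick follow-up that doesn't need bonnie: ZFS tracks the achieved ratio per dataset in the standard 'compressratio' property, which shows how much of the dd input compressed away:
root@viola:~# zfs get compressratio tank/lzjb tank/gzip-6 tank/gzip-9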
Results from 'df'
Contents of each /tank/[compressedfs]:
- 2*12M (two copies of /boot/kernel/kernel, 12087259 bytes each)
- 1*16M (from /dev/urandom)
- 1*20G (from /dev/zero)
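For the record, a sketch of how that working set can be recreated on each filesystem (the kernel-copy filenames are my guesses; the dd commands are the ones from the benchmarks above):
root@viola:/tank/lzjb# cp /boot/kernel/kernel kernel.1
root@viola:/tank/lzjb# cp /boot/kernel/kernel kernel.2
root@viola:/tank/lzjb# dd if=/dev/urandom of=test.16M bs=16M count=1
root@viola:/tank/lzjb# dd if=/dev/zero of=test.20G bs=1G count=20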
root@viola:/tank/gzip-6# df -h /tank /tank/lzjb /tank/gzip-6 /tank/gzip-9
Filesystem     Size    Used   Avail Capacity  Mounted on
tank           421G      0B    421G     0%    /tank
tank/lzjb       20G     32M     20G     0%    /tank/lzjb
tank/gzip-6     20G     26M     20G     0%    /tank/gzip-6
tank/gzip-9     20G     26M     20G     0%    /tank/gzip-9
root@viola:/tank/gzip-9# df /tank /tank/lzjb /tank/gzip-6 /tank/gzip-9
Filesystem   1K-blocks   Used      Avail Capacity  Mounted on
tank         441691520      0  441691520     0%    /tank
tank/lzjb     20971520  33152   20938368     0%    /tank/lzjb
tank/gzip-6   20971520  26752   20944768     0%    /tank/gzip-6
tank/gzip-9   20971520  26752   20944768     0%    /tank/gzip-9
Production System
Introduction
I have 880GB to work with, so:
root@seau:/# zpool create tank /dev/mfid0s1g
Once the tank is there, I opted to save a bit off the top, and tried both 'gzip-6' and 'lzjb':
- 'gzip-6' turns out to cause too long a buffer-sync-to-disk, and makes the system block while it does so (one way to watch for this is sketched below the list).
- 'lzjb' does not seem to do this, so it is what I went with in the end.
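A hedged way to look for the stall (my suggestion, not output from these notes): write a large file while watching per-second pool I/O in another terminal; during a stall the write column sits near zero, then bursts when the transaction group finally syncs:
root@seau:/# zpool iostat tank 1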
root@seau:/# zfs create -o compression=lzjb -o mountpoint=/nsm -o atime=off -o quota=850G tank/nsm
I'm running 'daemonlogger' like this:
root@seau:/# daemonlogger -d -i bce1 -l /nsm/bce1 -u nsm -g nsm -r -s 1073741824 -m 840 -P /nsm/run -S 65535
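My reading of those flags, from memory of daemonlogger's usage output (double-check against 'daemonlogger -h'): -d daemonizes, -i bce1 picks the capture interface, -l sets the log directory, -u/-g drop privileges to the nsm user/group, -r enables ring-buffer mode so the oldest files get recycled, -s 1073741824 rolls to a new file every 1GB, -m 840 caps the file count at 840, -P points at the pidfile location, and -S 65535 sets the snaplen.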
Overall, this means 880GB in the tank, 850GB in the tank/nsm filesystem, and 30GB left over in the tank;
daemonlogger makes 840 1GB files, so 10GB is left over in the mount (/nsm).
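A quick way to sanity-check that accounting (standard commands; the -o column picks are just my choice):
root@seau:/# zpool list tank
root@seau:/# zfs list -o name,quota,used,avail tank/nsm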
Random Statistic
root@seau:/nsm/bce1# zpool iostat
              capacity     operations    bandwidth
pool        used  avail   read  write   read  write
----------  ----- -----  -----  -----  -----  -----
tank        6.89G  873G      0     62  3.76K  7.41M
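With no arguments, zpool iostat prints averages over the pool's lifetime; passing an interval in seconds samples the live rate while daemonlogger writes, and -v breaks it down per vdev:
root@seau:/nsm/bce1# zpool iostat -v tank 5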