Thursday, February 21, 2013

Change from EnhanceIO to FlashCache

Since EnhanceIO was not stable, I changed to FlashCache, and now everything seems to work correctly.
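For the record, creating the cache with FlashCache goes roughly like this (the device names, mount point, and write-back mode are my assumptions, carried over from the EnhanceIO layout; they are a sketch, not the exact commands I ran):

# write-back cache: SSD RAID1 (/dev/md1) in front of the RAID6 array (/dev/md0)
flashcache_create -p back cachedev /dev/md1 /dev/md0
# unlike EnhanceIO, the cached volume is mounted via the new device-mapper device
mount /dev/mapper/cachedev /data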

Performance is about the same as with EnhanceIO.

I'll write more when I have time.


Saturday, February 9, 2013

Solaris / OpenIndiana trashed, here comes Linux

It turned out that the Solaris / OpenIndiana solution was not stable enough to bring this NAS to life... so here comes Linux.

I chose to test the NAS by installing a basic Ubuntu from the mini installation image (12.10) and upgrading the kernel to 3.7.6.

Here's a list of what I have in it now:

  • Linux Ubuntu 12.10 with kernel 3.7.6
  • Infiniband & opensm in connected mode
  • EnhanceIO (SSD cache in front of the software RAID6, setup sketched below) (https://github.com/stec-inc/EnhanceIO)
  • 2x SSD in RAID1 as the cache device (55GB of usable cache)
  • 4x 2TB in RAID6 for storage (ext4) (3.6TB of storage)
  • Samba for Windows connectivity
  • NFS for virtual machines
  • Webmin and Ajenti for web management
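As a sketch of how those layers stack (the disk names, cache policy, and write-back mode are my assumptions, not the exact commands used):

# 4x 2TB in software RAID6 for data, 2x SSD in RAID1 for the cache
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sd[f-g]
mkfs.ext4 /dev/md0

# EnhanceIO caches transparently, so /dev/md0 is still mounted directly
eio_cli create -d /dev/md0 -s /dev/md1 -p lru -m wb -c eio_cache

# IPoIB connected mode (larger MTU than datagram mode, better throughput)
echo connected > /sys/class/net/ib0/mode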

I have tested speeds directly on the NAS and through Infiniband. Below are results from Bonnie++ and from dd'ing directly to the disk:
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
bb              31G  1129  98 427376 26 240289  28  5761  99 664467 48 +++++ +++
Latency             15154us     396ms    2729ms    7362us   32064us    2667us
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
bb               16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency               259us     347us     628us     269us      10us     396us
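For reference, a run like the one above corresponds to an invocation along these lines (the target directory is an assumption; -s 31G, -n 16 and the machine label bb match the table):

bonnie++ -d /data -s 31G -n 16 -m bb -u root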


Writes

time dd if=/dev/zero of=test.file bs=10M count=10000

10000+0 records in
10000+0 records out
104857600000 bytes (105 GB) copied, 304.926 s, 344 MB/s

real    5m5.893s
user    0m0.076s
sys     1m13.032s


time dd if=/dev/zero of=test.file bs=10M count=3000


3000+0 records in
3000+0 records out
31457280000 bytes (31 GB) copied, 79.0217 s, 398 MB/s

real    1m19.040s
user    0m0.008s
sys     0m18.900s

time dd if=/dev/zero of=test.file bs=10M count=1000

1000+0 records in
1000+0 records out
10485760000 bytes (10 GB) copied, 21.2707 s, 493 MB/s

real    0m21.273s
user    0m0.000s
sys     0m6.500s

Reads

time dd if=test.file of=/dev/null bs=10M count=10000

10000+0 records in
10000+0 records out
104857600000 bytes (105 GB) copied, 233.494 s, 449 MB/s

real    3m53.500s
user    0m0.028s
sys     2m17.232s

time dd if=test.file of=/dev/null bs=10M count=3000

3000+0 records in
3000+0 records out
31457280000 bytes (31 GB) copied, 54.6859 s, 575 MB/s

real    0m54.692s
user    0m0.016s
sys     0m28.104s

time dd if=test.file of=/dev/null bs=10M count=1000

1000+0 records in
1000+0 records out
10485760000 bytes (10 GB) copied, 16.5363 s, 634 MB/s

real    0m16.553s
user    0m0.012s
sys     0m9.480s

I'll add some Infiniband speeds later.

T

Friday, September 21, 2012

Infiniband

The Infiniband adapters arrived, so I inserted one into the NAS and started testing the speeds.


First of all, CIFS/SMB write traffic looks bad even over IB when sync=always: it gives around 80MB/s writes, and that is not acceptable. With sync=standard it doesn't go through the ZIL and the RAIDZ2 can write at 100-140MB/s. Still not acceptable... I'm waiting to figure something out.
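The two modes are just a flip of the ZFS sync property (using the pool name data from the setup post):

zfs set sync=always data      # every write goes through the ZIL (the SSD mirror)
zfs set sync=standard data    # only explicitly synchronous writes hit the ZIL
zfs get sync data             # check the current setting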

Reading, on the other hand, works perfectly. I transferred a roughly 3GB file from cache at around 600MB/s, which is more than acceptable. A transfer directly from the RAIDZ2 ran at around 200MB/s, which is also enough.

Below is the test I did with iperf from the NAS to my other host:


[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  8.20 GBytes  7.05 Gbits/sec
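That's a plain TCP test over IPoIB; the commands would have been roughly the following (the hostname is a placeholder):

iperf -s                 # on the other host
iperf -c otherhost       # on the NAS, default 10-second TCP test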

So to conclude this time: the connectivity between hosts works perfectly and is fast enough... the only problem is CIFS/SMB writing. (NFS was not tested yet, since I need an IB switch to connect more hosts.)


Sunday, September 16, 2012

Performance 2

I have been testing performance tuning a lot, and since I need a reliable data store to keep my stuff safe, I decided to keep the RAIDZ2 array with mirrored ZIL and striped L2ARC. Below are two charts showing the performance I get with direct writes and reads to the filesystem.

[Charts: direct write and read speeds to the filesystem, per blocksize]

As for writing, we can see that the bigger the blocksize, the faster the speed (says Captain Obvious). Reading from the fs is somewhat inconsistent, but the speeds are pretty acceptable.

After these tests I turned off atime on the fs (zfs set atime=off <fs>), and the speeds got a bit better with each blocksize.

more later...




Saturday, September 15, 2012

Performance

Performance testing was done by copying over NFS & SMB from a Windows 7 computer. Which was a mistake... NFS on Windows 7 is not a workable solution. I was getting around 35MB/s transfer speeds with NFS, even with an SSD as the ZIL device. I almost decided to dump the whole thing... but then...

I changed to a Linux box and voilà, a constant 110MB/s with NFS. SMB was still not so good, though: around 120MB/s at the start, dropping to 45MB/s by the end of the transfer on the Windows 7 machine. So I decided to set the whole fs to always use sync, so that writes would go through the ZIL. This gave a constant 80MB/s, which still isn't very good, but is manageable.

Direct writing to the fs was tested with both striped and mirrored ZIL, with sync always on. Speeds were 333MB/s with the mirrored ZIL and 512MB/s with the striped ZIL, which is more like what we are looking for.
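The only difference between the two layouts is whether the log devices are added with the mirror keyword (partition names as in the setup post below; a sketch, use one or the other):

zpool add data log mirror c5t0d0p1 c5t1d0p1    # mirrored ZIL: redundant, ~333MB/s here
zpool add data log c5t0d0p1 c5t1d0p1           # striped ZIL: ~512MB/s here, but no redundancy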

Reading speed has not been tested yet; I am waiting for another Infiniband card, since this one is not supported by the OS.

So as a conclusion:


  • Do not use Windows 7 as NFS client
  • Keep all writes synchronized (zfs set sync=always <fs>)
  • Max write speed depends on your ZIL speed (in my setup, 333MB/s - 512MB/s)
  • SMB writes are not as fast as I hoped, but we'll see after changing to Infiniband

Some more later..



OS Installation

The OS installation was done from one USB stick to another.


  • OpenIndiana Build 151a5 Server 64bit
  • Napp-IT (installed using ->  wget -O - www.napp-it.org/nappit | perl)


Configured a pool named "data" with all four 2TB disks as RAIDZ2, and partitioned the SSDs:
(We need good data integrity and redundancy on this box, so if this doesn't give enough speed, we'll go to mirroring.)
  • 60% for the ZIL mirror
  • 39% for the L2ARC stripe
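The pool itself would have been created with something like this (the disk names are my assumption; the RAIDZ2 layout is as described above):

zpool create data raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0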
To add the ZIL and L2ARC devices to the pool:
  • zpool add data log mirror c5t0d0p1 c5t1d0p1
  • zpool add data cache c5t0d0p2 c5t1d0p2
That's the basic disk configuration, now let's see the performance.....

Friday, September 14, 2012

Parts arrived: part 2

Now I finally have all the parts; the following additions arrived today:


  • 2x  OCZ Agility 3 60GB 2.5" SSD SATA III
  • 1x  Eolize SVD-NC11-4 Mini-ITX NAS
  • 1x 4GB USB memory for OS

Now it's finally time to start building the machine....



Oh well, the case is a bit small for the motherboard & CPU & fan combo, so I had to "customize" the fan a bit to make it fit in the case. No worries, it fits now.

This is how I did it (I cut one of the four fan holders away):

[Photo: see the cut "holder"]

Otherwise everything fit in "ok".

The final product looks like this:

[Photo: the final product]
A black box...

Next we start the OS installation; that will be in the next post..