
10TB ZFS Hierarchical Storage on Dell 2950 project

by lmarzke last modified Apr 30, 2014 11:35 AM
ZFS NAS storage project built on top of Dell 2950 hardware and open source ZFS.

ZFS Storage on Dell 2950 III

This is a multi-part article on building a high-performance ZFS storage appliance from mostly spare/used parts.

[Photo: Dell 2950 running SmartOS]

For a somewhat long introduction to everything about ZFS that matters, from the engineers who wrote it:
http://tinyurl.com/zfslastwordvideo

Motivation

The use case for this project is my consultancy vSphere lab environment, where I run a decent number of VMs and need good performance at an affordable price. In addition, I had a few extra 2950 servers with internal storage that I wanted to use to keep the cost down.

Previously I had used both an external Thecus NAS and a Nexenta VM running on the same ESXi host, with some success. The Thecus ran VMs well, but during a Storage vMotion with a lot of I/O its random I/O for VMs nearly dropped off-line. The Nexenta VM did not have this problem, but it used nearly 30% of the host RAM and made host patching more of an issue, since the storage had to be shut down along with the host.

Background

ZFS has quite a few benefits:

  • 128-bit filesystem with very few practical limits
  • Software RAID
  • Volume manager is part of ZFS, not a separate tool
  • Easily add SSD devices for a read cache (L2ARC) or a separate write log (SLOG)
  • Transaction-based writes, unlimited snapshots, replication

In short, many of the features of high-end storage systems such as NetApp are available here in an open-source product.
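As a rough sketch of how a few of these features are driven from the command line (the pool name 'tank', the filesystem 'tank/vmstore', and the device and host names below are hypothetical examples, not this system's actual layout):

    # Add an SSD as a read cache (L2ARC) to an existing pool named 'tank'
    zpool add tank cache c1t4d0

    # Add a mirrored pair of SSDs as a separate intent log (SLOG)
    zpool add tank log mirror c1t5d0 c1t6d0

    # Take a point-in-time snapshot, then replicate it to another host
    zfs snapshot tank/vmstore@nightly
    zfs send tank/vmstore@nightly | ssh backuphost zfs receive backup/vmstore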

Design

I had an existing Dell 2950 with 6 x 2TB NL-SAS drives, which is perfect for a small ZFS server, especially when supplemented with a few SSD devices. The first design issue was whether to use a system pool for the OS, which would mean losing two of the six drives to the OS. That would have limited the data pool to 4 drives in RAID-10, or only 4TB of space. Alternately, one of the ZFS distros, SmartOS, runs from USB flash, leaving all 6 drives free for data, so this definitely seemed the way to go.

ZFS groups the storage pool into vDevs (virtual devices), each of which must be a redundant mirror or RAIDZx unit. Write performance scales with the number of vDevs, not the number of disks, so my choices were:

  • 3 vDevs: three 2-disk mirrors, with a capacity of 6TB
  • 2 vDevs: two 3-disk RAIDZ1 vDevs, with a capacity of 8TB
  • 1 vDev: one 6-disk RAIDZ1 vDev, with a capacity of 10TB

Since my environment is a small lab, and because I was planning to add SSDs to improve write performance anyway, I decided to go with the last option and its 10TB of capacity.
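As a sketch of what that layout looks like when the pool is created (the pool name and the illumos-style disk names below are hypothetical placeholders, not this server's actual device names):

    # One RAIDZ1 vDev spanning all six 2TB drives: single parity,
    # roughly 10TB usable before filesystem overhead
    zpool create tank raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0

    # For comparison, the 3-mirror layout would have been:
    #   zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0

    # Verify the layout and pool health
    zpool status tank

The single-vDev layout gives the most space but only about one vDev's worth of write performance, which is why the SSD log device discussed later matters here.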

Cost and Advantages

The cost of everything purchased from eBay for this project was less than $2,250. Since I already had an existing Dell server with internal storage, I actually only needed to purchase 2 SSDs and a RAID controller, for about $300 total.

 

For further details, please see the following:

 

Part II - Hardware Selection and Setup

Part III - Special Considerations for SSDs

Part IV - Software Selection and Setup

Part V - NFS Shares Setup

Part VI - Performance Testing (coming soon)

Part VII - Snapshots and Replication (coming soon)

 
