
SSD use for ZFS

by lmarzke last modified Apr 27, 2014 05:26 PM
SSD usage considerations when used for the ZIL/SLOG.

When using SSDs for the ZIL separate log device (SLOG), or for any other server use, there are some considerations detailed in the following Intel notes:

Using Intel 320 in server applications:

Note that the following steps are performed with the SSD attached to a Linux computer.

Setting up SSD for SLOG

Specifically, when used in a write-intensive application like the SLOG, the writes may eventually wear out the device unless precautions are taken.  One approach is to allocate only a small percentage of the device (around 5%) for the SLOG.  This lets the drive spread writes over the remaining unused cells, increasing unit lifetime by 10X or more.  Note that this only works on a factory-new device or after a 'security erase' has been performed.
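As a rough check of the target size, 5% of an 80 GB drive is about 4 GB.  A minimal sketch for computing it on the Linux host (assuming the SSD appears as /dev/sdb, as in the commands below; blockdev is part of util-linux):

  • blockdev --getsize64 /dev/sdb                       ( device size in bytes )
  • echo $(( $(blockdev --getsize64 /dev/sdb) / 20 ))   ( 5% of the device, in bytes )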

The Linux 'hdparm' program supports security erase on the Intel 320; however, this is tricky and dangerous to perform for several reasons.
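Before setting a password, it is worth confirming that the drive is not already 'frozen' (a quick check, again assuming the SSD is /dev/sdb):

  • hdparm -I /dev/sdb | grep -i frozen   ( should report "not frozen" )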

 

To erase the device, a user security password must first be set:

  • hdparm --security-add-pass foobar /dev/sdb   ( add a user password )
  • hdparm --security-erase foobar /dev/sdb      ( erase the device )

After rebooting,  some laptop BIOSes with TPM security chips may automatically 'freeze' the SSD's security features before the OS is booted.   This means that Linux (or any other OS) will not be able to perform further security operations on the SSD.

To correct this,  unmount any filesystems on the SSD,  hot-unplug it,  then hot-plug it again and immediately issue one of the following:

  • hdparm --security-unlock  foobar  /dev/sdb
  • hdparm --security-disable foobar  /dev/sdb

Then check the status with:

  • hdparm -I /dev/sdb
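In the 'Security' section of the hdparm -I output, the lines of interest look roughly like the following (illustrative only; exact wording varies with the hdparm version and drive state):

        Security:
                Master password revision code = 65534
                        supported
                not     enabled
                not     locked
                not     frozen
                not     expired: security count
                        supported: enhanced erase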

When the device is both unfrozen and unlocked,  remove the password with:

  • hdparm --security-set-pass NULL /dev/sdb

Note:  When hot-plugging a disk, the device name (e.g. /dev/sdb) may change,  so be careful.
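One way to confirm which device node the SSD received after re-plugging (a quick sketch; lsblk is part of util-linux):

  • lsblk -o NAME,MODEL,SIZE   ( identify the SSD by model and size before issuing further hdparm commands )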

Also disable the drive's read-ahead for use with ZFS:

  • hdparm -A0 /dev/sdb
  • hdparm  -A  /dev/sdb  ( Verify setting read-ahead is off )
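Once the setting has taken effect, the verification output looks roughly like this (illustrative):

        /dev/sdb:
         look-ahead    =  0 (off)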

 

For use as a SLOG,  once the device is erased,  create a single partition of about 3.5 GB ( roughly 5% of 80 GB ).   Do not create any other partitions; leaving the rest of the cells unused is what extends the drive's lifetime.

After moving the SSD back to the SmartOS system, the partitioning can be done as follows:

  • Use the diskinfo command to list the disks on the system and find the SSD units.
  • Run format c5d0  ( or run format with no arguments and select the SSD from the list )
  • Enter the fdisk menu
  • Create partition 1 as a Solaris2 partition;  DO NOT use 100% of the disk,  use 5%.
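Once the fdisk table is written, the new partition should appear as a device node; a quick sanity check (assuming the SSD is c5d0):

  • ls -l /dev/dsk/c5d0p*   ( p0 is the whole disk; p1 is the new 5% fdisk partition )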

 

Further info on Solaris Format/Fdisk is found at:

Then use the following to add the SLOG device to the pool 'zones', assuming the SSD is device c5d0:

  • zpool add zones log c5d0p1          ( partition 1 added as log device )

Note:  Partition 0 (c5d0p0) is the entire disk;  do not use partition 0.
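If in doubt, zpool add accepts a -n flag that prints the resulting pool layout without actually modifying the pool:

  • zpool add -n zones log c5d0p1   ( dry run; shows the layout that would result )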

 

Setting up SSD for L2ARC

For the L2ARC there is no need to erase the device; just create a partition using the entire device.

  • diskinfo   ( list disks and find the SSD )
  • Run format c6d0  ( or run format with no arguments and select the SSD )
  • Enter the fdisk menu  ( answer YES to use the entire disk as the partition )

Also disable read-ahead on this SSD for use with ZFS (again with the drive attached to the Linux computer):

  • hdparm -A0 /dev/sdb
  • hdparm  -A  /dev/sdb  ( Verify setting read-ahead is off )

 

After moving the SSD back to the SmartOS system, add the L2ARC device to the pool with:

  • zpool add zones cache c6d0p1
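Unlike top-level data vdevs, log and cache devices can also be removed from the pool later if needed:

  • zpool remove zones c6d0p1   ( removes the cache device from the pool )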

 

 

Zpool status

When both the SLOG and the L2ARC have been added to the existing pool 'zones',  the status will be:

[root@00-22-19-92-d6-4d ~]# zpool status
  pool: zones
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0 in 6h5m with 0 errors on Sun Apr 27 01:07:29 2014
config:
        NAME        STATE     READ WRITE CKSUM
        zones       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c0t0d0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
        logs
          c5d0p1    ONLINE       0     0     0
        cache
          c4d0p1    ONLINE       0     0     0

errors: No known data errors
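The "Some supported features are not enabled" message is unrelated to the new log and cache devices; the missing features can be listed, and optionally enabled, with:

  • zpool upgrade          ( list pools whose on-disk format can be upgraded )
  • zpool upgrade zones    ( enable all supported features; heed the compatibility warning in the status output above )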

 

To see that the sizes were allocated properly, use zpool iostat -v:

 

[root@00-22-19-92-d6-4d ~]# zpool iostat
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zones       3.06T  7.82T    268    138  23.7M  2.09M
[root@00-22-19-92-d6-4d ~]# zpool iostat -v
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zones       3.06T  7.82T    268    138  23.7M  2.09M
  raidz1    3.06T  7.82T    268    110  23.7M  1.04M
    c0t0d0      -      -     84     26  4.97M   223K
    c0t1d0      -      -     86     26  4.97M   221K
    c0t2d0      -      -     84     27  4.97M   223K
    c0t3d0      -      -     82     26  4.96M   221K
    c0t4d0      -      -     85     26  4.97M   223K
    c0t5d0      -      -     82     26  4.96M   221K
logs            -      -      -      -      -      -
  c5d0p1    20.2M  3.67G      0     28     11  1.04M
cache           -      -      -      -      -      -
  c4d0p1    10.3G  64.2G      0      1  11.8K   128K
----------  -----  -----  -----  -----  -----  -----

 

Note that the log device shows 3.67G free plus about 20M allocated, and the cache device shows about 64G free plus 10G allocated, which is approximately what we expected.

 

Go to:   Part I
