In my previous article we discussed the main reasons you ought to be investigating ZFS as a storage platform for your organization. In this article we will document some of the major pain points people run into when giving Solaris a shot. One common theme you will notice throughout my writing about Solaris is that I firmly believe its engineers over-engineered it: just about everything that feels wrong in Solaris is actually the “most correct” way of doing things, but in a lot of cases it is simply not the “best” way.
Storage Device Naming
Physical disk devices are named something like c7t0d0s0. This is actually the “most correct” way of labeling a disk, as opposed to Linux, for example, where the same disk would be sda1. The name breaks down quite logically, which is why I think it is simply over-engineered rather than conceived in the mind of a madman. The c represents the logical controller number, so if you have multiple controllers you will see different numbers here. The t represents the physical bus target number. The d represents the disk number; on SATA and SAS systems I don’t think you will see this vary, but with IDE you can have multiple disks on the same physical bus. Finally, the s represents the slice of the disk; you may see a p instead of an s on an MBR disk, in which case it represents the fdisk partition.
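To make this concrete, here is roughly how I poke at disk names on an x86 test box; the c7t0d0 name is just an example from my lab machine, so substitute whatever your controllers and targets come out as.

```
# Each slice (s0-s7) and fdisk partition (p0-p4) of a disk gets its
# own device node under /dev/dsk
ls /dev/dsk/c7t0d0*

# The format utility enumerates every disk in cXtYdZ form, which is
# the easiest way to find the names that zpool and zfs expect
format
```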
Network Device Naming
Network device naming is also a bit complex. I have written instructions on how to configure networking end-to-end in the following articles: Solaris 11: Network Configuration Basic and Solaris 11: Network Configuration Advanced. I recommend you familiarize yourself with the differences in naming and configuration before jumping headlong into configuring a Solaris machine. In Linux, device naming is pretty straightforward: if you have a network card, more often than not it will be called ethX, where X is a numerical identifier for the card. In Solaris the model of the card determines the interface identifier; for example, across three different machines I have the following names: bnx0, bnx1, bge0, e1000g0, e1000g1, ige0, ige1, ige2, ige3. So clearly you cannot just jump onto a machine and assume you are looking for eth0. However, as part of configuration the interfaces can be renamed to make more sense in your organization: if you have a dedicated storage network that a Solaris machine connects to you could name that interface storage0 or san0, and if you had an interface which lived in your DMZ you could name it dmz0.
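As a quick sketch of that renaming, assuming a link that shows up as bge0 and an address picked purely for illustration (the full end-to-end procedure is in the networking articles linked above):

```
# See what driver-based link names this box actually has
dladm show-phys

# Rename the link to something meaningful before configuring it
dladm rename-link bge0 storage0

# Plumb the interface and assign a static address under the new name
ipadm create-ip storage0
ipadm create-addr -T static -a local=192.168.10.5/24 storage0/v4
```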
ZFS Versions
Not all ZFS implementations are equal. Here is a brief listing of where you can get ZFS and what version it gets you; I have additionally marked the versions that are not free as in beer with $$.
- Solaris 11 Express – version 31 – $$
- OpenSolaris 2009.06 – version 14
- FreeBSD 8.2 – version 15
- NexentaStor Community – version 26
- NexentaStor Enterprise – version 22 – $$
- OpenIndiana – version 28
Of the free versions, OpenSolaris will most likely not see continued development, and OpenIndiana has not reached a stable release yet. FreeBSD is missing some features (the integrated iSCSI and CIFS sharing) and additionally does not perform as well as Solaris. NexentaStor Community is a fantastic platform; it takes the difficult bits of Solaris and abstracts them away behind its web interface, although being based on OpenSolaris it seems to depend on OpenIndiana for a real future. Additionally, NexentaStor Community deployments are limited to 14TB, which seems generous to me for a free solution.
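Whichever platform you pick, you can verify what it actually supports from the command line; tank below is just a placeholder pool name.

```
# Show the highest pool version this implementation supports, along
# with a one-line summary of what each version added
zpool upgrade -v

# Show the version an existing pool is currently running at
zpool get version tank

# Upgrade a pool in place to the newest supported version (one way
# only: older implementations will no longer be able to import it)
zpool upgrade tank
```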
Now let’s quickly go over which ZFS versions added the key features; a short sketch of using them follows the list.
- Version 17 – Triple Parity RAIDZ (RAIDZ3)
- Version 21 – Deduplication
- Version 30 – Encryption
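As the sketch promised above, here is roughly how those three features are used once your pool version is high enough; tank and the cXtYdZ disk names are placeholders.

```
# RAIDZ3 (version 17+): triple parity, survives three failed disks
zpool create tank raidz3 c7t1d0 c7t2d0 c7t3d0 c7t4d0 c7t5d0

# Deduplication (version 21+): enabled per dataset
zfs set dedup=on tank

# Encryption (version 30+, Solaris 11 Express): must be chosen when
# the dataset is created, it cannot be switched on afterwards
zfs create -o encryption=on tank/secure
```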
So ultimately Solaris 11 Express is really the way to go as far as features are concerned, and frankly, while it does cost money, the cost is on par with what you would pay for an enterprise OS (RHEL, Windows) and way below what you would pay for a SAN if you were to source all the hardware yourself.
Licensing Solaris 11 Express
Basically, with Solaris 11 Express you will need to pay a license fee to Oracle for the right to run their software. Through the Oracle Technology Network you are able to use Solaris 11 Express for testing and development for free, as long as it is not used in production. Now please keep in mind that I am not a lawyer, I have never played one on TV, and frankly I am not even a fan of lawyer shows on TV. On the other hand, I have read the license here and I am comfortable with my interpretation of the terms. If you believe I am incorrect in my assumptions, please let me know in the comments or via my contact form. All that said, my ZFS documentation pertains to the use of Solaris 11 Express unless otherwise specifically stated.
If you are not comfortable with these license terms, I have found that NexentaStor Community is a viable alternative, as long as you do not need encryption or more than 14TB of data.
SPARC Hardware
Also please keep in mind that you can run Solaris on non-SPARC hardware, and in my documentation that is what I am doing. We do have some SPARC hardware, however it is all in production, so I cannot be blowing away disks and creating a larger I/O load on it; we also have some spare x86 hardware, which is what I am using for this purpose. SPARC has some major differences, such as a boot PROM that boots the OS directly, while with x86 you must use a separate boot loader (GRUB).
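If you are ever unsure which architecture a given box is, a couple of quick checks will tell you:

```
# Kernel architecture: sun4u/sun4v on SPARC, i86pc on x86
uname -m

# Instruction set details for the running kernel
isainfo -kv
```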