I’ve been using SmartOS for one year as my main development OS, and I want to share my experiences, because SmartOS is a very atypical system and as such requires an atypical style of usage. Well, atypical for me. Unlike all of the generations of engineers that came before me, I’ve been using Linux, OpenSolaris, and then OpenIndiana on stand-alone consumer hardware — desktops and laptops. This translates into me being the sole user of a system whose usefulness is more or less independent of network connectivity. Having this kind of monopoly leads to very bad habits such as messing with global configs, installing custom software directly into /bin, storing personal files in /rpool/docs, not using Zones whenever possible, and so on. All in all, a very messy situation that doesn’t bother anyone but should bother me. Naturally, I’ve been slow to change my heathen ways. In fact I didn’t change these habits until I was forced to by deciding to use SmartOS as my main development OS (yeah, it’s that compelling).
First things first: you probably won’t be able to run it on your desktop or laptop. It’s not that it can’t theoretically run on those systems — it’s very lean — it just has some problems with firmware and drivers and the like. As such it requires a very specific brand of motherboard: Supermicro. Supermicro boards are server boards, which means you can only install SmartOS on a server. As I said, I never had the need for a server, so I had to part with some cash to get one from eBay. My current hardware is at the bottom of this post. Another thing to keep in mind is that the usability of your SmartOS server is directly correlated with network connectivity. That’s kind of a downer, as it won’t always be available to you, but it will be available to you from potentially any device that is connected to the internet — yes, I’ve used SmartOS from my phone.
The other thing is that SmartOS is designed to be a hypervisor — a host for virtual machines. Most OSes can be such hosts in addition to being “regular” operating systems — like running VMware or Parallels on top of Mac OS X. SmartOS can only be used to host VMs and nothing more. It’s like being able to run VMware on the Mac, but not being able to use the Mac for anything else — everything you do happens in one virtual machine or another.
The SmartOS host (also known as the global zone) runs entirely in RAM and is read-only. Areas like /usr can’t be changed — not unless you compile a custom SmartOS live image. The only areas that can be changed are /opt and /usbkey. The latter holds the configuration for SmartOS, while the former can be used to install optional software. If you need persistent changes in the global zone, you really need to use /opt. It’s typically used to start custom services during boot, though I’ve also seen people use it to install Xorg.
The main consequence of this kind of design is that the host is completely insulated from the VMs. The host can be upgraded without breaking anything in the VMs themselves. It also means that there is no root ZFS pool. To OpenSolaris and Solaris veterans this is huge, because it allows you to use RAID-Z on your system, where before you would have been constrained to mirroring — unless you were forward-looking enough to use separate boot and data drives.
The main challenge with SmartOS is figuring out how to separate your data from the host system, and how to make this data available to a multitude of virtual machines — many of which will need to be deleted and re-created. Furthermore, one has to figure out how to efficiently restore those re-created virtual machines to the usable state they were in before they were destroyed — this mostly refers to installed packages, config files, and the home directory. Also, one has to get used to hopping between various purpose-specific virtual machines, instead of just sticking to a single machine.
As I said before, I used to store my data — which consists of a vast collection of (legally obtained) PDF books, music files, personal software projects, and historical performance data — as a bunch of subdirectories in
/rpool. This simply will not do in the new world order. The default ZFS pool in SmartOS is the
zones pool. To start off, in the global zone, I created a non-conflicting directory hierarchy to hold all of my data:
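The original commands aren’t preserved here, but based on the /depot paths and the analysis, synthesis, and home_files datasets referenced later in this post, the layout would have been created with something like this (the exact dataset names are my reconstruction):

```shell
# Parent dataset for all personal data, plus one child per category.
# Names are assumptions based on paths mentioned elsewhere in this post.
zfs create zones/depot
zfs create zones/depot/analysis
zfs create zones/depot/synthesis
zfs create zones/depot/media
```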
I broadly divide my software activities into two categories: analysis and synthesis. The analysis category consists of various software analysis data and notes that are persistent, and allows me to resume a line of inquiry after a long break or distraction. Most of my source repositories have infrastructure for collecting and logging performance (and debug) data. However, this data will be overwritten the next time the framework is run, or the next time
make clean is executed. In order to have historical performance data, the logs had to be dated and stored somewhere and thus the analysis data set was born. The synthesis data set contains everything that I create which is mostly software, blog posts, and essays. My collection of ebooks and music was located in
/rpool/music. This is what the transfer looked like from an OpenIndiana machine to a SmartOS machine:
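The exact commands aren’t shown here, but a ZFS send/receive over SSH is the standard approach; a sketch, with the hostname and target dataset as placeholders:

```shell
# On the OpenIndiana machine: snapshot the dataset recursively, then
# stream it to the SmartOS box over SSH. "smartos" stands in for the
# real hostname, and the receiving dataset name is an assumption.
zfs snapshot -r rpool/music@migrate
zfs send -R rpool/music@migrate | ssh root@smartos zfs recv zones/depot/media
```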
There was also the necessity to separate my home files from the system. So I created a new data set just for this purpose:
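Presumably something like the following — the dataset name home_files is taken from the directory referenced later in this post:

```shell
zfs create zones/depot/home_files
```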
The skel directory contains all of the dot-files from the OpenIndiana install. Any changes to those dot-files have to happen in that directory. Then a script is used to deliver the files to a home directory. Keeping your files in a VM is a bad idea, since (a) destroying that VM will destroy those files too (as well as the user that owns those files — in my case the user nickziv), and (b) sharing those files between VMs becomes less convenient.
You will want to automate the creation of your user for new VMs as well as the delivery of those files. Thankfully the
useradd utility seems to have had this use-case in mind. Here is a script that creates my user, delivers the files, and gives me the administrative privileges (local to the guest VM only). Note that this script only works on SmartOS VMs — more on those later.
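The script itself isn’t reproduced here; a minimal sketch of what create_nickziv.ksh would look like, assuming the skel directory lives under /depot/home_files and using the stock Solaris RBAC profile for administrative rights:

```shell
#!/bin/ksh
# Create the user, copying dot-files from the skel directory into the
# newly-created home directory (the -k switch does the copying).
useradd -m -s /bin/bash \
    -k /depot/home_files/skel \
    -d /home/nickziv nickziv
passwd nickziv
# Grant administrative privileges, local to this guest VM only.
usermod -P 'Primary Administrator' nickziv
```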
The -k switch copies the files in skel to $HOME for that user. Most administrators like to use NFS for this sort of thing, but I think that it is overkill for my needs. In later sections we will see how we can save our installed packages between VMs.
Virtual machines are the unit of currency in SmartOS. You can create two kinds of VMs: OS and KVM. In short, OS VMs are faster in every respect than KVM VMs, but are less flexible: they can only run applications that run on Illumos, and their system directories can’t be permanently changed (specifically
/usr) — this is because they share these directories with the global zone, which means all of your OS VMs get upgraded at no cost when you upgrade your release of SmartOS. This makes them ideal as sandboxes for work you would have previously done in an Illumos machine’s global zone. KVM VMs are slower, and use more RAM and disk, but they make up for it by allowing you to run any OSes (like Linux, Windows, Haiku, FreeBSD, and Plan9) that you may need. This makes them ideal for running legacy applications and applications that aren’t packaged for SmartOS (but are packaged for, say, Debian or FreeBSD).
Both OS and KVM VMs are created from images. Images are base templates. The two commands SmartOS provides for managing images and VMs are, respectively, imgadm and vmadm. Generally, you use
imgadm to get images, while you use
vmadm to turn them into working VMs.
By default images are provided by Joyent, but you can specify other publishers. To see which ones are available for your use just do the following:
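With imgadm, that’s:

```shell
imgadm avail
```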
The list shows available images, which are uniquely identified by a UUID. You can grab an image by executing the following:
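Using the UUID from the listing (shown as a placeholder here):

```shell
imgadm import <uuid>
```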
You can create a VM based off of an image using
vmadm, which takes JSON manifests of the VM you want. Below is one such manifest that I use to create my development VM. You’ll notice that at the end of the file are properties which tell
vmadm to create a LOFS mount of the
/depot directory (which, keep in mind, is the root of ALL of my other personal ZFS data sets). Using this config, you have to manage these datasets using the
zfs command from the global zone — trust me, this is the simplest way.
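The manifest itself isn’t reproduced here; below is a sketch of what such a manifest might contain, with the image UUID left as a placeholder and the memory and quota values invented for illustration. The filesystems property at the end is what produces the LOFS mount of /depot (the zones/depot dataset mounts at /zones/depot in the global zone by default):

```json
{
  "alias": "dev",
  "brand": "joyent",
  "image_uuid": "<uuid-of-the-base-image>",
  "max_physical_memory": 2048,
  "quota": 20,
  "filesystems": [
    {
      "type": "lofs",
      "source": "/zones/depot",
      "target": "/depot"
    }
  ]
}
```

You would then create the VM with vmadm create -f dev.json.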
If you’re like me and like having lots of VMs, you’ll want to have a place where you can store all of your manifests. I created a ZFS dataset for this purpose under
zones/depot/manifests. I’m not sure why I didn’t put this under the
synthesis dataset, but whatever, this is what I ended up with.
Also, if you use vim, you’ll want to save the manifests using a
.js extension instead of a
.json extension, because vim only uses highlighting for the former.
You’ll notice that you can give your VMs names (aliases). All of the tools in SmartOS identify VMs using UUIDs, which are a pain to type. So, if you choose to use unique aliases for each VM, you can write wrapper scripts that expand the aliases into UUIDs. I use the following script to log in to OS VMs, instead of zlogin.
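The script isn’t preserved here, but a wrapper along these lines would do it, using vmadm lookup to expand the alias into a UUID (the script name and argument order are taken from the invocation shown below):

```shell
#!/bin/ksh
# zvmlogin: log in to an OS VM by alias instead of UUID.
# Usage: zvmlogin <user> <alias>
user=$1
alias=$2
# -1 makes vmadm lookup fail unless exactly one VM matches the alias.
uuid=$(vmadm lookup -1 alias=$alias)
exec zlogin -l $user $uuid
```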
zvmlogin nickziv dev
You’ll probably want to make such scripts for common VM-related things you do in the global zone. The bash shell in the global zone can do tab-completion for those UUIDs if you start typing them, but I find it unsatisfactory — it requires listing the VMs and eyeballing the first 4 to 6 characters of the UUID.
In an OS VM you install packages through the pkgin utility, which is a wrapper over the cross-platform pkgsrc packaging infrastructure from NetBSD. You search for a package using the search subcommand. You install a package using the install subcommand, which can be abbreviated to in. The -y flag tells the utility to assume “yes” for all questions.
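For example (tmux is just a stand-in package):

```shell
pkgin search tmux   # find the package
pkgin -y in tmux    # install it, assuming "yes" to all prompts
```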
I maintain a KSH script that installs all of the packages that I need for development, in the
home_files directory I mentioned above. It looks like this:
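The original package list isn’t preserved; a sketch of what devpkgs.ksh would look like, with an illustrative selection of packages:

```shell
#!/bin/ksh
# Install everything needed for development in one shot.
pkgin -y in gcc47 gmake gdb git vim tmux
```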
Whenever I create an OS VM for development, usually to replace an old one, I run the create_nickziv.ksh script to create my user, and then I run devpkgs.ksh to install all packages. Sometimes packages for certain environments are managed outside of
pkgsrc in an environment-specific packaging system (like that used by R). You’ll have to maintain separate scripts for these, as I do for R (called install.R), which can be run like so: pfexec Rscript install.R.
This is a far cry from a full-blown Chef or SaltStack deployment, but it’s worked for me so far, and I see no need at the moment to go with something more complicated.
I hope that my experience is helpful to you. There are other SmartOS topics that I’d like to discuss, but time is limited. Good night, and good luck.