OpenIndiana supports several methods of virtualization:
KVM in illumos-gate is currently limited to Intel CPUs with EPT extensions, although porting work is underway (as of March 2013) to add some AMD CPU support.
Zones are an OpenIndiana feature that provides operating system-level virtualization. Each zone is managed as a completely separate OpenIndiana machine. Zones have very low overhead and are one of the most efficient forms of OS virtualization.
The global zone is the first zone to boot and requires hardware access. From the global zone, non-global zones are created and booted. Boot time for non-global zones is very fast, often a few seconds. The CPU, network, and memory resources for each zone can be controlled from the global zone, ensuring fair access to system resources. As with other forms of virtualization, each zone is isolated from the other zones: zones cannot see processes or resources used in other zones. The low marginal cost of a zone allows large systems to have tens or even hundreds of zones without significant overhead. The theoretical limit to the number of zones on a single platform is 8,192.
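As a sketch of such resource controls, zonecfg supports capped-cpu and capped-memory resource scopes; the zone name myzone and the values below are arbitrary examples, not recommendations:

```shell
# zonecfg -z myzone
zonecfg:myzone> add capped-cpu
zonecfg:myzone:capped-cpu> set ncpus=2
zonecfg:myzone:capped-cpu> end
zonecfg:myzone> add capped-memory
zonecfg:myzone:capped-memory> set physical=1g
zonecfg:myzone:capped-memory> end
zonecfg:myzone> commit
```

Changes to resource caps generally take effect on the next zone boot.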
An easy way to implement zones is to use a separate ZFS file system as the backing store for the zone root. File systems are easy to create in ZFS, and zones can take advantage of the ZFS snapshot and clone features. Due to the strong isolation between zones, sharing a file system must be done with traditional file-sharing methods (e.g. NFS).
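For example, a dedicated ZFS dataset for a zone root can be created along these lines; the pool name rpool and the /zones mountpoint are assumed examples:

```shell
# Create a container dataset for all zones (assumed pool name: rpool)
zfs create -o mountpoint=/zones rpool/zones
# Create a dataset for this zone's root
zfs create rpool/zones/example_zone
# zoneadm requires the zonepath directory to be mode 700
chmod 700 /zones/example_zone
```

With one dataset per zone, each zone root can be snapshotted and cloned independently.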
When each zone is created, it comes with a minimal set of packages; from there you can add and use most packages and applications as required.
Quick Setup Example
For each zone (in a simple configuration), you really only need a few pieces of information, including an IP address:
192.168.254.200 for this example of shared networking (in exclusive networking, the zone sets its own IP address from inside and can use DHCP);
As a user with the Primary Administrator role, you create the zone with:
# zonecfg -z example_zone
This begins a configuration dialog, similar to the following:
First, type create. This puts you inside the zone configuration program, where you can change and update settings particular to the zone specified with the -z option.
The most important resource is a virtual network card; this is added with add net, followed by:
set physical=e1000g0
set address=192.168.254.200
end
This example configures networking in the shared-IP mode, using the global zone's e1000g0 interface.
Then tell the zone where its root filesystem will be created and mounted in the global zone, by setting the zonepath.
Then verify the changes to check that no mistakes were made.
Then commit the changes and exit the zone configuration program.
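Putting these steps together, a complete zonecfg session might look like the following sketch; the zonepath /zones/example_zone is an assumed example location, not a required value:

```shell
# zonecfg -z example_zone
example_zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:example_zone> create
zonecfg:example_zone> set zonepath=/zones/example_zone
zonecfg:example_zone> add net
zonecfg:example_zone:net> set physical=e1000g0
zonecfg:example_zone:net> set address=192.168.254.200
zonecfg:example_zone:net> end
zonecfg:example_zone> verify
zonecfg:example_zone> commit
zonecfg:example_zone> exit
```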
Now all you have to do is install and boot your zone: the install process downloads the basic packages from your IPS repository, and then boot performs a virtual hardware boot of your new zone.
While it is possible to complicate things endlessly (using different IPS repositories, etc.), the simple installation method is:
# zoneadm -z example_zone install
# zoneadm -z example_zone boot
While booting for the first time, you will need to answer some basic configuration questions (you can set all of this up beforehand in an
/etc/sysidcfg file inside the zone). To log in at the zone's local console (as if you were sitting in front of a real machine as it boots), type:
# zlogin -C example_zone
Answer the questions (when asked for the terminal type, the answer will in most cases be xterm for interactive GUI sessions, or ansi or vt100 for headless SSH sessions).
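Alternatively, the first-boot questions can be pre-answered by placing a sysidcfg file in the zone before its first boot (at <zonepath>/root/etc/sysidcfg in the global zone). A minimal sketch, where the hostname, timezone, and other values are assumed examples:

```text
system_locale=C
terminal=xterm
network_interface=PRIMARY {hostname=example_zone}
name_service=NONE
security_policy=NONE
timezone=UTC
root_password=<crypted-password-hash>
nfs4_domain=dynamic
```

The root_password field takes a pre-encrypted (crypt) hash, not a plaintext password; the placeholder above must be replaced with a real hash.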
Once done, you can log in locally with
zlogin example_zone (you will get a login prompt), or you can ssh in via the IP address you provided in the zone configuration.
That is it: your zone is now up and running. As zones start with a minimal configuration, you will likely be missing many of the niceties you would expect. All are available via IPS packaging; for example, if you miss the editor
nano, then from your example_zone command prompt type:
# pkg install nano
In general you are likely to want to install many packages, depending on what you are using the zone for.
Hypervisor-based virtualization (not zones) allows you to run different operating systems, such as Windows and Linux, in virtual machines hosted on an OpenIndiana machine. Note that you can only have one hypervisor engine accelerated by hardware (i.e. CPU extensions) active at any one time. Operating-system-level virtualization (zones, containers) requires the guest operating environments to use the same kernel as the host, though possibly with a different set of user-space programs (especially in the case of branded zones), and is often combined with hypervisor-managed VMs for fine-grained resource control or administrative access to the VMs, or simply to group them by project.
There was a bug, #1723, in illumos-gate (fixed in the gate, changeset 13903; not yet included in OpenIndiana as of oi_151a7) which caused hypervisor drivers (for KVM and VirtualBox) to hang during boot of the host on systems with x2APIC support (mostly Sandy Bridge CPUs).
The workaround is to disable apix:
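This is typically done with a tunable in /etc/system on the host; the setting below is the commonly documented one, but verify it against your illumos release before relying on it:

```text
* Work around illumos bug #1723: disable the apix PSM module
* so the legacy APIC module is used instead (takes effect on reboot).
set apix_enable = 0
```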
A proper solution is to update your kernel with a local rebuild of illumos-gate (or to update to a newer release of your OS, when available).
Some of the more generic available options will be covered in this chapter. Depending on your deployment goals and resources, and not least on your networking topology, you may need much more complicated setups than are explained here. Some such setups are detailed on other pages in this Wiki and the illumos Wiki; others can be found on blogs throughout the Internet.