
OpenIndiana supports several methods of virtualization:



The global zone is the first zone to boot and requires hardware access. From the global zone, non-global zones are created and booted. Boot time for non-global zones is very fast, often a few seconds. The CPU, network, and memory resources for each zone can be controlled from the global zone, ensuring fair access to system resources. As with other forms of virtualization, each zone is isolated from the others: zones cannot see processes or resources used in other zones. The low marginal cost of a zone allows large systems to host tens or even hundreds of zones without significant overhead. The theoretical limit to the number of zones on a single platform is 8,192.

An easy way to implement zones is to use a separate ZFS file system as the backing store for each zone root. File systems are cheap to create in ZFS, and zones can take advantage of the ZFS snapshot and clone features. Because of the strong isolation between zones, sharing a file system between them must be done with traditional file-sharing methods (e.g., NFS).

When a zone is created, it comes with a minimal set of packages; from there you can add and use most packages and applications as required.

Quick Setup Example

For each zone (in a simple configuration), you only need a few pieces of information:

  • The zone's name - something you can remember it by. For this example the zone is named example_zone.
  • The physical NIC - the physical network card the zone will share. This example uses e1000g0.
  • An IP address the zone will use.
  • The mount point in the global zone for the zone's file system. This example uses /export/example_zone.

As a user with the Primary Administrator role, you create the zone with:

zonecfg -z example_zone


This puts you inside the zone configuration program, where you can change and update settings for the zone specified with -z. zonecfg organizes its settings into resource groups, and you add a new resource with add. The most important resource is the virtual network interface: it is added with add net, its details are then set, and end closes the editing of that resource.

add net

set physical=e1000g0

set address=

end


Then tell the zone where its root file system will be created and mounted in the global zone:

set zonepath=/export/example_zone

Then commit the changes and exit the zone configuration program.
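Committing and exiting are themselves zonecfg subcommands; a minimal sketch of the final steps of the session (the verify step is optional but useful):

```shell
verify    # check the configuration for errors before committing
commit    # write the zone configuration to stable storage
exit      # leave the zone configuration program
```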



Now all that remains is to install and boot your zone. The install process downloads the basic packages from your IPS repository, and boot then performs a virtual hardware boot of your new zone.

zoneadm -z example_zone install

zoneadm -z example_zone boot
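You can watch the zone move through its states (configured, installed, running) with zoneadm; a quick sketch (the output columns may differ slightly between releases):

```shell
# List all zones, including those not yet running, with state, root path and brand
zoneadm list -cv
```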

While booting for the first time, the zone will ask you some basic configuration questions (you can set all of this up via zone configuration beforehand). To log in to the zone's local console (as if you were sitting in front of a real machine as it boots), type:

zlogin -C example_zone

Answer the questions (when it asks for a terminal type, the answer will in most cases be xterm). Once done, you can log in locally (you will get a login prompt), or you can ssh in via the IP address you provided to zonecfg.
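From the global zone you can also use zlogin without -C once the zone is running (a console session opened with -C is detached with the ~. escape sequence):

```shell
zlogin example_zone              # open an interactive shell inside the running zone
zlogin example_zone uname -a     # run a single command inside the zone and return
```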

That is it: your zone is now up and running. Since zones start with a minimal configuration, you will likely be missing many of the niceties you would expect. All are available via IPS packaging; for example, if you miss the editor nano, then from your example_zone command prompt type:

pkg install nano

In general you are likely to want to install many packages, depending on what you are using the zone for.
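If you are unsure of a package's name, the IPS client can search the configured repositories; a brief sketch:

```shell
pkg search -r nano    # search the remote repositories for packages matching "nano"
pkg list              # list the packages currently installed in this zone
```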

7.2 xVM

7.3 Other

  • Local zones (containers) allow running processes using the same OpenIndiana kernel and system resources. Overheads are the lowest; processes are isolated, but they are still UNIX processes. It is possible to use "branded" zones to emulate certain releases of other UNIX OSes, including Solaris 8, 9 and 10 (specify supported release-update numbers), and a Linux environment similar to kernel 2.4-based RHEL/CentOS.
  • KVM (correction may be needed): Type-1 hypervisor with QEMU-based KVM (since development release oi_151a), where each VM is a kernel-space process with smaller latencies and overheads in comparison to Type-2 virtualization. The published version in illumos-gate is currently limited to Intel CPUs with EPT extensions, although porting work is underway (as of March 2013) to add some AMD CPU support.
    Xen-based virtualization (also rooted in QEMU?) was at one time part of the OpenSolaris project, but became outdated and was ultimately dropped due to lack of resources and invested interest.
  • VirtualBox: Type-2 hypervisor with the VirtualBox software running in a global or local zone. Each VM is a userspace process on the physical host.

Hypervisor-based virtualization (not zones) allows running different operating systems, such as Windows and Linux, in virtual machines hosted on an OpenIndiana machine. Note that you can only have one hypervisor engine accelerated by hardware (i.e. CPU extensions) active at any one time. Operating-system-level virtualization (zones, containers) requires the guest operating environments to use the same kernel as the host, though possibly with a different set of user-space programs (especially in the case of branded zones), and is often combined with hypervisor-managed VMs for fine-grained resource control or administrative access to the VMs, or simply to group them by project.


There was a bug, #1723 in illumos-gate (fixed in the gate, changeset 13903; not yet included in OpenIndiana as of oi_151a7), which caused hypervisor drivers (for KVM and VirtualBox) to hang during boot of the host on systems with x2APIC (mostly Sandy Bridge).

The workaround is to disable apix (note that changes to /etc/system take effect only after a reboot):

echo "set apix_enable=0" >> /etc/system 

A proper solution is to update your kernel with a local rebuild of illumos-gate (or to update to a newer release of your OS, when available).

Some of the more generic available options are covered in this chapter. Depending on your deployment goals and resources, and not least on your networking topology, you may need much more complicated setups than are explained here. Some such setups are detailed on other pages in this wiki and the illumos wiki; others can be found on blogs throughout the Internet.