OpenIndiana supports several methods of virtualization:
The global zone is the first zone to boot and requires hardware access. From the global zone, non-global zones are created and booted. Boot time for non-global zones is very fast, often a few seconds. The CPU, network, and memory resources for each zone can be controlled from the global zone, ensuring fair access to system resources. As with other forms of virtualization, each zone is isolated from the other zones – zones cannot see processes or resources used in other zones. The low marginal cost of a zone allows large systems to have tens or even hundreds of zones without significant overhead. The theoretical limit to the number of zones on a single platform is 8,192.
An easy way to implement zones is to use a separate ZFS file system as the backing store for each zone's root. File systems are easy to create in ZFS, and zones can take advantage of the ZFS snapshot and clone features. Due to the strong isolation between zones, sharing a file system must be done with traditional file sharing methods (e.g., NFS).
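As a sketch of the ZFS approach described above (the pool name rpool and the zone name example_zone are illustrative, not prescribed by this page):

```
# Create a parent dataset to hold all zone roots (names are examples).
zfs create rpool/zones
# One dataset per zone; zonecfg's zonepath will point at its mountpoint.
zfs create rpool/zones/example_zone
# zoneadm requires the zonepath directory to be owned by root with mode 700.
chmod 700 /rpool/zones/example_zone
```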
When each zone is created it comes with a minimal set of packages, and from there you can add and use most packages and applications as required.
Quick Setup Example
For each zone (in a simple configuration), you really only need a few pieces of information: a physical network interface, an IP address, and a location for the zone's root filesystem.
As a user with the Primary Administrator role, you create the zone with
zonecfg -z example_zone
This puts you inside the zone configuration program, where you can change and update settings for the zone specified with -z. zonecfg organizes settings into resource groups; you add a new resource with add. The most important resource is the virtual network card: add it with add net, then set its details, and finally type end to close editing of that resource.
set physical = e1000g0
set address = 192.168.254.200
Then tell zonecfg where the zone's root filesystem (the zonepath) will be created and mounted in the global zone.
Then commit the changes and exit the zone configuration program.
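Putting the steps above together, a minimal zonecfg session might look like the following transcript (the interface name e1000g0, the IP address, and the zonepath are examples; adjust them for your system):

```
$ zonecfg -z example_zone
example_zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:example_zone> create
zonecfg:example_zone> set zonepath=/rpool/zones/example_zone
zonecfg:example_zone> add net
zonecfg:example_zone:net> set physical=e1000g0
zonecfg:example_zone:net> set address=192.168.254.200
zonecfg:example_zone:net> end
zonecfg:example_zone> commit
zonecfg:example_zone> exit
```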
Now all you have to do is install and boot your zone: the install process downloads the basic packages from your IPS repository, and then boot performs a virtual hardware boot of your new zone.
zoneadm -z example_zone install
zoneadm -z example_zone boot
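You can confirm the zone's state at each step with zoneadm's list subcommand; the output below is an illustrative sketch (zone name and path as in the earlier examples), not captured from a real system:

```
$ zoneadm list -cv
  ID NAME          STATUS     PATH                        BRAND    IP
   0 global        running    /                           ipkg     shared
   - example_zone  installed  /rpool/zones/example_zone   ipkg     shared
```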
While booting for the first time, the zone will ask you some basic configuration questions (you can set this all up via zone configuration beforehand). To log in on the zone's local console (as if you were sitting in front of a real machine as it boots), type
zlogin -C example_zone
Answer the questions (when it asks for the terminal type, the answer will in most cases be xterm). Once done, you can log in locally (you will get a login prompt), or you can ssh in via the IP address you provided to zonecfg.
That is it: your zone is now up and running. As zones start with a minimal configuration, you will likely be missing many of the niceties you would expect. All are available via IPS packaging; for example, if you miss the editor nano, then from your example_zone's command prompt type
pkg install nano
In general you are likely to want to install many more packages, depending on what you are using the zone for.
KVM in illumos-gate is currently limited to Intel CPUs with EPT extensions, although porting work was underway (as of March 2013) to add support for some AMD CPUs.
Hypervisor-based virtualization (not zones) allows running different operating systems, such as Windows and Linux, in virtual machines hosted on an OpenIndiana machine. Note that you can only have one hypervisor engine accelerated by hardware (i.e. CPU extensions) active at any one time. Operating-system level virtualization (zones, containers) requires the guest operating environments to use the same kernel as the host, though possibly with a different set of user-space programs (especially in the case of branded zones). Zones are often combined with hypervisor-managed VMs for fine-grained resource control or administrative access to the VMs, or simply to group them by project.
There was a bug (#1723) in illumos-gate (fixed in the gate, changeset 13903; not yet included in OpenIndiana as of oi_151a7) which caused hypervisor drivers (for KVM and VirtualBox) to hang during boot of the host on systems with x2APICs (mostly Sandy Bridge).
The workaround is to disable apix:
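One way to disable apix is via a kernel tunable in /etc/system, followed by a reboot. The tunable name below (apix_enable) is believed to be the relevant one, but verify it against the illumos documentation for your release before applying:

```
# Append to /etc/system (assumed tunable name), then reboot:
set apix_enable=0
```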
A proper solution is to update your kernel with a local rebuild of illumos-gate (or to update to a newer release of your OS, when available).
Some of the more generic available options are covered in this chapter. Depending on your deployment goals and resources, and not least on your networking topology, you may need much more complicated setups than are explained here. Some such setups are detailed on other pages in this Wiki and the illumos Wiki; others can be found on blogs throughout the Internet.