
Documentation has moved

This page has been moved to http://docs.openindiana.org/handbook/systems-administration/#zones. Please don't edit it.

TODO notes from Kevin J. Wooley:

Work in progress -- please pardon the point-form notes to start.

  • (+) light-weight, in most cases zone overhead isn't measurable
  • (+) global zone details
  • (+) details on which resources are shared between zones
  • zone management:
    • (+) creation
    • (+) installation
    • (-) booting, rebooting, and shutdown
    • (-) auto-start on global zone boot
    • (-) uninstallation
    • (-) deletion
    • (-) migration
  • (+/-) zone properties
  • (+) zone networking
  • ...

TODO notes from Jim Klimov: with plus/minus signs I've marked the presence of the subjects in the text. There is minimal detail on some of these, but we'd likely want to keep this chapter compact (I'm notoriously bad at that) and provide references to detailed pages as appropriate for further reading.

Zones are an OpenIndiana feature that provides operating system-level virtualization. Each zone is managed as a completely separate OpenIndiana machine, or, technically, an "Operating Environment" (OE). Zones have very low overhead and are one of the most efficient forms of OS virtualization.

The global zone is the first zone to boot and requires hardware access. From the global zone, non-global zones are created and booted. Boot time for non-global zones is very fast, often a few seconds. The CPU, network, and memory resources for each zone can be controlled from the global zone, ensuring fair access to system resources. Disk space is usually controlled with ZFS (using quotas and reservations if needed), as is the mounting of filesystem resources over NFS or lofs. As with other forms of virtualization, each zone is isolated from the other zones – zones cannot see processes or resources used in other zones. The low marginal cost of a zone allows large systems to have tens or even hundreds of zones without significant overhead. The theoretical limit to the number of zones on a single platform is 8,192.
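Resource controls are applied with the zonecfg utility introduced later in this chapter. As a minimal sketch only (capped-memory and dedicated-cpu are standard zonecfg resources, but the zone name and the values here are purely illustrative), a zone could be limited to two CPUs and 2 GB of RAM like this:

# zonecfg -z example_zone
add dedicated-cpu
set ncpus=2
end
add capped-memory
set physical=2g
end
commit
exit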

Different releases of (Open)Solaris used different packaging and distribution methods for the global zone, and hence different zone brands. On OpenIndiana, zones use the "ipkg" zone brand, which is based on IPS packaging like the global zone itself.

An easy way to implement zones is to use a separate ZFS file system as the zone root's backing store. File systems are easy to create in ZFS, and zones can take advantage of the ZFS snapshot and clone features. Due to the strong isolation between zones, sharing a file system must be done with traditional file sharing methods (e.g. NFS).
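For example, a dedicated dataset for the zone root can be created and capped in advance; this is only a sketch, and the pool and dataset names are assumptions you should adapt to your own layout:

# zfs create -o mountpoint=/export/example_zone rpool/export/example_zone
# zfs set quota=10G rpool/export/example_zone
# chmod 700 /export/example_zone

The chmod is needed because zoneadm expects the zonepath directory to be owned by root and not accessible to other users (mode 700).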

When a zone is created, it comes with a minimal set of packages, and from there you can add and use most packages and applications as required.

Quick Setup Example

For each zone (in a simple configuration), you really only need a few bits of info.

  • The zone's name – something you can remember it by. For this example I'm naming the zone example_zone;
  • The NIC – which physical or virtual network card the zone will use exclusively or share (see below for how to list the available links). For this example I'm using e1000g0;
  • An IP address the zone will use – 192.168.254.200 for this example of shared networking (in exclusive networking the zone sets its own IP address from inside, and can use DHCP);
  • The mount point in the global zone for the zone's file system. For this example I'm using /export/example_zone.
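If you are unsure which network interface name to use, you can list the physical and virtual links from the global zone first:

# dladm show-phys
# dladm show-link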

As a user with the Primary Administrator role, you create the zone with:

# zonecfg -z example_zone

This begins a configuration dialog, similar to the following:

create

This puts you inside the zone configuration program, where you can change and update settings particular to the zone specified with -z. zonecfg groups the configuration into resources; you add a new resource with add, as the next step shows.

add net  

set physical=e1000g0

set address=192.168.254.200

end

The most important resource is the network interface: it is added with add net, its details are set, and end closes the editing of this resource.

This example configures networking in the ip-type=shared mode (default).
You specify the NIC or VNIC and the IP address here.
You can also optionally set a defrouter IP address, if it differs from the one used by the global zone.
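As an aside (not used in this walkthrough): with ip-type=exclusive the zone gets its own IP stack on a dedicated link, typically a VNIC created in the global zone, and no address is set in zonecfg. A minimal sketch, assuming a VNIC named vnic0 created over e1000g0:

# dladm create-vnic -l e1000g0 vnic0

and then, inside zonecfg:

set ip-type=exclusive
add net
set physical=vnic0
end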

set zonepath=/export/example_zone

Then tell the zone where its root filesystem will be created and mounted in the global zone.

verify

Then verify the configuration to check that no mistakes were made.

commit

exit

Then commit the changes and exit the zone configuration program.
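The same configuration can also be applied non-interactively: zonecfg can read its subcommands from a file with -f. A minimal sketch, assuming the subcommands shown above were saved into a file named example_zone.cfg (the file name is arbitrary):

# zonecfg -z example_zone -f example_zone.cfg
# zonecfg -z example_zone info

The info subcommand prints the resulting configuration so you can double-check it before installing.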

Now all you have to do is install and boot your zone: the install process downloads the basic packages from your IPS repository, and then boot performs a virtual hardware boot of your new zone.

While it is possible to infinitely complicate things (to use different IPS repositories, etc.), the simple installation method is:

# zoneadm -z example_zone install
# zoneadm -z example_zone boot
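To confirm that the zone installed cleanly and has reached the running state, list the zones known to the system:

# zoneadm list -cv

The -c option includes configured (not yet installed) zones, and -v prints details such as each zone's state, path, and brand.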

After booting the zone for the first time, you can configure it manually, or you can set up the zone configuration beforehand in an /etc/sysding.conf file (see sysding(1M) and /lib/svc/method/sysding for details). To log in to the zone, type:

# zlogin example_zone

Note that on the first zone boot sysding(1M) will set root's password to NP. Until this has happened, you will not be able to log in to the zone with zlogin, so this command will not work at an early stage of the zone's startup.
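If you want to watch the first boot as it happens, you can attach to the zone's console instead (a standard zlogin mode; type ~. to disconnect):

# zlogin -C example_zone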

Once done, you can log in locally with zlogin example_zone (you will get a login prompt), or you can ssh in via the IP address you provided in the zone configuration.
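If ssh does not answer right away, zlogin can also run a single command inside the zone, which is handy for checking and enabling the service; for example:

# zlogin example_zone svcs ssh
# zlogin example_zone svcadm enable ssh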

That is it: your zone is now up and running. As zones start with a minimal configuration, you will likely be missing many of the niceties you would expect. All are available via IPS packaging; for example, if you miss the editor nano, then from your example_zone command prompt type:

# pkg install nano

In general you are likely to want to install lots of packages, depending on what you are using the zone for.
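If you do not know the exact package name, you can search the configured IPS repositories first; the search term here is just an example:

# pkg search -r nano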


1 Comment

  1. If you want to have a zone with a full GNOME GUI, you can do the following:

    • install the zone with "zoneadm -z <zonename> install -e slim_install" to pull in the GUI support

    Remove "hal" service dependency from the "gdm" service configuration

    • svccfg export gdm >gdm.xml
    • Edit "gdm.xml" and comment out "hal" dependency
    • svccfg import gdm.xml
    • svcadm refresh gdm

    Allow the "dbus" service to be started in the zone

    • Edit /lib/svc/method/svc-dbus and comment out the section with the if block that reads "if smf_is_nonglobalzone; then"

    Enable XDMCP to allow remote graphical login by following the instructions in the Oracle Solaris 11 Desktop Administrator's Guide

    • Edit /etc/gdm/custom.conf and add "Enable=true" to the [xdmcp] section
    • Restart gdm service with "svcadm restart gdm"
    • Enable xvnc-inetd with "inetadm -e xvnc-inetd"