The Distribution Constructor (the software that produces the final ISO Image) does not work inside a local zone.
OpenSolaris' (and its descendants') local zones support two approaches to networking: a shared IP stack and an exclusive IP stack. Each has its pros and cons – they are somewhat tools for different jobs.
If your build machine is a VM on a platform which requires the hypervisor host to "know" the VM's MAC addresses – such as VirtualBox running on Solaris with the "bridged networking" mode – your build zone must use a NIC defined by "VM hardware". You still have two options though:
If you plan to use VirtualBox bridged networking on a VM under Solaris-related OSes, see the VirtualBox User Guide for more details on this setup – in particular, the host Solaris machine is also encouraged to use VNICs with explicitly defined MAC addresses in order to attach the VM NICs to them.
The example below creates the zone root under the rpool ZFS pool.
If your machine has other pools, perhaps bigger and/or more performant (e.g. by using L2ARC caches), you may want to use a different pool for zone data (by delegating whole datasets or lofs-mounting individual paths – such as your build workspaces), or even for the whole zone roots. As an alternative to loopback mounts, the NFS client in a local zone can use the global zone's NFS server, but that is likely to be slow for compilation in particular (especially if sync is enabled without a fast ZIL).
(Note that some OpenSolaris descendants may officially not support holding local zone roots separately from OS roots; likewise, earlier OpenSolaris releases officially disapproved of local zones using their host global zone's NFS server – although it "just worked".)
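As a hedged sketch of the delegation and lofs-mounting alternatives mentioned above (the fastpool pool name and all paths here are hypothetical examples), the corresponding zonecfg resources could look like:

```
### lofs-mount a global-zone path into the zone:
add fs
set dir=/export/builds
set special=/fastpool/builds
set type=lofs
end

### ...or delegate a whole ZFS dataset to the zone:
add dataset
set name=fastpool/zones/zone1-data
end
```

A delegated dataset lets the zone administer its own ZFS children and snapshots, while a lofs mount only exposes an existing directory tree.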
The ZFS properties set here will be inherited by the actual local zone datasets hosted under this container dataset.
The guide below assumes you are on a LAN with DHCP, so we will create a Crossbow VNIC on your primary network interface (assumed to be e1000g0 here; adapt as necessary). You can skip this step if the local zone will use shared networking or a dedicated (VM-)hardware NIC.
If needed, you can also bind the VNIC to a particular VLAN of your external network with the VLAN ID (-v) option of dladm create-vnic.
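The VNIC itself can be created along these lines (the vnic0 name matches the zone configuration shown later in this guide; the link name and the VLAN number are examples – adjust them to your network):

```
$ pfexec dladm create-vnic -l e1000g0 vnic0

### Or, if the zone should sit on VLAN 42 of the external network:
$ pfexec dladm create-vnic -l e1000g0 -v 42 vnic0
```

You can verify the result with dladm show-vnic before configuring the zone.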
$ pfexec zonecfg -z zone1
create
set autoboot=true
set zonepath=/zones/build/zone1
set ip-type=shared
add net
set address=192.168.1.181/24
set physical=e1000g0
set defrouter=192.168.1.1
end
exit
NOTE: It may be possible to save on internet traffic by installing from a locally configured mirror of the OpenIndiana IPS repositories, set as the default for your system. This is not explored below, but you're welcome to try that and write a how-to chapter here.
$ pfexec zoneadm -z zone1 install
A ZFS file system has been created for this zone.
   Publisher: Using openindiana.org (http://pkg.openindiana.org/dev/).
       Image: Preparing at /zones/build/zone1/root.
       Cache: Using /var/pkg/publisher.
Sanity Check: Looking for 'entire' incorporation.
  Installing: Packages (output follows)
           Packages to install: 136
       Create boot environment:  No
Create backup boot environment:  No
            Services to change:   4

DOWNLOAD                                PKGS       FILES    XFER (MB)
Completed                            136/136 28050/28050  149.9/149.9

PHASE                                        ACTIONS
Install Phase                            41423/41423

PHASE                                          ITEMS
Package State Update Phase               136/136
Image State Update Phase                     2/2

Note: Man pages can be obtained by installing pkg:/system/manual
 Postinstall: Copying SMF seed repository ... done.
 Postinstall: Applying workarounds.
        Done: Installation completed in 267.072 seconds.

  Next Steps: Boot the zone, then log into the zone console
              (zlogin -C) to complete the configuration process.
Zone creation as described above can fail for a number of reasons, sometimes obscure (e.g. you followed different steps, or your current OS deviates from the one used when writing this how-to).
Check the commands' output for errors, such as inability to mount the created dataset (before the
This should automatically create and populate the local zone's dataset hierarchy:
### Review defined zones
$ pfexec zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              ipkg     shared
   - zone1            installed  /zones/build/zone1             ipkg     excl

### Review zone config in the way good for cloning
$ pfexec zonecfg -z zone1 export
create -b
set zonepath=/zones/build/zone1
set brand=ipkg
set autoboot=true
set ip-type=exclusive
add net
set physical=vnic0
end

###
### UNSUPPORTED methods follow
###

$ pfexec grep zone1 /etc/zones/index
zone1:installed:/zones/build/zone1:4a79102b-8029-67e7-b394-c8d4000d5950

$ pfexec cat /etc/zones/zone1.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE zone PUBLIC "-//Sun Microsystems Inc//DTD Zones//EN" "file:///usr/share/lib/xml/dtd/zonecfg.dtd.1">
<!-- DO NOT EDIT THIS FILE. Use zonecfg(1M) instead. -->
<zone name="zone1" zonepath="/zones/build/zone1" autoboot="true" brand="ipkg" ip-type="exclusive">
  <network physical="vnic0"/>
</zone>
You might want to clone another zone with these basic packages later, or roll back to this state:
$ pfexec zfs snapshot -r rpool/zones/build/zone1@initialPackages
This step answers the questions you would otherwise have to answer manually via a console-based wizard upon the first boot. The wizard will now take its answers from the file (it may still ask questions not covered in the file, e.g. if future versions of the wizard define new questions).
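For reference, a minimal sysidcfg answer file could look like the following sketch (every value here is an example, not a prescription; the root_password field expects an encrypted hash, elided here):

```
system_locale=C
terminal=xterm
timezone=UTC
network_interface=PRIMARY {hostname=zone1}
name_service=DNS {domain_name=example.com
                  name_server=192.168.1.1}
root_password=<encrypted-hash>
security_policy=NONE
nfs4_domain=dynamic
```

Any keyword left out of the file will simply be asked interactively by the wizard on the zone console.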
After boot you may also want to make sure that in the zone's /etc/hosts file the static IP address (set by the global zone) is associated with the zone's network name (short hostname and full FQDN).
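For example, the zone's /etc/hosts could contain entries like these (the address matches the shared-networking sample earlier in this guide; the example.com domain is a placeholder for your own):

```
::1             localhost
127.0.0.1       localhost loghost
192.168.1.181   zone1 zone1.example.com
```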
$ pfexec zoneadm -z zone1 boot
You can detach from the console by issuing "~." without quotes (or "~~." over chained Unix remote logins, so that the escape sequence reaches the inner session rather than dropping the outer one).
Once the zone has booted, you can copy the DNS resolution settings from the global zone into the local zone (if networking is the same, and if your sysidcfg or DHCP setup did not already take care of that):
$ pfexec cp /etc/resolv.conf /etc/nsswitch.conf /zones/build/zone1/root/etc/
You can now zlogin into the local zone with:

$ pfexec zlogin zone1
Note that the zone's network access may be subject to external firewalls on your LAN, a required proxy server, etc.
You might want to clone another zone from these presets along with working networking:
$ pfexec zlogin zone1 init 5
$ pfexec zfs snapshot -r rpool/zones/build/zone1@initialNetSetup
$ pfexec zoneadm -z zone1 boot
You can follow the illumos and OpenIndiana subproject guides on setting up the recommended environments (compilers, source code repositories, etc.):
Finally, you'd likely want to define the build-user account in the local zone, perhaps reusing that user's common home directory from the global zone via lofs-mounting, or via the NFS client and automounter. For that user you may want to define sudo access rules and/or RBAC profiles to elevate privileges, e.g. to install the built software (see HOW-TO Setup referential build zone for OpenIndiana Addon Consolidations for more details on that).
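As a hedged sketch of such a setup (the builder user name, the UID, the paths and the sudo rule are all hypothetical examples), the home-directory reuse and account creation could look like:

```
### In the global zone: lofs-mount the user's home into the zone
$ pfexec zonecfg -z zone1
add fs
set dir=/export/home/builder
set special=/export/home/builder
set type=lofs
end
exit

### In the local zone: create a matching account (same UID as in
### the global zone, so file ownership on the shared home agrees)
$ pfexec zlogin zone1
# useradd -u 1001 -g staff -d /export/home/builder -s /bin/bash builder
# echo 'builder ALL=(ALL) ALL' >> /etc/sudoers
```

Keeping the UID identical on both sides is the important part; otherwise the shared home directory ends up with mismatched ownership.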
You might want to clone another zone from these presets now, so as to instantly start working in the clone:
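For instance, a final snapshot and a clone from it could be made along these lines (the zone2 name and the readyToWork snapshot label are hypothetical; with exclusive IP the clone needs its own VNIC and zonepath, so review the edited config before installing):

```
$ pfexec zlogin zone1 init 5
$ pfexec zfs snapshot -r rpool/zones/build/zone1@readyToWork

### Define a new zone by reusing the exported config with names adjusted,
### then clone its data from zone1:
$ pfexec zonecfg -z zone1 export | sed -e 's,zone1,zone2,g' | pfexec zonecfg -z zone2
$ pfexec zoneadm -z zone2 clone zone1
$ pfexec zoneadm -z zone2 boot
```

On ZFS-backed zone roots the clone is a cheap snapshot-based copy, so new build zones can be spun up in seconds.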