

For a number of reasons (such as personal preferences or remote console problems), an interactive installer might not do what you want.

One particular situation is when you'd like to install OpenIndiana (or another OpenSolaris-derivative OS with a LiveCD/LiveUSB distro) alongside your existing OS in the same ZFS root pool.

Another situation is if you'd like to set up a "split-root" system, with several filesystem datasets comprising your system's Boot Environment, so that you can compress, quota or clone your system components separately. This particular task is detailed in illumos bugtracker bug #829 and in the article Advanced - Split-root installation (note that this conversion can also be done after a successful installation of OI to the HDD with a single rootfs dataset).

In these cases you still have a chance to use the LiveCD media to bootstrap your OpenIndiana system the way you want.

NOTE that this procedure can fail in many ways; recovering from failures or finding other workarounds requires an understanding of the steps involved, of the typical Solaris on-disk layout, of ZFS administration, and so on.


In many of the examples below, samples of commands to run as "root" (or via pfexec) are prefixed with a ":;" prompt.
This simply prepends a call to ":" (the shell's no-op builtin, equivalent to "true") before the sample command and does nothing else, unlike the typically used hash (#) prompt, which acts as a comment and prevents copy-pasted commands from running.
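For example, a line copied together with its ":;" prompt still executes:

```shell
:; echo "this line runs even when pasted with the prompt"
```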


If you want to do everything manually, you don't need to run the official installer shown in the example below. In fact, you must pick one approach (either use the installer, or do everything by hand), because the installer creates the rpool as part of its procedure.

NOTE: This could also be used with a local X11 GUI, but my problematic box has an unsupported video card.

For me the installer was a starting point which failed for reasons outside its control, but it illustrated what structure should be achieved (and it initialized the rpool, which included slicing my disk):

Apparently the installer failed during boot-archive creation? Who knows... who cares? (wink)

Manual installation of OpenIndiana, procedure


The ZFS pool "rpool" is (in the example above) available courtesy of the gui-installer, but just as well could have been made manually, as shown below.

You may find more details in Advanced - Creating aligned rpool partitions and Advanced - Creating an rpool manually so this page just provides a short overview.

Steps below were recreated from "zpool history" and active dataset attributes:
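A hedged sketch of what those steps may look like (the disk slice c1t0d0s0, the volume sizes and the BE name "openindiana" are placeholders from my setup, so adjust them to yours):

```shell
# Root pool on a Solaris slice (GRUB of this era boots from a slice, not a whole disk)
:; zpool create -f rpool c1t0d0s0
# Container for boot environments, and the BE root dataset itself
:; zfs create -o canmount=off -o mountpoint=legacy rpool/ROOT
:; zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/openindiana
# Swap and dump volumes (sizes are examples)
:; zfs create -V 2G -b 4k rpool/swap
:; zfs create -V 1G rpool/dump
# Tell GRUB which BE to boot by default
:; zpool set bootfs=rpool/ROOT/openindiana rpool
```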

This is also one of the places where you can customize stuff for the whole rpool, like require "-o checksum=sha256 -o copies=3" etc. for the paranoid – or for known-bad hardware (wink)
NOTE1: The root dataset (rpool) and the BE dataset (rpool/ROOT/openindiana) should not be compressed (checked, this is still valid as of oi_151a).
NOTE2: Since oi_151a8, you may enable compression=lz4 on root datasets – but earlier builds of GRUB and illumos kernel (i.e. older Live Media) won't be able to mount these roots.

If you used the installer, you were asked for non-root user login credentials. Without such a user the system is unusable, since "root" is an RBAC role (not a user account) by default. That user's home dataset is created below:
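A sketch of the datasets involved (the username "myuser" is a placeholder):

```shell
:; zfs create rpool/export
:; zfs create rpool/export/home
:; zfs create rpool/export/home/myuser
```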

Note that the installer also sets "mountpoint" explicitly for datasets under rpool/export, but this is not strictly required.

This is also the time when you can create and mount sophisticated dataset hierarchies. I, for example, make an "rpool/SHARED/var" structure with some components of "/var/*" separated into datasets for better quota management; they are shared between BEs by virtue of "/etc/vfstab" entries. This is an advanced topic I wrote about in detail in Advanced - Split-root installation, so if you don't know why or how you'd build this monstrosity – just don't do it (wink) (though there are certain benefits, especially for space-constrained systems).


In order to speed up the installation, I stored the installer image in the HDD rpool.

Note that the copy below can also be done directly from the LiveCD or LiveUSB with "dd", but the point of my exercise was to avoid interactions with the slow, unreliable virtual device provided by a jKVM applet (wink)

So I copied it over the network, which therefore has to be set up first. If you're lucky, OpenIndiana's NWAM service and your LAN's DHCP will take care of this (provided your LAN driver is included in the booted LiveCD image). Inserting a custom driver (as was needed for one of the two Marvell Yukon cards on another of my boxes) can be done with USB, floppy or a reimaged installation ISO; that's out of scope here.

If NWAM doesn't work for you, or you have no DHCP in this network segment, you should set up the IP addresses manually (if you're following me closely, you'd note that I actually did this before remotely running the gui-install):
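A minimal sketch of doing it by hand (the e1000g0 interface name and the RFC 1918 addresses are examples):

```shell
:; ifconfig e1000g0 plumb
:; ifconfig e1000g0 192.168.1.10 netmask 255.255.255.0 up
:; route add default 192.168.1.1
```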

I created a special dataset for distribution storage:
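For example (the dataset name, mountpoint and compression choice are mine, not requirements):

```shell
:; zfs create -o compression=on -o mountpoint=/export/distribs rpool/export/distribs
```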

Then I copied the oi_151a LiveCD ISO images into it; here it's done by rsync from "myotherhost" (in numerical IP address form) which is the admin workstation.
Depending on your network (in)capabilities, any other way including "cp" or "cat" over NFS, or "netcat" over raw sockets, can be used:
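A sketch of the rsync pull (the host address, paths and ISO filename are examples):

```shell
:; rsync -avP 192.168.1.2:/export/distribs/oi-dev-151a-live-x86.iso /export/distribs/
```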

Here I got the "desktop edition" livecd image. I don't need the graphics environment on the server (so I'll disable the "gdm" SMF service after boot), but many programs will need the libraries anyway.

If you can, check the checksums, e.g.:
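On illumos the digest(1) utility can produce the hash to compare against the published value (the filename is an example):

```shell
:; digest -a sha256 /export/distribs/oi-dev-151a-live-x86.iso
```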


Now I'll try to overlay-mount the local image over paths provided in the gui-installer example above.

First I added a loop device with the newly copied ISO image:
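For example (the path is an assumption; lofiadm prints the name of the new device, typically /dev/lofi/1 on first use):

```shell
:; lofiadm -a /export/distribs/oi-dev-151a-live-x86.iso
```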

Then I overlay-mounted it on top of the installer's image:
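A sketch, assuming the live media is mounted at /.cdrom and lofiadm returned /dev/lofi/1; the -O flag permits the overlay mount:

```shell
:; mount -F hsfs -o ro -O /dev/lofi/1 /.cdrom
```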

Then I "made local" other CD-originated paths:
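On the live media, /usr and /mnt/misc come from compressed archives on the CD; a sketch of re-attaching them from the now-local image (the lofi device numbers are assumptions):

```shell
:; lofiadm -a /.cdrom/solaris.zlib
:; lofiadm -a /.cdrom/solarismisc.zlib
:; mount -F hsfs -o ro -O /dev/lofi/2 /usr
:; mount -F hsfs -o ro -O /dev/lofi/3 /mnt/misc
```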

Note that there is also a path named "/opt" loopback-mounted from "/mnt/misc/opt"; this got automatically redirected for me to the local image.

Resulting mounts:


Now that you've created the storage structure and got a good source image, there's some rsync'ing to do. The Live-running root is a good source for general filesystem structure, as well as the device list matching your hardware. The CD image or its "/usr" might be better sources of installed software.

We use "rsync -x" to copy only the contents of the current FS and not descend into sub-FSes like "/var/run" or "/tmp".

Here goes:
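A sketch, assuming the new BE dataset is already mounted at /a (e.g. by temporarily pointing its mountpoint there):

```shell
# Filesystem skeleton and device links from the running live root
:; rsync -avPx / /a/
# Installed software from the (now local) media image
:; rsync -avPx /usr/ /a/usr/
:; rsync -avPx /mnt/misc/opt/ /a/opt/
```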


Non-root user

The LiveCD clone has the "jack:jack" user embedded. You may want to edit "/a/etc/passwd" and "/a/etc/shadow" to replace him with your own username and perhaps a preferred UID (e.g. the same as on your other LAN hosts).

See also RBAC "/a/etc/user_attr" and/or "/a/etc/sudoers" setup to allow privilege elevation. This is important to be able to administer your system!

If you want "jack" to remain in place, you should change his password after reboot, and perhaps copy his "/jack/" home directory from the running livecd image to "/a/jack/" on the HDD.

Static network access

Non-DHCP networking, if required, should be set up in "/a/etc/hosts", "/a/etc/hostname.DRIVERINSTANCE(:ALIAS)" files, "/a/etc/netmasks", "/a/etc/defaultrouter", "/a/etc/defaultdomain", "/a/etc/resolv.conf" and "/a/etc/nsswitch.conf" – to name a few.
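For example (the interface driver/instance name and addresses are placeholders):

```shell
:; echo "192.168.1.10 netmask 255.255.255.0 up" > /a/etc/hostname.e1000g0
:; echo "192.168.1.1" > /a/etc/defaultrouter
:; echo "192.168.1.0 255.255.255.0" >> /a/etc/netmasks
```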

You might also want proper setup of DNS, maybe NTP or syslogging – but that can all be done after reboot, in a comfortable new system (smile)

Comments in /a/etc/motd

Modify this as you like, if you want your systems to announce just where it is you've logged into! (wink)


Prepare the new OS to (re-)detect hardware, and pre-create its boot archive:
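A sketch of the usual commands for this:

```shell
# Rebuild the device tree for this hardware inside the new root
:; devfsadm -r /a
# Request a reconfiguration boot on first startup
:; touch /a/reconfigure
# Pre-create the boot archive in the new BE
:; bootadm update-archive -R /a
```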

Boot system (GRUB) support

Prepare for actual bootup:
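A sketch (the raw-device name is an example; use the slice your rpool lives on):

```shell
:; installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
```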

Set up a GRUB menu in "/rpool/boot/grub/menu.lst". I copied this from another similar host (along with a number of GRUB resource files), but you can find them in the LiveCD's /boot/grub/ path as well:

Set up "boot signs" to point GRUB to bootable pools:
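Boot signs are empty marker files under the pool's boot/grub/bootsign/ directory that GRUB's findroot matches against; a sketch for a pool named rpool:

```shell
:; mkdir -p /rpool/boot/grub/bootsign
:; touch /rpool/boot/grub/bootsign/pool_rpool
```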

I customized my GRUB menu.lst (adding serial console support and removing splash image; sercon might require additional configuration elsewhere in the BE image), YMMV:
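A sketch of such an entry (the serial unit/speed and the ttya console are examples; the $ZFS-BOOTFS expansion is standard illumos GRUB usage):

```
serial --unit=0 --speed=115200
terminal --timeout=5 serial console
default 0
timeout 10

title openindiana
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/openindiana
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=ttya
module$ /platform/i86pc/$ISADIR/boot_archive
```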


NOTE that technically the "findroot" line is not required for the "current disk" (the pool has been located anyway), and if you omit the "bootfs" line, GRUB will use the "bootfs" attribute value from your rpool to locate your "current" BE.

A small Linux habit – have "/etc/grub.conf" point to the menu. This is not required or used by OpenSolaris itself:
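For example (a relative link, so it resolves the same way after reboot):

```shell
:; ln -s ../rpool/boot/grub/menu.lst /a/etc/grub.conf
```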

If you did any custom filesystem datasets without automount (split out "/var" or such), or added swap datasets – time to edit "/a/etc/vfstab"...

Prepare low-level SMF services

In the new root BE you should set up some SMF services. For example, the livecd variants of services should be replaced with HDD "default" instances.
I embedded an init script to do this, but still had some recovery to do manually. In hindsight, perhaps I should have disabled and re-enabled the services in the running livecd BE and rsynced the "/etc/svc/" SMF repository to the hdd BE, somewhat like this:
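A sketch with one real livecd-vs-default pair (NWAM networking); other service names would be guesses on my part, so verify yours with "svcs -a":

```shell
# Swap the livecd networking instance for the HDD default in the running BE
:; svcadm disable network/physical:nwam
:; svcadm enable network/physical:default
# Then carry the resulting SMF repository over to the new BE
:; rsync -avP /etc/svc/ /a/etc/svc/
```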

  • NOTE: I later learned that svccfg (interactive or scripted via stdin) may be used with an explicitly specified repository /a/etc/svc/repository.db to manipulate the after-boot services of the installation right from the Live environment, though in a pretty bulky way (see the restarter property group).

I also like my SSH access to start up ASAP. For example, if the "/export" path is unmountable due to a "polluted" (non-empty) directory, the "svc:/system/filesystem/local" service breaks and makes default SSH inaccessible, while I like all consoles to be available so as to remotely repair bugs like these. YMMV, but here's the codez:
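One way to express that (an assumption on my part: the dependency is named "fs-local" in the stock ssh manifest, so verify with "svccfg ... listprop" first) is to relax it in the new BE's repository right from the live environment:

```shell
:; export SVCCFG_REPOSITORY=/a/etc/svc/repository.db
:; svccfg -s svc:/network/ssh setprop fs-local/grouping = astring: optional_all
:; unset SVCCFG_REPOSITORY
```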

Finally, reboot and hope all works; have your console access ready:
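A sketch, assuming you had pointed the BE's mountpoint at /a for the copy:

```shell
:; bootadm update-archive -R /a       # refresh the boot archive one last time
:; zfs umount rpool/ROOT/openindiana
:; zfs set mountpoint=/ rpool/ROOT/openindiana
:; init 6
```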


If you're asked for recovery login, and have not yet changed the root password, enter "root" as username and "openindiana" as the default (livecd) password.
If you're offered normal login, use your unprivileged account (like "jack:jack") and elevate privileges with "pfexec", "sudo", "su", "sudo su -" or such.

While debugging boot problems, you might want to use the "-v -m verbose" options to the kernel in GRUB, to report the kernel's and SMF's boot activities on /dev/console.
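For example, in the GRUB entry (only the added options differ from a normal kernel line):

```
kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -m verbose -B $ZFS-BOOTFS
```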

Some programs might not work in the installed environment, because the active PAM configuration contradicts the preconfigured password hashes in some way. It is recommended to redefine the root and unprivileged user passwords. Note that if you install other PAM modules later, notably for the kCIFS server to authenticate CIFS users, you'd need to redefine the passwords again, in order for them to work with these new mechanisms.