

The following page describes some steps that may be useful to streamline migration of a legacy SVR4-based Solaris distribution, such as OpenSolaris SXCE, into a native OpenIndiana installation. The instructions are being developed (as notes from the battlefield) and tested on a number of systems running SXCE builds between snv_117 and snv_129, upgrading into oi_151a4, and may be incomplete. Even the rough high-level overview provided today is already quite bulky.

Also note that this procedure may or may not be suitable for Sun Solaris releases; if migration into native OpenIndiana per se is not your goal, you may have better chances with a P2V or V2V migration into branded zones (s8, s9, s10 brands as applicable). In this case you may first have to update the Solaris release to the specific version supported by the corresponding branded zone engine.


The OpenSolaris project over its lifetime at Sun Microsystems delivered a number of distributions, the longest-surviving of which were OpenSolaris SXCE (Solaris eXpress Community Edition), based on SVR4 packaging like Solaris 10 and many versions before it, and OpenSolaris Indiana ("the" OpenSolaris) based on IPS packaging – and a direct predecessor to the OpenIndiana distribution. The OpenSolaris SXCE project was discontinued by Sun after release of build snv_130-based ISO images, and OpenSolaris Indiana was last released as the 2009.06 distribution based on build 111b, with pre-compiled upgrades published through to build 134b, when public development was severed by Oracle.

As it stands, due to its closer compatibility in administrative techniques with the large deployed base of SVR4-based Sun Solaris 10 and earlier systems, the SXCE distribution gained larger popularity at its time, and some of its more stable builds remain in production today. Still, shops cannot remain on an aging unsupported OS forever, and one of the options to move forward and gain new features (as well as bug fixes, security fixes, and now a well-documented procedure to rebuild the OS and fix things yourself, for those who desire to) is to migrate to OpenIndiana. Unfortunately, there is no "wizard" for such a migration, and unlike OpenSolaris Indiana, which can be upgraded with IPS techniques, updating OpenSolaris SXCE requires a very manual installation and migration of settings and installed software, due to significant changes in the packaging of both global and local zones.
NOTE: it was not tested whether it is possible or practical to migrate SXCE into Indiana of the same build, and then upgrade the resulting system using IPS. That technique also sounds promising.

Also note that these instructions assume that the boot environment is in a ZFS root pool and that it has adequate free space for the new OS image. If the old system is running with a UFS root, it is possible to migrate it to a ZFS root first (using SXCE after approximately build 103 or close to that, and Solaris 10u6 or so) with LiveUpgrade or manual file copying, but the process may require an available partition (a spare disk, breaking of mirrors, etc.).

The author of this page is doing the migration for a number of systems he helps support, and these notes are published to help himself and others replicate the process in the future. If you find these notes lacking, or would like some remote assistance, feel free to contact me.

Outline of the migration process

The process aims to install an OpenIndiana-based (OI) root filesystem alongside the old SXCE root, migrate core settings and software, and first of all prepare the global-zone system to look and behave the same as the old environment while running OI. After that, a subset of this routine can be applied to migrate the local zones (in my tests it did not prove possible to directly execute SXCE local zones on an OI host, nor to automatically upgrade them to IPS). For a system with local zones, the migration is likely to incur considerable downtime of end-user services while the zones are being migrated and tested.

It is not possible to install OI with its LiveMedia wizard to an existing root pool, because the installer starts by creating a new rpool of the current version in the provided partition. However, it is possible to copy an existing OI environment's files to a dataset (or dataset hierarchy) created in the existing root pool, and merge this copy with the existing operating environment's files which define its "identity" (including networking settings and configured device names). This has been tested at least with snv_117 and a ZFSv15 root pool, including the older snv_117 GRUB.

Installation of the OI image onto an existing rpool

  • Create the OI dataset (e.g. rpool/ROOT/oi_151a4) and populate it with an OI image, perhaps as detailed in these instructions
  • zfs snapshot the copied OI image before you break it too badly
  • Mount the copied OI image into /a
  • Add the new BE into /rpool/boot/grub/menu.lst to allow selecting it for boots. It is advisable not to use beadm and similar tools during the migration, so they don't have a chance to make the old BEs unbootable.
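
For reference, a GRUB entry for the new BE might look like the following (a sketch; the pool and dataset names are the examples used on this page, and the findroot signature must match the one used by the existing entries in your menu.lst, or can be omitted if your existing entries do without it):

```
title oi_151a4 (migrated)
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/oi_151a4
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
```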

Merging devices

The following steps are needed for the new OI-based global-zone boot environment to "know" the same devices as were configured in SXCE, with the same device names. It would be very inconvenient to have the NICs or HBAs change names between reboots...

  • Make a snapshot of current root:

    :; SNAPNAME=preMigration
    :; BENAME=snv_117
    :; zfs snapshot -r rpool/ROOT/$BENAME@$SNAPNAME
  • Go into the snapshot and copy the device driver links and databases to the new BE:

    :; cd /.zfs/snapshot/$SNAPNAME
    :; rm -rf /a/dev /a/devices /a/etc/dev /a/etc/devices
    :; cp -prf dev /a/dev
    :; cp -prf devices /a/devices
    :; cd etc
    :; cp -prf dev /a/etc/dev
    :; cp -prf devices /a/etc/devices
    :; cp -prf dladm /a/etc/dladm
    :; ### TODO: more files here?
    :; for F in path_to_inst name_to_major \
       name_to_sysnum minor_perm driver_aliases driver_classes hostid \
       vfstab rtc_config system \
       ; do \
         [ ! -f "/a/etc/$F.oi" ] && mv -f "/a/etc/$F" "/a/etc/$F.oi"; \
         cp -pf "$F" "/a/etc/$F.snv"; \
         cp -pf "$F" "/a/etc/$F"; \
       done
    :; touch /a/reconfigure

    The last touch step ensures that the new BE will try to detect hardware upon its first boot and create the missing links for possible new drivers.

  • Copy over the files in /etc: path_to_inst, ...

  • Merge the OI and SNV versions of these files in /etc (such as the name_to_major and name_to_sysnum copied above, ...) to ensure that each number is assigned only once, and that each driver has only one number

  • Merge your possible existing customizations to driver configurations in /kernel/drv/*.conf; for example, on our systems the e1000g configuration is enhanced to allow Jumbo Frames.

  • You may want to delay copying over /etc/zfs/zpool.cache until you're done configuring the basic environment of the new BE, so as to not delay its bootups and to not risk the data on secondary pools needlessly.
  • If your server has a HW watchdog, you may need to copy over the files of the BMC driver (it was a closed driver and is absent from OI), and install FreeIPMI into the new BE.
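
The uniqueness rule for the merged number-assignment files can be sanity-checked with a small script. This is just a sketch (the function name is mine); it reports any driver name or major number that appears more than once in a merged name_to_major-style file:

```shell
#!/bin/sh
# Report duplicate driver names or major numbers in a merged
# name_to_major file (each line is "drivername majornumber").
check_name_to_major() {
    FILE="$1"
    [ -f "$FILE" ] || return 0
    awk '{
        if (seen_drv[$1]++) print "duplicate driver name:", $1
        if (seen_num[$2]++) print "duplicate major number:", $2
    }' "$FILE"
}
```

An empty output means the merge left every driver with exactly one number; any "duplicate" line must be resolved by hand before rebooting into the new BE.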

Migrate networking settings and system identity

This step should ensure that your system (GZ or LZ) is accessible with the same IP address and SSH server keys, as well as those settings which are not in SMF.

  • Check if /etc/motd has any info about the system, and copy those lines into the new BE

  • Copy over /etc/hostname* or /etc/dhcp.* as appropriate to configure the new BE's networking interfaces after boot

  • Copy over (if exist) /etc/defaultdomain, /etc/nodename, /etc/hostid, /etc/gateways, /etc/defaultrouter, /etc/resolv.conf, /etc/nsswitch.conf

  • Inspect and merge /etc/passwd, /etc/shadow, /etc/group, /etc/user_attr

  • /etc/inet/hosts, netmasks, inetd.conf, ntp.conf, ntp.keys, services
  • /etc/inet/dhcpsvc.conf (for DHCP servers)
  • inspect /etc/default/*
  • /etc/ssh (copy host keys, merge config)
  • /etc/ipf or /etc/opt/ipf (for private builds of IPFilter)
  • /etc/mail
  • /etc/dumpadm.conf
  • /etc/auto_*
  • /etc/dfs/dfstab (for NFS servers)
  • ... TODO: some more system settings
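
The bulk of this list can be scripted with a small helper. This is a sketch under the assumptions of this page (new BE mounted at /a; the copy_if_present helper and the file list are my own examples); it keeps the OI original as a ".oi" backup, the same convention as the /etc merge step earlier:

```shell
#!/bin/sh
# Copy identity files from the running SXCE root into the new BE,
# preserving the OI originals as "$file.oi".
NEWBE=${NEWBE:-/a}

copy_if_present() {
    SRC="$1"
    DST="$NEWBE$1"
    DSTDIR=$(dirname "$DST")
    [ -f "$SRC" ] || return 0              # nothing to copy
    [ -d "$DSTDIR" ] || return 0           # new BE not mounted here
    [ -w "$DSTDIR" ] || return 0           # not writable (not root?)
    if [ -f "$DST" ] && [ ! -f "$DST.oi" ]; then
        mv "$DST" "$DST.oi"                # keep the OI original once
    fi
    cp -p "$SRC" "$DST"
}

for F in /etc/nodename /etc/defaultdomain /etc/defaultrouter \
         /etc/resolv.conf /etc/nsswitch.conf /etc/dumpadm.conf \
         /etc/inet/hosts /etc/inet/netmasks ; do
    copy_if_present "$F"
done
```

Files that need merging rather than plain copying (passwd, shadow, group, sshd_config, ...) should still be inspected by hand.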

Go over /var

  • /var/spool/cron
  • /var/spool/*mqueue, /var/mail if not in separate datasets
  • databases (mysql, pgsql, slocate, etc.) if not in separate datasets

In-house unpackaged software

Usually in /opt, /usr/local with pieces in /etc and /var.

Third-party SVR4 packages

For software not delivered with OI and IPS (perhaps after you reboot into the new BE and pkg install the needed additions), you can script up processing of the /var/sadm/install/contents registry to pick up the package files and copy them over to the new BE (as well as the matching lines of this registry and the package info artifacts in /var/sadm/pkg) for the packages that you need – like third-party SMC* or in-house packages – and then go over your original system to find package-dependent data files, logs and configuration files.

It might be somewhat expensive, but safe and quick, to copy over whole /opt and /usr/local directories into the new BE.
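
A sketch of the contents-registry processing mentioned above (the pkg_files function name is mine, and SMCcurl below is just an example package): the field layout of a contents line varies by entry type, but the owning package names are always the trailing fields, so it is enough to scan each line for an exact match and print the path, stripping the "=target" part of symlink and hardlink entries:

```shell
#!/bin/sh
# List the files registered to one SVR4 package in the contents database.
pkg_files() {
    PKG="$1"
    CONTENTS="${2:-/var/sadm/install/contents}"
    [ -f "$CONTENTS" ] || return 0
    awk -v pkg="$PKG" '$1 !~ /^#/ {
        for (i = 2; i <= NF; i++)
            if ($i == pkg) {
                split($1, a, "=")   # "path=target" -> "path"
                print a[1]
                break
            }
    }' "$CONTENTS"
}
# Example use (run from / as root; adjust to taste):
#   pkg_files SMCcurl | cpio -pdm /a
```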

SMF services

svccfg export $service > /a/var/svc/manifest/.../$service.xml to save the customized or third-party service configurations into XML files, then import them in the new BE with svccfg import .../$service.xml.
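
For several services this can be scripted; a sketch follows (the service FMRIs are placeholders, and the site manifest directory is the conventional location for local manifests):

```shell
#!/bin/sh
# Export customized SMF services from the old BE into the new BE's
# site manifest directory, mangling "/" in FMRIs into "-" file names.
manifest_path() {
    # e.g. "site/myapp" -> "/a/var/svc/manifest/site/site-myapp.xml"
    echo "/a/var/svc/manifest/site/$(echo "$1" | tr '/' '-').xml"
}

if command -v svccfg >/dev/null 2>&1; then
    for S in site/myapp site/mydb ; do   # example service names
        svccfg export "$S" > "$(manifest_path "$S")"
    done
fi
```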


Revise and copy/merge the files and links in /etc/init.d, /etc/rc?.d/.

Prepare to reboot into new BE

bootadm update-archive -R /a

Unmount the new root dataset (hierarchy).

Set canmount=noauto and mountpoint=/ on the new root; inherit canmount and mountpoint on its optional children.

Snapshot (recursively) the new root.
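
Using the example names from this page, the pre-reboot steps above might look like this (a sketch; unmount any child datasets first, and adjust BENAME to your actual dataset):

```
:; BENAME=oi_151a4
:; zfs umount rpool/ROOT/$BENAME
:; zfs set canmount=noauto rpool/ROOT/$BENAME
:; zfs set mountpoint=/ rpool/ROOT/$BENAME
:; zfs snapshot -r rpool/ROOT/$BENAME@preFirstBoot
```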

reboot -p to "reboot via PROM" (BIOS and POST on x86), so you can see the GRUB menu and select the new BE's entry to test how it goes.

Check the new running BE

The first boot may be puzzled and complain about networking while it gets acquainted with the new hardware and device links; however, svcadm restart network/physical:default on the system console should not complain.

If your server has a HW watchdog, check that its driver is still known and the SMF service is running, and that the deadman timer is being reset (e.g. with bmc-watchdog -g) before it is too late and the server gets reset by the BIOS.

Filesystems defined in /etc/vfstab and those automounted by ZFS should be mounted; data pools may need to be reimported when you're done with the OS. Use format and/or cfgadm to verify that all of your disks are seen, and at the same device node names.

Import SMF manifests exported from the old BE.

Check svcs to inspect possible failed services and to disable those you don't need.

pkg install the packages you may require, and pkg image-update the new system to current baseline.

What to do next

When you're satisfied with the new global zone, set it to be the default in GRUB's menu.lst file and proceed to creation of local zones and similar migration of zone data from the old system.
