Child pages
  • Advanced - Split-root installation


Note: you might want to scroll down to "How do I do this?" for initial set-up if you're less interested in the theory and more in the practice, and to "Upgrades" for subsequent pkg upgrades of the setup.

Another note: For other enhanced OI setup ideas see Advanced - Creating an rpool manually, Advanced - Manual installation of OpenIndiana from LiveCD media, Advanced - ZFS Pools as SMF services and iSCSI loopback mounts, Zones as SMF services or Using host-only networking to get from build zones and test VMs to the Internet. Not all of these articles are limited to just what it says on the label (wink)

What are we doing?

Note

The configuration below used to be relatively fragile to set up, so you should not attempt it on remotely-accessed computers without practice, nor without some means of access to the console (be it IPMI or a colleague who can help as your hands and eyes over the phone). It is also not a required setup, though it may be desired (and beneficial) in a number of cases.

Details on this procedure are tracked as illumos/OI issue #829, and hopes for its automation and support in the installer – as #4354.

Warning

This document may contain typos or factual errors. Test before you try. While great care has been taken to verify the sample commands – a full installation was made and split-rooted by copy-pasting this page's instructions to verify them completely – some errors may still lurk... code is code (wink)

UPDATES: This has now been fully walked through for modifying a fresh installation without leaving the LiveCD environment (in VirtualBox) by copy-pasting the commands from this page into a root shell (bash via jack's "sudo su -"). Afterwards it was also verified that the instructions for a subsequent split-root procedure all worked: beadm-cloning and using that as a multi-filesystem source, the lofs variant for a single-dataset origin, and "diving into snapshots". Updating the resulting alternate BEs (with re-enabled compression after a beadm create cloning) also worked.

Only networked cloning remains to be tested (sending of an OS image from an origin system over rsync into the prepared split-root hierarchy on the target system); but since this boils down to different rsync parameters within the same overall methodology – I don't expect any specific problems there. Just remember to think about what you copy-paste into where (wink)

UPDATE 20141130..20141208: Procedure was verified and worked "as is" with OmniOS bloody-151013 (installation from last week's USB image, in-place updated after boot to include split-root setup, beadm-new.sh and pkg -R /a ... update to install the latest bloody bits).

What?

It may be desirable for a number of reasons to install the OI global zone not as a single uncompressed dataset (as was required until recently – before LZ4 compression became supported in GRUB and rpool with the oi_151a8 dev-release), but as a hierarchy of datasets with separate /usr, /var, /opt and maybe other datasets. While some such datasets contain parts of the OS installation, others like /var/mail or /var/log contain "usual" data which you may want shared (not cloned) between the different BEs (Boot Environments). This way, whenever you reboot into one BE or another, such as during development or tests of new releases (and perhaps switching back to a "stable" BE for some reason), your computer's logged history would be appended to the same files regardless of the BE switcheroo.

...

  • One problem for the split-root setup (if you want to separate out the /usr filesystem) is that OpenIndiana brings /sbin/sh as a symlink to ../usr/bin/i86/ksh93. Absence of the system shell (due to not-yet-mounted /usr) causes init to loop and fail early in OS boot.
    When doing the split you must copy the ksh93 binary and some libraries that it depends on from /usr namespace into the root dataset (/sbin and /lib accordingly), and fix the /sbin/sh symlink. The specific steps are detailed below, and may have to be repeated after system updates (in case the shell or libraries are updated in some incompatible fashion).

    Note

    My earlier research-posts suggested replacement of /sbin/sh with bash; however, this has the drawback that the two shells are slightly different in syntax, and several SMF methods need to be adjusted. We have to live with it now – ksh93 is the default system shell, it just happens to be inconveniently provided in a non-systematic fashion. Different delivery of ksh93 and the libraries it needs is worthy of an RFE for packagers (tracked as issue #4351).

  • Another (rather cosmetic) issue is that many other programs are absent in the minimized root without /usr, ranging from df, ls, less and cat to svc* SMF-management commands, vi and so on. I find it convenient to also copy bash and some of the above commands from /usr/bin into /sbin, though this is not strictly required for system operation – it just makes repairs easier (wink) 

  • A much more serious consequence of the absence of programs from /usr is that some SMF method scripts which initialize the system up to the "single-user milestone", including both the default and nwam implementations of svc:/network/physical, rely on some programs from /usr. The rationale is that network-booted miniroot images carry the needed files, and disk-based roots are expected to be "monolithic". It is possible to fix some of those methods (except NWAM in the default setup, at least), but a more reliable and less invasive solution is to mount the local ZFS components of the root filesystem hierarchy (and thus guarantee availability of a proper /usr) before other methods are executed. This is detailed below as the svc:/system/filesystem/root-zfs:default service with the fs-root-zfs script as its method.
    NOTE for readers of earlier versions of the document: this script builds on my earlier customizations of the previously existing filesystem methods; now these legacy scripts don't need many modifications (I did add just the needed checks for whether a filesystem has already been mounted).

  • Separation of /var/tmp into a shared dataset did not work for me, at least at some point in the past (before the new fs-root-zfs service): some existing services start before filesystem/minimal completes (which mounts such datasets), and either the /var/tmp dataset can not mount into a non-empty mountpoint, or (if -O is used for an overlay mount) some programs can't find the temporary files which they expect.
    It is possible that with the introduction of fs-root-zfs this would work correctly, but this is not thoroughly tested yet.

  • Likewise, separation of the /root home directory did not work well: in case of system repairs it might not be mounted at all and things get interesting (wink)
    It may suffice to mount a sub-directory under /root from a dataset in the shared hierarchy, and store larger files there, or just make an rpool/export/home/root and symlink to it from under /root (with the latter being individual to each BE).
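
    For example, a minimal sketch of the symlink approach (the dataset name follows the text above; the symlink name /root/shared is an arbitrary illustration, and a default install already provides rpool/export/home):

    Code Block
    :; zfs create rpool/export/home/root
    :; ln -s /export/home/root /root/shared
    ### Larger files then live in the shared hierarchy under /root/shared,
    ### while the symlink itself remains individual to each BE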

  • Cloning BEs with beadm currently does not replicate the original datasets' "local" ZFS attributes, such as compression or quota or (ref)reservation. If you use pkg image-update to create a new BE and update the OS image inside it, you're in for a surprise: newly written data won't be compressed as you expected it to be – it will inherit compression settings from rpool/ROOT (uncompressed or LZ4 are likely candidates). While fixing this behaviour in beadm is a worthy RFE as well (issue numbers #4355 for pkg and #3569 for beadm and zfs), currently you should work around this by creating the new BE manually, re-applying the (compression) settings to the non-boot datasets (such as /usr), mounting the new BE, and providing the mountpoint to pkg commands. An example is detailed below.
    Note that the bootable dataset (such as rpool/ROOT/oi_151a8) must remain with the settings which are compatible with your GRUB's bootfs support (uncompressed until recently, or with lz4 since recently).  
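
    To double-check the current settings, for example (dataset name as used in this page's examples):

    Code Block
    :; zfs get -r compression rpool/ROOT/oi_151a8
    ### The bootable dataset itself should report off or lz4 (per your GRUB),
    ### while child datasets like /usr may use gzip-9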

  • Finally, proper mounting of hierarchical roots requires modifications to some system SMF methods. Patches and complete scripts are provided along with this article, though I hope that one day they will be integrated into illumos-gate or OI distribution (issue number #4352), and manual tweaks on individual systems will no longer be required.

...

The patched fs-root script (earlier) or the replacement fs-root-zfs script (later) introduces optional console logging (enabled by touching /.debug_mnt in the root of a BE), and enhances the case of ZFS-mounted root and /usr filesystems by making sure that the mountpoints of sub-datasets of the root filesystem are root-based and not something like /a/usr (for all child datasets), and by mounting /usr in overlay mode (zfs mount -O), which takes care of issue #997 at least for the rootfs components – too often have mishaps like these two left an updated system unbootable and remotely inaccessible. It also verifies that the mounted filesystem is "sane" (a /usr/bin directory exists), and with that in place restarts (if online) or clears (if in maintenance state) the networking SMF services svc:/network/physical:default or svc:/network/physical:nwam, and svc:/network/iptun:default. The SMF method scripts for the latter rely on /usr, and these services are dependencies of filesystem/root (see issue #4361). Doing the service restart after making sure /usr is available seems like the "cleanest" and most effective solution.

The fs-usr script deals with setup of swap and dump, and the patch is minor (verify that dumpadm exists, in case sanity of /usr was previously overestimated). For non-ZFS root filesystems in global zone, the script takes care of re-mounting the / and /usr filesystems read-write according to /etc/vfstab, and does some other tasks.

...

While the described patches (see fs-root-zfs.patch for the new solution, or reference fs-splitroot-fix.patch for the earlier solution) are not strictly required (i.e. things can work if you are super-careful about empty mountpoint directories and proper mountpoint attribute values, and the system does not unexpectedly or by your mistake reboot while you are in mid-procedure, or if you use legacy mountpoints and fix up /etc/vfstab in each new BE), they do greatly increase the chances of successful and correct boot-ups in the general case with dynamically-used boot environments, shared datasets and occasional untimely reboots. Also, some networking initialization scripts (notably NWAM) do expect /usr and maybe even /var to be mounted before they run, and the existing filesystem methods (which would mount /usr) happen to depend on them. However, physical:default does run successfully most of the time, missing just the cut command, which can be replaced by a ksh93 builtin implementation.

Specifying which bootfs children or shared datasets to mount

There are several ways to specify which datasets should be mounted as part of the dedicated or shared split-root hierarchy. In the context of descriptions below, the "bootfs children" are filesystem datasets contained within the root filesystem instance requested for current boot via GRUB (explicitly, or defaulting to the value of the ZFS pool's bootfs attribute).
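
For example, to see which root filesystem your pool would boot by default, and what candidates exist:

Code Block
:; zpool get bootfs rpool
:; zfs list -r rpool/ROOT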

  • "Legacy" filesystem datasets with mountpoint=legacy which are explicitly specified in the /etc/vfstab file located inside this bootfs. This allows to pass mount-time options (such as the overlay mount, before it was enforced by the fixed fs-* scripts):

    Code Block
    rpool/ROOT/oi_151a8/usr      -       /usr            zfs     -       no      - 
    rpool/SHARED/var/adm         -       /var/adm        zfs     -       yes     - 

    A drawback of this method for bootfs children is that the file must be updated after each cloning or renaming of the boot environment to match the actual ZFS dataset full name for the particular bootfs.

  • For bootfs children with specified mountpoint paths (and, for the new fs-root-zfs method, a canmount value other than "off"), mounting happens automatically: for /usr as a step in filesystem/root service, for others as a step in filesystem/minimal service.
    Typically the bootfs children specify canmount=noauto, because after BE cloning the rpool would provide multiple datasets with the same mountpoints, causing errors (conflicts) of automatic mounts during pool imports.
    NOTE: Specifying canmount=off for such datasets with un-fixed old service method implementations in place would log errors due to inability to zfs mount such datasets; however, for datasets other than /usr, the return codes are not checked, so this should not cause boot failures.
    • The filesystem methods can use /etc/vfstab to locate over a dozen paths for mounting (backed by any of the supported filesystem types), many of which are not used in default installations. Those which might be used in practice with ZFS include /usr, /var, /var/adm and /tmp; these blocks in the method scripts also include logic to mount such child datasets of the current bootfs if they exist and a corresponding path was not explicitly specified in /etc/vfstab.
      Extensions added by me into the fixed scripts (earlier solution), or provided as the new fs-root-zfs method, allow mounting such paths (except /usr and /var) also from a number of other locations as "shared" datasets – if they were not found as children of the current bootfs.

  • For possibly "shared" datasets, other than the explicitly specified short list (above), the legacy filesystem methods only offer the call to "zfs mount -a" from filesystem/local (way after the "single-user" milestone). This implies specified (non-"legacy") mountpoint paths and canmount=on; other datasets are not mounted automatically.
    Extensions provided as the new fs-root-zfs method allow mounting datasets with such attribute values from $rpool/SHARED (where the $rpool name is determined from the currently mounted root filesystem dataset). This ensures availability of active shared datasets as part of the split-root filesystem hierarchy early in boot. In particular, following the "auto-mounting" requirements allows using datasets with a specified mountpoint path and canmount=off as "containers" for the shared datasets to inherit the parent container's path automatically (i.e. a non-mounting /var node – see the sketch below).
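
    As an illustration of that "container" pattern – a hypothetical sketch (dataset names follow this page's conventions):

    Code Block
    ### A non-mounting /var node in the shared hierarchy; its children
    ### inherit mountpoint=/var/... and, with canmount=on, mount automatically
    :; zfs create -o mountpoint=/var -o canmount=off rpool/SHARED/var
    :; zfs create -o canmount=on rpool/SHARED/var/log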

Below you can find a screenshot with examples of the non-legacy datasets, both children of the root and shared ones. There is no example of a "legacy" dataset passed through /etc/vfstab because I can't contrive a rational case where that would be useful today (smile) 

Examples?

The examples below assume that your currently installed and configured OS resides in rpool/ROOT/openindiana and you want to relocate it into rpool/ROOT/oi_151a8 with a hierarchy of compressed sub-datasets for system files (examples below use variables to allow easy upgrades of the procedure to different realities), and shared files like logs and crash dumps will reside in a hierarchy under rpool/SHARED.
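
For reference, such variables might be defined like this (a sketch; the values match this page's examples, so adjust them to your reality):

Code Block
:; RPOOL="rpool" ; RPOOL_ROOT="$RPOOL/ROOT" ; RPOOL_SHARED="$RPOOL/SHARED"
:; BEOLD="openindiana" ; BEOLD_DS="$RPOOL_ROOT/$BEOLD"
:; BENEW="oi_151a8"    ; BENEW_DS="$RPOOL_ROOT/$BENEW"
:; BENEW_MNT="/a"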

...

This particular system also split off /usr/local in order to allow easy creation of clones delegated into local zones – so as to provide modifiable sets of unpackaged programs with little storage overhead. This is not a generally needed scenario (smile)

How do I do this?

Now we're down to the dirty business ;)

...

Now that you have the new root filesystem, prepare it for children, using your selection of sub-datasets. These will be individual to each OS installation, cloned and updated along with their BE. Generally this includes all locations with files delivered by "system" packages, which are likely to be updated in the future.  Also included below is /opt/local as the path used by Joyent PKGSRC releases usable on most illumos distributions and likely to consume lots of space.

To follow the example settings defined above:

Code Block
:; cd "$BENEW_MNT" && for D in \
    usr var opt usr/local var/pkg opt/local \
   ; do mkdir "$D" && /bin/chmod S+ci "$D" && \
        zfs create -o canmount=noauto -o compression=gzip-9 "$BENEW_DS/$D" && \
        zfs mount -O "$BENEW_DS/$D" || break; \
   done
 
:; /bin/df -k | grep " $BENEW_MNT"
### Example listing:
rpool/ROOT/oi_151a8           8193024     34     3738737     1%    /a
rpool/ROOT/oi_151a8/usr       8193024     32     3738737     1%    /a/usr
rpool/ROOT/oi_151a8/var       8193024     31     3738737     1%    /a/var
rpool/ROOT/oi_151a8/opt       8193024     31     3738737     1%    /a/opt
rpool/ROOT/oi_151a8/usr/local 8193024     31     3738737     1%    /a/usr/local
rpool/ROOT/oi_151a8/var/pkg   8193024     31     3738737     1%    /a/var/pkg

In the example above, mountpoint directories are protected from being written into by being made immutable. Note that this requires the Solaris (not GNU) chmod, and that this does not work in Solaris 10 (if you backport the procedure – which mostly works). Also note that /var/pkg is relevant for IPS-based distributions like OpenIndiana, and you might want to omit it when applying the procedure to some other OS in the Solaris family.
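
If you ever need to undo that protection (e.g. to remove a stale mountpoint directory), the inverse of the same Solaris chmod syntax should clear the bit, for example:

Code Block
:; /bin/chmod S-ci /a/usr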

Also note that at this point the sub-datasets inherit the /a prefix in their mountpoints, and will fail to mount "as is" with the currently default scripts (fs-root and fs-minimal), unless you later unmount this tree and change the rootfs to use mountpoint=/.

...

Code Block
:; cd "$BENEW_MPT"/var && \
   for D in adm cores crash log mail spool/clientmqueue spool/mqueue ; do \
     mkdir -p "$D" && /bin/chmod S+ci "$D"; \
     zfs create -p -o canmount=on "$RPOOL_SHARED/var/$D"; \
   done
 
### Verify success of the previous operation(s) before proceeding
:; /bin/df -k | grep " $BENEW_MNT" 
 
:; for D in cores crash ; do \
     zfs set quota=5G "$RPOOL_SHARED/var/$D" ; \
     zfs set com.sun:auto-snapshot=false "$RPOOL_SHARED/var/$D" ; \
   done
 
:; for D in spool/clientmqueue spool/mqueue ; do \
     zfs set quota=2G "$RPOOL_SHARED/var/$D" ; done

NOTE: Don't blindly split off /var/tmp like this, at least not unless you are ready to test it as much as you can. It was earlier known to fail, though it may work better now, depending on distribution features, SMF dependency order and other such variables. It actually works on my system, but I am not ready to "guarantee" this for others. The earlier problem was that in legacy setups some services wrote into this directory before the dedicated dataset was mounted (thus either blocking the mount, or losing access to the written files); now there should be no problem, since mounting is done before other services as enforced by SMF dependencies – unless you store your /var/tmp on a non-root pool and that pool's import fails at boot. If you do find that temporary directories over dedicated ZFS datasets (whether as /var/tmp or in some differently-named paths, perhaps stored on a separate user-data pool) work well for you, consider adding some security and performance options into the mix, for example:

Code Block
:; mkdir "$BENEW_MPT"/var/tmp && /bin/chmod S+ci "$BENEW_MPT"/var/tmp
:; zfs create -o canmount=on -o setuid=off -o devices=off -o sync=disabled -o atime=off "$RPOOL_SHARED"/var/tmp
:; chmod 1777 "`zfs get -H -o value mountpoint $RPOOL_SHARED/var/tmp`"
### Set quota or don't set it, as you see fit

The example above creates the immutable mountpoint directories in the rootfs hierarchy's version of /var, then creates and mounts the datasets into the new hierarchy's tree. Afterwards some typically acceptable quotas (YMMV) are set to protect the root file system from overfilling with garbage. Also, zfs/auto-snapshot service is forbidden to make autosnaps of the common space-hogs /var/cores and /var/crash, so that deletion of files from there to free up rpool can proceed unhindered.

...

All of the examples use rsync – it does the job well, except maybe for lack of support for copying ZFS/NFSv4 ACLs until (allegedly) rsync-3.0.10, which is not relevant for a default installation. Flags used below include:

  • -x – single-filesystem traversal (only copy objects from the source filesystem, don't dive into sub-mounts; you should manually verify and ensure that mountpoints like /tmp or /proc ultimately exist on the targets);
  • -avPHK – typical recursive replication with respect for soft- and hard-links and verbose reports;
  • -z – if you copy over a slow network link, this would help by applying compression to the transferred data (not included in examples below);
  • The rsync program is executed in a loop, so if something breaks (e.g. out of memory in a LiveCD environment) it would pick up and proceed until success.

You also have the option to initiate the rsync process from either the source system (where the original data tree resides) or from the new system (on which the split-root structure is formed and written). The choice depends on networking (routing, firewalls, etc.) among other things; either way is possible, and in essence this is a feasible step on the way to cloning pre-installed systems. Single-system copying is just an edge case here, where origin and target are the same and networking may be avoided (the $RSH variable is empty).
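
For example, the helper variables used in the snippets below might be set like this (an illustrative sketch; the host names are hypothetical):

Code Block
### Single-system copy: no remote shell, empty prefixes
:; RSH="" ; SRC="" ; TGT=""
 
### Networked copy initiated on the origin system (push over SSH to the target)
:; RSH="-e ssh" ; TGT="root@target-host:"
 
### Networked copy initiated on the target system (pull over SSH from the origin)
:; RSH="-e ssh" ; SRC="root@origin-host:"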

BE cloning

This example is for systems with beadm applicable to the selected source dataset (i.e. the source BE resides in the currently active origin rpool).

Prepare the source file tree; basically this lets you use a clone of the current root into which no run-time additions would land, and without user datasets and other overlays mounted inside:

Code Block
### On origin system - optionally clone the current BE 
### (if it is the source) and mount the selected BE
:; beadm create -e "$BEOLD" "$BEOLD-split"
:; beadm mount "$BEOLD-split" "$BEOLD_MNT.split" && \
   /bin/df -k | grep " $BEOLD_MNT.split"
 
### Verify that all needed filesystems are indeed mounted, and
### if any extras are there (like zone roots) - unmount them or
### define exclusions in the rsync command. Note "-x" is off.
 

Run on source or single system:

Code Block
### Initiate copying from the origin system (target is the SSH/RSH/RSYNC server or local system)
:; cd "$BEOLD_MNT.split/" && while ! eval rsync -avPHK $RSH ./ "$TGT$BENEW_MNT/" ; do sleep 1; done
 

... OR run on target system:

Code Block
### Initiate copying from the target system (origin is the SSH/RSH/RSYNC server)
:; cd "$BENEW_MNT/" && while ! eval rsync -avPHK $RSH "$SRC$BEOLD_MNT.split/" ./ ; do sleep 1; done 
Snapshot-diving

In this example you can use ZFS snapshots as the read-only sources for rsync copy process. One substantial difference is that for any child datasets of the origin system (and note that this refers to the origin – which may indeed have no child datasets, or might have a separate var child) you have to reiterate separate rsync runs.

Prepare the source:

Code Block
### On origin system
:; zfs snapshot -r "$BEOLD_DS@$BENEW-split"
 

Run on source or single system:

Code Block
### Initiate copying from the origin system (target is the SSH/RSH/RSYNC server or local system)
:; cd "$BEOLD_MNT/.zfs/snapshot/$BENEW-split" && \
   while ! eval rsync -avPHK $RSH ./ "$TGT$BENEW_MNT/" ; do sleep 1; done

 
### Rinse and repeat for child datasets of origin, like /var, if any, i.e.:
:; for D in var ; do \
   cd "$BEOLD_MNT/$D/.zfs/snapshot/$BENEW-split" && \
    while ! eval rsync -avPHK $RSH ./ "$TGT$BENEW_MNT/$D/"; do sleep 1; done; \
   done

 ... OR run on target system:

Code Block
### Initiate copying from the target system (origin is the SSH/RSH/RSYNC server)
:; cd "$BENEW_MNT/" && \
   while ! eval rsync -avPHK $RSH "$SRC$BEOLD_MNT/.zfs/snapshot/$BENEW-split/" ./; do sleep 1; done 
 
### Rinse and repeat for child datasets of origin, like /var, if any, i.e.:
:; cd "$BENEW_MNT/" && for D in var ; do \
    cd "$BEOLDwhile ! eval rsync -avPHK $RSH "$SRC$BEOLD_MNT/$D/.zfs/snapshot/$BENEW-split/" && \
    while ! eval rsync -avPHK $RSH ./ "$TGT$BENEW_MNT/"./$D/"; do sleep 1; done; \
   done
lofs-mounting

This uses lofs as a means of producing an unmodified view of the source filesystem, without interference from overlay-mounts. Historically this is the approach which helped migrate from UFS roots onto ZFS.

Prepare the source:

Code Block
### On origin system - lofs-mount the active root filesystem
:; mkdir /mnt/root
:; mount -F lofs -o nosub "$BEOLD_MNT" /mnt/root
 

Run on source or single system:

Code Block
### Initiate copying from the origin system (target is the SSH/RSH/RSYNC server or local system)
:; while ! eval rsync -xavPHK $RSH --exclude=/mnt/root /mnt/root/ "$TGT$BENEW_MNT/"; do sleep 1; done
 

## For optional subsequent datasets/filesystems (i.e. var), on origin
:; for D in var; do \
    mount -F lofs -o nosub "$BEOLD_MNT/$D" "/mnt/root/$D" && \
    while ! eval rsync -xavPHK $RSH --exclude=/mnt/root "/mnt/root/$D/" "$TGT$BENEW_MNT/$D/"; do sleep 1; done ; \
   done

 ... OR run on target system:

Code Block
### Initiate copying from the target system (origin is the SSH/RSH/RSYNC server)
:; while ! eval rsync -xavPHK $RSH --exclude=/mnt/root "$SRC/mnt/root/" "$BENEW_MNT/"; do sleep 1; done

Tuning the split-root OS image

Now that you are done replicating the source filesystem image, don't rush to boot it. There are some more customizations to make which ensure that it would actually work.

Snapshot first

Just in case you mess up in the steps below, have something to roll back to: 

...

Code Block
### Use chroot into the new BE so that ldd would be limited by local 
### namespace, including our subsequent changes to it in /lib
:; chroot "$BENEW_MNT" "/usr/bin/bash"
:; cd "/sbin" && ls -la sh

lrwxrwxrwx   1 root     root          20 Jul 21 15:41 sh -> ../usr/bin/i86/ksh93 
### So here it is - a symlink out of the root filesystem into /usr filesystem
 
### Part one: copy the binary file
:; cp -pf ../usr/bin/i86/ksh93 .
:; mv sh sh.orig
:; ln -s ksh93 sh
 
### Part two: copy its dependency libraries
:; for F in /sbin/ksh93; do ldd "$F" | awk '{print $NF}' | egrep '^/usr/lib/' | sed 's,^/usr/lib/,,' | \
   while read L; do echo "/usr/lib/$L" && cp -pf "../usr/lib/$L" "../lib/$L"; done; done

/usr/lib/libshell.so.1
/usr/lib/libcmd.so.1
/usr/lib/libdll.so.1
/usr/lib/libast.so.1
/usr/lib/libsum.so.1
 
### Optionally copy other useful programs - not required and may add headache in later OS upgrades
#:; cp ../usr/bin/bash .
 
### Exit the chroot
:; exit
 
### Snapshot again
:; zfs snapshot -r "$RPOOL_SHARED@postsplit-02"
:; zfs snapshot -r "$BENEW_DS@postsplit-02"

...

For the oi_151a8 release and several releases before it, the system-provided scripts did not change, so the full scripts can be the easier choice to download: fs-root-zfs, fs-root and fs-minimal. As described above, the fs-root-zfs script includes all the logic needed to detect and mount the local ZFS-based root filesystem hierarchy (and skips any non-ZFS filesystems and mountpoints under them), and the existing method scripts are just slightly fixed to expect that the paths they try to manage may have already been mounted. Also, unlike the earlier existing scripts, the fs-root-zfs script explicitly mounts the shared datasets ($rpool/SHARED) early in the system initialization to ensure the complete root filesystem hierarchy is available to other methods, such as network initialization scripts.

For other releases and distributions it may be worthwhile to get the patches as fs-root-zfs.patch and apply them.

...

Code Block
### Make backups
:; cd "$BENEW_MNT/lib/svc/method" && for F in fs-root-zfs fs-root fs-minimal; do
    N="$F.orig-oi_151a8$BEOLD"; [ -f "$F" -a -f "$N" ] || cp -pf "$F" "$N"; done; ls -la fs-*
### Verify success of the previous operation(s) before proceeding
 
### REPLACEMENT SCRIPTS
### For oi_151a8 (and in fact many other releases) it may be suitable to replace the scripts
### For other releases, including those newer than this post (Nov 2013) verify contents first
:; for F in fs-root-zfs fs-root fs-minimal; do \
    rm -f "$F.splitroot"; wget -O "$F.splitroot" \
      "http://wiki.openindiana.org/download/attachments/27230229/$F" && \
    cat "$F.splitroot" > "$F"; chmod +x "$F"; done; ls -la fs-* 
 
### OR - PATCH OLD SCRIPTS 
### In case of patch-files - try to apply them. This is probably more portable and future-proof
### (i.e. if your distribution has deviated from the specific script versions used in the example
### above, and/or some different patches were applied).
:; wget -O "fs-root-zfs.patch" \
     "http://wiki.openindiana.org/download/attachments/27230229/fs-root-zfs.patch" && \
   gpatch --dry-run -p4 < "fs-root-zfs.patch" && \
   gpatch -p4 < "fs-root-zfs.patch" && chmod +x "fs-root-zfs"; ls -la fs-*
 
### Also get the manifest for the new service
:; wget -O "$BENEW_MNT/lib/svc/manifest/system/filesystem/fs-root-zfs.xml" \
   http://wiki.openindiana.org/download/attachments/27230229/fs-root-zfs.xml
  
### Install the service manifest into the new BE right away
echo "repository $BENEW_MNT/etc/svc/repository.db
import $BENEW_MNT/lib/svc/manifest/system/filesystem/fs-root-zfs.xml
exit" | svccfg

The scripts include the ability to log all the decisions made regarding mounting or not mounting specific datasets, fixing mountpoints, etc., which go to the console (physical or serial, per your setup and kernel boot-time parameters), as well as into SMF (check /var/svc/log/system-filesystem-root-zfs:default.log or /etc/svc/volatile/system-filesystem-root-zfs:default.log for copies of the relevant entries). To enable such logging just go:

Code Block
:; touch "$BENEW_MNT/.debug_mnt"
Fixing /etc/vfstab

Verify that $BENEW_MNT/etc/vfstab does not reference filesystems which you expect to mount automatically – such as the shared filesystems or non-legacy children of the rootfs du-jour. A reference to rpool/swap is okay:

Code Block
:; cat "$BENEW_MNT/etc/vfstab" | egrep "$RPOOL|$OPOOL"

...

Code Block
### Don't block the mountpoints we are trying to release
:; cd /
 
### On a Live-Media system you can just unmount everything:
:; zfs umount -a
 
### VARIANT A:
### On a working system which did not use rpool/SHARED yet, you should
### go over all the datasets of the new hierarchy:
:; /bin/df -k | egrep "^($BENEW_DS|$RPOOL_SHARED)" | awk '{print $NF}' | sort -r | \
   while read M; do echo "=== $M"; umount -f "$M"; done
 
### VARIANT B:
### On a working system which did use rpool/SHARED (i.e. you are rebalancing a
### split-root configuration), you should not unmount the used shared datasets:
:; /bin/df -k | egrep "^($BENEW_DS)" | awk '{print $NF}' | sort -r | \
   while read M; do echo "=== $M"; umount -f "$M"; done
 
### Fix the new roots AFTER the new hierarchy datasets are unmounted successfully
:; zfs set mountpoint="/" "$BENEW_DS"
:; zfs set mountpoint="/var" "$RPOOL_SHARED/var"
 
:; zfs set org.openindiana.caiman:install=ready "$BENEW_DS"

...

If you are doing this all in a LiveCD environment, it makes sense to verify that there are no conflicts in mountpoints. Note that the LiveCD also places a hold on the dump and swap volumes (at least if it has just created the installation), and these resources must be freed to actually export the rpool:

Code Block
:; zpool export $RPOOL
pool busy
 
:; dumpadm -d none
 
:; swap -l
swapfile             dev    swaplo   blocks     free
/dev/zvol/dsk/rpool/swap 96,2         8  4188152  4188152
:; swap -d /dev/zvol/dsk/$RPOOL/swap 
 
:; zpool export $RPOOL
:; zpool import -N -R /a -f $RPOOL && zfs mount -O $BENEW_DS && \
   zfs mount -O $BENEW_DS/var && zfs mount -a

Upgrades

As discussed earlier, this hierarchy also requires (or benefits from) a bit of special procedure to upgrade the installation. While it is customary to have the pkg command create all needed BE datasets and proceed with the upgrade in the newly cloned BE, we'd need to re-enable compression and maybe some other attributes first.

Don't forget to verify (or just redo) the copying of /sbin/sh and its related libraries, especially if they have changed, revise the patched filesystem method scripts and other customizations discussed above (as well as others you do on your systems).
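
A quick way to spot such drift after an upgrade is to compare checksums of the root-dataset copies against their /usr originals (a sketch; the library list matches the one copied in the procedure above):

Code Block
:; digest -v -a md5 /sbin/ksh93 /usr/bin/i86/ksh93
:; for L in libshell.so.1 libcmd.so.1 libdll.so.1 libast.so.1 libsum.so.1; do \
     digest -v -a md5 "/lib/$L" "/usr/lib/$L"; done
### Differing sums for a pair mean the copy step should be redone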

beadm clone fixup and package upgrading

In order for your OS updates to enjoy the disk-space savings, the compression attributes should be appropriately applied to the cloned datasets. Unfortunately, current beadm cloning does not take care of that. The simplest approach is to create and fix the new BE first, then use it as a target for the package upgrades.

The three mini-chapters below go from the most-automated to the most-manual description of essentially the same procedure (the inverse of the order of their evolution, as examples from this page got scripted). Generally the first snippet should be used in practice, while the others are more of interest for further development or audit of the procedure. The concluding mini-chapter covers destruction of such BEs as they become un-needed, because that also becomes slightly more complicated.

The environment variables involved in the procedures or scripts below are similar to the ones used in the manual above, but fewer of them are set, since we are playing within one rpool (no networked copies are implied).

Script-automated rootfs/BE cloning and upgrading

The most automated help can be received from beadm-upgrade.sh which automates BE cloning with beadm-clone.sh (described below) and then issues IPS and PKGSRC package updates in the new BE:

  • For a fully-automatic job, download the scripts:

    Code Block
    :; wget -O /root/beadm-clone.sh "https://github.com/jimklimov/illumos-splitroot-scripts/raw/master/bin/beadm-clone.sh" && \
       chmod +x /root/beadm-clone.sh
    :; wget -O /root/beadm-upgrade.sh "https://raw.githubusercontent.com/jimklimov/illumos-splitroot-scripts/master/bin/beadm-upgrade.sh" && \
       chmod +x /root/beadm-upgrade.sh
  • Run the upgrader (optionally pre-set and export the envvars described all around this text); the script prints the variables it is going to use and pauses before proceeding (press ENTER to go on):

    Code Block
    :; /root/beadm-upgrade.sh 

    If all was ok – activate (copy-paste the BE name from last lines of output of beadm-upgrade.sh) and gracefully reboot:

    Code Block
    :; beadm activate "$BENEW" && \
       init 6
Script-automated rootfs/BE cloning

The attached script beadm-clone.sh (Git master: beadm-clone.sh) automates most of the logic described in the text below, and uses the same environment variables. You can execute it as a shell script as well as just "source" it into your current (root) shell – but beware that it can exit upon errors. Execution requires that you "export" the envvars you need, while "sourcing" would set whatever remains as guesswork in the current shell context, and would not redefine those variables in subsequent runs.

As a point-and-shoot solution that requires no pre-configuration, it can clone the currently running BE suffixing it with a timestamp.

In a second layer of usability it may suffice that you only set BEOLD and/or BENEW and it should guess the rest.

For just the BE cloning with the script do:

  • Download the script:

    Code Block
    :; wget -O /root/beadm-clone.sh "https://github.com/jimklimov/illumos-splitroot-scripts/raw/master/bin/beadm-clone.sh" && \
       chmod +x /root/beadm-clone.sh
  • Source it into the current shell so it sets all the variables as it goes (by default it will propose a new BE name based on the first token of the current BE before a separator such as the dash character, and suffix it with the current timestamp); the script prints the variables it is going to use and pauses before proceeding (press ENTER to go on):

    Code Block
    :; . /root/beadm-clone.sh 

    Alternately, don't source but rather run the script and copy-paste the reported variable values into your shell.

  • When the script is done cloning and has reported no errors, copy-paste the suggestions from the end of its output, i.e.:

    Code Block
    :; pkg -R "$BENEW_MNT" image-update --deny-new-be --no-backup-be && \
       touch "$BENEW_MNT/reconfigure" && \
       bootadm update-archive -R "$BENEW_MNT" && \
       beadm umount "$BENEW"
    
    :; TS="`date -u "+%Y%m%dZ%H%M%S"`" && \
       zfs snapshot -r "$RPOOL_SHARED@postupgrade-$TS" &&\
       zfs snapshot -r "$BENEW_DS@postupgrade-$TS"
  • If all was ok – activate and gracefully reboot:

    Code Block
    :; beadm activate "$BENEW" && \
       init 6

    Hopefully, everything goes up nicely and quickly, and a `df -k /` would show the new root dataset (wink)

Manual rootfs/BE cloning

If explicit control over the procedure is desired (or if it is problematic to download the script and you'd rather copy-paste code), you can define everything as detailed below:

Code Block
:; RPOOL="rpool"
:; RPOOL_ROOT="$RPOOL/ROOT"
:; RPOOL_SHARED="$RPOOL/SHARED"
 
:; RPOOLALT=""      ### For the currently running system
:; BEOLD="oi_151a8"
:; BEOLD_MPTBENEW="/oi_151a9"

:; BENEWBEOLD_MPT="oi_151a9/"
:; BENEW_MPT="/a"

:; BEOLD_DS="$RPOOL_ROOT/$BEOLD"
:; BENEW_DS="$RPOOL_ROOT/$BENEW"
:; BEOLD_MNT="$RPOOLALT$BEOLD_MPT"
:; BENEW_MNT="$RPOOLALT$BENEW_MPT"

So, we clone the current BE (or the one from which we want to upgrade):

Code Block
:; beadm create -e "$BEOLD" "$BENEW" && \
   beadm mount "$BENEW" "$BENEW_MNT"
 
:; df -k | tail -20

...

Code Block
:; EXCLUDE_ATTRS='org.opensolaris.libbe:uuid|canmount|mountpoint'

### If you use this procedure for something else (i.e. cloning zones)
### you might want to not exclude any attributes. Then use this value:
#:; EXCLUDE_ATTRS='^$'
 
### Verify custom attributes other than those known to be set by beadm
:; zfs list -H -o name -r "$BEOLD_DS" | while read Z; do \
    S="`echo "$Z" | sed "s,^$BEOLD_DS,,"`"; echo "=== '$S'"; \
    zfs get all "$BEOLD_DS$S" | egrep ' (local|received)'; \
    echo ""; zfs get all "$BENEW_DS$S" | egrep ' (local|received)'; \
   done | egrep -v "$EXCLUDE_ATTRS"
### If any custom attributes pop up in the newly created BE, these should
### probably also be excluded from replication (EXCLUDE_ATTRS) before proceeding
 
:; zfs list -H -o name -r "$BEOLD_DS" | while read Z; do \
    S="`echo "$Z" | sed "s,^$BEOLD_DS,,"`"; echo "=== '$S'"; \
    zfs get all "$BEOLD_DS$S" | egrep ' (local|received)' | \
    egrep -v "$EXCLUDE_ATTRS" | while read _D A V _T; do \
      echo "$A=$V"; zfs set "$A=$V" "$BENEW_DS$S"; \
   done; done
 
=== ''
compression=lz4
=== '/opt'
compression=gzip-9
=== '/usr'
compression=gzip-9
=== '/usr/local'
compression=gzip-9
=== '/var'
compression=gzip-9

...

  • The official method is beadm activate, which updates the GRUB menu and possibly does other housekeeping; when it is done, you should gracefully (warning) reboot (when the time comes):

    Code Block
    :; beadm activate "$BENEW"
    Activated successfully
     
    :; init 6

    In particular, the updated GRUB menu entries allow you to easily fall back and boot an older BE, without hacking at the console to enter the bootfs you want as active. 

  • Still, if you are oldschool and rely on default bootfs (referenced from GRUB menu as the default choice), just update it and reboot (when time comes), and hope that this suffices (smile) in the release du-jour:

    Code Block
    :; zpool set bootfs="$BENEW_DS" "$RPOOL"
    :; init 6

Good luck! (smile)

Removing a BE prepared by the above procedure

It is perfectly possible that you don't get everything the way you wanted on the first attempt, and would like to retry. An update attempt might not find any packages to update and the new BE is thus useless. 

In these or any similar cases you should use the "-s" flag to beadm destroy because after the procedure above (and/or after some life-time on a system with Time-Slider or equivalent technology), the new BE contains several snapshots which block "normal" removal:

Code Block
:; beadm destroy -s "$BENEW"

If an update was successful and well-tried in practice, so you no longer need an old BE... do be careful in its removal:

Warning

WARNING: Before doing recursive ZFS removals (which is what beadm destroy -s should be doing), remember that this action can impact all child datasets that are not otherwise protected – by being mounted with files open, by running zfs send sessions, or by a zfs hold, for example. Besides the snapshots and sub-datasets in the hierarchy which you do intend to remove, such "children" may include ZFS clones, such as newer BEs.

The difference between the zfs destroy -r and zfs destroy -R commands lies essentially in just this aspect – whether clones are also removed.

Do verify first what your particular OS distribution and version does to destroy old BEs, or resort to destruction of datasets snapshot-by-snapshot (and mind that beadm destroy dataset@snapshot syntax does offer a means to automate that). Alternately, consider using zfs promote to ensure that a newer clone is considered to be the master (inspect zfs list -o origin,name -r rpool/ROOT output to see the current relationships between datasets on your system).
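
For example, a sketch of promoting a newer BE's hierarchy before destroying its origin (BE names as in this page's examples; zfs promote fails harmlessly on datasets which are not clones):

Code Block
:; zfs list -o origin,name -r rpool/ROOT
:; for DS in `zfs list -H -o name -r rpool/ROOT/oi_151a9`; do \
     zfs promote "$DS"; done
### After promotion the old BE's datasets no longer own the shared
### snapshots and can be destroyed without dragging the newer clone along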

On simply using "pkg upgrade"

Unfortunately, if you've issued a simple pkg upgrade call which results in a cloned BE automatically (due to package flags requiring a reboot), the new BE would currently have default compression and other per-dataset settings. Still, you have a chance to catch the new BE and fix it as early as you can "in flight".

Note that the package upgrade first refreshes its catalog of the packages in repositories, then downloads the new files into a local area (under /var/pkg in the current rootfs, apparently), and only then does it create a new cloned BE based on the current one. The BE name is generated at this time; if your current one ended with a number (like oi_151a8-20140101), this number would be incremented into a unique available number (like oi_151a8-20140102 – so don't expect current dates to be used automagically). For names without a number, one would just be appended (i.e. openindiana-1 for a default installation's first substantial upgrade).

If you figure out the expected BE name, you can leave the following loop running in an alternate shell to catch the creation of the new BE and to fix its dataset attributes:

Code Block
:; RPOOL="rpool" ; RPOOL_ROOT="$RPOOL/ROOT"
:; BEOLD="oi_151a8" ; BEOLD_DS="$RPOOL_ROOT/$BEOLD"
:; EXCLUDE_ATTRS='org.opensolaris.libbe:uuid|canmount|mountpoint'
:; BENEW="oi_151a9" 
:; BENEW_DS="$RPOOL_ROOT/$BENEW" ; while ! zfs list -r "$BENEW_DS"; do \
   sleep 1; done; sleep 1; zfs list -H -o name -r "$BEOLD_DS" | while read Z; do \
     S="`echo "$Z" | sed "s,^$BEOLD_DS,,"`"; echo "=== '$S'"; \
     zfs get all "$BEOLD_DS$S" | egrep ' (local|received)' | egrep -v "$EXCLUDE_ATTRS" | \
     while read _D A V _T; do echo "$A=$V"; zfs set "$A=$V" "$BENEW_DS$S"; done; done;

Do not despair if you've lost the moment or mis-guessed the $BENEW name, and an uncompressed clone was instantiated – you can just destroy it and redo the process (possibly with pkg upgrade --no-refresh) using the same new package data that you've already downloaded, so the process should be cheap and fast now (that is, if you did not specify an alternate root with -R, so that the current BE's /var/pkg repository was used to cache the package data). On the upside, now you know which $BENEW name the system would actually use (or you'd have a new chance to enforce one with --be-name).
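
A possible retry then looks like this (a sketch using the variables from the loop above):

Code Block
:; beadm destroy -s "$BENEW"
:; pkg upgrade --no-refresh --be-name "$BENEW"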

Why bother for upgrades?

To conclude with an example, I have re-tested this procedure with the OI Hipster distribution, which has frequent re-rolls of packages. For example, last night some 579 packages became obsolete, involving about 265MB of downloads and 3700 replaced files. An upgrade with re-enabled gzip-9 and one without (still inheriting lz4 from the root) differed by about 100MB in the /usr child dataset alone... that's a 1/3 difference relative to the download size (and even more in comparison to a lack of compression), and those smaller blocks of OS files also put less pressure on system caches. Just to give a tangible example:

Code Block
# zfs list -o compression,refcompressratio,refer,logicalreferenced,name -r rpool/ROOT | egrep '/usr$'
gzip-9 2.91x 1.18G 3.38G rpool/ROOT/hipster-20140214/usr
gzip-9 2.90x 1.21G 3.43G rpool/ROOT/hipster-20140416/usr
gzip-9 2.90x 1.21G 3.43G rpool/ROOT/hipster-20140417-gzip9/usr
lz4    2.70x 1.30G 3.43G rpool/ROOT/hipster-20140417-lz4/usr
off    2.25x 1.55G 3.43G rpool/ROOT/hipster-20140417-nocomp/usr

Note that "nocomp" still has the compression in place for files that were not changed since the original dataset from the day before, only the new files are not compressed.

HTH,
//Jim Klimov

An earlier note about the naïve split-rooting approach which sometimes failed me – and why

Note

It was recently discovered that NWAM network auto-configuration does not work with a split-root config based on earlier modifications of the fs-root, fs-usr and fs-minimal scripts (hopefully fixed with the recent rehaul to fs-root-zfs as the single solution for this use-case).

Tracing the system scripts has shown that a substantial part of them depends on availability of /usr or even more (in case of NWAM – rather on filesystem/minimal with a proper /var tree), yet services like network/physical are dependencies needed for startup of filesystem/root (which mounts and guarantees to provide the /usr). Most of the methods "broken" in this manner can be amended to use ksh93 builtins and shell constructs instead of external programs and rely only on /sbin (after relocation of ksh93 as /sbin/sh); other solutions are also possible and are now being discussed in the mailing list and the issue tracker. The legacy network method "for servers" (svc:/network/physical:default) happens to work successfully with both static configurations and DHCP, that's why the error was not found for years (wink)
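
To inspect this dependency tangle on your own system, for example:

Code Block
### Services which filesystem/root requires (network/physical among them)
:; svcs -d svc:/system/filesystem/root:default
 
### Services which require filesystem/minimal
:; svcs -D svc:/system/filesystem/minimal:default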