...

  • Get the SMF wrapping script mount-main-pool and manifest mount-main-pool.xml:

    Code Block
    :; wget -O /lib/svc/method/mount-main-pool \
       http://wiki.openindiana.org/download/attachments/27230301/mount-main-pool
    
    :; mkdir -p /var/svc/manifest/network/iscsi
    :; wget -O /var/svc/manifest/network/iscsi/mount-main-pool.xml \
       http://wiki.openindiana.org/download/attachments/27230301/mount-main-pool.xml

    Don't mind the "iscsi" part of the naming – this is historical due to the second phase of this setup. 

  • Edit the method script. This file, as it is now, is tuned for my installation, and too much is hardcoded.
    Script logic: the main data pool, named pool, contains a /pool/tmp directory (or an automountable child dataset), and the method script tests for this directory. On start, if the directory is absent, the pool is imported: the script waits for the zpool listing and status to complete and logs their output, then mounts all ZFS filesystems (note: not only those from this pool), and only then does the method complete. On stop, if the directory is present, the pool is exported (looping until the export succeeds). A condensed sketch of this logic follows at the end of this item.
    In order to protect the tested directory from spuriously appearing on the root filesystem (on the rpool), you can use an immutable mountpoint (detailed below).
    The script includes several anti-import precautions: besides outright disablement of the service (it depends on the non-existence of the file /etc/zfs/noimport-pool), there is a delay-file /etc/zfs/delay-pool, which can contain a timeout in seconds or simply exist (defaulting to 600 seconds), and an automatic lock-file that prevents repeated import attempts of a pool whose import cannot complete and would hang or crash your system.
    Also note that the import here is done without a cachefile and with an alternate root (even if it is / by default). For larger pools made of many vdevs, you can speed up imports by using an alternate cachefile, such as cachefile=/etc/zfs/zpool-main.cache – just not the default one.
    You can also touch /etc/zfs/noautomount-$POOL to avoid auto-mounting the filesystem datasets (zfs mount -a) at the end of the routine; the pool itself is initially imported without automounting anything at all.
    You might want to add different options and/or logic to your taste.
    TODO: Replace hardcoding with config-file and/or SMF attribute modifiable configuration. 
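    For orientation, here is a minimal, hypothetical sketch of the start/stop logic described above – the downloaded method script is the authoritative version; the pool name and control-file paths merely follow the defaults discussed in this section:

    Code Block
    #!/sbin/sh
    ### Hypothetical condensation of the method script's logic
    POOL=pool
    TESTDIR=/$POOL/tmp

    case "$1" in
    start)
        ### User-maintained kill-switch: skip the import entirely
        [ -f /etc/zfs/noimport-$POOL ] && exit 0
        ### Optional delay-file: sleep for its contents, or 600 sec by default
        if [ -f /etc/zfs/delay-$POOL ]; then
            T="`cat /etc/zfs/delay-$POOL`"
            sleep ${T:-600}
        fi
        if [ ! -d "$TESTDIR" ]; then
            ### A stale lock means an earlier import never completed: do not retry
            [ -f /etc/zfs/.autolock.$POOL ] && exit 1
            touch /etc/zfs/.autolock.$POOL
            ### Import without a cachefile, with an alternate root, mounting nothing yet
            zpool import -N -o cachefile=none -R / $POOL || exit 1
            ### Log the listing and status before going on
            zpool list $POOL
            zpool status $POOL
            ### Mount all ZFS filesystems (not only this pool's) unless told not to
            [ -f /etc/zfs/noautomount-$POOL ] || zfs mount -a
            rm -f /etc/zfs/.autolock.$POOL
        fi
        ;;
    stop)
        if [ -d "$TESTDIR" ]; then
            ### Loop until the export succeeds
            while ! zpool export $POOL; do sleep 5; done
        fi
        ;;
    esac
    exit 0

    For example, to disable the auto-import entirely, or to arm a five-minute delay before the import attempt:

    Code Block
    :; touch /etc/zfs/noimport-pool
    :; echo 300 > /etc/zfs/delay-pool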

  • Revise the manifest file. It currently sets a dependency on filesystem/local; you might want something later in the boot sequence instead (such as svc:/network/ssh:default) so that you have a chance to log in and disable the pool-importing service before it acts. (A dependency on svc:/milestone/multi-user-server was suggested here earlier, but it created a dependency loop.) If revising dependencies, make sure to avoid such loops – the SMF inspection commands shown below should help.
    Also, the service depends on the absence of the lock-files /etc/zfs/.autolock.pool (created and removed by the method script around import attempts) and /etc/zfs/noimport-pool (maintained by the user to optionally disable auto-import); the pool part of these filenames (or rather the complete filenames, as synthesized by default) should match what the method script defines for the service.
    It also defines smb/server and zones as dependent services, so that these resources hosted on the data pool are only started once it is mounted; you might also want to add nfs/server, or switch them to an optional_all type of dependency if your rpool also hosts some zones and/or files and can serve them without the data pool present.
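    Once the manifest has been imported (next step), the standard SMF commands can show who depends on whom and flag broken dependencies, assuming the default service name:

    Code Block
    :; svcs -d mount-main-pool   ### services this one depends on
    :; svcs -D mount-main-pool   ### services that depend on this one
    :; svcs -x mount-main-pool   ### explanation if the service cannot run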

  • Install the SMF service by importing the manifest:

    Code Block
    :; svccfg import /var/svc/manifest/network/iscsi/mount-main-pool.xml

    This creates the (disabled) service for main-pool importing, which calls the script above as the method script. 
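    You can check that the service registered and remains disabled, assuming the default FMRI:

    Code Block
    :; svcs mount-main-pool
    ### STATE should read "disabled" at this point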

  • Remove the pool in question from the auto-import database by exporting it (NOTE: this unshares and unmounts all its filesystems in the process, and will block or fail on any active users, over-mounted filesystem nodes, used volumes, etc.):

    Code Block
    :; POOL=$POOL_PHYS
    :; zpool export $POOL

    As a result, this pool should no longer be cached in /etc/zfs/zpool.cache for faster and automated imports at OS startup.
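    A quick (if crude) way to verify this is to look for the pool's name in the default cachefile; zdb can also dump the cached configuration:

    Code Block
    :; strings /etc/zfs/zpool.cache | grep $POOL
    ### No output is expected here
    :; zdb -C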

  • Protect the mountpoint from false positives in the method script's directory test:

    Code Block
    :; df -k /$POOL
    :; ls -la /$POOL
    ### Make sure that the pool is exported and its mountpoint directory does not exist or is empty
     
    :; mkdir /$POOL
    :; /bin/chmod S+ci /$POOL

    The immutable mountpoint cannot be written to even by root – for example, when an untimely zfs mount would try to create subdirectories under it without the pool's root dataset mounted first, which would break our setup.
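    You can verify the protection – and, should you ever need to, remove it again – like this:

    Code Block
    :; touch /$POOL/probe
    ### This should fail even for root while the pool is not mounted

    ### Only if you ever need to remove the immutable attribute again:
    :; /bin/chmod S-ci /$POOL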

  • Enable the service, which should import and mount your pool; you can monitor the progress, and ultimately the pool status, in the service log:

    Code Block
    ### Deliberately not done earlier, so that you have time to revise the script instead of blindly copy-pasting ;)
    :; chmod +x /lib/svc/method/mount-main-pool
     
    ### Temp-enable while we are testing
    :; svcadm enable -t mount-main-pool
    :; tail -f /var/svc/log/*mount-main-pool*log
     
    ### If all is OK, you may want to enable the service to start at boot... or maybe not.
    ### First create the test directory on the now-mounted pool (the method script's test target):
    :; mkdir /$POOL/tmp
    :; svcadm enable mount-main-pool
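    Afterward you can confirm that the service is online and the pool is mounted:

    Code Block
    :; svcs -l mount-main-pool
    :; zpool status $POOL
    :; df -k /$POOL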

...