How to migrate the root pool
  • I recently wanted to migrate the root pool to a new device.  This turned out to be easy to do, using existing facilities.  The original root pool was on an old 80-gig disk.  This system also had a data pool on a newer 1 TB disk.  Here's what the `format' command showed for them:

       0. c2t0d0 <Unknown-Unknown-0001 cyl 9726 alt 2 hd 255 sec 63>
          /pci@0,0/pci1043,8389@11/disk@0,0
       1. c2t2d0 <ATA-ST31000524AS-JC4B-931.51GB>
          /pci@0,0/pci1043,8389@11/disk@2,0
  • I wanted to migrate the root pool to a new SSD.  The `format' command was available to prepare the SSD, the `zpool' command to create the pool on that new device, and `beadm' and `installgrub' to perform the migration.  That part worked out nicely, although a variety of other commands were needed to complete the migration.
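  • In outline (a sketch only; the exact commands, device names, and output appear in the steps below), the migration came down to creating the new pool, copying the boot environment onto it, and installing the boot blocks:
    # zpool create -o version=28 rpool1 c2t1d0s0
    # beadm create -p rpool1 oi_151a6x
    # installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t1d0s0
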
  • Add the SSD: Just shut down the computer, install the SSD hardware, and boot the system.  Here's the new output from `format':
       0. c2t0d0 <Unknown-Unknown-0001 cyl 9726 alt 2 hd 255 sec 63>
          /pci@0,0/pci1043,8389@11/disk@0,0
       1. c2t1d0 <ATA-SanDiskSDSSDP06-0 cyl 9966 alt 2 hd 224 sec 56>
          /pci@0,0/pci1043,8389@11/disk@1,0
       2. c2t2d0 <ATA-ST31000524AS-JC4B-931.51GB>
          /pci@0,0/pci1043,8389@11/disk@2,0
  • Prepare the SSD: Create the fdisk partition within `format':
    format> fdisk
No fdisk table exists. The default partition for the disk is:
a 100% "SOLARIS System" partition
Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y

  • Create the slice:
    partition> 0
Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                 0          (0/0/0)           0
Enter partition id tag[unassigned]: root
Enter partition permission flags[wm]:
Enter new starting cyl[1]: 3
Enter partition size[0b, 0c, 3e, 0.00mb, 0.00gb]: $
partition> p
Current partition table (unnamed):
Total disk cylinders available: 9965 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       3 - 9964         59.59GB    (9962/0/0) 124963328
  1 unassigned    wm       0                 0          (0/0/0)           0
  2     backup    wu       0 - 9964         59.61GB    (9965/0/0) 125000960
  3 unassigned    wm       0                 0          (0/0/0)           0
  4 unassigned    wm       0                 0          (0/0/0)           0
  5 unassigned    wm       0                 0          (0/0/0)           0
  6 unassigned    wm       0                 0          (0/0/0)           0
  7 unassigned    wm       0                 0          (0/0/0)           0
  8       boot    wu       0 - 0             6.12MB     (1/0/0)       12544
  9 unassigned    wm       0                 0          (0/0/0)           0
partition> l
Ready to label disk, continue? y
  • Get the root pool version (so that the new pool can be created at the same version):
    # zpool get all rpool
NAME PROPERTY VALUE SOURCE
rpool size 74G -
...
rpool version 28 local

  • Try to create the new root pool, with a new pool name:
    # zpool create -o version=28 rpool1 c2t1d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c2t1d0s0 overlaps with /dev/dsk/c2t1d0s2

  • Try again with the force option (the overlap is expected, since slice 2 is the conventional whole-disk `backup' slice):
    # zpool create -f -o version=28 rpool1 c2t1d0s0
# zpool list
NAME SIZE ALLOC FREE EXPANDSZ CAP DEDUP HEALTH ALTROOT
dpool 928G 85.6G 842G 2.50M 9% 1.00x ONLINE -
rpool 74G 7.05G 66.9G - 9% 1.00x ONLINE -
rpool1 59.5G 108K 59.5G - 0% 1.00x ONLINE -

  • Create the BE on the new device, with a new name:
    # beadm create -p rpool1 oi_151a6x
WARNING: menu.lst file /rpool1/boot/grub/menu.lst does not exist,
generating a new menu.lst file
Created successfully

  • Verify that it exists:
    # beadm list
BE Active Mountpoint Space Policy Created
oi_151a6 NR / 5.98G static 2012-09-13 16:33
oi_151a6x R - 4.15G static 2013-06-06 15:55
openindiana - - 13.5M static 2012-09-13 08:55

  • Install the boot blocks:
    # installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t1d0s0
Updating master boot sector destroys existing boot managers (if any).
continue (y/n)?y
stage2 written to partition 0, 277 sectors starting at 50 (abs 12594)
stage1 written to partition 0 sector 0 (abs 12544)
stage1 written to master boot sector

  • Change the BIOS boot order by shutting the system down and entering the BIOS setup.  Then put the SSD first in the boot order and reboot.
  • At this point, I upgraded to oi_151a7, which confirmed that the new root pool was functional.  Here's the list of boot environments before the upgrade:
    # beadm list
BE Active Mountpoint Space Policy Created
oi_151a6 R - 6.01G static 2012-09-13 16:33
oi_151a6x NR / 4.33G static 2013-06-06 15:55
openindiana - - 13.5M static 2012-09-13 08:55

  • Upgrade:
    # pkg image-update --be-name oi_151a7
WARNING: The boot environment being modified is not the active one. Changes made in the active BE will not be reflected on the next boot.
Packages to update: 895
Create boot environment: Yes
Create backup boot environment: No
...
A clone of oi_151a6x exists and has been updated and activated.
On the next boot the Boot Environment oi_151a7 will be
mounted on '/'. Reboot when ready to switch to this updated BE.

  • Check the BEs again:
    # beadm list
BE Active Mountpoint Space Policy Created
oi_151a6 R - 6.01G static 2012-09-13 16:33
oi_151a6x N / 101K static 2013-06-06 15:55
oi_151a7 R - 5.31G static 2013-06-06 16:56
openindiana - - 13.5M static 2012-09-13 08:55

  • Shut down the OS:
    # init 5
updating //platform/i86pc/boot_archive
updating //platform/i86pc/amd64/boot_archive

  • Press the `Power' button to reboot.  Confirm that the upgrade was successful.  Notice that there are still two active boot environments:
    $ beadm list
BE Active Mountpoint Space Policy Created
oi_151a6 R - 6.01G static 2012-09-13 16:33
oi_151a6x - - 16.8M static 2013-06-06 15:55
oi_151a7 NR / 5.33G static 2013-06-06 16:56
openindiana - - 13.5M static 2012-09-13 08:55

  • Some of the old root pool is still in use.  My home directory was on rpool/export/home/mills.  To simplify this migration, I decided to move it to the data pool.  First, create new filesystems on the data pool:
    # zfs create dpool/export
# zfs create dpool/export/home

  • My home directory in the /etc/passwd file was automounted to /home/mills from /export/home/mills.  The first thing I did was to copy it to /dpool/export/home/mills using `cpio' (a sketch of that copy appears after the next code block).  Then I edited /etc/passwd to change my home directory to /dpool/export/home/mills.  After that change, it was no longer automounted.  After a reboot, I confirmed that the old root pool was no longer needed for my home directory:
    # zfs unmount rpool/export/home/mills
# zfs unmount rpool/export/home
# zfs unmount rpool/export
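  • For reference, the cpio copy of the home directory mentioned above: its exact invocation wasn't recorded, but a typical pass-mode pipeline, run as root so that ownership and timestamps are preserved, would be along these lines:
    # cd /export/home/mills
    # find . -depth -print | cpio -pdm /dpool/export/home/mills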

  • Still, there are a few pieces left:
    # zfs list | egrep 'dump|swap'
rpool/dump 895M 65.5G 895M -
rpool/swap 952M 65.8G 637M -

  • To move the dump device, first get the properties of the old one:
    $ zfs get all rpool/dump | egrep 'SOURCE|local'
NAME PROPERTY VALUE SOURCE
rpool/dump volsize 895M local
rpool/dump checksum off local
rpool/dump compression off local
rpool/dump refreservation none local
rpool/dump dedup off local

  • Create another one on rpool1:
    # zfs create -o checksum=off -o compression=off -o refreservation=none -o dedup=off -V 895M rpool1/dump

  • Try to move it:
    # dumpadm -d /dev/zvol/dsk/rpool1/dump
dumpadm: dump device /dev/zvol/dsk/rpool1/dump is too small to hold a system dump
dump size 1812297728 bytes, device size 938475520 bytes
# dumpadm
      Dump content: kernel pages
       Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory: /var/crash/ati
  Savecore enabled: no
   Save compressed: on

  • Expand the volume (see the note after this step) and try again:
    # dumpadm -d /dev/zvol/dsk/rpool1/dump

          Dump content: kernel pages
           Dump device: /dev/zvol/dsk/rpool1/dump (dedicated)
    Savecore directory: /var/crash/ati
      Savecore enabled: no
       Save compressed: on
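
  • The expand command itself isn't shown above.  Growing a zvol means raising its volsize property, and the later `zfs list' output shows rpool1/dump at 2 GB, so the command would have been along the lines of:
    # zfs set volsize=2g rpool1/dump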

  • Now, get the properties of the old swap device:
    $ zfs get all rpool/swap | egrep 'SOURCE|local'
NAME PROPERTY VALUE SOURCE
rpool/swap volsize 895M local
rpool/swap refreservation 952M local

  • Create a new one on rpool1:
    # zfs create -o refreservation=952M -V 895M rpool1/swap

  • Move the swap device by editing /etc/vfstab (an alternative using the swap command is sketched just after the diff):
root@ati:/etc# cp -p vfstab vfstab-
root@ati:/etc# ex vfstab
root@ati:/etc# diff vfstab- vfstab
12c12
< /dev/zvol/dsk/rpool/swap - - swap - no -
---
> /dev/zvol/dsk/rpool1/swap - - swap - no -
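
  • As an aside (not what was done here), the swap device can also be switched on the running system with the swap command, rather than waiting for a reboot to pick up the vfstab change:
    # swap -a /dev/zvol/dsk/rpool1/swap
    # swap -d /dev/zvol/dsk/rpool/swap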

  • Reboot and confirm that rpool is no longer used:
    # dumpadm
      Dump content: kernel pages
       Dump device: /dev/zvol/dsk/rpool1/dump (dedicated)
Savecore directory: /var/crash/ati
  Savecore enabled: no
   Save compressed: on
# swap -l
swapfile dev swaplo blocks free
/dev/zvol/dsk/rpool1/swap 96,2 8 1832952 1832952
# beadm list
BE Active Mountpoint Space Policy Created
oi_151a6 R - 6.01G static 2012-09-13 16:33
oi_151a6x - - 16.8M static 2013-06-06 15:55
oi_151a7 NR / 5.34G static 2013-06-06 16:56
openindiana - - 13.5M static 2012-09-13 08:55
# zpool list
NAME SIZE ALLOC FREE EXPANDSZ CAP DEDUP HEALTH ALTROOT
dpool 928G 85.6G 842G 2.50M 9% 1.00x ONLINE -
rpool 74G 6.19G 67.8G - 8% 1.00x ONLINE -
rpool1 59.5G 7.17G 52.3G - 12% 1.00x ONLINE -

  • Export the pool and observe the result:
    # zpool export rpool
# zpool list
NAME SIZE ALLOC FREE EXPANDSZ CAP DEDUP HEALTH ALTROOT
dpool 928G 85.6G 842G 2.50M 9% 1.00x ONLINE -
rpool1 59.5G 7.18G 52.3G - 12% 1.00x ONLINE -
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
dpool 85.6G 828G 24K /dpool
dpool/export 83.8G 828G 22K /dpool/export
dpool/export/home 83.8G 828G 83.8G /dpool/export/home
dpool/opt 1.82G 828G 1.82G /dpool/opt
dpool/opt/local 21K 828G 21K /dpool/opt/local
rpool1 8.10G 50.5G 36.5K /rpool1
rpool1/ROOT 5.17G 50.5G 31K legacy
rpool1/ROOT/oi_151a6x 16.8M 50.5G 4.33G /
rpool1/ROOT/oi_151a7 5.16G 50.5G 4.27G /
rpool1/dump 2.00G 50.5G 2.00G -
rpool1/swap 952M 51.4G 16K -
# getent passwd mills
mills:x:107:10:Gary Mills:/dpool/export/home/mills:/bin/ksh
# beadm list
BE Active Mountpoint Space Policy Created
oi_151a6x - - 16.8M static 2013-06-06 15:55
oi_151a7 NR / 5.34G static 2013-06-06 16:56

  • I could have resumed automounting my home directory by changing the mount point of dpool/export to /export, but I decided to leave it the way it was.
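    Had I resumed it, the change would have amounted to roughly the following (a sketch, since I didn't actually run it), plus putting my /etc/passwd entry back to /export/home/mills:
    # zfs set mountpoint=/export dpool/export
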
  • Here's another upgrade, just to confirm that the new root pool was correct:
    # pkg image-update --be-name oi_151a8
Packages to remove: 16
Packages to install: 6
Packages to update: 879
Create boot environment: Yes
Create backup boot environment: No
DOWNLOAD PKGS FILES XFER (MB)
Completed 901/901 22745/22745 566.2/566.2
PHASE ACTIONS
Removal Phase 13844/13844
Install Phase 12382/12382
Update Phase 23637/23637
PHASE ITEMS
Package State Update Phase 1780/1780
Package Cache Update Phase 895/895
Image State Update Phase 2/2
...
root@ati:~# beadm list
BE Active Mountpoint Space Policy Created
oi_151a6x - - 16.8M static 2013-06-06 15:55
oi_151a7 N / 11.4M static 2013-06-06 16:56
oi_151a8 R - 8.76G static 2013-08-11 16:12
# bootadm list-menu
the location for the active GRUB menu is: /rpool1/boot/grub/menu.lst
default 2
timeout 30
0 oi_151a6x
1 oi_151a7
2 oi_151a8
# init 5

  • Press the power switch to reboot.  The upgrade was successful, completing the migration to a new device.

 


4 Comments

  1. Hi,

    I have also wanted to try an SSD (Samsung SSD 840, 120 GB). My current rpool was on a 160 GB 7200 RPM HD. I used a slightly different approach, which worked (I am writing this now from the system booted from the SSD).

    First, I created the same partition layout as the existing rpool had (slices 0, 2 and 8 - similar to this example). BTW, I attached the SSD via a USB docking station...

    Then I created the new pool (I found the disk ID using the format and fdisk utilities, in the steps mentioned at the beginning of this page):

    $ pfexec zpool create -f RPOOL /dev/dsk/c6t0d0s0

    The next phase is to take a recursive snapshot of rpool, then send it (verbose = -v, recursive = -R) and receive it (keep structure = -d, force = -F) into the new root pool (I named it RPOOL):

    $ pfexec zfs snapshot -r rpool@Moove
    $ pfexec zfs send -vR rpool@Moove | pfexec zfs recv -Fd RPOOL

    BTW, my installation has the user home directories on a second HD, as well as the /opt directory. The boot disk (rpool in this environment) had 26 GB of space occupied, and the system took 28 minutes under normal activity to send/receive the pool...

    Now we need to make the new disk bootable. Compare the bootfs property of the old root pool with the new one, and set it:

    $ zpool get bootfs rpool
    NAME   PROPERTY  VALUE                                 SOURCE
    rpool  bootfs    rpool/ROOT/oi151_181hipster_20131204  local
    
    $ zpool get bootfs RPOOL
    NAME   PROPERTY  VALUE                                 SOURCE
    RPOOL  bootfs    -                                     local
    
    $ pfexec zpool set bootfs=RPOOL/ROOT/oi151_181hipster_20131204 RPOOL

    After this, the new pool has to be exported and GRUB installed:

    $ pfexec zpool export RPOOL
    $ pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c6t0d0s0

    Now you can shut down the system and shuffle the disks. If you have put the SSD on the same controller, there is nothing more to do... But if you have changed its location, then you have to fix the BIOS boot order.

    I found it easy enough to boot the system FIRST from the latest /hipster USB text installation image (less than 1 GB, easy to create - and my installation IS a /hipster one) in order to import the copy of rpool under a new name:

    $ zpool import -f RPOOL rpool
    $ init 6

    After the reboot starts, skip booting from the removable device, and your system should now start from the SSD.

    My impression is that all this is NOT enough to get all the benefits of SSD usage...

    Actually, I could not say that the system is significantly faster than booting from a normal HD; some optimizations might still be needed.

    This is how I moved rpool to the SSD (pardon my mistakes in English).

    Regards.

    P.S. Resources used (beside this page):

    1. Predrag

      You might be able to do this via "zpool split" without using snapshots (I have not tried all these steps ... yet)

      Let's assume

      • you have an rpool that is a bare drive, or a mirrored set where the drive or one of the members is "c1t0d0s0"
      • you want to migrate the root pool to a new disk (same size or maybe bigger), "c1t2d0s0"
      • Note: I'm not sure about any issues that might be caused by a 512-byte vs 4K disk sector mismatch,
        so let's assume the sector sizes match on all the disks (old and new).

      Note "zpool split" is not documented in in the illumos man page Bug #2897

      Step 1 - I imagine a "cleaner procedure" without relying on snapshots might be something like the following:

       # zpool attach rpool c1t0d0s0 c1t2d0s0

      # zpool status rpool
      *** wait for resilver to complete ***

      Step 2 - Now split off the new device; it's a perfect clone (by default the split takes the last device added - but we could specify c1t2d0s0 as the last arg)

      # zpool split rpool rpool2

       # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t2d0s0

      [Optional] You have your clone, albeit with a different pool name. However, what if your new drive is larger? You're not using the extra space - but you can:

      zfs list

      # zpool set autoexpand=on rpool2

      [Optional] Not done yet: look at how much space you can expand into, and then use your new space:

      zpool get expandsize rpool2

      zpool online -e rpool2

      zfs list

      # zpool set autoexpand=off rpool2

      [Optional] At this point the new cloned disk may be bigger than the disks you cloned from; if so, those old disks can no longer be used later as part of a mirror with the new disk.

      Step 3. Time to set up the cloned disk to boot (we need to change its pool name), so shut down and power off:

      # init 0

      Step 4. Remove the old OS drive (or drives), which is either the original stand-alone disk or the entire original mirror set.

      Step 5. Boot from the latest /hipster USB text image - the only way I know of to change the pool name back to 'rpool'.

      Step 6. Now import the device and change its name from rpool2 to rpool

      # zpool import -f rpool2 rpool

      # init 6

      IMHO Step 1 & 2 make a perfect clone except for the pool name - it would be cool if there was a zpool command to rename the split e.g. rpool2 to rpool WITHOUT bringing it online as it would have a "name" conflict and then you remove it offsite as a hot spare OS clone backup without rebooting to a /hipster image to rename it.

       

  2. Procedure for SPARC (as reported by igork on #oi-dev): you need to install the ZFS boot block with:

    # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c2t1d0s0

  3. Actually, the safest and most straightforward way to move the root pool to a very different disk with, say, a different block size, in an x86/x64 machine, is probably as follows:

    Assume you have

    1. An old server, with one disk in a root pool. Call that disk c1t0d0s0
    2. In the same network, another machine where you can free up enough space to store a snapshot of the entire root pool, and can NFS mount that storage space. Say you mount that NFS storage as /mnt/RootSnaps.
    3. do
      zpool get bootfs rpool (note down the bootfs value)
      zfs snapshot -r rpool@MoveSnap
      zfs send -R rpool@MoveSnap > /mnt/RootSnaps/rpool@MoveSnap

    4. Stop the server.
    5. Replace the old disk with the new disk, and boot from a live medium that can A: NFS mount, B: create zpools, C: run installgrub and D: run format/fdisk and all that.
    6. Set up the new disk with labels.
    7. create a pool called rpool on the new disk
    8. mount /mnt/RootSnaps from wherever it is
    9. do
      zfs receive -Fd rpool < /mnt/RootSnaps/rpool@MoveSnap
    10. when that is done, do
      zpool set bootfs=<bootfsvalue> rpool    # bootfs value from step 3, above
      and last
      installgrub /boot/grub/{stage1,stage2} /dev/rdsk/cPtQd0s0
      where you found the proper device name in the format/fdisk run in step 6.
    11. Rebooting from the new disk should now work, if the BIOS can boot from 4K blocks.

    This should, I think, avoid the zpool rename and other minor gotchas.

    This Web page describes how to do that in Oracle Solaris:

    https://docs.oracle.com/cd/E23824_01/html/821-1448/recover-4.html