Info about actually using the 4540 is at SunX4540ConfigSolaris

Migrating pre-installed X4540 Solaris 10 to boot from flash drive

NOTE: There's probably no need to do any of this manually except the partitioning stage. See Solaris Live Upgrade and commands like lucreate and luactivate. I didn't find out about this functionality until after I started.

NOTE: Before you do all of this, be sure to read the referenced docs at the end about limiting writes to the flash drive and configure the system as such. In brief:
  • Eliminate swap
  • Set kernel crash dump to /var
  • Set /var to a non-flash location
  • Set /tmp to be tmpfs (in-memory)
  • When you install d-cache don't let it log to /opt/d-cache/ or anywhere else on the flash
See the docs for how to do all of this; they explain it in detail. (Well...I doubt the d-cache docs explain anything, but I'm not explaining it either.) A rough sketch of the relevant commands follows below.
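A minimal sketch of those write-limiting steps, with placeholder device names (the md device and paths here are examples, not what's actually on the box; check the docs before running any of it):

swap -l                          (list current swap devices)
swap -d /dev/md/dsk/d20          (delete whatever swap device it listed)
dumpadm -d /dev/md/dsk/d20       (dedicated crash dump device off the flash, since swap is gone)
dumpadm -s /var/crash            (savecore directory stays under /var, which won't be on flash)

And the vfstab line that keeps /tmp in memory (this is the Solaris default anyway):

swap    -    /tmp    tmpfs    -    yes    -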

Assumptions we are working with:
  • A Compact Flash drive will wear out very quickly if you write to it much. No wear leveling as in a modern SSD.
  • We are going to mount /var either from a network drive or from part of a zpool. Tentatively I think it is best to move / over to the flash, create 4 raidz2 pools, and mount /var from a pool volume. That way we lose 5GB on one pool but fully utilize all space on all disks, instead of losing 5GB on every disk for the slice we'd steal to hold /var outside a zpool (a raidz vdev only uses as much of each disk as its smallest member).
  • It's probably easier to set up the drive using the already installed Solaris OS, then copy everything over and re-install the bootloader. The bootloader is GRUB; we know GRUB, right?
  • Solaris...is not Linux. It's not even FreeBSD. I think it's an OS but no doubt it could also be a medieval torture device.

Formatting/partitioning the flash drive

We do this from inside the preinstalled OS:
format

.....(list scrolls by).....

48. c8d0 <DEFAULT cyl 1989 alt 2 hd 255 sec 63>  FLASH
          /pci@0,0/pci-ide@4/ide@0/cmdk@0,0
Specify disk (enter its number): 48

format> partition


PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit

I chose to select a pre-defined table (there was only one), then changed the '0' partition by modifying its size to 1986c and its tag to root, roughly as sketched below.
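The interactive part went roughly like this (prompts and defaults are reconstructed from memory and may differ slightly on your system):

partition> select
Specify table (enter its number)[0]: 0
partition> 0
Enter partition id tag[unassigned]: root
Enter partition permission flags[wm]: wm
Enter new starting cyl[0]: 3
Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: 1986c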
Note: Volume label is asked for when you write this at the end.
partition> print
Volume:  FLASH
Current partition table (original):
Total disk cylinders available: 1989 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       3 - 1988       15.21GB    (1986/0/0) 31905090
  1 unassigned    wm       0               0         (0/0/0)           0
  2     backup    wu       0 - 1988       15.24GB    (1989/0/0) 31953285
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 unassigned    wm       0               0         (0/0/0)           0
  8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
  9 alternates    wm       1 -    2       15.69MB    (2/0/0)       32130 

To label the disk and write the partition table, enter "label" at the prompt.

Then we finally get to make a filesystem:
root@thor01 # newfs /dev/rdsk/c8d0s0
newfs: construct a new file system /dev/rdsk/c8d0s0: (y/n)? y
Warning: 702 sector(s) in last cylinder unallocated
/dev/rdsk/c8d0s0:       31905090 sectors in 5193 cylinders of 48 tracks, 128 sectors
        15578.7MB in 325 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
......
super-block backups for last 10 cylinder groups at:
 30973344, 31071776, 31170208, 31268640, 31367072, 31457312, 31555744,
 31654176, 31752608, 31851040

Solaris didn't come with a separate /boot partition, and we want to end up with only a / partition on the flash drive plus /var "somewhere else". To mount, we switch over to the block device under /dev/dsk; during the partition/newfs phase you use the raw device under /dev/rdsk.
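The mount point has to exist first, of course; /flashgordon is just the name I used:
mkdir /flashgordon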
mount /dev/dsk/c8d0s0 /flashgordon

/dev/dsk/c8d0s0         15G    15M    15G     1%    /flashgordon

And that's all for today. Tomorrow's show: Copying it over, running grub, and switching var to "somewhere else".

Copying the filesystem and booting from flash drive

Ideally we'd do this while booted into the Solaris single-user shell available from the OS DVD. However, that environment does not initialize the SVM (RAID1) metadevices where our installation resides. So instead of figuring that out, we're going to try "fssnap" in single-user mode in combination with "ufsdump". Tar would probably work too.

Snapshot filesystems
I may have been able to skip this step. My reasoning was that it would assure me of a stable/unchanging filesystem to feed to ufsdump. However, running fssnap required shutting down ntp to achieve a write-lock, so I'm not sure it wouldn't have been stable anyway. You need to give fssnap a backing store. The output is the snapshot device.

Snapshot root filesystem (vol1 is a ZFS vol that existed already):
root@thor01 # svcadm disable -t ntp
root@thor01 # fssnap -o bs=/vol1/root.snapshot /
/dev/fssnap/0

Restore filesystem to flash drive

Now we dump the snapshot onto the flash drive. We only care about moving "/", since /var is going to be moved elsewhere later (anywhere but the flash drive).

/dev/dsk/c8d0s0         15G    15M    15G     1%    /flashgordon

root@thor01 # cd /flashgordon/

root@thor01 # ufsdump f - /dev/fssnap/0 | ufsrestore xf -
  DUMP: Date of this level 9 dump: Fri Sep 04 11:31:37 2009
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/rfssnap/0 to standard output.
  DUMP: Mapping (Pass I) [regular files]
  DUMP: Mapping (Pass II) [directories]
  DUMP: Writing 32 Kilobyte records
  DUMP: Estimated 14693940 blocks (7174.78MB).
  DUMP: Dumping (Pass III) [directories]
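When the dump/restore has finished, the snapshot can be torn down and ntp re-enabled. A sketch, assuming the backing-store path used above (check what fssnap actually created under /vol1 before removing anything):

root@thor01 # fssnap -d /
root@thor01 # rm /vol1/root.snapshot
root@thor01 # svcadm enable ntp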

Install GRUB, modify /etc/vfstab

Notice that we mount the flash disk with noatime to limit how much writing occurs. Read-only mounting could be something else to explore when everything is all set up.

installgrub -m /flashgordon/boot/grub/stage1 /flashgordon/boot/grub/stage2 /dev/rdsk/c8d0s0

Modify /etc/vfstab ON THE NEW FILESYSTEM NOT THE OLD ONE! DAMMIT! The disk is going to be c0d0s0 when we reboot after changing the boot order in the BIOS, so use that.
/dev/dsk/c0d0s0 /dev/rdsk/c0d0s0 /              ufs     1       no      noatime
/dev/md/dsk/d10 /dev/md/rdsk/d10 /oldroot       ufs     1       no      -

Note: All of the below is what I THINK I should have done first, based on my recovery from the failed first attempt. There was more to the failure than just editing the OLD vfstab instead of the NEW one. I did all of the below from the Solaris failsafe environment or from a failed-boot "maintenance mode", but it would have been better to do it before rebooting the first time.

Create a new boot signature (bootsign) on the flash drive. Or delete /boot/grub/bootsign/rootfs0 from the old root fs and don't update anything; if I did it over I'd do that, since the bootsign file did get copied over from the old root.
touch /flashgordon/boot/grub/bootsign/rootfs_flash

Update /flashgordon/boot/grub/menu.lst by adding/changing this line after the current Solaris boot entries (the full entry we end up booting is shown further below).
findroot(rootfs_flash,0,a)

Now we have to update the various places where the boot device path is set.

Update the bootpath in the /flashgordon/boot/solaris/bootenv.rc file:
setprop bootpath '/pci@0,0/pci-ide@4/ide@0/cmdk@0,0:a'
OR "chroot /flashgordon /bin/bash" and run:
eeprom bootpath=/pci@0,0/pci-ide@4/ide@0/cmdk@0,0:a
(bootenv.rc simulates the eeprom on x86 Solaris)
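Either way, it's worth reading the value back from the copied root before rebooting, for example:
grep bootpath /flashgordon/boot/solaris/bootenv.rc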

Update the /etc/system file. Because the old system booted from an SVM metaroot, we could try the metaroot command (if not in a rescue state, and chrooted to the new root):
metaroot /dev/dsk/c0d0s0  

Otherwise, open /flashgordon/etc/system and set this line:
rootdev:/pci@0,0/pci-ide@4/ide@0/cmdk@0,0:a

Update the boot-archive:
bootadm update-archive -R /flashgordon
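As a quick sanity check that -R is pointing at the copy and not the live system, bootadm can also list what gets pulled into the archive under that alternate root (optional):
bootadm list-archive -R /flashgordon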

In /flashgordon/boot/grub/menu.lst, append a -r to the kernel line of the entry we are going to boot. Devices will need to be reconfigured because we're going to re-order them. You can also do this by editing the entry at boot time from the GRUB menu. Don't forget to remove the "-r" later if you put it into the file.
title Solaris 10 5/08 s10x_u5wos_10 X86 (VGA)
findroot (rootfs_flash,0,a)
kernel /platform/i86pc/multiboot -B console=text -r
module /platform/i86pc/boot_archive

Reboot!
root@thor01# reboot

On bootup, change the boot order of the drives in the BIOS so the flash disk comes first. You can also disable the LSI controllers so they are not available to boot from; it makes the boot go faster.

If it still isn't finding the flash drive at c0d0, then enter the rescue environment and fix devices. I don't think this will be required as long as you do a "reconfigure" boot:
devfsadm -r /a 

After all this, you too can have a flash-disk based Thor. It should have come preconfigured. I'll be sure to take up my complaints with Larry if there's anything left of Sun.

With that all said and done: DON'T DO WHAT I DID! Please explore the Live Upgrade feature and consider using lucreate to make a new boot environment on the flash disk. Make sure it does not copy /var to the flash: either tell it to keep using the old /var, or get your new /var all set up first and point it at that.

Moving /var onto a zpool

The remaining steps are to create our zpools, move /var onto one of them, and install d-cache or something like that. Please see SunX4540ConfigSolaris for info on setting up the zpools. Once that's done, we move /var simply by copying, changing /etc/vfstab, and rebooting. At that point all disks will be free for ZFS use.

So, after creating at least 1 zpool, let's make a ZFS filesystem for /var and set a 10GB quota:
zfs create pool1/var
zfs set quota=10G pool1/var

An alternative is to create an emulated block device (a zvol). I didn't see any reason to go this route, but it would let us treat the device as a plain old disk and make a UFS filesystem on it. Example:
zfs create -V 10gb pool1/var
newfs /dev/zvol/dsk/pool1/var
mount /dev/zvol/dsk/pool1/var /newvar

I did it with a regular ZFS filesystem. Now let's copy the files over from /var:
cp -RPp /var/* /pool1/var/
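Note that the shell glob in that cp skips any dot files sitting directly under /var. If you want a more faithful copy, something along these lines should also work (a sketch, not what I actually ran):
cd /var && find . -xdev -print | cpio -pdm /pool1/var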

Though ZFS filesystems automatically mount at boot by default, they seem to do so AFTER vfstab is processed. So some of the /var-dependent mounts in vfstab will fail if you simply set the ZFS mountpoint to /var (I tried this by going to single-user mode and unmounting the old /var; it did not work on reboot).

So instead we set the mountpoint to "legacy" and mount it from vfstab on reboot:
zfs set mountpoint=legacy pool1/var

vfstab:
pool1/var    -   /var     zfs    -     no     - 

fsck doesn't apply to ZFS, so there's no fsck device and no fsck pass.

Reboot to try it out, and we're done. See SunX4540ConfigSolaris to create the rest of the zpools using the disks now freed from the tyranny of running / and /var. Actually two disks, since they were both mirrored. This will also make zpool administration much easier since all disks are equal.


References:
  • Solaris 10 collection: http://docs.sun.com/app/docs/prod/solaris.10?l=en&a=view
  • X4540 Collection: http://docs.sun.com/app/docs/prod/sf.x4540?l=en&a=view
  • OS Installation guide (Flash Installation documentation/tips): http://docs.sun.com/app/docs/doc/820-4893-13?l=en

-- BenMeekhof - 20 Aug 2009