
HP-UX Dynamic Root Disk, Solaris Live Upgrade and AIX Multibos



  1. HP-UX Dynamic Root Disk, Solaris Live Upgrade and AIX Multibos. Dusan Baljevic, Sydney, Australia

  2. Cloning in Major Unix and Linux Releases: AIX Alternate Root and Multibos (AIX 5.3 and above); HP-UX Dynamic Root Disk (DRD); Linux Mondo Rescue and Clonezilla; Solaris Live Upgrade

  3. HP-UX Dynamic Root Disk Features • Dynamic Root Disk (DRD) provides the ability to clone an HP-UX system image to an inactive disk. • Supported on HP PA-RISC and Itanium-based systems. • Supported on hard partitions (nPars), virtual partitions (vPars), and Integrity Virtual Machines (Integrity VMs), running the following operating systems with roots managed by the following volume managers (except as specifically noted for rehosting): o HP-UX 11i Version 2 (11.23) September 2004 or later o HP-UX 11i Version 3 (11.31) o LVM (all O/S releases supported by DRD) o VxVM 4.1 o VxVM 5.0
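
A quick way to confirm which DRD product and version is actually installed on a given system is the standard SD listing command; a minimal sketch:

# swlist -l product DynRootDisk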

  4. HP-UX DRD Benefit: Minimizing Planned Downtime. Without DRD: software management may require extended downtime. With DRD: install/remove software on the clone while applications continue running. [Slide diagram: patches are installed on the inactive cloned vg00 (clone disk and clone mirror) while the original vg00 (boot disk and boot mirror) stays active; activating the clone then swaps the roles, so the patched clone becomes the active vg00 and the changes take effect on reboot.] A command-level sketch of this cycle follows.
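
A minimal sketch of the patch cycle described above, assuming /dev/disk/disk8 is an unused target disk and PHCO_99999 is a hypothetical patch staged in a local depot:

# drd clone -t /dev/disk/disk8 -x overwrite=true
# drd runcmd swinstall -s /var/depots/patches PHCO_99999
# drd activate -x reboot=true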

  5. HP-UX Dynamic Root Disk Features - continued • Product: DynRootDisk Version: A.3.3.1.221 (B.11.xx.A.3.4.x will be the current version number as of September 2009) • The target disk must be a single physical disk or SAN LUN. • The target disk must be large enough to hold all of the root volume file systems. DRD allows the cloning of the root volume group even if the master O/S is spread across multiple disks (it is a one-way, many-to-one operation). • On Itanium servers, all partitions are created; EFI and HP-UX partitions are copied. This release of DRD does not copy the HPSP partition. • The copy of /etc/lvmtab on the cloned image is modified by the clone operation so that it reflects the desired volume groups when the clone is booted.

  6. HP-UX Dynamic Root Disk Features - continued • Only the contents of vg00 are copied. • Due to system calls DRD depends on, DRD expects legacy Device Special Files (DSFs) to be present and the legacy naming model to be enabled on HP-UX 11i v3 servers. HP recommends that only a partial migration to persistent DSFs be performed. • If the disk is currently in use by another volume group that is visible on the system, the disk will not be used. • If the disk contains LVM, VxVM, or boot records but is not in use, the "-x overwrite" option must be used to tell DRD to overwrite the disk. Already-created clones will contain boot records; the drd status command will show the disk that is currently in use as an inactive system image.

  7. HP-UX Dynamic Root Disk Features - continued • All DRD processes, including "drd clone" and "drd runcmd", can be safely interrupted by issuing Control-C (SIGINT) from the controlling terminal or by issuing kill -HUP <pid> (SIGHUP). This action causes DRD to abort processing. Do not interrupt DRD with the kill -9 <pid> command (SIGKILL), which fails to abort safely and does not perform cleanup. Refer to the "Known Issues" list on the DRD web page (http://www.hp.com/go/DRD) for cleanup instructions after drd runcmd is interrupted. • The Ignite server will only be aware of the clone if it is mounted during a make_*_recovery operation.

  8. HP-UX Dynamic Root Disk Features - continued • DRD does not provide a mechanism for resizing file systems during a clone operation. • After the clone is created, file system sizes can be changed manually on the inactive system without an immediate reboot; the whitepaper "Dynamic Root Disk: Quick Start & Best Practices" describes resizing both the non-boot file systems and the boot (/stand) file system on an inactive system image. • Multiple mounts and unmounts can be avoided by using "drd mount" to mount the inactive system image before the first runcmd operation and "drd umount" to unmount it after the last runcmd operation, as sketched below. • Supports root volume groups with any name (prior to version A.3.0, only vg00 was possible).
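
A minimal sketch of that mount-once pattern (the depot path and patch names are hypothetical):

# drd mount
# drd runcmd swlist
# drd runcmd swinstall -s /var/depots/patches PHCO_11111 PHCO_22222
# drd umount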

  9. HP-UX Dynamic Root Disk Commands • The basic DRD commands are: drd clone, drd runcmd, drd activate, drd deactivate, drd mount, drd umount, drd status, drd rehost, drd unrehost

  10. HP-UX Dynamic Root Disk Commands - continued • "drd runcmd" can run specific Software Distributor (SD) commands on the inactive system image only: swinstall, swremove, swlist, swmodify, swverify, swjob. • Three other commands can be executed by the drd runcmd command: view (used to view logs produced by commands that were executed by drd runcmd), kctune (used to modify kernel parameters), and update-ux (performs v3-to-v3 OE updates). A sketch of the non-SD commands follows.
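
For instance, a kernel tunable can be changed on the inactive image and the software agent log inspected, all without touching the running system (the tunable and value here are illustrative):

# drd runcmd kctune maxuprc=2048
# drd runcmd view /var/adm/sw/swagent.log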

  11. HP-UX Dynamic Root Disk Features – Dry Run • A simple mechanism for determining whether a chosen target disk is sufficiently large is to run a preview: # drd clone -p -v -t <blockDSF> where blockDSF is of the form: * HP-UX 11i v2: /dev/dsk/cXtXdX * HP-UX 11i v3: /dev/disk/diskX • The preview operation includes the disk space analysis needed to see if the target disk is sufficiently large.

  12. HP-UX Dynamic Root Disk versus Ignite-UX • DRD has several advantages over Ignite-UX net and tape images: * No tape drive is needed. * No impact on network performance. * No security issues from transferring data across the network. • MirrorDisk/UX keeps an "always up-to-date" image of the booted system, whereas DRD provides a "point-in-time" image. The booted system and the clone may then diverge due to changes to either one. Keeping the clone unchanged is the recovery scenario. • DRD is not available for HP-UX 11.11, which limits options on those systems.

  13. HP-UX Dynamic Root Disk Features - continued Dynamic Root Disk (DRD) provides the ability to clone an HP-UX system image to an inactive disk, and then: * Perform system maintenance on the clone while the HP-UX 11i system is online. * Reboot during off-hours, significantly reducing system downtime. * Utilize the clone for system recovery, if needed. * Rehost the clone on another system for testing or provisioning purposes (on VMs or blades utilizing Virtual Connect with HP-UX 11i v3 LVM only; on VMs with HP-UX 11i v2 LVM only). * Perform an OE Update on the clone from an older version of HP-UX 11i v3 to HP-UX 11i v3 update 4 or later.

  14. HP-UX – Dynamic Root Disk and /stand/bootconf • Errors in /stand/bootconf can cause the drd deactivate command to fail. (This is no longer true in the current release.) The /stand/bootconf file on the booted system should contain device files for just the booted disk and any of its mirrors, not the clone target. The /stand/bootconf file that is created on the clone target WILL contain the device file of the target itself (or, on an IPF system, the device file of the HP-UX partition of the target). A sketch of a typical file follows.
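
As an illustration (the disk names are hypothetical), /stand/bootconf on a system booted from an LVM-managed disk7 with a mirror on disk9 would contain only lines such as:

l /dev/disk/disk7
l /dev/disk/disk9

where the leading "l" marks an LVM-managed boot device.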

  15. HP-UX – Dynamic Root Disk – Rehosting • The initial implementation of drd rehost only supports rehosting of an LVM-managed root volume group on an Integrity virtual machine to another Integrity virtual machine, or an LVM-managed root volume group on a Blade with Virtual Connect I/O to another such Blade. • The rehost command does not enforce the restriction to blades and VMs, but other use of this command is not officially supported. • As of version A.3.3, rehosting support for HP-UX 11i v2 has been added.

  16. HP-UX – Dynamic Root Disk – Rehosting on HP-UX 11.31 • After the clone and system information file have been created, the "drd rehost" command can be used to check the syntax of the system information file and copy it to /EFI/HPUX/SYSINFO.TXT in preparation for processing by auto_parms(1M) during the boot of the image. The following example uses the /var/opt/drd/tmp/newhost.txt system information file:
SYSINFO_HOSTNAME=myhost
SYSINFO_MAC_ADDRESS[0]=0x0017A451E718
SYSINFO_DHCP_ENABLE[0]=0
SYSINFO_IP_ADDRESS[0]=192.2.3.4
SYSINFO_SUBNET_MASK[0]=255.255.255.0
SYSINFO_ROUTE_GATEWAY[0]=192.2.3.75
SYSINFO_ROUTE_DESTINATION[0]=default
SYSINFO_ROUTE_COUNT[0]=1

  17. HP-UX – Dynamic Root Disk – Rehosting on HP-UX 11.31 - continued • To check the syntax of the system information file without copying it to the /EFI/HPUX/SYSINFO.TXT file, use the preview option of the drd rehost command: # drd rehost -p -f /var/opt/drd/tmp/newhost.txt • To copy it to the /EFI/HPUX/SYSINFO.TXT file, use the following command: # drd rehost -f /var/opt/drd/tmp/newhost.txt

  18. HP-UX – Dynamic Root Disk Examples
# drd clone -t /dev/disk/disk8 -x overwrite=true
======= 07/02/08 13:09:41 EST BEGIN Clone System Image (user=root) (jobid=syd59)
* Reading Current System Information
* Selecting System Image To Clone
* Selecting Target Disk
* Selecting Volume Manager For New System Image
* Analyzing For System Image Cloning
* Creating New File Systems
* Copying File Systems To New System Image
* Making New System Image Bootable
* Unmounting New System Image Clone
======= 07/02/08 13:42:57 EST END Clone System Image succeeded. (user=root) (jobid=syd59)

  19. HP-UX – Dynamic Root Disk Examples - continued
# drd status
======= 07/02/08 13:45:42 EST BEGIN Displaying DRD Clone Image Information (user=root) (jobid=syd59)
* Clone Disk: /dev/disk/disk8
* Clone EFI Partition: Boot loader and AUTO file present
* Clone Creation Date: 07/02/08 13:09:46 EST
* Clone Mirror Disk: None
* Mirror EFI Partition: None
* Original Disk: /dev/disk/disk7
* Original EFI Partition: Boot loader and AUTO file present
* Booted Disk: Original Disk (/dev/disk/disk7)
* Activated Disk: Original Disk (/dev/disk/disk7)
======= 07/02/08 13:45:51 EST END Displaying DRD Clone Image Information succeeded. (user=root) (jobid=syd59)

  20. HP-UX – Dynamic Root Disk Examples - continued
# drd activate
======= 07/02/08 13:48:03 EST BEGIN Activate Inactive System Image (user=root) (jobid=syd59)
* Checking for Valid Inactive System Image
* Reading Current System Information
* Locating Inactive System Image
* Determining Bootpath Status
* Primary bootpath : 0/1/1/0.0x1.0x0 before activate.
* Primary bootpath : 0/1/1/1.0x2.0x0 after activate.
* Alternate bootpath : 0/1/1/1.0x2.0x0 before activate.
* Alternate bootpath : 0/1/1/1.0x2.0x0 after activate.
* HA Alternate bootpath : 0/1/1/0.0x1.0x0 before activate.
* HA Alternate bootpath : 0/1/1/0.0x1.0x0 after activate.
* Activating Inactive System Image
======= 07/02/08 13:48:15 EST END Activate Inactive System Image succeeded. (user=root) (jobid=syd59)

  21. HP-UX – Dynamic Root Disk Examples - continued
# drd_register_mirror /dev/dsk/c1t2d0
# drd_unregister_mirror /dev/dsk/c2t3d0
# drd runcmd view /var/adm/sw/swagent.log
# diff /var/spool/crontab/crontab.root /var/opt/drd/mnts/sysimage_001/var/spool/crontab/crontab.root

  22. HP-UX – Dynamic Root Disk Examples - continued
# /opt/drd/bin/drd mount
# /usr/bin/bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 1048576 320456 722432 31% /
/dev/vg00/lvol1 505392 43560 411288 10% /stand
/dev/vg00/lvol8 3395584 797064 2580088 24% /var
/dev/vg00/lvol7 4636672 1990752 2625264 43% /usr
/dev/vg00/lvol4 204800 8656 194680 4% /tmp
/dev/vg00/lvol6 3067904 1961048 1098264 64% /opt
/dev/vg00/lvol5 262144 9320 250912 4% /home
/dev/drd00/lvol3 1048576 320504 722392 31% /var/opt/drd/mnts/sysimage_001
/dev/drd00/lvol1 505392 43560 411288 10% /var/opt/drd/mnts/sysimage_001/stand
/dev/drd00/lvol4 204800 8592 194680 4% /var/opt/drd/mnts/sysimage_001/tmp
/dev/drd00/lvol5 262144 9320 250912 4% /var/opt/drd/mnts/sysimage_001/home
/dev/drd00/lvol6 3067904 1962912 1096416 64% /var/opt/drd/mnts/sysimage_001/opt
/dev/drd00/lvol7 4636672 1991336 2624680 43% /var/opt/drd/mnts/sysimage_001/usr
/dev/drd00/lvol8 3395584 788256 2586968 23% /var/opt/drd/mnts/sysimage_001/var

  23. HP-UX – Dynamic Root Disk – Serial Patch Installation Example
# swcopy -s /tmp/PHCO_38159.depot \* @ /var/opt/mx/depot11/PHCO_38159.dir
# drd runcmd swinstall -s /var/opt/mx/depot11/PHCO_38159.dir PHCO_38159

  24. HP-UX – Dynamic Root Disk update-ux Issue • When executing "drd runcmd update-ux" on the inactive DRD system image, the command fails with: ERROR: The expected depot does not exist at "<depot_name>" • In order to use a directory depot on the active system image, a loopback mount is needed to access the depot.

  25. HP-UX – Dynamic Root Disk update-ux Issue - continued Issue Resolution: The following steps update the clone from a directory depot that resides on the active system image. The steps must be executed as root, in this order: 1) Mount the clone using "drd mount". 2) Make the directory on the clone and loopback mount the depot. The directory on the clone and the source depot must have the same name, in this case "/var/depots/0909_DCOE" (the name itself can be whatever you choose):
# mkdir -p /var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE
# mount -F lofs /var/depots/0909_DCOE /var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE
# drd runcmd update-ux -s /var/depots/0909_DCOE

  26. HP-UX – Dynamic Root Disk update-ux Issue - continued 3) Once the update has completed, unmount the loopback mount and then unmount the clone:
# umount /var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE
# drd umount
Updates from multiple-DVD media: updates directly from media are not supported for DRD updates. In order to update from media, you must copy the contents to a directory depot either on a remote server (easiest method) or to a directory on the active system. If it must be on the active system image, you must first copy the media's contents to a directory depot and then create the clone. If you already have a clone, you can copy the depot and then loopback mount that depot to the clone (see instructions above).

  27. HP-UX – Dynamic Root Disk update-ux Issue - continued To copy the software from the DVDs, make a directory on a remote system or the active system image; mount the DVD media and swcopy its contents into the newly created directory. Unmount the first disk and insert the second DVD to copy its contents into the directory.
# mkdir -p /var/software_depot/DCOE-DVD
# mount /dev/disk/diskX /cdrom
# swcopy -s /cdrom -x enforce_dependencies=false \* @ /var/software_depot/DCOE-DVD
# umount /cdrom
# mount /dev/disk/diskX /cdrom    (this is DVD 2)
# swcopy -s /cdrom -x enforce_dependencies=false \* @ /var/software_depot/DCOE-DVD

  28. HP-UX – Dynamic Root Disk update-ux Issue - continued If the depot resides on a remote server (a system other than the one to be updated), proceed with the "drd runcmd update-ux" command and specify the location as the argument of the "-s" parameter:
# drd runcmd update-ux -s <server_name>:/var/software_depot/DCOE-DVD <OE>
If the depot resides in the root group of the system to be cloned, and the clone has not yet been created, create the clone and issue the "drd runcmd update-ux" command, specifying the location of the depot as it appears on the booted system:
# drd runcmd update-ux -s /var/software_depot/DCOE-DVD <OE>
If the depot resides on the system to be updated, in a location other than the root group, or if the clone has already been created, use the loopback mount instructions above.

  29. Solaris Live Upgrade Features • Live Upgrade is a feature of Solaris (since version 2.6) that allows the operating system to be cloned to an offline partition (or partitions), which can then be upgraded with new O/S patches, software, or even a new version of the operating system. The system administrator can then reboot the system on the newly upgraded partition. In case of problems, it is easy to revert to the original partition/version via a single Live Upgrade command followed by a reboot. • Live Upgrade is especially useful because Sun does not officially support installing O/S patches to active partitions; the supported methods are patching in single-user mode or patching a non-active Live Upgrade partition. A sketch of a full upgrade cycle follows.
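
A minimal end-to-end Live Upgrade cycle might look like the following sketch (the BE names, device, and image path are hypothetical):

# lucreate -c solenv1 -m /:/dev/dsk/c0t1d0s0:ufs -n solenv2
# luupgrade -u -n solenv2 -s /net/installserver/export/Solaris_10
# luactivate solenv2
# init 6

Note that after luactivate the reboot must be done with init 6 (or shutdown -i6), not with reboot or halt; otherwise the boot-environment switch does not complete properly.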

  30. Solaris Live Upgrade Features - continued • Live Upgrade requires multiple partitions on the boot drive (one set of partitions is "active" and the other is "inactive") or on separate drives. These sets of partitions are "boot environments" (BEs). • A slice where the root (/) file system is to be copied must be selected. Use the following guidelines when you select a slice for the root (/) file system. The slice must comply with the following: * Must be a slice from which the system can boot. * Must meet the recommended minimum size. * Cannot be a Veritas VxVM volume or a Solstice DiskSuite metadevice. * Can be on different physical disks or the same disk as the active root file system. * For sun4c and sun4m, the root file system must be less than 2 GB.

  31. Solaris Live Upgrade Features - continued • The swap slice cannot be in use by any boot environment except the current boot environment or, if the "-s" option is used, the source boot environment. The boot environment creation fails if the swap slice is being used by any other boot environment, whether the slice contains a swap, UFS, or any other file system. • Typically, each boot environment requires a minimum of 350 to 800 MB of disk space, depending on the system software configuration. • When viewing the character interface remotely, such as over a tip line, set the TERM environment variable to VT220. Also, when using the Common Desktop Environment, set the value of the TERM variable to dtterm, rather than xterm.

  32. Solaris Live Upgrade Features - continued • The lucreate command allows you to include or exclude specific files and directories when creating a new BE. • Include files and directories with: the -y include option; the -Y include_list_file option; items with a leading + in the file used with the -z filter_list option. • Exclude files and directories with: the -x exclude option; the -f exclude_list_file option; items with a leading - in the file used with the -z filter_list option. An example follows.
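
For example, a new BE could be created while excluding a large scratch directory (the BE name, device, and path are illustrative):

# lucreate -n solenv2 -m /:/dev/dsk/c0t1d0s0:ufs -x /export/scratch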

  33. Solaris Live Upgrade and Special Files • Files can change in the original boot environment (BE) after the new BE is created but NOT YET activated. • On the first boot of a BE, such data is copied from the source BE. • The list of files to copy is in /etc/lu/synclist. Example:
/etc/default/passwd OVERWRITE
/etc/dfs OVERWRITE
/var/log/syslog APPEND
/var/adm/messages APPEND

  34. Solaris Live Upgrade Examples • The upgrade of the new BE can be done in several ways (local, net, CD-ROM, flash). All four are invoked similarly; the main difference is the path to the image given with the -s flag (flash installs use -f, with the archive specified via -a). Examples:
Local file: # luupgrade -u -n solenv2 -s /Solaris_10/path/to/os_image
Net: # luupgrade -u -n solenv2 -s /net/Solaris_10/path/to/os_image
CD-ROM: # luupgrade -u -n solenv2 -s /cdrom/Solaris_10/path/to/os_image
Flash: # luupgrade -f -n solenv2 -s /Solaris_10/path/to/os_image -a /path/to/flash.flar

  35. Solaris Live Upgrade Examples
# lucompare BE2
Determining the configuration of BE2 ...
< BE1
> BE2
Processing Global Zone
Comparing / ...
Links differ
01 < /:root:root:33:16877:DIR:
02 > /:root:root:30:16877:DIR:
Sizes differ
01 < /platform/sun4u/boot_archive:root:root:1:33188:REGFIL:76550144:
02 > /platform/sun4u/boot_archive:root:root:1:33188:REGFIL:76922880:
...

  36. Solaris Live Upgrade Examples
# lucreate -c "solenv1" -m /:/dev/dsk/c0d0s3:ufs -n "solenv2"
# lucreate -m /:/dev/md/dsk/d20:ufs,mirror \
  -m /:/dev/dsk/c0t0d0s0:detach,attach,preserve \
  -n nextBE
# lucreate -m /:/dev/md/dsk/d10:ufs,mirror \
  -m /:/dev/dsk/c0t0d0s0,d1:attach \
  -m /:/dev/dsk/c0t1d0s0,d2:attach -n myserv2

  37. Solaris Live Upgrade Examples
# lucurr
BE1
# ludesc -n BE1 "Dusan BootEnvironment"
# ludesc -n BE1
Dusan BootEnvironment

  38. Solaris Live Upgrade Examples
# lufslist BE1
boot environment name: BE1
This boot environment is currently active.
This boot environment will be active on next system boot.
Filesystem fstype device size Mounted on Mount Options
/dev/zvol/dsk/rpool/swap swap 1073741824 - -
rpool/ROOT/s10s_u6wos_07b zfs 5119809024 / -
rpool/ROOT/s10s_u6wos_07b/var zfs 86450688 /var -
rpool zfs 7493079552 /rpool -
rpool/export zfs 95149568 /export -
hppool zfs ? /hppool -
rpool/export/home zfs 95129088 /export/home -

  39.-42. Clone Commands Compared [the comparison tables on these four slides were not captured in the transcript]

  43. AIX Alt_disk_install • The AIX alt_disk_install command allows a root sysadmin to create an alternate rootvg on another set of disk drives. The alternate rootvg can be configured by restoring a mksysb image to it while AIX continues to run from the primary rootvg, or the primary rootvg can be "cloned" to the alternate rootvg, and updates and fixes can then be installed on the alternate rootvg while AIX continues to run. • When the system admin is ready, AIX can be rebooted from the alternate rootvg disks. Changes can be backed out by rebooting AIX from the original primary rootvg. • In AIX 5.3, alt_disk_install has been replaced by alt_disk_copy, alt_disk_mksysb, and alt_rootvg_op. alt_disk_install will continue to ship as a wrapper to the new commands, but it will not support any new functions, flags, or features.

  44. AIX Alt_disk_install Examples • Copy the current rootvg to an alternate disk. The following example shows how to clone the rootvg to hdisk1: # alt_disk_copy -d hdisk1 • Copy rootvg (hdisk1) to hdisk0, and then apply the updates to hdisk0: # alt_disk_copy -d hdisk0 -b update_all -l

  45. AIX Alt_disk_install Examples • Copy the current rootvg to two alternate disks: # alt_disk_copy -d hdisk2 hdisk3 -O • ...assuming that hdisk2 and hdisk3 are the targets on which the copy should be placed. • Note that the -O flag is required when "cloning" (when planning to boot the rootvg copy on another LPAR or server), but can be detrimental when making a copy that will be booted on the same LPAR or server. • Before taking the target disks away from the existing AIX image, run the command: # alt_rootvg_op -X • If a rootvg copy has been made for use on the same LPAR/server as the original rootvg (without the -O flag on alt_disk_copy), System Management Services can be used to switch between the primary and backup AIX rootvgs by shutting AIX down, booting to SMS mode, and selecting the disks from which to boot. Alternatively, the boot list can be changed from the running system, as sketched below.
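
As an alternative to booting into SMS, the boot device can usually be switched from the running system with bootlist; a minimal sketch, assuming hdisk2 holds the alternate rootvg:

# bootlist -m normal hdisk2
# shutdown -Fr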

  46. AIX Multibos Features • The multibos command (AIX 5.3 ML3) provides dual AIX boot from the same rootvg. One can run production on one boot image while installing, customizing, or updating the other. • This is similar to AIX alt_disk_install, with one major difference: with alt_disk_install the boot images must reside on separate disks and separate rootvgs. The multibos capability allows both O/S images to reside on the same disk/rootvg.

  47. MultiBOS (rootvg) Reboot [diagram not captured in the transcript]

  48. AIX Multibos Features - continued • The multibos command allows the root level administrator to create multiple instances of AIX on the same rootvg. • The multibos setup operation creates a standby Base Operating System (BOS) that boots from a distinct boot logical volume (BLV). This creates two bootable sets of BOS on a given rootvg. The administrator can boot from either instance of BOS by specifying the respective BLV as an argument to the bootlist command or by using system firmware boot operations. • Two bootable instances of BOS can be simultaneously maintained. The instance of BOS associated with the booted BLV is referred to as the active BOS. The instance of BOS associated with the BLV that has not been booted is referred to as the standby BOS. Currently, only two instances of BOS are supported per rootvg.

  49. AIX Multibos Features - continued • The multibos command allows the administrator to access the standby BOS, install maintenance and technology levels on it, update it, and customize it, either during setup or in subsequent customization operations. • Installing maintenance and technology updates to the standby BOS does not change system files on the active BOS. This allows for concurrent update of the standby BOS while the active BOS remains in production.

  50. AIX Multibos Features - continued • The multibos command has the ability to copy or share logical volumes and file systems. By default, the BOS file systems (currently /, /usr, /var, and /opt) and the boot logical volume are copied. The administrator can make copies of additional BOS objects (using the -L flag). • All other file systems and logical volumes are shared between instances of BOS. Separate log device logical volumes (for example, those that are not contained within the file system) are not supported for copy and will be shared. • The current rootvg must have enough space for each BOS object copy. BOS object copies are placed on the same disk or disks as the original. A sketch of a typical multibos cycle follows.
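
Pulling the pieces together, a typical multibos cycle might look like the following sketch (the update directory is hypothetical; bos_hd5 is the BLV name multibos conventionally gives the standby instance):

# multibos -Xs                             (create the standby BOS, expanding rootvg if needed)
# multibos -Xac -l /updates/tl             (apply an update_all customization to the standby BOS)
# bootlist -m normal hdisk0 blv=bos_hd5    (point the normal boot list at the standby BLV)
# shutdown -Fr                             (reboot into the updated standby instance)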
