unixadmin.free.fr Handy Unix Plumbing Tips and Tricks

25 Aug 2015

Minimum NIM master levels for VIOS clients

The minimum NIM master level for VIOS is also, for me, a good reference point for mapping a VIOS ioslevel to its corresponding AIX level.

https://www-304.ibm.com/webapp/set2/sas/f/flrt/viostable.html


If using NIM to back up, install, or update a VIOS partition, the NIM master must be at a level greater than or equal to the levels shown below.

VIOS Release   VIOS Level       Minimum NIM master level
VIOS 2.2.6     VIOS 2.2.6.10    AIX 6100-09-10, 7100-05-01, 7200-02-01
               VIOS 2.2.6.0     AIX 6100-09-10, 7100-05-00, 7200-02-00
VIOS 2.2.5     VIOS 2.2.5.30    AIX 6100-09-10, 7100-05-01, 7200-02-01
               VIOS 2.2.5.20    AIX 6100-09-09, 7100-04-04, 7200-01-02
               VIOS 2.2.5.10    AIX 6100-09-08, 7100-04-03, 7200-01-01
               VIOS 2.2.5.0     AIX 6100-09-08, 7100-04-03
VIOS 2.2.4     VIOS 2.2.4.50    AIX 6100-09-10, 7100-05-01, 7200-02-01
               VIOS 2.2.4.40    AIX 6100-09-09, 7100-04-04, 7200-01-02
               VIOS 2.2.4.30    AIX 6100-09-08, 7100-04-03, 7200-01-01
               VIOS 2.2.4.23    AIX 6100-09-07, 7100-04-02, 7200-00-02
               VIOS 2.2.4.22    AIX 6100-09-07, 7100-04-02, 7200-00-02
               VIOS 2.2.4.21    AIX 6100-09-07, 7100-04-02, 7200-00-02
               VIOS 2.2.4.20    AIX 6100-09-07, 7100-04-02, 7200-00-02
               VIOS 2.2.4.10    AIX 6100-09-06, 7100-04-01, 7200-00-01
               VIOS 2.2.4.0     AIX 6100-09-06, 7100-04-01, 7200-00-01
VIOS 2.2.3     VIOS 2.2.3.90    AIX 6100-09-09, 7100-04-04, 7200-01-02
               VIOS 2.2.3.80    AIX 6100-09-08, 7100-04-03, 7200-01-01
               VIOS 2.2.3.70    AIX 6100-09-07, 7100-04-02, 7200-00-02
               VIOS 2.2.3.60    AIX 6100-09-06, 7100-03-05
               VIOS 2.2.3.50    AIX 6100-09-05, 7100-03-05
               VIOS 2.2.3.4     AIX 6100-09-04, 7100-03-04
               VIOS 2.2.3.3     AIX 6100-09-03, 7100-03-03
               VIOS 2.2.3.2     AIX 6100-09-02, 7100-03-02
               VIOS 2.2.3.1     AIX 6100-09-01, 7100-03-01
               VIOS 2.2.3.0     AIX 6100-09, 7100-03
VIOS 2.2.2     VIOS 2.2.2.70    AIX 6100-08-07, 7100-02-07
               VIOS 2.2.2.6     AIX 6100-08-06, 7100-02-06
               VIOS 2.2.2.5     AIX 6100-08-05, 7100-02-05
               VIOS 2.2.2.4     AIX 6100-08-04, 7100-02-04
               VIOS 2.2.2.3     AIX 6100-08-03, 7100-02-03
               VIOS 2.2.2.2     AIX 6100-08-02, 7100-02-02
               VIOS 2.2.2.1     AIX 6100-08-01, 7100-02-01
               VIOS 2.2.2.0     AIX 6100-08, 7100-02
VIOS 2.2.1     VIOS 2.2.1.9     AIX 6100-07-10, 7100-01-10
               VIOS 2.2.1.8     AIX 6100-07-09, 7100-01-09
               VIOS 2.2.1.7     AIX 6100-07-08, 7100-01-07
               VIOS 2.2.1.5     AIX 6100-07-05, 7100-01-05
               VIOS 2.2.1.4     AIX 6100-07-04, 7100-01-04
               VIOS 2.2.1.3     AIX 6100-07-02, 7100-01-02
               VIOS 2.2.1.1     AIX 6100-07-01, 7100-01-01
               VIOS 2.2.1.0     AIX 6100-07, 7100-01
VIOS 2.2.0     VIOS 2.2.0.13    AIX 6100-06-05, 7100-00-03
               VIOS 2.2.0.12    AIX 6100-06-05, 7100-00-03
               VIOS 2.2.0.11    AIX 6100-06-03, 7100-00-02
               VIOS 2.2.0.10    AIX 6100-06-01, 7100-00-01
               VIOS 2.2.0.0     AIX 6100-06, 7100-00
VIOS 2.1.3     VIOS 2.1.3.10    AIX 6100-05-02
               VIOS 2.1.3.0     AIX 6100-05
VIOS 2.1.2     VIOS 2.1.2.13    AIX 6100-04-03
               VIOS 2.1.2.12    AIX 6100-04-02
               VIOS 2.1.2.11    AIX 6100-04-02
               VIOS 2.1.2.10    AIX 6100-04-01
               VIOS 2.1.2.0     AIX 6100-04
VIOS 2.1.1     VIOS 2.1.1.10    AIX 6100-03-01
               VIOS 2.1.1.0     AIX 6100-03
VIOS 2.1.0     VIOS 2.1.0.10    AIX 6100-02-02
               VIOS 2.1.0.1     AIX 6100-02-01
               VIOS 2.1.0.0     AIX 6100-02
VIOS 1.5.2     VIOS 1.5.2.6     AIX 5300-08-08
               VIOS 1.5.2.5     AIX 5300-08-05
               VIOS 1.5.2.1     AIX 5300-08-01
               VIOS 1.5.2.0     AIX 5300-08
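
To compare a NIM master against the table above, the oslevel command reports the master's level in the same format (the output shown here is only an example):

    # oslevel -s
    7100-05-01-1731
    # oslevel -r
    7100-05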
30 Oct 2014

Creating NIM resources on an NFS shared NAS device

You can use a network-attached storage (NAS) device to store your Network Installation Management (NIM) resources by using the nas_filer resource server.

NIM support allows the hosting of file-type resources (such as mksysb, savevg, resolv_conf, bosinst_data, and script) on a NAS device. The resources can be defined in the NIM server database and used for installation without changing any network information or configuration definitions on the Shared Product Object Tree (SPOT) server.

The nas_filer resource server is available in the NIM environment, and requires an interface attribute and a password file. You must manually define export rules and perform storage and disk management before you use any NIM operations.

To create resources on a NAS device by using the nas_filer resource server, complete the following steps:

1. Define the nas_filer object. You can enter a command similar to the following example:

    # nim -o define -t nas_filer -a if1="find_net als046245.server.com 0" -a passwd_file=/export/nim/pswfile netapp1

2. Define a mksysb file that exists on the NAS device as a NIM resource. You can enter a command similar to the following example:

    # nim -o define -t mksysb -a server=netapp1 -a location=/vol/vol0/nim_lun1/client1.nas_filer NetApp_bkup1
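
To verify that the new resource is defined correctly, you can query the NIM database; a quick check (the attribute output will vary with your environment):

    # lsnim -l NetApp_bkup1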

3. Optional: If necessary, create a new resource (a client backup) on the NAS device. You can use the following command to create a mksysb resource:

    # nim -o define -t mksysb -a server=netapp1 -a location=/vol/vol10/nim_lun1/mordor05_bkup -a source=mordor05 -a mk_image=yes NetApp_mordor05

4. Optional: If necessary, copy an existing NIM resource to the nas_filer object. You can use the following command to copy a mksysb resource:

    # nim -o define -t mksysb -a server=netapp1 -a location=/vol/vol10/nim_lun1/replicate_bkup -a source=master_backup NetApp_master_backup

SOURCE: IBM Knowledge Center

29 Oct 2014

Adding a nas_filer management object to the NIM environment

Follow the instructions to add a nas_filer management object.

If you define resources on a network-attached storage (NAS) device by using the nas_filer management object, you can use those resources without changing the network information or configuration definitions on the Shared Product Object Tree (SPOT) server. To add a nas_filer object, the dsm.core fileset must be installed on the NIM master.

To add a nas_filer object from the command line, complete the following steps:

1. Create an encrypted password file that contains the login ID and related password on the NIM master to access the nas_filer object. The encrypted password file must be created by using the dpasswd command from the dsm.core fileset. If you do not want the password to be displayed in clear text, omit the -P parameter; the dpasswd command will then prompt for the password. Use the following command as an example:

    # dpasswd -f EncryptedPasswordFilePath -U nas_filerLogin -P nas_filerPassword

2. Pass the encrypted password file in the passwd_file attribute by using the define command of the nas_filer object. Use the following command as an example:

    # nim -o define -t nas_filer -a passwd_file=EncryptedPasswordFilePath \
    -a if1=InterfaceDescription \
    -a net_definition=DefinitionName \
    nas_filerName

If the network object that describes the network mask and the gateway that is used by the nas_filer object does not exist, use the net_definition attribute. After you remove the nas_filer objects, the file that is specified by the passwd_file attribute must be removed manually.

Example
To add a nas_filer object that has the host name nf1 and the following configuration:

host name=nf1
password file path=/etc/ibm/sysmgt/dsm/config/nf1
network type=ethernet
subnet mask=255.255.240.0
default gateway=gw1
default gateway used by NIM master=gw_master

Enter the following command:

# nim -o define -t nas_filer -a passwd_file=/etc/ibm/sysmgt/dsm/config/nf1 \
-a if1="find_net nf1 0" \
-a net_definition="ent 255.255.240.0 gw1 gw_master" nf1

For more information about adding a nas_filer object, see the technical note that is included in the dsm.core fileset (/opt/ibm/sysmgt/dsm/doc/dsm_tech_note.pdf).

27 Jul 2012

Virtual I/O Server migration with NIM

Download the VIOS 2.1.3.10 migration DVD via IBM FIX CENTRAL, or download the ISO image.

NIM SERVER: AIX 7100-01-04-1216

Mount the ISO image of the VIOS 2.1.3.10 migration DVD:

# loopmount -i /export/images/VIOS_2.1.3.10.iso -o "-V cdrfs -o ro" -m /mnt

Copy the contents of the installp directory from the migration DVD into the lpp_source filesystem:

# cp -pr /mnt/installp /export/lpp_source/lpp_src_vios_21310

Define the lpp_source lpp_src_vios_21310:

nim -o define -t lpp_source -a server=master -a location=/export/lpp_source/lpp_src_vios_21310 lpp_src_vios_21310

Define the SPOT spot_vios_21310 from the lpp_source lpp_src_vios_21310:

nim -o define -t spot -a server=master -a location=/export/spot -a source=lpp_src_vios_21310 -a installp_flags=-aQg spot_vios_21310
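
With the lpp_source and SPOT defined, the migration install can be initiated from the NIM master. The following is only a sketch, assuming the VIOS partition is already registered as a standalone NIM client; the client name vios1 is illustrative:

# nim -o bos_inst -a source=rte -a lpp_source=lpp_src_vios_21310 \
  -a spot=spot_vios_21310 -a accept_licenses=yes -a boot_client=no vios1

With boot_client=no, the client is then booted manually from SMS, as shown below.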

VIRTUAL I/O SERVER: IOSLEVEL 1.5.2.6-FP-11.1 SP-02

1. Configure the Ethernet interface for the NIM installation via the SMS menu.

***********************************************************************************

          Welcome to Base Operating System
                      Installation and Maintenance

Type the number of your choice and press Enter. Choice is indicated by >>>.

>>> 1 Start Install Now with Default Settings

    2 Change/Show Installation Settings and Install

    3 Start Maintenance Mode for System Recovery

    4 Configure Network Disks (iSCSI)

    5 Select Storage Adapters


    88  Help ?
    99  Previous Menu

>>> Choice [1]:1
***********************************************************************************
                          VIOS Migration Installation Summary

Disks:  hdisk1...

>>> 1   Continue with Install
                       +-----------------------------------------------------
    88  Help ?         |  WARNING: Base Operating System Installation will
    99  Previous Menu  |destroy or impair recovery of SOME data on the
                       |destination disk hdisk1.
>>> Choice [1]:1
***********************************************************************************
Migration menu preparation in progress.

        Please wait...


        Approximate     Elapsed time
     % tasks complete   (in minutes)


          0               0
***********************************************************************************
       Migration Confirmation

  Either type 0 and press Enter to continue the installation, or type the
  number of your choice and press Enter.

    1  List the saved Base System configuration files which will not be
       merged into the system.  These files are saved in /tmp/bos.
    2  List the filesets which will be removed and not replaced.
    3  List directories which will have all current contents removed.
    4  Reboot without migrating.

    Acceptance of license agreements is required before using system.
    You will be prompted to accept after the system reboots.

>>> 0  Continue with the migration.
   88  Help ?

+---------------------------------------------------------------------------
  WARNING: Selected files, directories, and filesets (installable options)
    from the Base System will be removed.  Choose 2 or 3 for more information.


>>> Choice[0]:0
***********************************************************************************

MIGRATION in progress .......

$ ioslevel
2.1.3.10-FP23
11 Apr 2012

Extract mksysb from tape

Question
How to extract a mksysb from tape to a file for NIM usage

Cause
There may be occasions where a machine's backup resides on tape but the mksysb needs to be transferred to a NIM server for remote installations. This document describes how to properly extract a mksysb image backed up onto tape and restore it to a file.

Answer
Extract mksysb from tape to NIM resource

Introduction
Extract the /tapeblksz file from the tape
Using the 'lsmksysb' command to verify mksysb tape readability
Extract the mksysb image from the tape
Verify that the system recognizes the extracted file as a mksysb image
Using the new mksysb file as a NIM resource

*For future reference, all further mentions of the word “media” will refer to Base AIX Installation DVDs, unless otherwise specified.
Furthermore, all references to any device (CD-ROM, Ethernet, tape, and so on) will always be cd0, ent0, rmt0, hdisk0, etc., unless otherwise noted. You may, depending on your environment, need to use other devices. Substitute as needed.

This document will describe how to extract a mksysb image that currently exists on tape and store it to disk. Additionally, this document provides an application for the extracted mksysb used for a NIM environment.

This resource was written with the presumption that a mksysb has already been written to tape.

(!) NOTE: Before beginning this procedure, please read through the following reference for a better understanding of the mksysb structure and layout on tape:

Reference 1: Creating a mksysb backup to tape

http://www.ibm.com/support/docview.wss?uid=isg3T1010809

This is an important step because different tape devices can be set to write the media at different block sizes. If the tape device is set to a different block size than the tape, it may not read the tape properly, or at all.

The tape device block size first has to be configured to read 512 byte blocks to find out what information the ./tapeblksz file has. This file holds the information concerning the block size setting when the tape was written. The block size had to be changed to 512 because the ./tapeblksz file is written to the second image of the mksysb in 512 byte blocks.
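
Before changing anything, you can display the tape drive's current block size setting (rmt0 is the example device; the value shown is illustrative):

# lsattr -El rmt0 -a block_size
block_size 512 BLOCK size (0=variable length) True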

Change the block size of the tape to 512 byte blocks

# chdev -l rmt0 -a block_size=512
rmt0 changed

Rewind the tape after the block size change.

# tctl -f /dev/rmt0 rewind

Run the ‘restore’ command, which points to the second image of the mksysb backup, and extract the tapeblksz file to your current working directory:

# restore -s2 -xqvf /dev/rmt0.1 ./tapeblksz
New volume on /dev/rmt0.1:
Cluster 51200 bytes (100 blocks).
Volume number 1
Date of backup: Thu May 7 15:44:07 2009
Files backed up by name
User root
x 10 ./tapeblksz
total size: 10
files restored: 1

Running a ‘cat’ against ./tapeblksz provides the block size at which the tape was created.

# cat tapeblksz
1024 NONE

Once the block size is obtained, change the tape block size to the size specified by the ./tapeblksz file. In this case it will need to be changed to 1024.

# chdev -l rmt0 -a block_size=1024
rmt0 changed

Be sure to rewind the tape after changing the block size.

# tctl -f /dev/rmt0 rewind

On the target system where the mksysb file will be extracted, be sure to find a location that has plenty of space to hold the mksysb file.

Mksysb files can be fairly large, so when moving them from one medium to a filesystem it is important to consider a few things.

First, check the ulimit for root to make sure fsize is greater than 2 GB (or unlimited):
# ulimit -a
-or-
# vi /etc/security/limits
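
For reference, a root stanza in /etc/security/limits similar to the following allows files larger than 2 GB (a value of -1 means unlimited; this stanza is an example, not necessarily your current setting):

root:
        fsize = -1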

Secondly, confirm that the filesystem being written to is a large-file-enabled JFS or a JFS2 filesystem. The following output includes the ‘lsfs’ and ‘df’ commands to verify the filesystem format and space information.

# lsfs
Name          Nodename  Mount Pt    VFS     Size       Options  Auto  Accounting
/dev/hd4      --        /           jfs2    786432     --       yes   no
/dev/hd1      --        /home       jfs2    4456448    --       yes   no
/dev/hd2      --        /usr        jfs2    2883584    --       yes   no
/dev/hd9var   --        /var        jfs2    262144     --       yes   no
/dev/hd3      --        /tmp        jfs2    262144     --       yes   no
/proc         --        /proc       procfs  --         --       yes   no
/dev/hd10opt  --        /opt        jfs2    262144     --       yes   no
/dev/fslv00   --        /lppbackup  jfs2    100663296  rw       yes   no
/dev/fslv02   --        /mksysb     jfs2    20971520   rw       yes   no

# df -g
Filesystem    GB blocks  Free   %Used  Iused  %Iused  Mounted on
/dev/hd4      0.38       0.35   7%     2189   3%      /
/dev/hd2      1.38       0.11   92%    34296  54%     /usr
/dev/hd9var   0.12       0.11   11%    443    2%      /var
/dev/hd3      0.12       0.12   2%     41     1%      /tmp
/dev/hd1      2.12       2.06   4%     38     1%      /home
/proc         -          -      -      -      -       /proc
/dev/hd10opt  0.12       0.05   63%    1538   13%     /opt
/dev/fslv00   48.00      19.17  61%    511    1%      /lppbackup
/dev/fslv02   10.00      8.20   18%    4      1%      /mksysb

The 'lsmksysb' command is useful for obtaining information about the mksysb, and it verifies that the system acknowledges the mksysb image on the tape.

(!) NOTE: 'lsmksysb' is not a command that verifies whether a tape will be bootable or restore without issues. Reference the 'lsmksysb' man page or infocenter for more information on 'lsmksysb':

Reference 2: InfoCenter: lsmksysb command

http://pic.dhe.ibm.com/infocenter/aix/v6r1/index.jsp?topic=%2Fcom.ibm.aix.cmds%2Fdoc%2Faixcmds3%2Flsmksysb.htm

Running the command '# lsmksysb -lf /dev/rmt0' will list information about the mksysb, including the date the mksysb was taken, the oslevel, the size of the mksysb, the LV structure, and so on.

# lsmksysb -lf /dev/rmt0

VOLUME GROUP: rootvg
BACKUP DATE/TIME: Thu May 7 15:42:48 CDT 2009
UNAME INFO: AIX shaevelbso 3 5 00059D5C4C00
BACKUP OSLEVEL: 5.3.7.0
MAINTENANCE LEVEL: 5300-07
BACKUP SIZE (MB): 7168
SHRINK SIZE (MB): 4358
VG DATA ONLY: no

rootvg:
LV NAME    TYPE     LPs  PPs  PVs  LV STATE      MOUNT POINT
hd5        boot     1    1    1    closed/syncd  N/A
hd6        paging   4    4    1    open/syncd    N/A
hd8        jfs2log  1    1    1    open/syncd    N/A
hd4        jfs2     3    3    1    open/syncd    /
hd2        jfs2     11   11   1    open/syncd    /usr
hd9var     jfs2     1    1    1    open/syncd    /var
hd3        jfs2     1    1    1    open/syncd    /tmp
hd1        jfs2     17   17   1    open/syncd    /home
hd10opt    jfs2     1    1    1    open/syncd    /opt
lg_dumplv  sysdump  16   16   1    open/syncd    N/A

As explained in Reference 1, a mksysb tape contains four images (the BOS boot image, the mkinsttape image, a dummy table of contents, and the rootvg data), so the mksysb backup itself resides as the fourth image on the tape. Therefore, a command is needed to extract the fourth image from the tape and store it to a file on the system. The 'dd' command can be used to perform this operation.

To extract the mksysb from the tape, run the following, using the block size obtained from the ./tapeblksz file for the bs= value; fskip=3 skips the first three images on the tape:

# dd if=/dev/rmt0.1 of=/mksysb/test.shaevel bs=1024 fskip=3
1462150+0 records in
1462150+0 records out

After the mksysb file has been extracted, ensure that the system still acknowledges the file as a mksysb:

If not in the directory already, make sure to change directory to where the mksysb file resides:

# cd /mksysb

Run the 'lsmksysb' command to list out the information about the backup.

# lsmksysb -lf test.shaevel

VOLUME GROUP: rootvg
BACKUP DATE/TIME: Thu May 7 15:42:48 CDT 2009
UNAME INFO: AIX shaevelbso 3 5 00059D5C4C00
BACKUP OSLEVEL: 5.3.7.0
MAINTENANCE LEVEL: 5300-07
BACKUP SIZE (MB): 7168
SHRINK SIZE (MB): 4358
VG DATA ONLY: no

rootvg:
LV NAME    TYPE     LPs  PPs  PVs  LV STATE      MOUNT POINT
hd5        boot     1    1    1    closed/syncd  N/A
hd6        paging   4    4    1    open/syncd    N/A
hd8        jfs2log  1    1    1    open/syncd    N/A
hd4        jfs2     3    3    1    open/syncd    /
hd2        jfs2     11   11   1    open/syncd    /usr
hd9var     jfs2     1    1    1    open/syncd    /var
hd3        jfs2     1    1    1    open/syncd    /tmp
hd1        jfs2     17   17   1    open/syncd    /home
hd10opt    jfs2     1    1    1    open/syncd    /opt
lg_dumplv  sysdump  16   16   1    open/syncd    N/A

The 'lsmksysb' output confirms that this file is a mksysb.
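
With the image extracted and verified, the file can now be defined as a NIM resource. A minimal sketch, assuming the file resides on the NIM master itself; the resource name mksysb_shaevel is illustrative:

# nim -o define -t mksysb -a server=master -a location=/mksysb/test.shaevel mksysb_shaevel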

20 Jul 2011

AIX NIM Master Tuning

Abstract: tunables that are occasionally required on the AIX NIM master

1) To support a high number of simultaneous installs (16 or more), consider increasing max_nimesis_threads:

nim -o change -a max_nimesis_threads=60 master

2) The no options tcp_sendspace, tcp_recvspace, and rfc1323 should already be set in a default AIX install. Look for them in the output of ifconfig -a, and verify that use_isno is on.

# ifconfig en0
en0: flags=1e080863,4c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),LARGESEND,CHAIN>
inet 9.19.51.115 netmask 0xffffff00 broadcast 9.19.51.255
tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

# no -a | grep isno
use_isno = 1

# no -F -a | grep isno    (use_isno is a restricted tunable in AIX 6.1; use -F to display it)
use_isno = 1
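
If an interface is missing these values, they can be set as interface-specific network options (ISNO) attributes on the device; a sketch for en0, using the values shown above:

# chdev -l en0 -a tcp_sendspace=262144 -a tcp_recvspace=262144 -a rfc1323=1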

3) Consider setting global_export=yes. If you perform frequent simultaneous installs: when one install completes, the default behavior of the master is to unexport the NFS exports, remove the completed client from the export lists, and re-export the filesystems. During this interval, other "in-flight" client installs may see the message "NFS server not responding, still trying" on the client console.

As an alternative, set global_export. With no clients enabled for install:

# nim -o change -a global_export=yes master

In this configuration, resources are exported read-only for every enabled client, and held exported until the last client completes.

Before: the exports list each specific client allowed to mount:

# showmount -e
export list for bmark29:
/export/mksysb/image_53ML3 sq07.dfw.ibm.com,sq08.dfw.ibm.com
/export/53/lppsource_53ML3 sq07.dfw.ibm.com,sq08.dfw.ibm.com
/export/53/spot_53ML2/usr sq07.dfw.ibm.com,sq08.dfw.ibm.com

With global_export, exports are read-only for everyone:

# exportfs
/export/mksysb/image_53ML3 -ro,anon=0
/export/53/lppsource_53ML3 -ro,anon=0
/export/53/spot_53ML3/usr -ro,anon=0

Realize, of course, that anyone can mount these, even hosts that are not NIM clients (read-only AIX install content; a security issue? Probably not in most cases).

SOURCE: TD105569

16 Mar 2011

Producing debug output from a network boot image

Use these commands to create debug versions of the network boot images.

1. Use the Web-based System Manager or SMIT interfaces, or run the following command:

      nim -Fo check -a debug=yes SPOTName

where SPOTName is the name of your SPOT.

2. Obtain the address for entering the debugger by doing the following:

      lsnim -a enter_dbg SPOTName

where SPOTName is the name of your SPOT. The displayed output will be similar to the following:

      spot1:
         enter_dbg = "chrp.mp 0x001840d4"
         enter_dbg = "chrp.up 0x00160b7c"
         enter_dbg = "rs6k.mp 0x001840d4"
         enter_dbg = "rs6k.up 0x00160b7c"
         enter_dbg = "rspc.mp 0x001840d4"
         enter_dbg = "rspc.up 0x00160b7c"

Write down the enter_dbg address for the client you are going to boot. For example, if your client is a chrp uniprocessor machine, you would write down the address 160b7c.

3. Attach a tty device to your client system (port 1).

4. Set up and perform the NIM operation that will require the client to boot over the network. Boot the client over the network.

5. After the client gets the boot image from the SPOT server, the debug screen will appear on the tty. At the > prompt, enter:

      st Enter_dbg_Value 2

where Enter_dbg_Value is the number you wrote down in step 2 as your machine type's enter_dbg value. Specifying a 2 at the address of the enter_dbg value prints the output to your tty.

6. Type g (for go) and press Enter to start the boot process.

7. Use Ctrl-s to temporarily stop the process as you watch the output on the tty. Use Ctrl-q to resume the process.

8. To rebuild your boot images in non-debug mode, use the following command:

      nim -Fo check SPOTName

where SPOTName is the name of your SPOT.

If the boot image is left in debug mode, every time a client is booted from these boot images, the machine will stop and wait for a command at the debugger ">" prompt. If you attempt to use these debug-enabled boot images and there is not a tty attached to the client, the machine will appear to be hanging for no reason.

10 Feb 2010

Migrating to AIX 6.1 with nimadm

Minimize AIX migration downtime with NIM Alternate Disk Migration

By Chris Gibson

Introduction

Recently, I've been busy upgrading my entire AIX landscape from AIX 5.3 to AIX 6.1. My environment consists of close to 100 AIX LPARs. When tasked with such a challenge I always consider how I can best achieve this goal quickly, efficiently, and with minimal disruption to my customers.

The AIX OS provides the Network Installation Manager (NIM) to assist in administering and updating large numbers of AIX systems. A nice feature of this tool is the NIM Alternate Disk Migration (nimadm) facility. Using this tool, as you will soon see, allows you to perform your AIX migrations without the need for lengthy outages.

In this article I'll demonstrate the nimadm procedures we used to migrate our AIX systems. I'm going to assume that you are already very familiar with AIX and NIM. I'm also going to assume you already have a NIM master in your environment. If not, I recommend you review the documentation in the Resources section first.

Overview

Over the years, I've migrated to several new releases of the AIX OS. To do this, I would typically have used one of the conventional methods: either A) migration using the AIX installation DVD, or B) migration using NIM. Method A is still possible, even in virtualized environments, via the use of file-backed devices. And method B is also perfectly viable, by network booting the client LPAR and performing the migration from a NIM master.

The downside with both of these methods is that they both require significant downtime on the LPAR while the migration takes place. This downtime could be anywhere from 30-45 minutes to several hours, depending on the system. This can be a concern in environments with tight outage windows.

The nimadm utility offers several advantages over a conventional migration. For example, a system administrator can use nimadm to create a copy of a NIM client's rootvg (on a spare disk on the client, similar to a standard alternate disk install with alt_disk_install) and migrate the disk to a newer version or release of AIX. All of this can be done without disruption to the client (no outage is required to perform the migration). After the migration is finished, the only downtime required is a scheduled reboot of the system.

Another advantage is that the actual migration process occurs on the NIM master, taking the load off the client LPAR. This reduces the processing overhead on the LPAR and minimizes the performance impact to the running applications.

For customers with a large number of AIX systems, it is also important to note that the nimadm tool supports migrating several clients at once.

To summarize, these are the benefits of nimadm over other migration methods:

* Reduced downtime for the client. The migration is executed while the system is up and running as normal. There is no disruption to any of the applications or services running on the client; therefore, the upgrade can be done at a time convenient to the administrator. At a later stage, a reboot can be scheduled to restart the system at the new level of AIX.
* The nimadm process is very flexible and it can be customized using some of the optional NIM customization resources, such as image_data, bosinst_data, pre/post_migration scripts, exclude_files, and so on.
* Quick recovery from migration failures. All changes are performed on the rootvg copy (altinst_rootvg). If there are any serious problems with the migration, the original rootvg is still available and the system has not been impacted. If a migration fails or terminates at any stage, nimadm is able to quickly recover from the event and clean up afterwards. There is little for the administrator to do except determine why the migration failed, rectify the situation, and attempt the nimadm process again. If the migration completed but issues are discovered after the reboot, then the administrator can back out easily by booting from the original rootvg disk.

Preparation

There are a few requirements that must be met before attempting to use nimadm to migrate to AIX 6.1. I'll mention just some of these here. I recommend that you review the online documentation for nimadm or the IBM NIM Redbook for more information (see the Resources section at the end of this article).

* You must have a NIM master running AIX 6.1 or higher, with the latest Technology Level.
* The NIM master must have the bos.alt_disk_install.rte fileset installed in its own rootvg and in the SPOT that will be used for the migration. Both need to be at the same level. It is not necessary to install the alternate disk utilities on the client.
* The lpp_source and SPOT NIM resources that have been selected for the migration MUST match the AIX level to which you are migrating.
* The NIM master (as always) should be at the same or higher AIX level than the level you are migrating to on the client.
* The target client must be registered with the NIM master as a standalone NIM client.
* The NIM master must be able to execute remote commands on the client using rsh.
* Ensure the NIM client has a spare disk (not allocated to a volume group) large enough to contain a complete copy of its rootvg. If rootvg is mirrored, break the mirror and use one of the disks for the migration.
* Ensure the NIM master has a volume group (for example, nimadmvg) with enough free space to hold a complete copy of the client's rootvg. If more than one AIX migration is occurring for multiple NIM clients, make sure there is capacity for a copy of each client's rootvg.

Local disk caching versus NFS

By default, the nimadm tool utilizes NFS for many of the tasks during the migration. This can be a problem on slower networks because NFS writes can be very expensive. To avoid using NFS, a Local Disk Caching option exists that can provide some performance advantages.

Local disk caching allows the NIM master to avoid having to use NFS to write to the client. This can be useful if the nimadm operation is not performing well due to an NFS write bottleneck.

If the Local Disk Caching function is invoked, then nimadm will create the client file systems in a volume group on the NIM master. It will then use streams (via rshd) to cache all of the data from the client to the file systems on the NIM master.

The advantages of local disk caching over NFS could be summarized as:

* Improved performance for nimadm operations on relatively slow networks.
* Improved performance for nimadm operations that are bottlenecked in NFS writes.
* Decreased CPU usage on the client.
* Client file systems not exported.
* Allows TCB enabled systems to be migrated with nimadm.

Some potential disadvantages of local disk caching are:

* Cache file systems take up space on the NIM master. You must have enough disk space in a volume group on the NIM master to host the client's rootvg file systems, plus some space for the migration of each client.
* Increased CPU usage on the NIM master.
* Increased I/O on the master. For best performance, use a volume group on the NIM master that does not contain the NIM resources being used for the AIX migration.

For performance reasons, we deploy Local Disk Caching with nimadm in our environment.

The nimadm command performs a migration in 12 phases. It is useful to have some knowledge of each phase before performing a migration.

1. The master issues the alt_disk_install command to the client, which makes a copy of the client's rootvg to the target disks. In this phase, the alternate root volume group (altinst_rootvg) is created.
2. The NIM master creates the cache file systems in the nimadmvg volume group. Some initial checks for the required migration disk space are performed.
3. The NIM master copies the NIM client's data to the cache file systems in nimadmvg. This data copy is done via rsh.
4. If a pre-migration script resource has been specified, it is executed at this time.
5. System configuration files are saved. Initial migration space is calculated and appropriate file system expansions are made. The bos image is restored and the device database is merged (similar to a conventional migration). All of the migration merge methods are executed, and some miscellaneous processing takes place.
6. All system filesets are migrated using installp. Any required RPM images are also installed during this phase.
7. If a post-migration script resource has been specified, it is executed at this time.
8. The bosboot command is run to create a client boot image, which is written to the client's alternate boot logical volume (alt_hd5).
9. All the migrated data is now copied from the NIM master's local cache file systems and synced to the client's alternate rootvg via rsh.
10. The NIM master cleans up and removes the local cache file systems.
11. The alt_disk_install command is called again to make the final adjustments and put altinst_rootvg to sleep. The bootlist is set to the target disk.
12. Cleanup is executed to end the migration.

If you are unable to meet the requirements for phases 1 to 10, then you should consider performing a conventional migration.

Before we move on to a nimadm example, I just want to add a few points for you to consider first.

* I recommend that you do not make any changes to your system once the migration is underway, such as adding users, changing passwords, adding print queues, and the like. If possible, wait until the migration has finished and the system has been rebooted on the new version of AIX. If you must perform administration tasks prior to the reboot, take note of the changes and re-apply them to the system after it has been rebooted into AIX 6.1.
* We developed, tested, and verified our migration procedures several times before implementing them on our production systems. This allowed us time to verify that the steps were correct and that the AIX migrations would complete as expected. I recommend you do the same.
* If you have a multibos image in rootvg, remove it. AIX migrations are not supported on multibos-enabled systems. Ensure all rootvg LVs are renamed to their legacy names. If necessary, create a new instance of rootvg and reboot the LPAR. For example:

    # multibos -sXp
    # multibos -sX
    # shutdown -Fr

Confirm that the legacy LV names are now in use (that is, no LVs prefixed with bos_):

    # lsvg -l rootvg | grep hd | grep open
    hd6           paging     80      160     2    open/syncd    N/A
    hd8           jfs2log    1       2       2    open/syncd    N/A
    hd4           jfs2       1       2       2    open/syncd    /
    hd2           jfs2       7       14      2    open/syncd    /usr
    hd3           jfs2       16      32      2    open/syncd    /tmp
    hd1           jfs2       1       2       2    open/syncd    /home
    hd9var        jfs2       8       16      2    open/syncd    /var
    hd7           sysdump    8       8       1    open/syncd    N/A
    hd7a          sysdump    8       8       1    open/syncd    N/A
    hd10opt       jfs2       8       16      2    open/syncd    /opt

Remove the old multibos instance.

    # multibos -R

Migrating to AIX 6.1 using nimadm

Let's use nimadm now to migrate an AIX system. Ensure that you document the system and perform a mksysb before performing any maintenance activity. You know this already, right? But I have to say it!

We will migrate a system from AIX 5.3 to AIX 6.1. The NIM master in this environment is running AIX 6.1 TL3 SP2. Our NIM client's name is aix1 (running AIX 5.3 TL7 SP5 and migrating to AIX 6.1 TL3 SP1), and the NIM master's name is nim1.

Ensure that you read the AIX 6.1 release notes and review the documented requirements such as the amount of free disk space required.

Prior to a migration, it is always a good idea to run the pre_migration script on the system to catch any issues that may prevent the migration from completing successfully. You can find this script on the AIX 6.1 installation media.

Run this script, review the output (in /home/pre_migration), and correct any issues that it reports before migrating.

# ./pre_migration
   
All saved information can be found in: /home/pre_migration.090903105452

Checking size of boot logical volume (hd5).
   
Your rootvg has mirrored logical volumes (copies greater than 1)
Recommendation:  Break existing mirrors before migrating.
   
Listing software that will be removed from the system.

Listing configuration files that will not be merged.
   
Listing configuration files that will be merged.
   
Saving configuration files that will be merged.
   
Running lppchk commands. This may take awhile.
   
Please check /home/pre_migration.090903105452/software_file_existence_check
for possible errors.
   
Please check /home/pre_migration.090903105452/software_checksum_verification
for possible errors.
   
Please check /home/pre_migration.090903105452/tcbck.output for possible errors.
   
All saved information can be found in: /home/pre_migration.090903105452
   
It is recommended that you create a bootable system backup of your system
before migrating.

I always take a copy of the /etc/sendmail.cf and /etc/motd files before an AIX migration. These files will be replaced during the migration, and you will need to edit them again and re-apply your modifications.

Commit any applied filesets. You should also consider removing any ifixes that may hinder the migration.
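
A sketch of both steps (emgr -l lists the interim fixes installed on your system; the label IZ12345 is illustrative):

    # installp -c all
    # emgr -l
    # emgr -r -L IZ12345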

If rootvg is mirrored, I break the mirror and reduce it to a single disk. This gives me a spare disk that can be used for the migration.
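
A sketch, assuming rootvg is mirrored across hdisk0 and hdisk1, and hdisk1 is to be freed up for the migration:

    # unmirrorvg rootvg hdisk1
    # reducevg rootvg hdisk1
    # bosboot -a -d /dev/hdisk0
    # bootlist -m normal hdisk0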

To allow nimadm to do its job, I must temporarily enable rshd on the client LPAR (the .rhosts file below is given a single "+" entry). I will disable rshd again after the migration.

    # chsubserver -a -v shell -p tcp6 -r inetd
    # refresh -s inetd
    # cd /
    # rm .rhosts
    # vi .rhosts
    +
    # chmod 600 .rhosts

On the NIM master, I can now 'rsh' to the client and run a command as root.

    # rsh aix1 whoami
    root

At this point I'm ready to migrate. The process will take around 30-45 minutes; all the while the applications on the LPAR will continue to function as normal.

On the NIM master, I have created a new volume group (VG) named nimadmvg. This VG has enough capacity to hold a full copy of the NIM client's root volume group (rootvg). This VG will remain empty until the migration is started.

Likewise, on the NIM client, I have a spare disk which has enough capacity for a full copy of its rootvg.

On the master (nim1):

    # lsvg -l nimadmvg
    nimadmvg:
    LV NAME  TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT

On the client (aix1):

    # lspv
    hdisk0 0000273ac30fdcfc rootvg          active
    hdisk1 000273ac30fdd6e  None

The bos.alt_disk_install.rte fileset is installed on the NIM master:

# lslpp -l bos.alt_disk_install.rte
  Fileset                      Level  State      Description
  ----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  bos.alt_disk_install.rte   6.1.3.1  APPLIED    Alternate Disk Installation
                                                 Runtime

And it is also installed in the AIX 6.1 TL3 SP1 SPOT:

# nim -o showres 'spotaix61031'  | grep bos.alt_disk_install.rte
  bos.alt_disk_install.rte   6.1.3.1    C     F    Alternate Disk Installation

The nimadm command is executed from the NIM master.

 # nimadm -j nimadmvg -c aix1 -s spotaix61031 -l lppsourceaix61031 -d "hdisk1" -Y

Where:

* -j specifies the VG on the master that will be used for the migration
* -c is the client name
* -s is the SPOT name
* -l is the lpp_source name
* -d is the hdisk name for the alternate root volume group (altinst_rootvg)
* -Y agrees to the software license agreements for software that will be installed during the migration.

Now I can sit back and watch the migration take place. All migration activity is logged on the NIM master in the /var/adm/ras/alt_mig directory. For this migration, the log file name is aix1_alt_mig.log. Here's a sample of some of the output you can expect to see for each phase:

MASTER DATE: Mon Nov  9 14:29:09 EETDT 2009
CLIENT DATE: Mon Nov  9 14:29:09 EETDT 2009
NIMADM PARAMETERS: -j nimadmvg -c aix1 -s spotaix61031 -l lppsourceaix61031 -d hdisk1 -Y
Starting Alternate Disk Migration.

+----------------------------------------------------------------------+
Executing nimadm phase 1.
+----------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 1.
Client alt_disk_install command: alt_disk_copy -j -i /ALT_MIG_IMD -M 6.1 -P1 -d "hdisk1"
Checking disk sizes.
Creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5.
Creating logical volume alt_hd6.
Creating logical volume alt_hd8.
Creating logical volume alt_hd4.
Creating logical volume alt_hd2.
Creating logical volume alt_hd9var.
Creating logical volume alt_hd3.
Creating logical volume alt_hd1.
Creating logical volume alt_hd10opt.
Creating logical volume alt_hd7.
Creating logical volume alt_local_lv.
Creating logical volume alt_varloglv.
Creating logical volume alt_nmonlv.
Creating logical volume alt_chksyslv.
Creating logical volume alt_hd71.
Creating logical volume alt_auditlv.
Creating logical volume alt_nsrlv.
Creating logical volume alt_hd11admin.
Creating /alt_inst/ file system.
Creating /alt_inst/admin file system.
Creating /alt_inst/home file system.
Creating /alt_inst/home/nmon file system.
Creating /alt_inst/nsr file system.
Creating /alt_inst/opt file system.
Creating /alt_inst/tmp file system.
Creating /alt_inst/usr file system.
Creating /alt_inst/usr/local file system.
Creating /alt_inst/usr/local/chksys file system.
Creating /alt_inst/var file system.
Creating /alt_inst/var/log file system.
Creating /alt_inst/var/log/audit file system.
Generating a list of files
for backup and restore into the alternate file system...
Phase 1 complete.

+----------------------------------------------------------------------+
Executing nimadm phase 2.
+----------------------------------------------------------------------+
Creating nimadm cache file systems on volume group nimadmvg.
Checking for initial required migration space.
Creating cache file system /aix1_alt/alt_inst
Creating cache file system /aix1_alt/alt_inst/admin
Creating cache file system /aix1_alt/alt_inst/home
Creating cache file system /aix1_alt/alt_inst/home/nmon
Creating cache file system /aix1_alt/alt_inst/nsr
Creating cache file system /aix1_alt/alt_inst/opt
Creating cache file system /aix1_alt/alt_inst/tmp
Creating cache file system /aix1_alt/alt_inst/usr
Creating cache file system /aix1_alt/alt_inst/usr/local
Creating cache file system /aix1_alt/alt_inst/usr/local/chksys
Creating cache file system /aix1_alt/alt_inst/var
Creating cache file system /aix1_alt/alt_inst/var/log
Creating cache file system /aix1_alt/alt_inst/var/log/audit

+----------------------------------------------------------------------+
Executing nimadm phase 3.
+----------------------------------------------------------------------+
Syncing client data to cache ...

+----------------------------------------------------------------------+
Executing nimadm phase 4.
+----------------------------------------------------------------------+
nimadm: There is no user customization script specified for this phase.

+----------------------------------------------------------------------+
Executing nimadm phase 5.
+----------------------------------------------------------------------+
Saving system configuration files.
Checking for initial required migration space.
Setting up for base operating system restore.
/aix1_alt/alt_inst
Restoring base operating system.
Merging system configuration files.
Running migration merge method: ODM_merge Config_Rules.
Running migration merge method: ODM_merge SRCextmeth.
Running migration merge method: ODM_merge SRCsubsys.
Running migration merge method: ODM_merge SWservAt.
Running migration merge method: ODM_merge pse.conf.
Running migration merge method: ODM_merge vfs.
Running migration merge method: ODM_merge xtiso.conf.
Running migration merge method: ODM_merge PdAtXtd.
Running migration merge method: ODM_merge PdDv.
Running migration merge method: convert_errnotify.
Running migration merge method: passwd_mig.
Running migration merge method: login_mig.
Running migration merge method: user_mrg.
Running migration merge method: secur_mig.
Running migration merge method: RoleMerge.
Running migration merge method: methods_mig.
Running migration merge method: mkusr_mig.
Running migration merge method: group_mig.
Running migration merge method: ldapcfg_mig.
Running migration merge method: ldapmap_mig.
Running migration merge method: convert_errlog.
Running migration merge method: ODM_merge GAI.
Running migration merge method: ODM_merge PdAt.
Running migration merge method: merge_smit_db.
Running migration merge method: ODM_merge fix.
Running migration merge method: merge_swvpds.
Running migration merge method: SysckMerge.

+----------------------------------------------------------------------+
Executing nimadm phase 6.
+----------------------------------------------------------------------+
Installing and migrating software.
Updating install utilities.
+----------------------------------------------------------------------+
            Pre-installation Verification...
+----------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...


…output truncated….

install_all_updates: Generating list of updatable rpm packages.
install_all_updates: No updatable rpm packages found.

install_all_updates: Checking for recommended maintenance level 6100-03.
install_all_updates: Executing /usr/bin/oslevel -rf, Result = 6100-03
install_all_updates: Verification completed.
install_all_updates: Log file is /var/adm/ras/install_all_updates.log
install_all_updates: Result = SUCCESS
Restoring device ODM database.

+----------------------------------------------------------------------+
Executing nimadm phase 7.
+----------------------------------------------------------------------+
nimadm: There is no user customization script specified for this phase.

+----------------------------------------------------------------------+
Executing nimadm phase 8.
+----------------------------------------------------------------------+
Creating client boot image.
bosboot: Boot image is 40952 512 byte blocks.
Writing boot image to client's alternate boot disk hdisk1.

+----------------------------------------------------------------------+
Executing nimadm phase 9.
+----------------------------------------------------------------------+
Adjusting client file system sizes ...
Adjusting size for /
Adjusting size for /admin
Adjusting size for /home
Adjusting size for /home/nmon
Adjusting size for /nsr
Adjusting size for /opt
Adjusting size for /tmp
Adjusting size for /usr
Adjusting size for /usr/local
Adjusting size for /usr/local/chksys
Adjusting size for /var
Adjusting size for /var/log
Adjusting size for /var/log/audit
Syncing cache data to client ...

+----------------------------------------------------------------------+
Executing nimadm phase 10.
+----------------------------------------------------------------------+
Unmounting client mounts on the NIM master.
forced unmount of /aix1_alt/alt_inst/var/log/audit
forced unmount of /aix1_alt/alt_inst/var/log
forced unmount of /aix1_alt/alt_inst/var
forced unmount of /aix1_alt/alt_inst/usr/local/chksys
forced unmount of /aix1_alt/alt_inst/usr/local
forced unmount of /aix1_alt/alt_inst/usr
forced unmount of /aix1_alt/alt_inst/tmp
forced unmount of /aix1_alt/alt_inst/opt
forced unmount of /aix1_alt/alt_inst/nsr
forced unmount of /aix1_alt/alt_inst/home/nmon
forced unmount of /aix1_alt/alt_inst/home
forced unmount of /aix1_alt/alt_inst/admin
forced unmount of /aix1_alt/alt_inst
Removing nimadm cache file systems.
Removing cache file system /aix1_alt/alt_inst
Removing cache file system /aix1_alt/alt_inst/admin
Removing cache file system /aix1_alt/alt_inst/home
Removing cache file system /aix1_alt/alt_inst/home/nmon
Removing cache file system /aix1_alt/alt_inst/nsr
Removing cache file system /aix1_alt/alt_inst/opt
Removing cache file system /aix1_alt/alt_inst/tmp
Removing cache file system /aix1_alt/alt_inst/usr
Removing cache file system /aix1_alt/alt_inst/usr/local
Removing cache file system /aix1_alt/alt_inst/usr/local/chksys
Removing cache file system /aix1_alt/alt_inst/var
Removing cache file system /aix1_alt/alt_inst/var/log
Removing cache file system /aix1_alt/alt_inst/var/log/audit

+----------------------------------------------------------------------+
Executing nimadm phase 11.
+----------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 3.
Client alt_disk_install command: alt_disk_copy -j -i /ALT_MIG_IMD -M 6.1 -P3 -d "hdisk1"
## Phase 3 ###################
Verifying altinst_rootvg...
Modifying ODM on cloned disk.
forced unmount of /alt_inst/var/log/audit
forced unmount of /alt_inst/var/log
forced unmount of /alt_inst/var
forced unmount of /alt_inst/usr/local/chksys
forced unmount of /alt_inst/usr/local
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/nsr
forced unmount of /alt_inst/home/nmon
forced unmount of /alt_inst/home
forced unmount of /alt_inst/admin
forced unmount of /alt_inst
Changing logical volume names in volume group descriptor area.
Fixing LV control blocks...
Fixing file system superblocks...
Bootlist is set to the boot disk: hdisk1 blv=hd5

+----------------------------------------------------------------------+
Executing nimadm phase 12.
+----------------------------------------------------------------------+
Cleaning up alt_disk_migration on the NIM master.
Cleaning up alt_disk_migration on client aix1.

After the migration is complete, I confirm that the bootlist is set to the altinst_rootvg disk.

    # lspv | grep rootvg
    hdisk0 0000273ac30fdcfc rootvg          active
    hdisk1 000273ac30fdd6e  altinst_rootvg  active

   
    # bootlist -m normal -o
    hdisk1 blv=hd5

At an agreed time, I reboot the LPAR and confirm that the system is now running AIX 6.1.

    # shutdown -Fr

; system reboots here…

    # oslevel -s
    6100-03-01-0921
   
    # instfix -i | grep AIX
        All filesets for 6.1.0.0_AIX_ML were found.
        All filesets for 6100-00_AIX_ML were found.
        All filesets for 6100-01_AIX_ML were found.
        All filesets for 6100-02_AIX_ML were found.
        All filesets for 6100-03_AIX_ML were found.

At this point, I would perform some general AIX system health checks to ensure that the system is configured and running as I'd expect. There is also a post_migration script that you can run to verify the migration. You can find this script in /usr/lpp/bos after the migration.
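
For example, the script can be run directly from that location, and its report reviewed, much like the pre_migration run shown earlier:

    # /usr/lpp/bos/post_migration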

You may want to consider upgrading other software, such as openssl, openssh, and lsof, at this stage.

The rsh daemon can now be disabled:

    # chsubserver -d -v shell -p tcp6 -r inetd
    # refresh -s inetd
    # cd /
    # rm .rhosts
    # ln -s /dev/null .rhosts

With the migration finished, the applications are started, and the application support team verifies that everything is functioning as expected. I also take a mksysb and document the system configuration after the migration.

Once we are all satisfied that the migration has completed successfully, we then return rootvg to a mirrored disk configuration.

    # lspv | grep old_rootvg
    hdisk0  000071da26fe3bd0      old_rootvg
    # alt_rootvg_op -X old_rootvg
    # extendvg -f rootvg hdisk0
    # mirrorvg rootvg hdisk0
    # bosboot -a -d /dev/hdisk0
    # bosboot -a -d /dev/hdisk1
    # bootlist -m normal hdisk0 hdisk1
    # bootlist -m normal -o
    hdisk0 blv=hd5
    hdisk1 blv=hd5

If there was an issue with the migration, I could easily back out to the previous release of AIX. Instead of re-mirroring rootvg (above), we would change the boot list to point at the previous rootvg disk (old_rootvg) and reboot the LPAR.

    # lspv | grep old_rootvg
    hdisk0  000071da26fe3bd0      old_rootvg
    # bootlist -m normal hdisk0
    # bootlist -m normal -o
    hdisk0 blv=hd5
    # shutdown -Fr

This is much simpler and faster than restoring a mksysb image (via NIM, tape, or DVD), as you would with a conventional migration method.

Conclusion

By using nimadm, we were able to reduce the overall downtime required when migrating our systems to AIX 6.1. We were also given a convenient way to back out a migration, had it been necessary to do so. I hope this provides you with some ideas on how to best migrate your systems to AIX 6.1, when the time comes.
