unixadmin.free.fr just another IBM blog and technotes backup

25 Aug 2015

Minimum NIM master levels for VIOS clients

For me, the minimum NIM master level for a VIOS is also a handy reference point for mapping a VIOS ioslevel to the corresponding AIX level.

https://www-304.ibm.com/webapp/set2/sas/f/flrt/viostable.html


If using NIM to back up, install, or update a VIOS partition, the NIM master must be at a level greater than or equal to the levels shown below.

VIOS Release    VIOS Level        Minimum NIM master level
VIOS 2.2.3      VIOS 2.2.3.50     AIX 6100-09-05 / 7100-03-05
                VIOS 2.2.3.4      AIX 6100-09-04 / 7100-03-04
                VIOS 2.2.3.3      AIX 6100-09-03 / 7100-03-03
                VIOS 2.2.3.2      AIX 6100-09-02 / 7100-03-02
                VIOS 2.2.3.1      AIX 6100-09-01 / 7100-03-01
                VIOS 2.2.3.0      AIX 6100-09 / 7100-03
VIOS 2.2.2      VIOS 2.2.2.6      AIX 6100-08-06 / 7100-02-06
                VIOS 2.2.2.5      AIX 6100-08-05 / 7100-02-05
                VIOS 2.2.2.4      AIX 6100-08-04 / 7100-02-04
                VIOS 2.2.2.3      AIX 6100-08-03 / 7100-02-03
                VIOS 2.2.2.2      AIX 6100-08-02 / 7100-02-02
                VIOS 2.2.2.1      AIX 6100-08-01 / 7100-02-01
                VIOS 2.2.2.0      AIX 6100-08 / 7100-02
VIOS 2.2.1      VIOS 2.2.1.9      AIX 6100-07-10 / 7100-01-10
                VIOS 2.2.1.8      AIX 6100-07-09 / 7100-01-09
                VIOS 2.2.1.7      AIX 6100-07-08 / 7100-01-07
                VIOS 2.2.1.5      AIX 6100-07-05 / 7100-01-05
                VIOS 2.2.1.4      AIX 6100-07-04 / 7100-01-04
                VIOS 2.2.1.3      AIX 6100-07-02 / 7100-01-02
                VIOS 2.2.1.1      AIX 6100-07-01 / 7100-01-01
                VIOS 2.2.1.0      AIX 6100-07 / 7100-01
VIOS 2.2.0      VIOS 2.2.0.13     AIX 6100-06-05 / 7100-00-03
                VIOS 2.2.0.12     AIX 6100-06-05 / 7100-00-03
                VIOS 2.2.0.11     AIX 6100-06-03 / 7100-00-02
                VIOS 2.2.0.10     AIX 6100-06-01 / 7100-00-01
                VIOS 2.2.0.0      AIX 6100-06 / 7100-00
VIOS 2.1.3      VIOS 2.1.3.10     AIX 6100-05-02
                VIOS 2.1.3.0      AIX 6100-05
VIOS 2.1.2      VIOS 2.1.2.13     AIX 6100-04-03
                VIOS 2.1.2.12     AIX 6100-04-02
                VIOS 2.1.2.11     AIX 6100-04-02
                VIOS 2.1.2.10     AIX 6100-04-01
                VIOS 2.1.2.0      AIX 6100-04
VIOS 2.1.1      VIOS 2.1.1.10     AIX 6100-03-01
                VIOS 2.1.1.0      AIX 6100-03
VIOS 2.1.0      VIOS 2.1.0.10     AIX 6100-02-02
                VIOS 2.1.0.1      AIX 6100-02-01
                VIOS 2.1.0.0      AIX 6100-02
VIOS 1.5.2      VIOS 1.5.2.6      AIX 5300-08-08
                VIOS 1.5.2.5      AIX 5300-08-05
                VIOS 1.5.2.1      AIX 5300-08-01
                VIOS 1.5.2.0      AIX 5300-08
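
Before driving a NIM operation against a VIOS client, the two levels can be compared quickly from the command line (a minimal sketch: oslevel runs on the NIM master, ioslevel runs from the padmin shell on the VIOS):

On the NIM master (reports the AIX Technology Level and Service Pack):
# oslevel -s

On the VIOS client, from the padmin shell (reports the VIOS level):
$ ioslevel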
31 Jul 2015

BACKUP VM uses NBD over SAN transport

Technote (troubleshooting)

Problem (Abstract)

Although "VMVSTORTRANSPORT SAN" is set in the datamovers options file, the backup uses NBD.
Symptom

Backup uses NBD transport

Cause

The VMware vCenter 'SSL.VERSION' setting is set to TLSv1.

This option can be found in the VMware vSphere client -> Administration -> vCenter Server Settings -> Advanced Settings.

Environment

Windows/Linux datamover.

vCenter 5.1 or 5.5.

Diagnosing the problem

A Tivoli Storage Manager client trace will show the following:

vmvddksdk.cpp (1275): diskLibPlugin: 2015-07-08T15:32:32.100-07:00 [02832 error 'Default']
Cannot use advanced transport modes for VCENTER/moRef=vm-12345/snapshot-6789: Other error encountered: SSL Exception: error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure.

Resolving the problem

The SSL.VERSION parameter either needs to be set to 'ALL', or, if TLS is a requirement, the Tivoli Storage Manager client (datamover) must be upgraded to version 7.1.2.x. This version of the client includes VDDK 6.0, which adds support for TLS.

https://www.vmware.com/support/developer/vddk/vddk-600-releasenotes.html#whatsnew
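
For reference, the transport preference mentioned above is set in the datamover's options file; a minimal excerpt might look like the following (a sketch only; the file name, for example dsm.opt on a Windows datamover, and the surrounding options depend on your environment):

* Prefer SAN transport for VM backups (option name taken from this technote)
VMVSTORTRANSPORT SAN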

27 Jul 2015

PowerHA Service Pack APAR content Matrix

Source: IBM Technote

Last Updated: 7/24/2015
Table View: (Links to list/description of APARs in each service pack)

PowerHA SystemMirror Version: V6.1 / V7.1.0 / V7.1.1 / V7.1.2 / V7.1.3
(Each row below gives the release date of that service pack for the versions that include it, reading left to right in the column order above.)
SP1 – 12/1/2009 SP1 – 9/1/2010 SP1 – 2/1/2012 SP1 – 11/1/2012 SP1 - 5/16/2014
SP2 – 5/1/2010 SP2 – 11/1/2010 SP2 – 3/1/2012 SP2 – 3/1/2013 SP2 - 11/10/2014
SP3 – 9/1/2010 SP3 – 12/1/2010 SP3 – 7/1/2012 SP3 – 7/1/2013 SP3 - 3/27/2015
SP4 – 1/1/2011 SP4 – 8/1/2011 SP4 – 12/1/2012 SP4 - 6/23/2014
SP5 – 4/1/2011 SP5 – 3/1/2012 SP5 – 5/1/2013 SP5 - 11/21/2014
SP6 – 8/1/2011 SP6 – 10/1/2012 SP6 – 2/19/2014 SP6 - 7/24/2015
SP7 – 12/1/2011 SP7 – 2/1/2013 SP7 - 8/26/2014
SP8 – 5/1/2012 SP8 – 6/1/2013 SP8 - 3/3/2015
SP9 – 8/1/2012 SP9 – 5/9/2014 SP9 - 5/27/2015
SP10 – 1/1/2013
SP11 – 5/1/2013
SP12 – 9/1/2013
SP13 - 7/30/2014
SP14 - 11/21/2014
SP15 - 4/27/2015
17 Jul 2015

Determining rlimit (ulimit) values for a running process

Question

How can I find out what limits are set for a currently running process?

Answer

The easiest way is to download the pdump.sh script and run it against the process. The pdump tool can be downloaded from here:
ftp://ftp.software.ibm.com/aix/tools/debug/

No installation is needed; just change the permissions of the file so it can be executed:
$ chmod +x pdump.sh

Then run it against the process-id (PID) of the process you wish to examine. The pdump.sh script will create an output file containing information regarding that process.
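
If you only know the process name rather than its PID, the PID can be looked up first with the standard AIX ps command (a small sketch; tier1slp is simply the process from the example output further below):

# ps -eo pid,comm | grep tier1slp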

# ./pdump.sh 3408030
The name of the output file contains the process name, the PID, and the current date. For example:

pdump.tier1slp.3408030.13May2015-11.18.13.out

This is an ASCII text file and can be inspected with "more" or "view".

Determining the limit values
Limits in a process are kept in the user area or "uarea" of the process memory. This section in the pdump output starts with the title "Resource limits:"

Resource limits:
fsblimit....00000000001FFFFF

rlimit[CPU]........... cur 7FFFFFFF max 7FFFFFFF
saved_rlimit[CPU]..... cur 7FFFFFFFFFFFFFFF max 7FFFFFFFFFFFFFFF
rlimit_flag[CPU]...... cur INF max INF

rlimit[FSIZE]......... cur 3FFFFE00 max 3FFFFE00
saved_rlimit[FSIZE]... cur 000000003FFFFE00 max 000000003FFFFE00
rlimit_flag[FSIZE].... cur SML max SML

rlimit[DATA].......... cur 08000000 max 7FFFFFFF
saved_rlimit[DATA].... cur 0000000008000000 max 7FFFFFFFFFFFFFFF
rlimit_flag[DATA]..... cur SML max INF

rlimit[STACK]......... cur 02000000 max 7FFFFFFF
saved_rlimit[STACK]... cur 0000000002000000 max 0000000100000000
rlimit_flag[STACK].... cur SML max MAX

rlimit[CORE].......... cur 3FFFFE00 max 7FFFFFFF
saved_rlimit[CORE].... cur 000000003FFFFE00 max 7FFFFFFFFFFFFFFF
rlimit_flag[CORE]..... cur SML max INF

rlimit[RSS]........... cur 02000000 max 7FFFFFFF
saved_rlimit[RSS]..... cur 0000000002000000 max 7FFFFFFFFFFFFFFF
rlimit_flag[RSS]...... cur SML max INF

rlimit[AS]............ cur 7FFFFFFF max 7FFFFFFF
saved_rlimit[AS]...... cur 0000000000000000 max 0000000000000000
rlimit_flag[AS]....... cur INF max INF

rlimit[NOFILE]........ cur 000007D0 max 7FFFFFFF
saved_rlimit[NOFILE].. cur 00000000000007D0 max 7FFFFFFFFFFFFFFF
rlimit_flag[NOFILE]... cur SML max INF

rlimit[THREADS]....... cur 7FFFFFFF max 7FFFFFFF
saved_rlimit[THREADS]. cur 0000000000000000 max 0000000000000000
rlimit_flag[THREADS].. cur INF max INF

rlimit[NPROC]......... cur 7FFFFFFF max 7FFFFFFF
saved_rlimit[NPROC]... cur 0000000000000000 max 0000000000000000
rlimit_flag[NPROC].... cur INF max INF

The resource limit for each ulimit value is represented here. As values could be either 32-bit or 64-bit, the include file /usr/include/sys/user.h tells us how to read them:

/*
* To maximize compatibility with old kernel code, a 32-bit
* representation of each resource limit is maintained in U_rlimit.
* Should the limit require a 64-bit representation, the U_rlimit
* value is set to RLIM_INFINITY, with actual 64-bit limit being
* stored in U_saved_rlimit. These flags indicate what
* the real situation is:
*
* RLFLAG_SML => limit correctly represented in 32-bit U_rlimit
* RLFLAG_INF => limit is infinite
* RLFLAG_MAX => limit is in 64_bit U_saved_rlimit.rlim_max
* RLFLAG_CUR => limit is in 64_bit U_saved_rlimit.rlim_cur
*/

So using this and our pdump output, we can view the value of NOFILE for example:

rlimit[NOFILE]........ cur 000007D0 max 7FFFFFFF
saved_rlimit[NOFILE].. cur 00000000000007D0 max 7FFFFFFFFFFFFFFF
rlimit_flag[NOFILE]... cur SML max INF

The rlimit_flag for NOFILE is set to SML, so the value is a 32-bit integer, and is stored in the rlimit.cur variable.

0x7d0 = 2000 decimal, so the limit for that user, picked up by the process when it started, is 2000.
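
The hexadecimal values can also be converted directly in the shell, for example with ksh base#number arithmetic:

# echo $((16#7d0))
2000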

Source: IBM Technote

30 Jun 2015

VIOS Adapter_reset on SEA LOAD SHARING

Technote (FAQ)

Question
How can I prevent a network outage on an SEA in load-sharing mode over a physical adapter or LACP (802.3ad link aggregation)?

Answer
SEA load sharing is initiated by the backup SEA. On VIOS levels older than 2.2.4.0, an SEA going to the backup state triggers an adapter reset by default.
Some physical adapters can take 30 seconds or longer to complete the adapter reset, and LACP negotiation can take another 30 seconds. If the SEA is configured over such physical adapters or over LACP, network communication for the SEA in BACKUP_SH state may be affected temporarily during a system reboot or when a cable is pulled and plugged back in.

Changing value of "adapter_reset" attribute to "no" on a pair of SEA in loadsharing mode.

1. Log in as padmin.

2. Change to root prompt:
$ oem_setup_env

3. List the adapters:
# lsdev -Cc adapter

4. Find the Shared Ethernet Adapters
ent7 Available Shared Ethernet Adapter

5. Use the entstat command to list the components of the SEA:
# entstat -d ent7 | grep State
On SEA in primary loadsharing mode
State : PRIMARY_SH
On SEA in backup loadsharing mode
State : BACKUP_SH

6. Use the lsattr command to list attributes of the SEA
# lsattr -El ent7
adapter_reset yes

7. Change adapter_reset to "no". This change is dynamic and does not require a reboot.
# chdev -l ent7 -a adapter_reset=no
(From the padmin shell, the equivalent command is: chdev -dev ent7 -attr adapter_reset=no)

8. Use the lsattr command to confirm the change
# lsattr -El ent7
adapter_reset no
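
To review every SEA on a VIOS at once, a small script along these lines can help (a sketch only, run from the root shell reached via oem_setup_env; it relies on the same lsdev, entstat and lsattr output formats shown above):

#!/bin/ksh
# Report the HA state and adapter_reset setting of every Shared Ethernet Adapter
for sea in $(lsdev -Cc adapter | grep "Shared Ethernet Adapter" | awk '{print $1}')
do
    state=$(entstat -d $sea | grep -w "State" | head -1 | awk '{print $NF}')
    reset=$(lsattr -El $sea -a adapter_reset -F value)
    echo "$sea  state=$state  adapter_reset=$reset"
done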

10 May 2015

Common EFS Errors and Solutions

Question

This document is a collection of errors encountered when using EFS and solutions to those issues.

Answer

1) Problem: Can't enable EFS on the system
# efsenable -a
/usr/lib/drivers/crypto/clickext: A file or directory in the path name does not exist.
Unable to load CLiC kernel extension. Please check your installation.

Solution:
Install the CLiC filesets from the AIX Expansion Pack CD:

$ installp -l -d clic.rte
Fileset Name                Level                     I/U Q Content
====================================================================
clic.rte.includes           4.3.0.0                    I  N usr
#   CryptoLite for C Library Include File

clic.rte.kernext            4.3.0.0                    I  N usr,root
#   CryptoLite for C Kernel

clic.rte.lib                4.3.0.0                    I  N usr
#   CryptoLite for C Library
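
The listing above only shows what is installable; a minimal install sketch, assuming the Expansion Pack media is mounted at /mnt (adjust the device or directory for your environment), would be the following. efsenable -a then prompts for root's initial keystore password:

# installp -acgXY -d /mnt clic.rte
# efsenable -a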

2) Problem: Can't enable EFS on the system

# efsenable -a
Unable to load CLiC kernel extension. Please check your installation.
(Please make sure latest version of clic.rte is installed.)

Solution:
Double-check that you have installed the correct version of the CLiC filesets for your Technology Level of AIX.

For AIX 6100-01 use clic.rte.4.3.0.0.I on the Expansion Pack CD
For AIX 6100-02 use clic.rte.4.5.0.0.I on the Expansion Pack CD

AIX 6100-03 has been updated to include clic.rte on the base media set to prevent boot issues on systems with EFS enabled. Use clic.rte.4.6.0.1.I

For AIX 6100-04 use clic.rte.4.7.0.0.I which is also included in the base OS media.

3) Problem: Can't view the user's key:

$ efskeymgr -v
Problem initializing EFS framework.
Please check EFS is installed and enabled (see efsenable) on you system.
Error was: (EFS was not configured)

Solution:
Enable EFS on the system:
# efsenable -a
and give root's password when it asks for root's initial keystore.

4) Problem: Can't enable encryption inheritance on a directory.
# efsmgr -E testdir
or
Can't enable encryption on a specific file
# efsmgr -e myfile

Problem initializing EFS framework.
Please check EFS is installed and enabled on you system.
Error was: (EFS was not configured)

Solution:
Make sure CLiC filesets are installed
Enable EFS on the system
Enable EFS and RBAC on the filesystem:

# chfs -a efs=yes /myfilesystem
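
To confirm that the filesystem is now EFS-capable, the EFS flag in the lsfs -q output can be checked (a small sketch; recent AIX levels report an EFS attribute for JFS2 filesystems):

# lsfs -q /myfilesystem | grep -i efs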

5) Problem: Have enabled EFS on a filesystem but get an error when mounting:

# mount /efstest
The CLiC library (libclic.a) is not available. Install clic.rte and run 'efsenable -a'.

Solution:
Install CLiC filesets
Enable EFS on the system
Remount the filesystem

6) Problem: No encryption algorithms show up!
# efsenable -q
List of supported algorithms for keystores:
1
2
3

List of supported ciphers for files:
1
2
3
4
5
6

Solution:
Install CLiC filesets

# efsenable -q
List of supported algorithms for keystores:
1  RSA_1024
2  RSA_2048
3  RSA_4096

List of supported ciphers for files:
1  AES_128_CBC
2  AES_192_CBC
3  AES_256_CBC
4  AES_128_ECB
5  AES_192_ECB
6  AES_256_ECB

Source: IBM Technote

7 Jan 2015

How to check for memory over-commitment in AME

Question

In LPARs that use the POWER7 (and later) feature Active Memory Expansion (AME), assessing memory resources is a more complex task than on systems with dedicated, uncompressed memory. How can the memory in such a system be evaluated?

Answer

Introduction

Active Memory Expansion (AME) allows memory pages to be compressed in order to increase the system's effective memory capacity. Under high usage, unused computational memory is moved to the compressed pool instead of being paged out to paging space. This is typically employed in environments that have excess CPU resources but are somewhat constrained on physical memory. Active Memory Expansion is a feature introduced with POWER7/POWER7+ systems and requires a minimum level of AIX 6.1 TL4 SP2.

AME Scenarios

After planning and configuring the system with the amepat tool, there are some scenarios that might require a change of AME configuration:

  1. Virtual memory exceeds the Target Memory Expansion Size

     When this scenario is present, the system is over-committed and will start paging out to disk. From a configuration standpoint, rerun the amepat tool to either increase the Expansion Factor or increase the size of physical memory.

  2. Virtual memory exceeds the assigned physical memory but is less than the Target Memory Expansion Size (with no deficit)

     This is the ideal scenario when using AME, as the compressed pool is able to satisfy the memory demands of the LPAR.

  3. Virtual memory exceeds the assigned physical memory but is less than the Target Memory Expansion Size (with a deficit)

     When the system is unable to compress memory pages to meet the Target Memory Expansion Size, there will be a deficit, and pages that exceed the allocated memory are moved to paging space. Not all memory pages are subject to compression (pinned pages or client pages), and therefore a deficit is present. Rerun the amepat tool to either decrease the Expansion Factor or increase the size of physical memory.

  4. Virtual memory is below the assigned physical memory

     While there is no over-commitment problem with this setup, the LPAR is not benefiting from AME. Rerun the amepat tool to decrease the allocated physical memory and evaluate the current Expansion Factor.

Tools to use with a live example
The following tools on AIX can be used to determine the current status of an AME-enabled LPAR (with a live example from the IBM Redbook IBM PowerVM Virtualization Managing and Monitoring):

# amepat

  1. Comparing the Virtual Memory Size (MB) to the Target Expanded Memory Size, we find that the system is not over-committed logically.
  2. Because of the Deficit Memory Size (MB), the system will start using paging space, since it cannot compress any more memory.

# vmstat -c

Comparing the avm value (in 4k pages) to the tmem value (MB) will tell us if the system is logically over-committed.

Observing the dxm value shows the deficit in 4 KB pages.
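
Since avm is reported in 4 KB pages and tmem in MB, a quick unit conversion makes the comparison easier (a trivial sketch using ksh arithmetic; the avm value 1048576 is just an example):

# echo $((1048576 / 256))
4096

An avm of 1048576 pages therefore corresponds to 4096 MB of virtual memory, which can be compared directly with tmem.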

# svmon -O summary=AME

Comparing the virtual column to the size column shows no issue with logical memory over-commitment.

The dxm column shows the deficit in 4 KB pages.

For more information regarding AME, please refer to the IBM Redbook IBM PowerVM Virtualization Managing and Monitoring (sg247590):
http://www.redbooks.ibm.com/abstracts/sg247590.html

Source: IBM Technote

13 Nov 2014

IBM AIX – From Strength to Strength – 2014

An interesting document that summarizes the features and support, by version, for AIX, Virtual I/O Server, and other IBM POWER products.

Permanent link:
http://public.dhe.ibm.com/common/ssi/ecm/en/poo03022usen/POO03022USEN.PDF

POO03022USEN

Thank you Jay.

4 Nov 2014

repair AIX IPL hang at LED value 518

To repair an AIX server hung at LED 518, you can follow this IBM technote:
http://www-01.ibm.com/support/docview.wss?uid=isg3T1000131

In my case that was not enough, because the LVCB of hd2 was corrupted:
- hd2 LVCB corrupted
- /etc/filesystems entry for hd2 corrupted
- ODM corrupted in CuAt

Boot into Maintenance mode from an AIX DVD, NIM, or a mksysb image (tape, DVD, ISO).
Choose option 3, then Access a Root Volume Group, and identify the disk containing the system LVs (hd4, hd2, ...).

                           Welcome to Base Operating System
                      Installation and Maintenance

Type the number of your choice and press Enter. Choice is indicated by >>>.

     1 Start Install Now with Default Settings

     2 Change/Show Installation Settings and Install

=> 3 Start Maintenance Mode for System Recovery

     4 Make Additional Disks Available

     5 Select Storage Adapters


                            Maintenance

Type the number of your choice and press Enter.

=> 1 Access a Root Volume Group


Type the number of your choice and press Enter.

    0 Continue



                           Access a Root Volume Group

Type the number for a volume group to display the logical volume information
and press Enter.

   1)   Volume Group 00c8502e00004c0000000145e1f142ed contains these disks:
          hdisk0  10240        vscsi
 

                           Volume Group Information

 ------------------------------------------------------------------------------
    Volume Group ID 00c8502e00004c0000000145e1f142ed includes the following
    logical volumes:

         hd5         hd6         hd8         hd4         hd2      hd9var
         hd3         hd1     hd10opt   hd11admin    livedump
 ------------------------------------------------------------------------------

Choose option 2 (Access this Volume Group and start a shell before mounting filesystems).

Type the number of your choice and press Enter.

    1) Access this Volume Group and start a shell
=> 2) Access this Volume Group and start a shell before mounting filesystems

While the rootvg volume group is being imported, an unusual message appears.

Importing Volume Group...
rootvg
Could not find "/" and/or "/usr" filesystems.
Exiting to shell.

Check the filesystems and reformat the log device.

# fsck -y /dev/hd4
# fsck -y /dev/hd2
# fsck -y /dev/hd9var
# fsck -y /dev/hd3
# fsck -y /dev/hd1
# logform /dev/hd8
logform: destroy /dev/rhd8 (y)?y

Display the contents of the Logical Volume Control Block (LVCB) of the LVs.
We can see that the label in the LVCB of hd2 is corrupted.

# getlvcb hd2 -AT
         AIX LVCB
         intrapolicy = c
         copies = 1
         interpolicy = m
         lvid = 00c8502e00004c0000000145e1f142ed.5
         lvname = hd2
         label = /usr/!+or
         machine id = 8502E4C00
         number lps = 165
         relocatable = y
         strict = y
         stripe width = 0
         stripe size in exponent = 0
         type = jfs2
         upperbound = 32
         fs = vfs=jfs2:log=/dev/hd8
         time created  = Fri May  9 17:04:23 2014
         time modified = Tue Nov  4 13:19:18 2014

Fix the label of hd2 with the putlvcb command, then verify:

# putlvcb -L /usr hd2

# getlvcb hd2 -AT
         AIX LVCB
         intrapolicy = c
         copies = 1
         interpolicy = m
         lvid = 00c8502e00004c0000000145e1f142ed.5
         lvname = hd2
         label = /usr
         machine id = 8502E4C00
         number lps = 165
         relocatable = y
         strict = y
         stripe width = 0
         stripe size in exponent = 0
         type = jfs2
         upperbound = 32
         fs = vfs=jfs2:log=/dev/hd8
         time created  = Fri May  9 17:04:23 2014
         time modified = Tue Nov  4 13:23:00 2014

At this point we cannot chroot into the rootvg system disk, because the VG was imported with a corrupted value for hd2 (/usr). We have to reboot into Maintenance mode again.

Choose option 2 and verify that the previous error no longer appears.

Type the number of your choice and press Enter.

   1) Access this Volume Group and start a shell
   2) Access this Volume Group and start a shell before mounting filesystems

  99) Previous Menu

    Choice [99]: 2
Importing Volume Group...
rootvg
Checking the / filesystem.

The current volume is: /dev/hd4
Primary superblock is valid.
Checking the /usr filesystem.

The current volume is: /dev/hd2
Primary superblock is valid.
Exit from this shell to continue the process of accessing the root
volume group.

Pour ce "chrooter" dans le disque rootvg et monter les filesystems taper "exit"

# exit

# df
Filesystem    512-blocks      Free %Used    Iused %Iused Mounted on
/dev/ram0         720896    319192   56%    11797    22% /
/proc             720896    319192   56%    11797    22% /proc
/dev/cd0               -         -    -         -     -  /SPOT
/dev/hd4          720896    319192   56%    11797    22% /
/dev/hd2         5406720    638528   89%    54526    39% /usr
/dev/hd3          294912    219096   26%       88     1% /tmp
/dev/hd9var      1015808    303160   71%     8987    17% /var
/dev/hd10opt     1015808    484192   53%     8860    13% /opt

We can see that the /usr label is corrupted:

# lsvg -l rootvg
rootvg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
hd5                 boot       2       2       1    closed/syncd  N/A
hd6                 paging     32      32      1    open/syncd    N/A
hd8                 jfs2log    1       1       1    open/syncd    N/A
hd4                 jfs2       22      22      1    open/syncd    /
hd2                 jfs2       165     165     1    open/syncd    /usr/!+or
hd9var              jfs2       31      31      1    open/syncd    /var
hd3                 jfs2       9       9       1    open/syncd    /tmp
hd1                 jfs2       1       1       1    closed/syncd  /home
hd10opt             jfs2       31      31      1    open/syncd    /opt
hd11admin           jfs2       8       8       1    closed/syncd  /admin
livedump            jfs2       16      16      1    closed/syncd  /var/adm/ras/livedump

# grep -p hd2 /etc/filesystems
/usr/!+or:
        dev             = /dev/hd2
        vfs             = jfs2
        log             = /dev/hd8
        mount           = automatic
        check           = false
        type            = bootfs
        vol             = /usr
        free            = false

# odmget -q 'name=hd2 and attribute=label' CuAt

CuAt:
        name = "hd2"
        attribute = "label"
        value = "/usr/!+or"
        type = "R"
        generic = "DU"
        rep = "s"
        nls_index = 640

Fix the corruption in the ODM and in the /etc/filesystems file.

Export the corrupted ODM value (hd2 + label attribute) to a file, then edit and fix that file:

# odmget -q 'name=hd2 and attribute=label' CuAt > /tmp/label.odm

# export TERM=vt320
# export VISUAL=vi
# set -o vi
# vi /tmp/label.odm

CuAt:
        name = "hd2"
        attribute = "label"
        value = "/usr"
        type = "R"
        generic = "DU"
        rep = "s"
        nls_index = 640

Back up the CuAt ODM class, then delete the corrupted value from it:

# cp /etc/objrepos/CuAt /tmp/CuAt
# odmdelete -q 'name=hd2 and attribute=label' -o CuAt
0518-307 odmdelete: 1 objects deleted.

Add the new value back from the file and verify the CuAt ODM:

# odmadd /tmp/label.odm
# odmget -q 'name=hd2 and attribute=label' CuAt

CuAt:
        name = "hd2"
        attribute = "label"
        value = "/usr"
        type = "R"
        generic = "DU"
        rep = "s"
        nls_index = 640

Save the ODM to the Boot Logical Volume (hd5):

# savebase -v
saving to '/dev/hd5'
47 CuDv objects to be saved
120 CuAt objects to be saved
14 CuDep objects to be saved
8 CuVPD objects to be saved
356 CuDvDr objects to be saved
2 CuPath objects to be saved
0 CuPathAt objects to be saved
0 CuData objects to be saved
0 CuAtDef objects to be saved
Number of bytes of data to save = 19005
Compressing data
Compressed data size is = 6850
        bi_start     = 0x3600
        bi_size      = 0x1b20000
        bd_size      = 0x1b00000
        ram FS start = 0x9363b0
        ram FS size  = 0x114bc17
        sba_start    = 0x1b03600
        sba_size     = 0x20000
        sbd_size     = 0x1ac6
Checking boot image size:
        new save base byte cnt = 0x1ac6
Wrote 6854 bytes
Successful completion

Edit the /etc/filesystems stanza for /usr, then check it:

# cp /etc/filesystems /etc/filesystems.bkp
# vi /etc/filesystems
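
If you prefer a non-interactive edit, a hedged alternative (assuming the corrupted stanza header is exactly "/usr/!+or:" as shown above, and relying on the backup copy made just before) is:

# sed 's|^/usr/!+or:|/usr:|' /etc/filesystems > /tmp/filesystems.fixed
# cp /tmp/filesystems.fixed /etc/filesystems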

# grep -p hd2 /etc/filesystems
/usr:
        dev             = /dev/hd2
        vfs             = jfs2
        log             = /dev/hd8
        mount           = automatic
        check           = false
        type            = bootfs
        vol             = /usr
        free            = false

Finally, flush memory to the filesystems and reboot.

# sync; sync; sync; reboot
30 Oct 2014

Creating NIM resources on an NFS shared NAS device

You can use a network-attached storage (NAS) device to store your Network Installation Management (NIM) resources by using the nas_filer resource server.

NIM support allows the hosting of file-type resources (such as mksysb, savevg, resolv_conf, bosinst_data, and script) on a NAS device. The resources can be defined in the NIM server database, and can be used for installation without changing any network information or configuration definitions on the Shared Product Object Tree (SPOT) server.

The nas_filer resource server is available in the NIM environment, and requires an interface attribute and a password file. You must manually define export rules and perform storage and disk management before you use any NIM operations.

To create resources on a NAS device by using the nas_filer resource server, complete the following steps:

1. Define the nas_filer object. You can enter a command similar to the following example:

    # nim -o define -t nas_filer -a if1="find_net als046245.server.com 0" -a passwd_file=/export/nim/pswfile netapp1

2. Define a mksysb file that exists on the NAS device as a NIM resource. You can enter a command similar to the following example:

    # nim -o define -t mksysb -a server=netapp1 -a location=/vol/vol0/nim_lun1/client1.nas_filer NetApp_bkup1

3. Optional: If necessary, create a new resource (client backup) on the NAS device. You can use the following command to create a mksysb resource:

    # nim -o define -t mksysb -a server=netapp1 -a location=/vol/vol10/nim_lun1/mordor05_bkup -a source=mordor05 -a mk_image=yes NetApp_mordor05

4. Optional: If necessary, copy an existing NIM resource to the nas_filer object. You can use the following command to copy a mksysb resource:

    # nim -o define -t mksysb -a server=netapp1 -a location=/vol/vol10/nim_lun1/replicate_bkup -a source=master_backup NetApp_master_backup
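
To confirm the definitions, the standard lsnim command can display the new objects (a quick hedged check):

    # lsnim -l netapp1
    # lsnim -l NetApp_bkup1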

SOURCE: IBM Knowledge Center
