unixadmin.free.fr just another IBM blog and technotes backup

7jan/15

How to check for memory over-commitment in AME

Question

In LPARs that use the POWER7 (and later) feature Active Memory Expansion (AME), assessing memory resources is more complex than on systems with dedicated memory. How can the memory in such a system be evaluated?

Answer

Introduction

Active Memory Expansion (AME) compresses memory pages to increase the system's effective memory capacity. Under memory pressure, unused computational memory is moved to the compressed pool instead of being paged out to paging space. AME is typically employed in environments that have spare CPU resources but are somewhat constrained on physical memory. It is a feature introduced with POWER7/POWER7+ systems and requires a minimum level of AIX 6.1 TL4 SP2.

AME Scenarios

After planning and configuring the system with the amepat tool, some scenarios may require a change to the AME configuration:

  1. Virtual memory exceeds the Target Memory Expansion Size.

     The system is over-committed and will start paging out to disk. From a configuration standpoint, rerun the amepat tool and either increase the Expansion Factor or increase the size of physical memory.

  2. Virtual memory exceeds assigned physical memory but is below the Target Memory Expansion Size (no deficit).

     This is the ideal scenario when using AME: the compressed pool is able to satisfy the memory demands of the LPAR.

  3. Virtual memory exceeds assigned physical memory but is below the Target Memory Expansion Size (with deficit).

     When the system is unable to compress enough memory pages to meet the Target Memory Expansion Size, there is a deficit, and pages that exceed the allocated memory are moved to paging space. Not all memory pages can be compressed (for example, pinned pages or client pages), which is how a deficit arises. Rerun the amepat tool and either decrease the Expansion Factor or increase the size of physical memory.

  4. Virtual memory is below assigned physical memory.

     While there is no over-commitment problem with this setup, the LPAR is not benefiting from AME. Rerun the amepat tool to decrease the allocated physical memory and re-evaluate the current Expansion Factor.
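The four scenarios above amount to a simple decision rule. The sketch below is illustrative only, not an AIX tool: the function name and all sample values (in MB) are hypothetical stand-ins for figures you would read from amepat or vmstat output.

```python
# Illustrative sketch only -- not an AIX command. All names and sample
# values (in MB) are hypothetical stand-ins for amepat/vmstat figures.

def ame_scenario(virtual_mb, physical_mb, target_expanded_mb, deficit_mb=0):
    """Classify an LPAR into one of the four AME scenarios above."""
    if virtual_mb > target_expanded_mb:
        return "over-committed"   # scenario 1: will page out to disk
    if virtual_mb > physical_mb:
        if deficit_mb > 0:
            return "deficit"      # scenario 3: uncompressible pages spill to paging space
        return "ideal"            # scenario 2: compressed pool absorbs the excess
    return "under-utilized"       # scenario 4: AME brings no benefit

print(ame_scenario(13000, 8192, 12288))                 # over-committed
print(ame_scenario(9000, 8192, 12288))                  # ideal
print(ame_scenario(9000, 8192, 12288, deficit_mb=512))  # deficit
print(ame_scenario(6000, 8192, 12288))                  # under-utilized
```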

Tools to use with a live example
The following tools on AIX can be used to determine the current status of an AME-enabled LPAR (with a live example from the IBM Redbook IBM PowerVM Virtualization Managing and Monitoring):

# amepat

  1. Comparing the Virtual Memory Size (MB) to the Target Expanded Memory Size, we find that the system is not over-committed logically.
  2. Because of the Deficit Memory Size (MB), the system will start using paging space, since it is unable to compress any more memory.

# vmstat -c

Comparing the avm value (in 4k pages) to the tmem value (MB) will tell us if the system is logically over-committed.

  1. Observing the dxm will show us the deficit in 4k pages.
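Because avm is reported in 4 KB pages while tmem is in MB, a unit conversion is needed before the two can be compared. A minimal sketch (the sample numbers are hypothetical, not from the Redbook example):

```python
# avm is in 4 KB pages; tmem (true memory) is in MB. Convert avm to MB
# before comparing. The sample values below are hypothetical.

def avm_pages_to_mb(avm_pages):
    return avm_pages * 4 // 1024   # 4 KB per page, 1024 KB per MB

avm = 2621440                       # e.g. avm column from `vmstat -c`
tmem_mb = 8192                      # e.g. tmem value from `vmstat -c`
avm_mb = avm_pages_to_mb(avm)       # 10240 MB
print(avm_mb, "MB virtual vs", tmem_mb, "MB true memory")
print("logically over-committed" if avm_mb > tmem_mb else "not over-committed")
```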

# svmon -O summary=AME

Comparing the virtual column to the size column shows no issue with logical memory over-commitment.

  1. The dxm column shows the deficit in 4k pages

For more information regarding AME, please refer to the IBM Redbook IBM PowerVM Virtualization Managing and Monitoring (sg247590):
http://www.redbooks.ibm.com/abstracts/sg247590.html

Source: IBM Technote

13nov/14

IBM AIX – From Strength to Strength – 2014

An interesting document that summarizes the features and support by version of AIX, Virtual I/O Server, and other products for IBM POWER.

Permanent link:
http://public.dhe.ibm.com/common/ssi/ecm/en/poo03022usen/POO03022USEN.PDF

POO03022USEN

Thank you Jay.

4nov/14

Repair an AIX IPL hang at LED value 518

To repair an AIX server hung at LED 518, you can follow this IBM Technote:
http://www-01.ibm.com/support/docview.wss?uid=isg3T1000131

In my case that was not enough, because the LVCB of hd2 was corrupted:
- hd2 LVCB corrupted
- /etc/filesystems corrupted for hd2
- ODM corrupted in CuAt

Boot into Maintenance mode from an AIX DVD, NIM, or a mksysb image (tape, DVD, ISO).
Choose option 3, then Access a Root Volume Group, and identify the disk that contains the system LVs (hd4, hd2 ...).

                           Welcome to Base Operating System
                      Installation and Maintenance

Type the number of your choice and press Enter. Choice is indicated by >>>.

     1 Start Install Now with Default Settings

     2 Change/Show Installation Settings and Install

=> 3 Start Maintenance Mode for System Recovery

     4 Make Additional Disks Available

     5 Select Storage Adapters


                            Maintenance

Type the number of your choice and press Enter.

=> 1 Access a Root Volume Group


Type the number of your choice and press Enter.

    0 Continue



                           Access a Root Volume Group

Type the number for a volume group to display the logical volume information
and press Enter.

   1)   Volume Group 00c8502e00004c0000000145e1f142ed contains these disks:
          hdisk0  10240        vscsi
 

                           Volume Group Information

 ------------------------------------------------------------------------------
    Volume Group ID 00c8502e00004c0000000145e1f142ed includes the following
    logical volumes:

         hd5         hd6         hd8         hd4         hd2      hd9var
         hd3         hd1     hd10opt   hd11admin    livedump
 ------------------------------------------------------------------------------

Choose option 2 (Access this Volume Group and start a shell before mounting filesystems).

Type the number of your choice and press Enter.

    1) Access this Volume Group and start a shell
=> 2) Access this Volume Group and start a shell before mounting filesystems

During the import of the rootvg volume group, an unusual message appears.

Importing Volume Group...
rootvg
Could not find "/" and/or "/usr" filesystems.
Exiting to shell.

Check the filesystems and reformat the log device.

# fsck -y /dev/hd4
# fsck -y /dev/hd2
# fsck -y /dev/hd9var
# fsck -y /dev/hd3
# fsck -y /dev/hd1
# logform /dev/hd8
logform: destroy /dev/rhd8 (y)?y

Display the contents of the Logical Volume Control Block (LVCB) of the LVs.
Note that the label in the hd2 LVCB is corrupted.

# getlvcb hd2 -AT
         AIX LVCB
         intrapolicy = c
         copies = 1
         interpolicy = m
         lvid = 00c8502e00004c0000000145e1f142ed.5
         lvname = hd2
         label = /usr/!+or
         machine id = 8502E4C00
         number lps = 165
         relocatable = y
         strict = y
         stripe width = 0
         stripe size in exponent = 0
         type = jfs2
         upperbound = 32
         fs = vfs=jfs2:log=/dev/hd8
         time created  = Fri May  9 17:04:23 2014
         time modified = Tue Nov  4 13:19:18 2014

Fix the hd2 label with the putlvcb command, then verify:

# putlvcb -L /usr hd2

# getlvcb hd2 -AT
         AIX LVCB
         intrapolicy = c
         copies = 1
         interpolicy = m
         lvid = 00c8502e00004c0000000145e1f142ed.5
         lvname = hd2
         label = /usr
         machine id = 8502E4C00
         number lps = 165
         relocatable = y
         strict = y
         stripe width = 0
         stripe size in exponent = 0
         type = jfs2
         upperbound = 32
         fs = vfs=jfs2:log=/dev/hd8
         time created  = Fri May  9 17:04:23 2014
         time modified = Tue Nov  4 13:23:00 2014

At this point we cannot chroot into the rootvg system disk, because the VG was imported with a corrupted value for hd2 (/usr). We have to reboot into Maintenance mode.

Choose option 2 and verify that the previous error no longer appears.

Type the number of your choice and press Enter.

   1) Access this Volume Group and start a shell
   2) Access this Volume Group and start a shell before mounting filesystems

  99) Previous Menu

    Choice [99]: 2
Importing Volume Group...
rootvg
Checking the / filesystem.

The current volume is: /dev/hd4
Primary superblock is valid.
Checking the /usr filesystem.

The current volume is: /dev/hd2
Primary superblock is valid.
Exit from this shell to continue the process of accessing the root
volume group.

To chroot into the rootvg disk and mount the filesystems, type "exit".

# exit

# df
Filesystem    512-blocks      Free %Used    Iused %Iused Mounted on
/dev/ram0         720896    319192   56%    11797    22% /
/proc             720896    319192   56%    11797    22% /proc
/dev/cd0               -         -    -         -     -  /SPOT
/dev/hd4          720896    319192   56%    11797    22% /
/dev/hd2         5406720    638528   89%    54526    39% /usr
/dev/hd3          294912    219096   26%       88     1% /tmp
/dev/hd9var      1015808    303160   71%     8987    17% /var
/dev/hd10opt     1015808    484192   53%     8860    13% /opt

We can see that the /usr label is corrupted:

# lsvg -l rootvg
rootvg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
hd5                 boot       2       2       1    closed/syncd  N/A
hd6                 paging     32      32      1    open/syncd    N/A
hd8                 jfs2log    1       1       1    open/syncd    N/A
hd4                 jfs2       22      22      1    open/syncd    /
hd2                 jfs2       165     165     1    open/syncd    /usr/!+or
hd9var              jfs2       31      31      1    open/syncd    /var
hd3                 jfs2       9       9       1    open/syncd    /tmp
hd1                 jfs2       1       1       1    closed/syncd  /home
hd10opt             jfs2       31      31      1    open/syncd    /opt
hd11admin           jfs2       8       8       1    closed/syncd  /admin
livedump            jfs2       16      16      1    closed/syncd  /var/adm/ras/livedump

# grep -p hd2 /etc/filesystems
/usr/!+or:
        dev             = /dev/hd2
        vfs             = jfs2
        log             = /dev/hd8
        mount           = automatic
        check           = false
        type            = bootfs
        vol             = /usr
        free            = false

# odmget -q 'name=hd2 and attribute=label' CuAt

CuAt:
        name = "hd2"
        attribute = "label"
        value = "/usr/!+or"
        type = "R"
        generic = "DU"
        rep = "s"
        nls_index = 640

Fix the corruption in the ODM and in the /etc/filesystems file.

Export the corrupted ODM value (hd2 + label) to a file, then edit and fix the file:

# odmget -q 'name=hd2 and attribute=label' CuAt > /tmp/label.odm

# export TERM=vt320
# export VISUAL=vi
# set -o vi
# vi /tmp/label.odm

CuAt:
        name = "hd2"
        attribute = "label"
        value = "/usr"
        type = "R"
        generic = "DU"
        rep = "s"
        nls_index = 640
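The edit performed in vi boils down to replacing the corrupted label string in the exported stanza with the correct mount point. As a portable illustration (this is plain string manipulation on the stanza text shown above, not an ODM API):

```python
# Sketch of the one-line fix applied in vi: swap the corrupted label value
# for the correct mount point in the exported CuAt stanza text.
stanza = '''CuAt:
        name = "hd2"
        attribute = "label"
        value = "/usr/!+or"
        type = "R"
        generic = "DU"
        rep = "s"
        nls_index = 640'''

fixed = stanza.replace('value = "/usr/!+or"', 'value = "/usr"')
print(fixed)
```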

Back up the CuAt ODM class, then delete the corrupted value from it:

# cp /etc/objrepos/CuAt /tmp/CuAt
# odmdelete -q 'name=hd2 and attribute=label' -o CuAt
0518-307 odmdelete: 1 objects deleted.

Add the new value from the file and verify the CuAt ODM class:

# odmadd /tmp/label.odm
# odmget -q 'name=hd2 and attribute=label' CuAt

CuAt:
        name = "hd2"
        attribute = "label"
        value = "/usr"
        type = "R"
        generic = "DU"
        rep = "s"
        nls_index = 640

Save the ODM to the Boot Logical Volume (hd5):

# savebase -v
saving to '/dev/hd5'
47 CuDv objects to be saved
120 CuAt objects to be saved
14 CuDep objects to be saved
8 CuVPD objects to be saved
356 CuDvDr objects to be saved
2 CuPath objects to be saved
0 CuPathAt objects to be saved
0 CuData objects to be saved
0 CuAtDef objects to be saved
Number of bytes of data to save = 19005
Compressing data
Compressed data size is = 6850
        bi_start     = 0x3600
        bi_size      = 0x1b20000
        bd_size      = 0x1b00000
        ram FS start = 0x9363b0
        ram FS size  = 0x114bc17
        sba_start    = 0x1b03600
        sba_size     = 0x20000
        sbd_size     = 0x1ac6
Checking boot image size:
        new save base byte cnt = 0x1ac6
Wrote 6854 bytes
Successful completion

Edit the /etc/filesystems entry for /usr, then check:

# cp /etc/filesystems /etc/filesystems.bkp
# vi /etc/filesystems

# grep -p hd2 /etc/filesystems
/usr:
        dev             = /dev/hd2
        vfs             = jfs2
        log             = /dev/hd8
        mount           = automatic
        check           = false
        type            = bootfs
        vol             = /usr
        free            = false

Finally, flush memory to the filesystems and reboot.

# sync; sync; sync; reboot
30oct/14

Creating NIM resources on an NFS shared NAS device

You can use a network-attached storage (NAS) device to store your Network Installation Management (NIM) resources by using the nas_filer resource server.

NIM support allows the hosting of file-type resources (such as mksysb, savevg, resolv_conf, bosinst_data, and script) on a NAS device. The resources can be defined in the NIM server database, and can be used for installation without changing any network information or configuration definitions on the Shared Product Object Tree (SPOT) server.

The nas_filer resource server is available in the NIM environment, and requires an interface attribute and a password file. You must manually define export rules and perform storage and disk management before you use any NIM operations.

To create resources on a NAS device by using the nas_filer resource server, complete the following steps:

Define the nas_filer object. You can enter a command similar to the following example:

    # nim -o define -t nas_filer -a if1="find_net als046245.server.com 0" -a passwd_file=/export/nim/pswfile netapp1

Define a mksysb file that exists on the NAS device as a NIM resource. You can enter a command similar to the following example:

    # nim -o define -t mksysb -a server=netapp1 -a location=/vol/vol0/nim_lun1/client1.nas_filer NetApp_bkup1

Optional:
If necessary, create a new resource (client backup) on the NAS device. You can use the following command to create a mksysb resource:

    # nim -o define -t mksysb -a server=netapp1 -a location=/vol/vol10/nim_lun1/mordor05_bkup -a source=mordor05 -a mk_image=yes NetApp_mordor05

Optional:
If necessary, copy an existing NIM resource to the nas_filer object. You can use the following command to copy a mksysb resource.

    # nim -o define -t mksysb -a server=netapp1 -a location=/vol/vol10/nim_lun1/replicate_bkup -a source=master_backup NetApp_master_backup

SOURCE: IBM Knowledge Center

29oct/14

Adding a nas_filer management object to the NIM environment

Follow the instructions to add a nas_filer management object.

If you define resources on a network-attached storage (NAS) device by using the nas_filer management object, you can use those resources without making any network information or configuration definition changes on the Shared Product Object Tree (SPOT) server. To add a nas_filer object, the dsm.core fileset must be installed on the NIM master.

To add a nas_filer object from the command line, complete the following steps:

Create an encrypted password file that contains the login ID and related password on the NIM master to access the nas_filer object. The encrypted password file must be created by using the dpasswd command from the dsm.core fileset. If you do not want the password to be displayed in clear text, exclude the -P parameter. The dpasswd command prompts for the password. Use the following command as an example:

    # dpasswd -f EncryptedPasswordFilePath -U nas_filerLogin -P nas_filerPassword

Pass the encrypted password file in the passwd_file attribute by using the define command of the nas_filer object. Use the following command as an example:

    # nim -o define -t nas_filer -a passwd_file=EncryptedPasswordFilePath \
    -a if1=InterfaceDescription \
    -a net_definition=DefinitionName \
    nas_filerName

If the network object that describes the network mask and the gateway that is used by the nas_filer object does not exist, use the net_definition attribute. After you remove the nas_filer objects, the file that is specified by the passwd_file attribute must be removed manually.

Example
To add a nas_filer object that has the host name nf1 and the following configuration:

host name=nf1
password file path=/etc/ibm/sysmgt/dsm/config/nf1
network type=ethernet
subnet mask=255.255.240.0
default gateway=gw1
default gateway used by NIM master=gw_master

enter the following command:

# nim -o define -t nas_filer -a passwd_file=/etc/ibm/sysmgt/dsm/config/nf1 \
-a if1="find_net nf1 0" \
-a net_definition="ent 255.255.240.0 gw1 gw_master" nf1

For more information about adding a nas_filer object, see the technical note that is included in the dsm.core fileset (/opt/ibm/sysmgt/dsm/doc/dsm_tech_note.pdf).

14oct/14

Daylight Saving Time problem on AIX 7.1 and AIX 6.1

System time may not change properly at DST start/end dates on AIX 7.1 and AIX 6.1

AIX systems or applications that use the POSIX time zone format may not change time properly at Daylight Savings Time start or end dates. Applications that use the AIX date command, or time functions such as localtime() and ctime(), on these systems may be affected.

This problem is exposed on your system if you have both of these underlying conditions:
1. Your system is at one of the affected AIX levels (listed below)
2. Your system is using a POSIX format time zone and the system or an application on the system is using a custom DST setting.

Read this technote to check if you are exposed.

Possible Action Required:
http://www-01.ibm.com/support/docview.wss?uid=isg3T1013017

10sept/14

Verify and test that a UDP port is open

Problem(Abstract)

How to verify that a UDP port is open and how to test that the port is working for a third party application.

Resolving the problem

Command to verify that a port is open to receive incoming connections.

#netstat -an |grep <port number>

Example: tftp uses port 69 to transfer data

#netstat -an |grep .69

Proto Recv-Q Send-Q Local Address Foreign Address (state)
udp 0 0 *.69 *.*

To capture the udp packets to prove that a specific port is being used you can either run the tcpdump command or the iptrace command.

#tcpdump "port #" (where # is the number of the port you are testing)

or
#startsrc -s iptrace -a "-a -p # /tmp/udp.port" (where # is the number of the port you are testing)
#stopsrc -s iptrace (stop iptrace command)
#ipreport -rnsC /tmp/udp.port /tmp/udp.port.out (format the iptrace binary to a text readable format)

Example: Start the packet capture.

#tcpdump 'port 69'

Then use tftp to transfer a file. This is an example of transferring the /etc/motd file from a system called dipperbso to a system called burritobso.

#tftp -p /etc/motd burritobso /tmp/motd

Example of the output:

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on en0, link-type 1, capture size 96 bytes
08:50:24.627840 IP dipperbso.52046 > burritobso.tftp: 21 WRQ "/tmp/motd" netascii
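As a portable illustration of the same idea (a datagram actually arriving proves the port is open and listening), here is a loopback sketch using only the Python standard library. The port number 6969 is an arbitrary choice for the test, not a well-known service port.

```python
import socket

# Loopback sketch: bind a UDP "server" socket (the port is now open, as
# netstat would show), send it a datagram, and confirm the datagram arrives.
PORT = 6969  # arbitrary test port

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", PORT))
server.settimeout(5)                 # don't hang forever if nothing arrives

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", ("127.0.0.1", PORT))

data, addr = server.recvfrom(1024)
print("received", data, "from", addr)

client.close()
server.close()
```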

Source: IBM Technote

9sept/14

AIX MPIO error log information

SC_DISK_PCM_ERR1 Subsystem Component Failure

The storage subsystem has returned an error indicating that some component (hardware or software) of the storage subsystem has failed. The detailed sense data identifies the failing component and the recovery action that is required. Failing hardware components should also be shown in the Storage Manager software, so the placement of these errors in the error log is advisory and is an aid for your technical-support representative.

SC_DISK_PCM_ERR2 Array Active Controller Switch

The active controller for one or more hdisks associated with the storage subsystem has changed. This is in response to some direct action by the AIX host (failover or autorecovery). This message is associated with either a set of failure conditions causing a failover or, after a successful failover, with the recovery of paths to the preferred controller on hdisks with the autorecovery attribute set to yes.

SC_DISK_PCM_ERR3 Array Controller Switch Failure

An attempt to switch active controllers has failed. This leaves one or more paths with no working path to a controller. The AIX MPIO PCM will retry this error several times in an attempt to find a successful path to a controller.

SC_DISK_PCM_ERR4 Array Configuration Changed

The active controller for an hdisk has changed, usually due to an action not initiated by this host. This might be another host initiating failover or recovery, for shared LUNs, a redistribute operation from the Storage Manager software, a change to the preferred path in the Storage Manager software, a controller being taken offline, or any other action that causes the active controller ownership to change.

SC_DISK_PCM_ERR5 Array Cache Battery Drained

The storage subsystem cache battery has drained. Any data remaining in the cache is dumped and is vulnerable to data loss until it is dumped. Caching is not normally allowed with drained batteries unless the administrator takes action to enable it within the Storage Manager software.

SC_DISK_PCM_ERR6 Array Cache Battery Charge Is Low

The storage subsystem cache batteries are low and need to be charged or replaced.

SC_DISK_PCM_ERR7 Cache Mirroring Disabled

Cache mirroring is disabled on the affected hdisks. Normally, any cached write data is kept within the cache of both controllers so that if either controller fails there is still a good copy of the data. This is a warning message stating that loss of a single controller will result in data loss.

SC_DISK_PCM_ERR8 Path Has Failed

The I/O path to a controller has failed or gone offline.

SC_DISK_PCM_ERR9 Path Has Recovered

The I/O path to a controller has resumed and is back online.

SC_DISK_PCM_ERR10 Array Drive Failure

A physical drive in the storage array has failed and should be replaced.

SC_DISK_PCM_ERR11 Reservation Conflict

A PCM operation has failed due to a reservation conflict. This error is not currently issued.

SC_DISK_PCM_ERR12 Snapshot™ Volume’s Repository Is Full

The snapshot volume repository is full. Write actions to the snapshot volume will fail until the repository problems are fixed.

SC_DISK_PCM_ERR13 Snapshot Op Stopped By Administrator

The administrator has halted a snapshot operation.

SC_DISK_PCM_ERR14 Snapshot repository metadata error

The storage subsystem has reported that there is a problem with snapshot metadata.

SC_DISK_PCM_ERR15 Illegal I/O - Remote Volume Mirroring

The I/O is directed to an illegal target that is part of a remote volume mirroring pair (the target volume rather than the source volume).

SC_DISK_PCM_ERR16 Snapshot Operation Not Allowed

A snapshot operation that is not allowed has been attempted.

SC_DISK_PCM_ERR17 Snapshot Volume’s Repository Is Full

The snapshot volume repository is full. Write actions to the snapshot volume will fail until the repository problems are fixed.

SC_DISK_PCM_ERR18 Write Protected

The hdisk is write-protected. This can happen if a snapshot volume repository is full.

SC_DISK_PCM_ERR19 Single Controller Restarted

The I/O to a single-controller storage subsystem is resumed.

SC_DISK_PCM_ERR20 Single Controller Restart Failure

The I/O to a single-controller storage subsystem is not resumed. The AIX MPIO PCM will continue to attempt to restart the I/O to the storage subsystem.

11avr/14

AIX OpenSSL Heartbleed Vulnerability CVE-2014-0160

Title: Security Bulletin: AIX is affected by a vulnerability in OpenSSL (CVE-2014-0160)

Summary: A security vulnerability has been discovered in OpenSSL.

Vulnerability Details

CVE-ID: CVE-2014-0160

DESCRIPTION: OpenSSL could allow a remote attacker to obtain sensitive information, caused by an error in the TLS/DTLS heartbeat functionality. An attacker could exploit this vulnerability to expose 64k of private memory and retrieve secret keys. This vulnerability can be exploited remotely; authentication is not required and the exploit is not complex. An exploit can affect confidentiality, but not integrity or availability.
CVSS Base Score: 5.0
CVSS Temporal Score: See http://xforce.iss.net/xforce/xfdb/92322
CVSS Environmental Score*: Undefined
CVSS Vector: (AV:N/AC:L/Au:N/C:P/I:N/A:N)

Warning: We strongly encourage you to take action as soon as possible as potential implications to your environment may be more serious than indicated by the CVSS score.
Affected Products and Versions
OpenSSL version 1.0.1.500 and above in the following AIX/VIOS releases:
AIX: 5.3, 6.1, 7.1 and VIOS 2.2.3.*

Remediation/Fixes

Product: AIX 5.3, 6.1, 7.1, and VIOS 2.2.3.*
APAR: N/A (OpenSSL versions greater than or equal to 1.0.1.500)
Remediation / First Fix:
ftp://aix.software.ibm.com/aix/efixes/security/openssl_ifix7.tar
(ifix: 0160_ifix.140409.epkg.Z)

This ifix disables the OpenSSL heartbeat option by compiling with
-DOPENSSL_NO_HEARTBEATS.

Note: AIX OpenSSL v0.9.8.xxxx and 12.9.8.xxxx are not vulnerable to this security vulnerability.

After applying the fix, additional instructions are needed for CVE-2014-0160

1) Replace your SSL Certificates.
You need to revoke existing SSL certificates and issue new ones. Be sure not to generate the new certificates from the old private key: create a new private key (for example with "openssl genrsa") and use that new private key to create the new certificate signing request (CSR).

2) Reset User Credentials
Users of network-facing applications protected by a vulnerable version of OpenSSL should be forced to reset their passwords. Any authentication or session-related cookies set before OpenSSL was upgraded should be revoked, and users should be forced to re-authenticate.

Warning: Your environment may require additional fixes for other products, including non-IBM products. Please replace the SSL certificates and reset the user credentials after applying the necessary fixes to your environment.

Workarounds and Mitigations
None known

http://aix.software.ibm.com/aix/efixes/security/openssl_advisory7.doc

7aug/13

File Times in AIX

This technote discusses timestamps associated with files in filesystems on AIX.

In AIX each file has three different timestamps associated with it. These can be seen in the system include file /usr/include/sys/stat.h :

st_atime Time when file data was last accessed.
st_mtime Time when file data was last modified.
st_ctime Time when the file metadata was last changed.

All times recorded are in seconds since the Unix epoch. (For completeness, note that there are also nanosecond counters for these.)

Access Time (atime)
This is a timestamp recorded in the filesystem when the file was last opened for reading. The timestamp reflects when the open() on the file was performed, not necessarily when data was last read from it.

The access time can be viewed via ls using the -u flag.

Modification Time (mtime)
This denotes when the content of the file was most recently changed.

The modification time is what ls -l reports by default.

Change time (ctime)
This marks when a file's metadata was changed, such as permissions or ownership.

This time can be viewed via ls using the -c flag (for example, ls -lc).

Other Notes
Some operating systems also include a "file creation" time, but AIX does not.

These times can be seen via commands such as 'ls' or 'find' with the appropriate arguments given to print out the value desired.

An easy way to view all three simultaneously is with the /usr/bin/istat command:

$ istat p.out
Inode 263 on device 10/8        File
Protection: rw-r--r--
Owner: 0(root)          Group: 0(system)
Link count:   1         Length 14682 bytes

Last updated:   Tue Sep 15 10:50:15 PDT 2009
Last modified:  Tue Sep 15 10:50:15 PDT 2009
Last accessed:  Tue Nov  3 12:01:12 PST 2009

So this file had its contents modified on Sep 15, and that is also the time the metadata for the file was changed. The file was read last on Nov 3.

Some utilities such as tar specifically modify a file's time values to record a different time than would normally be present. For example, the default behavior of tar when restoring a file is to create the file, then set the modification time back to what it was set to in the tar archive.
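The three timestamps can also be inspected programmatically. A small Python sketch (the file path is arbitrary) shows that a metadata-only change such as chmod refreshes ctime but leaves mtime alone:

```python
import os
import time

# Demonstrate atime / mtime / ctime via os.stat (epoch seconds, as on AIX).
path = "/tmp/filetimes_demo.txt"    # arbitrary demo path

with open(path, "w") as f:          # creating/writing the file sets mtime
    f.write("hello\n")

st = os.stat(path)
print("atime:", time.ctime(st.st_atime))   # last read of the data
print("mtime:", time.ctime(st.st_mtime))   # last change to the data
print("ctime:", time.ctime(st.st_ctime))   # last change to the metadata

os.chmod(path, 0o600)               # metadata-only change: ctime refreshed,
st2 = os.stat(path)                 # mtime untouched
print("mtime unchanged:", st2.st_mtime == st.st_mtime)
os.remove(path)
```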

Mount Option to Not Update Access Time
For filesystems with a high rate of file access, performance can be improved by disabling the update of the access time stamp. This option can be added to a filesystem by using the "-o noatime" mount option, or permanently set using "chfs -a options=noatime".

SOURCE: 1012054
