Getting AIX RoCE adapters to show up as ent devices in AIX and be used as regular network cards

To fully use these cards and have them show up as ent devices, perform the following steps:

After the existing AIX RoCE file sets are updated with the new file sets, both the roce and the ent devices might appear to be configured. If both devices appear to be configured when you run the lsdev command on the adapters, complete the following steps:

1. Delete the roceX instances that are related to the PCIe2 10 GbE RoCE Adapter by entering the following command:

# rmdev -dl roce0[, roce1][, roce2,…]

2. Change the stack_type attribute of the hba device from aix_ib (AIX RoCE) to ofed (AIX NIC + OFED RoCE) by entering the following command:

# chdev -l hba0 -a stack_type=ofed

3. Run the configuration manager tool so that the host bus adapter can configure the PCIe2 10 GbE RoCE Adapter as a NIC adapter by entering the following command:

# cfgmgr

4. Verify that the adapter is now running in NIC configuration by entering the following command:

# lsdev -Cc adapter

The following example shows the results when you run the lsdev command on the adapter when it is configured in the AIX NIC + OFED RoCE mode:

Figure 1. Example output of lsdev command on an adapter with the AIX NIC + OFED RoCE configuration

ent1 Available 00-00-01 PCIe2 10GbE RoCE Converged Network Adapter
ent2 Available 00-00-02 PCIe2 10GbE RoCE Converged Network Adapter
hba0 Available 00-00 PCIe2 10GbE RoCE Converged Host Bus Adapter (b315506714101604)

You should no longer see roce0 even after running cfgmgr; you can now treat the card like a regular network card (ent).
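
The adapter pair can then be configured like any other ent/en interface. As a minimal sketch (the interface name en1 corresponds to ent1 in the listing above; the host name, addresses and netmask are made-up values, so substitute your own):

# lsattr -El hba0 -a stack_type                          (should now report ofed)
# mktcpip -h myhost -a 10.1.1.10 -m 255.255.255.0 -i en1 -g 10.1.1.1

Alternatively, smitty tcpip configures the same settings interactively.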

Difference between backing up IBM Virtual I/O server using backupios with and without -mksysb flag

From:  Technology Magazine 

http://www.techmagazinez.com/2013/11/difference-between-backing-up-ibm.html

Thursday, 14 November 2013 

In this article we just want to list the differences between backing up the IBM Virtual I/O Server using backupios with and without the -mksysb flag. I have seen people get confused by this, especially in interviews.

 

Backing up the Virtual I/O Server using the backupios command without any flag to a remote file system creates the nim_resources.tar image in the directory we specify. When the -mksysb flag is used, the resources used by the installios command are not saved in the image, so a VIO server can be restored from a -mksysb image only with NIM. Creating the nim_resources.tar backup, on the other hand, allows the Virtual I/O Server to be reinstalled from the HMC using the installios command.

 

Procedure

 

The backupios command creates a backup of the Virtual I/O Server and places it onto a file system, bootable tape, or DVD. You can use this backup to reinstall a system to its original state after it has been corrupted. If you create the backup on tape, the tape is bootable and includes the installation programs needed to install from the backup.
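
For example, alongside the file-backed form used in the tasks below, a bootable tape backup would look something like this (the tape device name is illustrative):

$ backupios -tape /dev/rmt0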

 

For more information, see http://pic.dhe.ibm.com/infocenter/powersys/v3r1m5/topic/iphcg/backupios.htm


Task 1:
Log in to the VIO server with padmin privileges.

Task 2:
Change to root privileges using the following command:

$ oem_setup_env

Task 3:
Create a mount directory where the backup image will be written:

# mkdir /vios01/backup

Task 4:
Mount a file system from the NIM master on the mount directory /vios01/backup on VIOS01:

# mount server1:/export/mksysb /vios01/backup

Task 5:
Run exit to return to the padmin shell for running the backupios command.

# exit

Task 6:
Run the backupios command with the -file option. Make sure to specify the path to the mounted directory.

$ backupios -file /vios01/backup/`hostname`.mksysb -mksysb    (creates a .mksysb image)
$ backupios -file /vios01/backup/                             (creates nim_resources.tar)
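
If you intend to restore the -mksysb image through NIM, a typical first step is to define it as a mksysb resource on the NIM master. A rough sketch (the resource name vios01_mksysb and the file name are illustrative and assume the export mounted in Task 4):

# nim -o define -t mksysb -a server=master -a location=/export/mksysb/vios01.mksysb vios01_mksysb

The nim_resources.tar backup, by contrast, is restored from the HMC with the installios command, as described above.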

 


 

TIP: AIX 6.1 TL7: 'lspv -u' shows the unique ID (UDID) of disks

AIX 6.1 TL7 introduced a new flag for the ‘lspv’ command which shows
the unique id (UUID) of disks in additional columns of the lspv
output.

This new ‘lspv -u’ is particularly useful in VIO environments using
VSCSI because the VIO client LPAR hdisk UDID contains the real UDID
from the VIO server hdisk.

For example, on a client LPAR using VSCSI for the rootvg (the merged
columns and spaces in the UDID are not a paste error):

Client LPAR # lspv -u
hdisk0          00000000fb8a0572                    rootvg          active      533E3E21360170E50202E5A5A0000025E50A12AB20F1746      FAStT03IBMfcp05VDASD03AIXvscsi8060a98a-0292-e2c9-0382-b5263f2a7e61

VIO1 # lspv -u
hdisk0          0000000017b33224                    rootvg          active              2A1135000C5005474B9C30BST9146853SS03IBMsas                          b98fed26-76da-f15c-45c9-b65a814e3d75
hdisk1          000000009c4ae1f9                    rootvg          active              2A1135000C500546FA9330BST9146853SS03IBMsas                          c3435ec2-3db2-6c0e-f14e-947243ba482d
hdisk2          000000000572fb8a                    None                                3E21360170E50202E5A5A0000025E50A12AB20F1746      FAStT03IBMfcp      a9be7b27-42fd-b0d6-a5ba-da61929cf4fc
hdisk3          0000000038963f05                    None                                3E21360170E50202E5A5A0000037350A526780F1746      FAStT03IBMfcp      7ed82295-3d9e-36dc-d6e9-e094d0d1a4ee

VIO2 # lspv -u
hdisk0          00000000b46edc87                    rootvg          active              2A1135000C5004CE6C7FF0BST9146853SS03IBMsas                          a36f5dae-8281-4df7-7fa7-e9fbf619c7d5
hdisk1          00000000605de1f9                    rootvg          active              2A1135000C5004CE6CF8F0BST9146853SS03IBMsas                          d1148dc8-ea98-a255-bf34-2921a56e8ca1
hdisk2          000000000572fb8a                    None                                3E21360170E50202E5A5A0000025E50A12AB20F1746      FAStT03IBMfcp      a9be7b27-42fd-b0d6-a5ba-da61929cf4fc
hdisk3          0000000038963f05                    None                                3E21360170E50202E5A5A0000037350A526780F1746      FAStT03IBMfcp      7ed82295-3d9e-36dc-d6e9-e094d0d1a4ee

Our client UDID contains the UDID from the VIO server with a prefix
and suffix:

client hdisk0: 533E3E21360170E50202E5A5A0000025E50A12AB20F1746      FAStT03IBMfcp05VDASD03AIXvscsi
vio1   hdisk2: ^^^^3E21360170E50202E5A5A0000025E50A12AB20F1746      FAStT03IBMfcp^^^^^^^^^^^^^^^^^

where ^ indicates the prefix and suffix added by the VIO server.

Using UDIDs the client can be cross referenced to the server quickly
with the most significant bytes of the UDID, in this case the middle
15 digits.
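
In practice this means a simple grep of the VIO server's lspv -u output
for that middle portion of the client UDID is enough. For example, using
the client hdisk0 UDID shown above (the search string is just the shared
middle digits; adjust it for your own disk):

VIO1 # lspv -u | grep 360170E50202E5A5A0000025E50A12AB20F1746

which matches hdisk2 in the VIO1 and VIO2 listings above.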

Historically, finding the real LUN that a client is using in a VIO
environment required the following steps for each VIO server (a rough
command-level sketch follows the list):

– Obtain the client hdisk's parent vscsi device hardware location code
and LUN number
– Look up on the HMC which VIO server and slot the client vscsi device
is linked to
– Look up the vhost adapter on the VIO server by slot number
– Look up the VSCSI mappings for the vhost adapter to locate the hdisk
on the VIO server
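
At the command level that old procedure looked roughly like this (device
and adapter names are illustrative):

Client # lsdev -l hdisk0 -F parent   (returns the parent adapter, e.g. vscsi0)
Client # lscfg -l vscsi0             (note the client slot Cnn in the location code)
(on the HMC, map that client slot to the serving VIO server and its server slot)
VIOS   $ lsmap -all                  (find the vhost whose Physloc ends in that
                                      server slot and read its backing device)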

A client with dual VIO would have to repeat the procedure twice. PVIDs
can also shortcut the process, but they may not show up on the VIO
server’s lspv output until after they are written to by the client and
the VIO server is rebooted. If the client rewrites the PVID, the VIO
server can also be out of date. Thus UDIDs are the preferred method
because they are static values.

The output can stretch the columns until they merge, and spaces in the
UDID break the columns; I hope this is fixed in a future release.

 

——————————————————————
Russell Adams                            RLAdams@AdamsInfoServ.com


Rule of Thumb: Sizing the Virtual I/O Server

Great posting on the IBM developerWorks blog:
https://www.ibm.com/developerworks/mydeveloperworks/blogs/aixpert/entry/rule_of_thumb_sizing_the_virtual_i_o_server78?lang=en

I often get asked: how large should a pair of Virtual I/O Servers (VIOS) be?
The classic consultant answer, "it depends on what you are doing with disk & network I/O", is not very useful to the practical person who has to size a machine including the VIOS, nor to the person defining the VIOS partition in order to install it!

Observations:
The VIOS unfairly gets a bad press, but note:

    • Physical adapters are now in the VIOS, so device driver CPU cycles (normally hidden and roughly half of the OS CPU System time) move to the VIOS – this is not new CPU cycles.

    • Extra CPU work involves function shipping the request from client to VIOS and back but this is a function call to the Hypervisor = small beer.

    • Data shipping is very efficient as the Hypervisor uses virtual memory references rather than moving raw data.

    • Aggregating the adapters in one place means that all client virtual machines have access to much larger and redundant data channels at reduced cost, so it is a win-win situation.


Who knows the I/O details to rates and packet sizes?

  • Answer: No one (in my experience) knows the disk and network mixture of block or packet sizes, the read and write rates for each size, or the periods of time which will cause the peak workload. In new workloads it is all guesswork anyway – let's be generous and say that would be plus or minus 25%.

  • If you do know, IBM can do some maths to estimate the CPU cycles at the peak period.

  • But most of the time that peak sizing would be total overkill.


So here is my Rule of Thumb (ROT), starter for 10 with caveats:

  • Trick 1 – “Use the PowerVM, Luke!”

  • Use PowerVM to re-use unused VIOS CPU cycles in the client Virtual Machines

  • VIOS = Micro-partition Shared CPU, Uncapped, high weight factor, with virtual processor minimum +1 or +2 headroom (virtual processor would be better called spreading factor)

  • This allows for peaks but doesn’t waste CPU resources


  • Trick 2 – Don’t worry about the tea bags!

  • No one calculates the number of teabags they need per year

  • In my house, we just have some in reserve and monitor the use of tea bags and then purchase more when needed

  • Likewise, start with sensible VIOS resources and monitor the situation


  • Trick 3 – Go dual VIOS

  • Use a pair of VIOS to allow VIOS upgrades

  • In practice, we have very low failure rates in the VIOS – mostly because systems administrators are strongly recommended NOT to fiddle!


  • Trick 4  – the actual Rule of Thumb

  • Each VIOS: for every 16 CPUs – 1.0 Shared CPU and 2 GB of memory (see the worked example after the caveats below)

  • This assumes your Virtual Machines are roughly 1 to 2 CPUs each and not extremely I/O intensive (more CPU limited)


  • Trick 5 – Check VIOS performance regularly

  • As workloads are added to a machine in the first few months, monitor VIOS CPU and memory use & tune as necessary

  • See other AIXpert blog entries for monitoring tools – whole machine and including VIOS


  • Trick 6 – Driving system utilisation beyond, say, 70%

  • As you drive system utilisation up by adding more workloads you need more pro-active monitoring

  • Implement some tools for Automatic Alerting of VIOS stress (a quick manual spot-check is sketched below)
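
Until automated alerting is in place, a quick manual spot-check might look like this (run from the root shell after oem_setup_env; the intervals and counts are arbitrary):

# lparstat 5 3        (entitlement consumption and CPU utilisation of the VIOS partition)
# vmstat 5 3          (memory and paging activity)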

 
Caveats:

  • If using high speed adapters like 10 Gbps Ethernet or 8 Gbps SAN then VIOS buffering space is needed, so double the RAM to 4 GB.

  • Ignore, if you have these adapters but are only likely to use a fraction of the bandwidth like 1 Gbps.

  • If you know your applications are going to hammer the I/O (i.e. stress the high speed adapters) then go to 6 GB or 8 GB.

  • If you are using extreme numbers of tiny Virtual Machines (lots of virtual connections) also go to 6 GB or 8 GB.

  • On large machines, say 32 processors (cores) or more, many customers use one pair of VIOS for production and a further pair for other workloads.
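
Worked example (my arithmetic, applying Trick 4 and the caveats above): on a 16-core machine each of the two VIOS starts at 1.0 shared processor unit (uncapped, high weight) and 2 GB of memory; with 10 Gbps Ethernet or 8 Gbps SAN adapters double that to 4 GB, and go to 6 GB or 8 GB if those adapters will really be hammered. On a 32-core machine the CPU figure scales to 2.0 shared processor units per VIOS.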


Remember this is a starting point – monitor the VIOS as workloads increase – starving your VIOS is a very bad idea.


These are my opinions and I am sure others have different ideas too; comments are welcome … thanks, Nigel Griffiths

TIP: VIO Server level 2.2.1.4: “lspv -free” command has been redesigned

This info was posted on 10/5/2012 in the LinkedIn group "AIX and POWER System Administrators". I've edited it to remove most editorial comments and leave the technical facts.

VIO Server level 2.2.1.4

The command ‘lspv -free’ has been redesigned to no longer show disks which have a VGID. So in brief, any disk that has ever been used will no longer be shown as free, even if you have intentionally freed the disk, unless you specifically overwrite the VGID.  IBM claims to have done this to support knowing when a disk is in use by various types of clusters outside of the knowledge of the specific VIO server. The documentation clearly states that this option, “Lists only physical volumes that are available for use as a backing device.” However, with this new change it does not do that. IBM’s response so far is that they may have to change the documentation to match the new design of the command.
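
If you want an intentionally freed disk to appear under 'lspv -free' again, the only option under the new behaviour is to destroy the old LVM metadata on the disk. A rough and destructive sketch (hdisk5 is an illustrative name; be absolutely certain the disk is unused before doing anything like this):

$ oem_setup_env
# chdev -l hdisk5 -a pv=clear                        (clears the PVID)
# dd if=/dev/zero of=/dev/rhdisk5 bs=1024k count=8   (overwrites the start of the disk, wiping the old VGDA/VGID)
# exit
$ lspv -free                                         (hdisk5 should now be listed)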