PowerCLI: Easy iSCSI Send Target setup


In January this year I wrote a post about Easy NFS datastore setup with PowerCLI, in which I showed how you can use a reference host to copy all the NFS share configurations to a new host. In this post I will show you how to do the exact same thing, but for iSCSI Send Targets. I finally found some time to write this post, which I promised in part 2 of my PowerCLI and iSCSI series.

The following script reads all the iSCSI Send Targets configured on the $REFHOST, in my case esx2.ict-freak.local. After that, the script checks whether each of those Send Targets exists on the $NEWHOST. If not, the script adds the missing Send Targets.

$REFHOST = Get-VMHost "esx2.ict-freak.local"
$NEWHOST = Get-VMHost "esx1.ict-freak.local"

# Get the software iSCSI adapter of the reference host and of the new host
$REFHBA = Get-VMHostHba -VMHost $REFHOST -Type iScsi | Where {$_.Model -eq "iSCSI Software Adapter"}
$NEWHBA = Get-VMHostHba -VMHost $NEWHOST -Type iScsi | Where {$_.Model -eq "iSCSI Software Adapter"}

# Add every Send Target of the reference host that is missing on the new host
foreach($target in (Get-IScsiHbaTarget -IScsiHba $REFHBA -Type Send)){
    $address = $target.Address
    if((Get-IScsiHbaTarget -IScsiHba $NEWHBA -Type Send -ErrorAction SilentlyContinue | Where {$_.Address -eq $address}) -eq $null){
        Write-Host "Target $($address) doesn't exist on $($NEWHOST)" -fore Red
        New-IScsiHbaTarget -IScsiHba $NEWHBA -Address $address | Out-Null
    }
}
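A quick usage note: the script assumes an existing PowerCLI connection to vCenter. A minimal session could look like this (the vCenter name is just an example, not from the original post):

Connect-VIServer "vcenter.ict-freak.local"   # placeholder vCenter server

# ... run the script above ...

Disconnect-VIServer -Confirm:$false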

But there is more…



PowerCLI: Return the iSCSI Software Adapter


In my previous posts about how to manage iSCSI targets with PowerCLI (part 1 and part 2), I used the following line to return the iSCSI adapter:

$hba = $esx | Get-VMHostHba -Type iScsi

But when I used this line against a vSphere 4.1 Update 1 host with Broadcom BCM5709 NICs (a Dell PowerEdge R710), it behaved differently. vSphere presents these NICs as Broadcom iSCSI Adapters, so the $hba = $esx | Get-VMHostHba -Type iScsi one-liner returns all of the vmhba adapters:

[vSphere PowerCLI] C:\> $esx | Get-VMHostHba -Type iScsi

Device     Type         Model                          Status
------     ----         -----                          ------
vmhba32    IScsi        Broadcom iSCSI Adapter         unbound
vmhba33    IScsi        Broadcom iSCSI Adapter         unbound
vmhba34    IScsi        Broadcom iSCSI Adapter         unbound
vmhba35    IScsi        Broadcom iSCSI Adapter         unbound
vmhba37    IScsi        iSCSI Software Adapter         online

This “problem” can easily be resolved with a Where statement that looks for a Model equal to “iSCSI Software Adapter”. There is only one software adapter per ESX(i) host, so this will return the right vmhba. The PowerCLI line will look like this:

$esx | Get-VMHostHba -Type iScsi | Where {$_.Model -eq "iSCSI Software Adapter"} 

[vSphere PowerCLI] C:\> $esx | Get-VMHostHba -Type iScsi | Where {$_.Model -eq "iSCSI Software Adapter"}

Device     Type         Model                          Status
------     ----         -----                          ------
vmhba37    IScsi        iSCSI Software Adapter         online

So the bottom line: test your code on different setups and update it when necessary 😉
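If you use this filter a lot, you can wrap it in a small helper function. This is just a sketch; the function name Get-SoftwareIScsiHba is my own and not part of PowerCLI:

function Get-SoftwareIScsiHba {
    param($VMHost)
    # Return only the software iSCSI adapter of the given host
    Get-VMHostHba -VMHost $VMHost -Type iScsi | Where {$_.Model -eq "iSCSI Software Adapter"}
}

# Usage:
# $hba = Get-SoftwareIScsiHba -VMHost (Get-VMHost "esx1.ict-freak.local")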

An important vSphere 4 storage bug is solved in patch ESX400-200912401-BG



Chad Sakac over at http://virtualgeek.typepad.com already blogged about the APD bug in December last year. You can find his post here. 

Just a short quote from Chad's post about the symptoms of this APD bug:

Recently saw a little uptick (still a small number) in customers running into a specific issue – and I wanted to share the symptom and resolution.   Common behavior:

  1. They want to remove a LUN from a vSphere 4 cluster
  2. They move or Storage vMotion the VMs off the datastore who is being removed (otherwise, the VMs would hard crash if you just yank out the datastore)
  3. After removing the LUN, VMs on OTHER datastores would become unavailable (not crashing, but becoming periodically unavailable on the network)
  4. the ESX logs would show a series of errors starting with “NMP”

Examples of the error messages include:

    "NMP: nmp_DeviceAttemptFailover: Retry world failover device "naa._______________" - failed to issue command due to Not found (APD)"

    "NMP: nmp_DeviceUpdatePathStates: Activated path "NULL" for NMP device "naa.__________________"."

What a weird one…   I also found that this was affecting multiple storage vendors (suggesting an ESX-side issue).  You can see the VMTN thread on this here.

 

We found out about this issue during a big storage project. We were creating a lot of new LUNs and removing a lot of old LUNs. You run into the bug when you remove a LUN in a way other than the one described in Chad's post (a hedged PowerCLI sketch of the sequence follows the quoted steps):

This workaround falls under “operational excellence”. The sequence of operations here is important – the issue only occurs if the LUN is removed while the datastore and disk device are expected by the ESX host. The correct sequence for removing a LUN backing a datastore:

  1. In the vSphere client, vacate the VMs from the datastore being removed (migrate or Storage vMotion)
  2. In the vSphere client, remove the Datastore
  3. In the vSphere client, remove the storage device
  4. Only then, in your array management tool remove the LUN from the host.
  5. In the vSphere client, rescan the bus.
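Purely as an illustration, here is a hedged PowerCLI sketch of that sequence. The host and datastore names are placeholders, and steps 3 and 4 (removing the storage device and unpresenting the LUN on the array) still happen outside PowerCLI:

$vmhost    = Get-VMHost "esx1.ict-freak.local"
$datastore = Get-Datastore "OldDatastore"       # placeholder name
$target    = Get-Datastore "OtherDatastore"     # placeholder name

# 1. Vacate the datastore: Storage vMotion all VMs to another datastore
Get-VM -Datastore $datastore | Move-VM -Datastore $target | Out-Null

# 2. Remove the datastore from the host
Remove-Datastore -Datastore $datastore -VMHost $vmhost -Confirm:$false

# 3 + 4. Remove the storage device and unpresent the LUN in the array management tool

# 5. Rescan the storage bus
Get-VMHostStorage -VMHost $vmhost -RescanAllHba -RescanVmfs | Out-Null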

When we used the workaround described above, everything went fine. But at my current employer we use a large LeftHand iSCSI SAN, and one of the great things about a LeftHand SAN is the ability to move LUNs between different clusters. With the APD bug, we couldn't use this option anymore.

When we discovered this APD bug we contacted VMware Support. After a couple of weeks we received an e-mail with the following fix.

I can now confirm that the APD (All paths dead) issue has been resolved by a patch released as part of P03.

To install this patch, please upgrade your hosts to vSphere Update 1 and use Update Manager to install the latest patches.

Please ensure that ESX400-200912401-BG is installed as this resolves the APD problem

We upgraded one of our clusters to Update 1 and installed the latest patches, including the ESX400-200912401-BG patch. After installing the patch we did some tests, and I can confirm that the APD bug is history!

To reproduce this issue I created two iSCSI LUNs on the EMC VSA. Instead of removing the LUNs, I disconnected the iSCSI network to simulate the removal. Before I disconnected the iSCSI network, all LUNs were working just fine:

[screenshot: all LUNs online]

After I disconnected the iSCSI network and waited a while, all the paths to the EMC LUNs were dead and shown in red:

[screenshot: the paths to the EMC LUNs marked dead, in red]

This by itself is normal behavior, but before installing the ESX400-200912401-BG patch, the ESX host would stall for 30 to 60 seconds. This means that all VMs running on a host that lost a LUN would stall, even when the VMs were on a different datastore! I am happy that VMware has solved this APD bug.

 

If you want to make sure that you have already installed the APD patch, you can easily verify this with vCenter Update Manager:

  1. Go to the Update Manager tab and open the Admin View.
  2. Add a new baseline and select the Host Patch option.
  3. In the next screen, select Fixed.
  4. Create a filter and enter the name of the patch.
  5. Select the ESX400-200912401-BG patch.
  6. When the new baseline is ready, return to the Compliance view and attach the new baseline.
  7. Finally, perform a scan on your Datacenter, Cluster or ESX Host and check whether the patch is already installed.
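The same check can also be scripted. This is only a sketch that assumes the vCenter Update Manager PowerCLI snap-in is available and uses a placeholder cluster name; verify the cmdlets against your VUM PowerCLI version:

# Build a static baseline containing only the APD patch and scan a cluster against it
$cluster  = Get-Cluster "Cluster01"                                    # placeholder cluster name
$patch    = Get-Patch -SearchPhrase "ESX400-200912401-BG"
$baseline = New-PatchBaseline -Static -Name "APD patch check" -IncludePatch $patch
Attach-Baseline -Baseline $baseline -Entity $cluster
Scan-Inventory -Entity $cluster
Get-Compliance -Entity $cluster -Baseline $baseline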

 

More info about the patch can be found here:

For the readers who cannot upgrade to vSphere Update 1 and the latest patches, you can find some workarounds here:

StarWind – Free iSCSI Target



You can download the free software (after registration) here:  http://www.starwindsoftware.com/free

StarWind Free is an iSCSI Target that converts any Windows server into a SAN in less than 10 minutes.  This is a fully functional product at no cost. 

• Large 2 TB storage capacity
• Unlimited number of connections
• Virtualization environment support for VMware, Hyper-V, XenServer, Virtual Iron
• Enhances VMware environments by enabling VMotion, VMware HA, DRS and VCB
• Supports Windows server clustering for any application including SQL Server, Exchange, SharePoint

In this post I will show you how easy it is to configure the StarWind Free software.


How To: iSCSI Target 0.4.17 on Ubuntu 8.04 Server


Here is a short guide on installing and configuring the iSCSI target software on Ubuntu Server version 8.04. The guide is based on an earlier post by Frederik Vos on http://www.l4l.be.

Download and install Ubuntu 8.04 Server the way you want it. You can download the ISO here: http://www.ubuntu.com/getubuntu/download. How the installation works is described here: http://www.ubuntugeek.com/ubuntu-804-hardy-heron-lamp-server-setup.html

After the installation, update the server with the following two commands:

sudo apt-get update

sudo apt-get upgrade

The next step is to install the build-essential software:

sudo apt-get install build-essential linux-headers-`uname -r` libssl-dev

After the step above is finished, we can start with the installation of iscsitarget.

Change to the /tmp directory:

cd /tmp

Download the installation file:

sudo wget http://heanet.dl.sourceforge.net/sourceforge/iscsitarget/iscsitarget-0.4.17.tar.gz

Extract the file:

sudo tar xzvf iscsitarget-0.4.17.tar.gz

Enter the new directory:

cd iscsitarget-0.4.17

The following two commands build and install the iSCSI target software:

sudo make

sudo make install

To find out which hard disks are in your system, run the following command:

sudo fdisk -l

Disk /dev/sda: 200.0 GB, 200049647616 bytes
255 heads, 63 sectors/track, 24321 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x28781a14

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          31      248976   83  Linux
/dev/sda2              32       24321   195109425    5  Extended
/dev/sda5              32       24321   195109393+  8e  Linux LVM

Disk /dev/sdb: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00004688

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       38913   312568641    5  Extended
/dev/sdb5               1       38913   312568609+  8e  Linux LVM

Disk /dev/sdc: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x10a711d3

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       91201   732572001    5  Extended
/dev/sdc5               1       91201   732571969+  8e  Linux LVM

As you can see, I have three hard disks in this machine. I am going to use /dev/sdc for the iSCSI target.

The last step is editing the configuration file.

Open the file in your favorite editor:

sudo nano /etc/ietd.conf or sudo vi /etc/ietd.conf

Adjust the IQN here. You can find more information here: http://en.wikipedia.org/wiki/ISCSI. You also need to configure a LUN. Below you can see how I attach /dev/sdc5. It is also possible to attach a file as a LUN; how that works is described here: http://www.l4l.be. More information about the ietd.conf file can be found here: http://manpages.ubuntu.com/manpages/hardy/man5/ietd.conf.5.html

Target iqn.2009-02.local.ict-freak:storage.disk2.750.xyz
        # Users, who can access this target. The same rules as for discovery
        # users apply here.
        # Leave them alone if you don't want to use authentication.
        #IncomingUser joe secret
        #OutgoingUser jim 12charpasswd
        # Logical Unit definition
        # You must define one logical unit at least.
        # Block devices, regular files, LVM, and RAID can be offered
        # to the initiators as a block device.
        Lun 0 Path=/dev/sdc5,Type=fileio

Save the file and close your editor.

With the following two commands you start the iscsi-target service and check its status:

sudo /etc/init.d/iscsi-target start

sudo /etc/init.d/iscsi-target status

I am now using this target within VMware ESX 3.5u3.

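To tie this back to the PowerCLI posts above: adding this Ubuntu target as a Send Target on the ESX software iSCSI adapter could look like the sketch below. It assumes a recent PowerCLI and ESX version, and the host name and target IP are placeholders, not values from this post:

# Point the software iSCSI adapter of an ESX host at the Ubuntu target
$esx = Get-VMHost "esx35srv1.ict-freak.local"                                    # placeholder host name
$hba = Get-VMHostHba -VMHost $esx -Type iScsi | Where {$_.Model -eq "iSCSI Software Adapter"}
New-IScsiHbaTarget -IScsiHba $hba -Address "172.1.1.200" -Port 3260 | Out-Null   # placeholder target IP
Get-VMHostStorage -VMHost $esx -RescanAllHba | Out-Null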

With the following command you can see whether the volume is being used:

cat /proc/net/iet/volume

tid:1 name:iqn.2009-02.local.ict-freak:storage.disk2.750.xyz
        lun:0 state:0 iotype:fileio iomode:wt path:/dev/sdc5

On the Ubuntu server you can monitor the iSCSI sessions with:

cat /proc/net/iet/session

tid:1 name:iqn.2009-02.local.ict-freak:storage.disk2.750.xyz
        sid:564049469047296 initiator:iqn.1998-01.com.vmware:esx35srv1-673995f2
                cid:0 ip:172.1.1.211 state:active hd:none dd:none

 

Source: http://www.l4l.be/docs/server/storage/iscsi/iscsitarget_ubuntu.php