The numPorts value: <####> in spec exceeded maxPorts 8192


Today I was playing around with the vSphere Distributed Switch, trying to reach its configuration maximums by creating lots of dvPortgroups. But as soon as the port count went above 8192, the dvPortgroup wasn’t created and I got the following error:

image

From KB1038193 I used the resolution to change the default max numPorts from 8192 to 20000. The 20000 value is also the one mentioned in the Configuration Maximums document: vsp_41_config_max.pdf, so I don’t know why VMware used the 8192 max value. If anyone can explain why this extra limit is in effect, please let me know.

The solution from KB1038193:

Symptoms
  • You cannot configure more than 8192 virtual ports in vCenter Server vNetwork Distributed Switch (vDS).
  • You see the error:
    The numPorts value : 8256 in spec exceeded maxPorts 8192.
Purpose

This article provides steps to increase the maximum number of vDS ports. 

Resolution
Changing the maximum number of vDS ports by using vSphere PowerCLI

vSphere PowerCLI can be used to automate the different virtual machine tasks. It provides an easy-to-use C# and PowerShell interface to VMware vSphere APIs. For more information, see the VMware vSphere PowerCLI Documentation.

To change the maximum number of vDS ports, you can use this PowerCLI snippet:

$dvs = Get-VirtualSwitch -Distributed -Name DVSName | Get-View
$cfg = New-Object -TypeName VMware.Vim.DVSConfigSpec
$cfg.MaxPorts = 20000
$cfg.configVersion = $dvs.config.configVersion
$dvs.ReconfigureDvs_Task( $cfg )

I have changed the code slightly to report the current configuration. You can use this script or the one from the KB article:

# dvSwitchName
$dvSwitchName = "dvSwitch01"

$dvs = Get-VirtualSwitch -Distributed -Name $dvSwitchName | Get-View
Write-Host "The current configuration of MaxPorts = $($dvs.Config.MaxPorts)" -for Yellow
$cfg = New-Object -TypeName VMware.Vim.DVSConfigSpec

# Org
#$cfg.MaxPorts = 8192

# New
$cfg.MaxPorts = 20000

$cfg.configVersion = $dvs.config.configVersion
$dvs.ReconfigureDvs_Task( $cfg ) | Out-Null

# Report new configuration
$dvs = Get-VirtualSwitch -Distributed -Name $dvSwitchName | Get-View
Write-Host "The new configuration of MaxPorts = $($dvs.Config.MaxPorts)" -for Green

Output:

image
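If you have more than one dvSwitch and want a quick overview of the configured maximum, you can report it for all of them with the same Get-View approach used above; a minimal sketch:

Get-VirtualSwitch -Distributed | Get-View |
    Select-Object Name, @{N="MaxPorts"; E={$_.Config.MaxPorts}}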

Source: KB1038193

Host Profiles: Ruleset xxxx doesn’t match the specification


Today I was testing Host Profiles (again) and I must say it works a lot better than during my previous tests. There was only one thing that was very annoying during my tests. When the host was in Maintenance Mode, I applied the Host Profile and performed a check. Everything was OK and the host was compliant. But when the host was out of Maintenance Mode and I checked if the host was still compliant, I received the following message:

image

Unfortunately there’s no knowledge base article which describes those messages, so I started to Google and found a post on the VMware Communities by khushal: http://communities.vmware.com/message/1357268

1. Open vCenter and go to Home -> Management -> Host Profiles

2. Right-click the Host Profile you are using for your cluster and select Edit

3. Expand the profile tree:

   - Profile-name

     - Firewall configuration

       - Ruleset Configuration

         - faultTolerance

   Select the ruleset and check the "Flag indicating whether ruleset should be enabled" checkbox on the right-hand side.

4. Click OK and check compliance again in the cluster.
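You can also run the compliance check from PowerCLI instead of the vSphere Client. A minimal sketch with the Test-VMHostProfileCompliance cmdlet; the cluster name is just an example:

# Check Host Profile compliance for every host in the cluster (cluster name is an example)
Test-VMHostProfileCompliance -VMHost (Get-Cluster "Cluster01" | Get-VMHost)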

To fix the annoying messages I changed the aam and faultTolerance settings via:

Continue reading “Host Profiles: Ruleset xxxx doesn’t match the specification”

PowerCLI: Copy Datastore Items


In this short post I will show a PowerCLI script I wrote to copy ISO files from datastore y to datastore x. The datastores are accessible in the same vCenter and virtual datacenter, but the vSphere hosts are located in two different IP subnets and a firewall rule prevents copying files between the two subnets. So I had to think of a workaround. Well, this one is easy. On the vCenter server I created a script to perform the following steps:

  1. Create two PSDrives, one for each datastore
  2. Get all the ISO filenames
  3. Download the ISO from datastore y to the C:\tmp directory
  4. Upload the ISO from the C:\tmp directory to the datastore<X>\iso directory
  5. Remove the ISO from C:\tmp
  6. Repeat the steps above until all the ISO files are copied to the new datastore.

The PowerCLI script to perform the described tasks:

New-PSDrive -location (get-datastore template-01) -name tpl01 -PSProvider VimDatastore -Root '\'
New-PSDrive -location (get-datastore template-02) -name tpl02 -PSProvider VimDatastore -Root '\'

$isos = ls tpl01:\iso\ | % {$_.Name}
foreach($iso in $isos){
    Write-Host "copy $($iso) to C:\tmp" -fore Yellow
    Copy-DatastoreItem -item tpl01:\iso\$iso -Destination C:\tmp
    
    Write-Host "copy $($iso) to template-02\iso" -fore Yellow
    Copy-DatastoreItem -item C:\tmp\$iso -Destination tpl02:\iso
    
    Write-Host "removing the tmp file $($iso) from C:\tmp" -fore Yellow
    Remove-Item C:\tmp\$iso -confirm:$false
    
    Write-Host "done" -fore Green
}
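If the copy job gets interrupted and you need to run it again, a small variation can skip ISO files that already exist on the destination. This sketch only reuses the PSDrive listing from the script above and assumes the same iso folder layout on both datastores:

$existing = ls tpl02:\iso\ | % {$_.Name}
$isos = ls tpl01:\iso\ | % {$_.Name}
foreach($iso in $isos){
    if($existing -contains $iso){
        Write-Host "$($iso) already exists on template-02, skipping" -fore Cyan
        continue
    }
    Copy-DatastoreItem -item tpl01:\iso\$iso -Destination C:\tmp
    Copy-DatastoreItem -item C:\tmp\$iso -Destination tpl02:\iso
    Remove-Item C:\tmp\$iso -Confirm:$false
}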

So once again PowerCLI to the rescue.

Error 25114. Setup failed to generate the JRE SSL keys


Today I was busy with a vCenter Server upgrade to vCenter 4.1 update 2. Everything went fine except the vCenter Update Manager installation. I received the following error:

image

The solution is pretty simple this time. Just be sure to stop the vCenter Update Manager service before starting the setup. Right after stopping the service, the installation was successful and I was happy again. In the VMware Communities you’ll find that this issue is also known for the upgrade of vCenter 5 to vCenter 5 update 1. See http://communities.vmware.com/ for more info. Now let’s patch some vSphere hosts with the help of PowerCLI: powercli-update-vmhost-function/
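For next time, stopping the service can also be scripted before you kick off the setup. A minimal PowerShell sketch, assuming the service display name matches the usual "VMware vSphere Update Manager" pattern on the vCenter server:

# Stop the Update Manager service before running the installer (display name pattern is an assumption)
Get-Service -DisplayName "*Update Manager*" | Stop-Service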

vCOPS 5: HTTP Status 404 –


When I opened the vCOps v5 page this morning I got the following error:

image

So I checked if the services were running. You can do this with the vcops-admin status command:

image

The next thing to check is the free disk space with the df -h command:

image

The disk seems to be full. To fix this issue, you can follow the steps from KB2016645 (a PowerCLI sketch of the same steps follows the list):

To add a new virtual disk to the virtual machine:

  1. Power off the vApp.
  2. In the vSphere Client, right-click the virtual machine and click Edit Settings.
  3. Add the additional virtual disk.
    Note: Ensure you consider future growth when selecting the disk size.
  4. Power on the vApp. The virtual machine automatically configures the newly added disk at boot time.
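The same steps can also be scripted with PowerCLI. This is only a sketch: the vApp name, the VM name and the disk size are assumptions you will have to adjust, and -CapacityGB requires PowerCLI 5.x:

# Power off the vApp, add a new virtual disk to the UI VM, and power the vApp back on
# (the vApp and VM names below are examples, adjust them to your environment)
Stop-VApp -VApp (Get-VApp "vCenter Operations Manager") -Confirm:$false
New-HardDisk -VM (Get-VM "UI VM") -CapacityGB 50 | Out-Null
Start-VApp -VApp (Get-VApp "vCenter Operations Manager") -Confirm:$false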

When the VM starts, the new disk is added to the VM and the LVM volume group is automatically extended with it:

image

And we’re back:

image

PowerCLI: Disable / Enable HA Host Monitoring


When you or your network team need to do some network maintenance on the switches which VMware HA uses to communicate with the other hosts, or where the das.isolationaddress (default gateway) is configured, it’s smart to disable the Host Monitoring feature of VMware HA. You can do this easily by hand via Edit Cluster -> VMware HA and unchecking the Enable Host Monitoring option. See the screenshot below:

image

But what if you have to disable Host Monitoring on multiple VMware HA clusters? Well, if you like PowerCLI, you can use the following script to disable or enable the HA Host Monitoring feature:

param(
    $vCenter,
    $option
)

if($vCenter -eq $null){
    Write-Host "Please enter the name of the vCenter Server" -ForegroundColor Yellow
    exit 
} 

switch($option){
    enabled  {"The HA Host Monitoring feature will be enabled"}
    disabled {"The HA Host Monitoring feature will be disabled"}
    default  {
        "the option value could not be determined."
        exit
    }
}

Connect-VIServer $vCenter

$clspec = New-Object VMware.Vim.ClusterConfigSpecEx
$clspec.dasConfig = New-Object VMware.Vim.ClusterDasConfigInfo
$clspec.dasConfig.hostMonitoring = $option

foreach($cluster in (Get-Cluster | sort Name)){
    $clview = Get-Cluster $cluster | Get-View
    $clview.ReconfigureComputeResource_Task($clspec, $true)
}

Disconnect-VIServer -Confirm:$false

Just save the script to change-HAHostMonitoring.ps1 and run it like this to disable the HA Host Monitoring feature:

Change-HAHostMonitoring vcenter.domain.loc disabled

If you want to enable Host Monitoring, just change disabled to enabled:

Change-HAHostMonitoring vcenter.domain.loc enabled
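To verify the change, you can report the current Host Monitoring setting per cluster. A minimal sketch, assuming the dasConfig.hostMonitoring value is reachable through ExtensionData (PowerCLI 4.1 and later):

Get-Cluster | Sort-Object Name |
    Select-Object Name, @{N="HostMonitoring"; E={$_.ExtensionData.Configuration.DasConfig.HostMonitoring}}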

Note: Please test the script mentioned in this blog post in a lab or test environment before you use the script in a production environment.

NMP: nmpDeviceAttemptFailover: Retry world failover device


After restarting a vSphere 4.1 update 2 host, there were a lot of warnings in the vmkernel log about NMP and failover messages:

Feb 29 16:03:19 esx vmkernel: 0:00:53:08.502 cpu1:4305)WARNING: NMP: nmpDeviceAttemptFailover: Retry world restore device "mpx.vmhba34:C0:T0:L0" - no more commands to retry
Feb 29 16:03:24 esx vmkernel: 0:00:53:13.498 cpu0:4096)VMNIX: VmkDev: 2860: abort succeeded.
Feb 29 16:03:24 esx vmkernel: 0:00:53:13.498 cpu0:4096)WARNING: NMP: nmp_IssueCommandToDevice: I/O could not be issued to device "mpx.vmhba34:C0:T0:L0" due to Not found
Feb 29 16:03:24 esx vmkernel: 0:00:53:13.498 cpu0:4096)WARNING: NMP: nmp_DeviceRetryCommand: Device "mpx.vmhba34:C0:T0:L0": awaiting fast path state update for failover with I/O blocked. No prior reservation exists on the device.
Feb 29 16:03:24 esx vmkernel: 0:00:53:13.498 cpu0:4096)WARNING: NMP: nmp_DeviceStartLoop: NMP Device "mpx.vmhba34:C0:T0:L0" is blocked. Not starting I/O from device.
Feb 29 16:03:25 esx vmkernel: 0:00:53:14.500 cpu1:4305)WARNING: NMP: nmpDeviceAttemptFailover: Retry world failover device "mpx.vmhba34:C0:T0:L0" - issuing command 0x41027fa7e340
Feb 29 16:03:25 esx vmkernel: 0:00:53:14.500 cpu1:4305)WARNING: NMP: nmpDeviceAttemptFailover: Retry world failover device "mpx.vmhba34:C0:T0:L0" - failed to issue command due to Not found (APD), try again...
Feb 29 16:03:25 esx vmkernel: 0:00:53:14.500 cpu1:4305)WARNING: NMP: nmpDeviceAttemptFailover: Logical device "mpx.vmhba34:C0:T0:L0": awaiting fast path state update...
Feb 29 16:03:34 esx vmkernel: 0:00:53:23.500 cpu0:4096)VMNIX: VmkDev: 2767: a/r=2 cmd=0x1e sn=4054 dsk=vml0:88:0 reqbuf=0000000000000000 (sg=0)
Feb 29 16:03:34 esx vmkernel: 0:00:53:23.500 cpu15:4127)ScsiDeviceIO: 1688: Command 0x1e to device "mpx.vmhba34:C0:T0:L0" failed H:0x5 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
Feb 29 16:03:34 esx vmkernel: 0:00:53:23.500 cpu15:4127)WARNING: NMP: nmp_DeviceStartLoop: NMP Device "mpx.vmhba34:C0:T0:L0" is blocked. Not starting I/O from device.
Feb 29 16:03:34 esx vmkernel: 0:00:53:23.500 cpu0:4096)VMNIX: VmkDev: 2812: abort sn=4054, vmkret=0.

There’s one weird thing about these warnings: the vmhba mentioned in the vmkernel log isn’t visible inside the vSphere Client:

image

So to verify what kind of device vmhba34 was, I logged in on the vSphere host and ran the esxcfg-scsidevs -l command:

[root@esx ~]# esxcfg-scsidevs -l | grep vmhba34
mpx.vmhba34:C0:T0:L0
   Display Name: Local USB Direct-Access (mpx.vmhba34:C0:T0:L0)
   Devfs Path: /vmfs/devices/disks/mpx.vmhba34:C0:T0:L0

This particular vSphere host was a Dell R710 server with the Dell OMSA agent installed. After a quick search on http://kb.vmware.com I found KB article KB1013818, which describes the following cause:

Cause

This issue occurs if you have external USB devices and the iDRAC is set to mount these devices as virtual media.

The fix is quite simple. Just detach the iDRAC virtual CD drive, or run the following commands to rename the OMSA inventory collector (invcol) and restart the srvadmin services:

# cd /opt/dell/srvadmin/sbin
# mv invcol invcol.bak
# srvadmin-services.sh restart

After restarting the services the VMkernel logs are clean again.

Source:  KB1013818

PowerCLI: Migrate templates during the Enter Maintenance Mode task


Normally when you put a host into Maintenance Mode, the templates stay on the host instead of being migrated to a different host. This can be very annoying if you are performing maintenance on the vSphere host and a colleague needs to deploy a VM from a template. I am running vSphere 4.1 update 1; I don’t know if this is still the case with vSphere 5. The host in Maintenance Mode will look like this:

image

So to fix this annoying “issue” I have created a PowerCLI function that places the vSphere host into Maintenance Mode and, if there are templates registered on the host, moves them to another host in the cluster first.

Function Enter-MaintenanceMode{
<#
.SYNOPSIS
   Enter Maintenance Mode
.DESCRIPTION
   The function starts the Enter Maintenance Mode task and also migrates the Templates to another host.
.NOTES
   Author: Arne Fokkema
.PARAMETER vmHost
   One vmHost.
.EXAMPLE
   PS> Enter-MaintenanceMode <vmHost Name>
.EXAMPLE
   PS> Get-VMHost <vmHost Name> | Enter-MaintenanceMode
#>

[CmdletBinding()]
param(
    [parameter(ValueFromPipeline = $true,
    position = 0,
    Mandatory = $true,
    HelpMessage = "Enter the vmHost to start the Enter Maintenance mode task")]
    $vmHost
)    

    $templates = Get-VMHost $vmHost | Get-Template
    if($templates -eq $null){
        $tplMigrate = $false
    }
    else{
        $tplMigrate = $true
    }
    
    $targetVMHost = Get-VMHost -Location (Get-Cluster -VMHost (Get-VMhost $vmHost)).Name | Where {$_.Name -ne $vmHost} | Sort Name | Select -First 1
    if($tplMigrate -eq $true){
        foreach($tpl in $templates){
            Write-Host "Converting template $($tpl.Name) to VM" -ForegroundColor Yellow
            $vm = Set-Template -Template (Get-Template $tpl) -ToVM 
            
            Write-Host "Moving template $($tpl.Name) to vmHost: $($targetVMHost)" -ForegroundColor Yellow
            Move-VM -VM $vm -Destination (Get-VMHost $targetVMHost) -Confirm:$false | Out-Null
            
            Write-Host "Converting template $($tpl.Name) back to template" -ForegroundColor Yellow
            ($vm | Get-View).MarkAsTemplate() | Out-Null    
        }    
    }
    Write-Host "Enter Maintenance mode $($vmHost)" -ForegroundColor Yellow
    Set-VMHost $vmHost -State Maintenance | Out-Null
}

You can run the script like this:

Enter-MaintenanceMode esx07

Or from the pipeline:

Get-VMHost esx07 | Enter-MaintenanceMode

The output will be the same:

image

And the host is completely empty and ready for maintenance:

image
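When the maintenance work is done, you can take the host out of Maintenance Mode again with a standard Set-VMHost call; the templates simply stay on the host they were migrated to:

Set-VMHost esx07 -State Connected | Out-Null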

Troubleshoot TCP connection issues from the ESX Service Console


If you need to troubleshoot TCP connection issues from your ESX host, you will notice that Telnet isn’t available on the host. But VMware posted a workaround in KB1010747. The reason why VMware did not include the Telnet package is simple:

The TELNET package does not ship with the ESX service console. While the TELNET daemon is clearly a security risk, the TELNET client may be a useful tool in diagnosing TCP session connectivity between the ESX Service Console and TCP ports on a foreign host.

The workaround is a Python script. You can copy the following script to your ESX host, place it in the /tmp directory, and make it executable (chmod 755).

#!/usr/bin/python
# TCP connection "tester" -
# provide the hostname/IP address followed by port number (if no port
# is specified, 23 is assumed ;)
# program will connect to the port and read till either it receives a
# newline or 5 seconds expire
# be sure to chmod 755
import sys
import telnetlib
import socket

PORT = ""
argc = len(sys.argv)

if argc == 3 :
    PORT = sys.argv[2]
elif argc < 2 or argc > 3:
    print "usage %s host <port> \n" % sys.argv[0]
    sys.exit()

HOST = sys.argv[1]

try:
    tn = telnetlib.Telnet(HOST,PORT)
except socket.error, (errno, strerror):
    print " SockerError( %s ) %s\n" %  (errno, strerror)
    sys.exit()

print tn.read_until("\n ",5)
print "connection succeeded\n"
tn.close()
sys.exit()

If you want to test whether a particular TCP port is reachable from your ESX host, you can use the script like this:

[root@esx01 ~]# ./testtcp vc01.ict-freak.loc 443

connection succeeded

[root@esx01 ~]#

If the TCP port is not reachable from your ESX host, the script will just hang and eventually time out:

[root@esx01 ~]# ./testtcp vc01.ict-freak.loc 443
SockerError( 110 ) Connection timed out

You can also cancel the script by pressing CTRL + C.

I didn’t test the script from ESXi. If you did, please leave a comment.

Source: KB1010747

Disconnect ISO files from Templates with PowerCLI


Just a quick post about how to disconnect ISO files from templates with PowerCLI. With the following script you can set the CD drive to No Media, so the ISO files will be disconnected from the template VMs.

$templates = Get-Template 
foreach($tpl in $templates){
    $vm = Set-Template -Template (Get-Template $tpl) -ToVM 
    Get-CDDrive -VM $vm | Set-CDDrive -NoMedia -Confirm:$false | Out-Null
    ($vm | Get-View).MarkAsTemplate() | Out-Null
}

First the script fills the $templates variable with all the available templates. The next step is to convert each template back to a VM. When the template is converted to a VM, the Get-CDDrive and Set-CDDrive cmdlets are used to set the CD drive to No Media. When the CD drive is configured, the VM is converted back to a template. Instead of nine mouse clicks per template, you can lean back, drink your cup of coffee or tea, and watch the magic powered by PowerCLI.
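If you only want to touch the templates that actually have an ISO connected, you can report them first. A small sketch using Get-CDDrive against templates; I’m assuming here that IsoPath is empty when no media is connected:

Get-CDDrive -Template (Get-Template) |
    Where-Object { $_.IsoPath } |
    Select-Object @{N="Template"; E={$_.Parent.Name}}, IsoPath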