I recently needed to configure an ESXi 4.0 server to use the new multi-pathing capability along with jumbo frames. I have done this in the past with “classic” ESX using this fantastic post by Chad Sakac and friends which uses the ESXCFG commands within the service console. As ESXi doesn’t have a service console I had to do a little research to figure it out. The information is out there but it took a bit of finding so I decided to post the process here.

Yes, I know I can hack ESXi to enable SSH and use the regular esxcfg commands. However, since VMware keeps telling us that the service console is going away, I figure I may as well do it “right” and use the vSphere CLI.

On a side note, I originally tried to accomplish this using PowerCLI (based on PowerShell) but ran into issues with setting up vmknics and their MTU settings. I also couldn’t bind the vmkernel ports to the iSCSI HBA. There is most likely a way of doing it with PowerCLI, but in the end I found it easier to use the regular vSphere CLI.

This guide assumes that you –

  • Have a reasonable working knowledge of ESX already
  • Have read this post and understand the concepts
  • Have enabled jumbo frames on the relevant physical switch ports
  • Have ensured that your iSCSI target and server support jumbo frames
  • Have a base install of ESXi 4.0 up and running
  • Know the name of your iSCSI HBA (it should be something like vmhba33)
  • Are able to substitute anything between the < > with your own relevant information


To get the job done we will be using a combination of the vSphere CLI, which you can get from here, and the vSphere Windows client, which you can get by connecting to the IP of your ESXi host using your web browser.

Step 1 – Create the vSwitch and set the MTU

In this section we will create the vSwitch, assign the physical NICs that will be used for iSCSI traffic using the GUI, and then switch to the CLI to set the MTU to 9000 (jumbo frames), as that can’t be done using the GUI.

  1. Log into the ESX host with the vSphere Client
  2. Create a vSwitch and take a note of its name (ie “vSwitch1”)
  3. Attach the NICs you intend to use for iSCSI traffic. Be sure these are plugged into switch ports with jumbo frames enabled. In this example I am using two NICs.

4. If you choose all the defaults you will end up with a port group on the vSwitch. You can safely delete that as you don’t need it.

5. If you haven’t already, install the vSphere CLI, choosing all the defaults.

6. Fire up the vSphere CLI command prompt from the Start menu.

7. The command prompt defaults to C:\Program Files\VMware\VMware vSphere CLI. Change to the “bin” directory. You should now be at C:\Program Files\VMware\VMware vSphere CLI\bin.

8. To configure the switch we just created with jumbo frame support, type –

vicfg-vswitch.pl -server <server> -m <MTU> <vSwitch>

eg. vicfg-vswitch.pl -server ESX01 -m 9000 vSwitch1

9. To confirm it worked correctly, run the following –

vicfg-vswitch.pl -server <server> -l

eg. vicfg-vswitch.pl -server ESX01 -l

Your switch should appear with an MTU of 9000, as shown.

  10. Keep the prompt open as we will be using it a few more times yet
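If you prefer to review the Step 1 commands before running them, here is a minimal dry-run sketch. It only echoes the commands rather than executing them; ESX01 and vSwitch1 are the example values used in this walkthrough, so substitute your own.

```shell
# Dry-run sketch of Step 1: set the vSwitch MTU, then list to confirm.
# ESX01 and vSwitch1 are example values - substitute your own.
SERVER=ESX01
VSWITCH=vSwitch1
MTU=9000

# Echo rather than execute, so the commands can be reviewed first.
echo "vicfg-vswitch.pl -server $SERVER -m $MTU $VSWITCH"
echo "vicfg-vswitch.pl -server $SERVER -l"
```

Remove the echo wrappers (or paste the printed lines into the vSphere CLI prompt) to run them for real.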

Step 2 – Set up vmkernel ports with jumbo frame support

We have to do this part entirely from the CLI, as we can’t create vmknics in the GUI and set the MTU later on like we did with the vSwitches. The MTU can only be set when a vmkernel port is created.

1. Before you can create the vmknics and assign them an IP address and MTU setting, you first need to create a port group with the name that you intend to use for each vmkernel port. For each vmkernel port, type –

vicfg-vswitch.pl -server <server> -add-pg <PortGroup> <vSwitch>

eg. vicfg-vswitch.pl -server ESX01 -add-pg iSCSI_1 vSwitch1

2. To confirm it worked, type –

vicfg-vswitch.pl -server <server> -l

eg. vicfg-vswitch.pl -server ESX01 -l

You should get something like this –
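With two NICs you will typically want two port groups, one per planned vmkernel port. A dry-run sketch of that loop follows; iSCSI_1 comes from the example above, while iSCSI_2 is my assumed name for the second port group.

```shell
# Dry-run sketch: create one port group per planned vmkernel port.
# iSCSI_2 is an assumed name for the second port group.
SERVER=ESX01
VSWITCH=vSwitch1
for PG in iSCSI_1 iSCSI_2; do
    echo "vicfg-vswitch.pl -server $SERVER -add-pg $PG $VSWITCH"
done
```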

3. Now create the vmkernel ports and attach them to the relevant port group by typing –

vicfg-vmknic.pl -server <server> -add -ip <IP address> -netmask <netmask> -p "<PortGroup>" --mtu 9000

eg. vicfg-vmknic.pl -server ESX01 -add -ip <IP address> -netmask <netmask> -p "iSCSI_1" --mtu 9000

4. To confirm it worked, type –

vicfg-vmknic.pl -server <server> -l

eg. vicfg-vmknic.pl -server ESX01 -l
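As a dry-run sketch, this is what the pair of vmknic creation commands looks like with both port groups. The IP addresses and netmask below are made-up examples, not values from the original setup; use addresses on your own iSCSI subnet.

```shell
# Dry-run sketch: one vmknic per port group, each created with jumbo frames.
# 192.168.100.11/12 and the netmask are made-up example values.
SERVER=ESX01
NETMASK=255.255.255.0
echo "vicfg-vmknic.pl -server $SERVER -add -ip 192.168.100.11 -netmask $NETMASK -p \"iSCSI_1\" --mtu 9000"
echo "vicfg-vmknic.pl -server $SERVER -add -ip 192.168.100.12 -netmask $NETMASK -p \"iSCSI_2\" --mtu 9000"
```

Remember that the MTU can only be set here, at creation time, so get it right before running the commands for real.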

Step 3 – Binding the vmkernel ports to the physical NICs

At this point we need to switch back to the GUI and configure each vmkernel port so that it only uses one active adapter. This allows the NMP driver within ESX to handle all the load balancing and failover. Once that is done, we go back to the command line one more time and the job is done.

1. Connect to your ESXi host with the vSphere Client

2. Go to the properties of the vSwitch that you have created.

3. Highlight the first vmkernel port and click edit, then go to the “NIC Teaming” tab.

4. Check the “Override vSwitch failover order” box

5. Move all but one of the physical adapters from the “Active” list to the “Unused” list. Do this for each vmkernel port so that each one uses a different physical adapter.


6. Go back to the CLI prompt and “bind” each vmkernel port to the iSCSI initiator by running the following command –

esxcli --server <server> swiscsi nic add -n <vmknic> -d <vmhba>

eg. esxcli --server ESX01 swiscsi nic add -n vmk1 -d vmhba34

7. To confirm it worked, run the following –

esxcli --server <server> swiscsi nic list -d <vmhba>

eg. esxcli --server ESX01 swiscsi nic list -d vmhba34

You should see a whole bunch of details (IP, MTU etc) for each vmkernel port that is bound to the iSCSI HBA.
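The binding step can also be sketched as a loop. This dry run only echoes the commands; vmk1 and vmhba34 are the example names used above, while vmk2 is my assumption for the second vmkernel port, so check your actual names first with the vicfg-vmknic list command.

```shell
# Dry-run sketch: bind each vmknic to the software iSCSI HBA, then list.
# vmk2 is an assumed name for the second vmkernel port.
SERVER=ESX01
HBA=vmhba34
for NIC in vmk1 vmk2; do
    echo "esxcli --server $SERVER swiscsi nic add -n $NIC -d $HBA"
done
echo "esxcli --server $SERVER swiscsi nic list -d $HBA"
```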

Wrapping it up

So that’s it. If everything worked you should now be able to point your jumbo frame enabled ESXi iSCSI initiator at your target and run a discovery. Each target device should now have at least two paths to the storage. Keep in mind that you can only have a maximum of 8 paths to a device when using iSCSI on ESX.

Once you can see your LUNs you should be able to configure the NMP driver to use Round Robin for each of the accessible devices.
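If you would rather script that last step than click through the GUI, the esxcli nmp namespace in the same CLI prompt should do it. This is a hedged dry-run sketch, not the exact commands from the original setup: the naa identifier is a placeholder, so list your devices first to find the real one.

```shell
# Dry-run sketch: set Round Robin as the path selection policy per device.
# <naa.xxxx> is a placeholder device identifier - get yours from the
# "nmp device list" output first.
SERVER=ESX01
echo "esxcli --server $SERVER nmp device list"
echo "esxcli --server $SERVER nmp device setpolicy --device <naa.xxxx> --psp VMW_PSP_RR"
```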
