Sunday, July 26, 2009

vCenter Lifecycle Manager

OK, so this topic has already been covered in the Varrow Blogs, mostly on Calfo's Blog. I want to bring it up again, however, because I keep seeing environments that would benefit greatly from putting this or a similar technology into place in their ESX environments.

There's no argument that VMware makes things easier for your IT department, but could it be argued that it makes things a little too easy? Now that you can have a new Windows or Linux guest up, running, and configured in minutes rather than hours, everyone wants a piece of the action, right?

Of course they do, and who could blame them? Most administrators try to manage provisioning tasks manually and quickly get overwhelmed. Ed, the Exchange admin, had you create him a test server last month. Is Ed done testing now? Can you shut down Ed's VM and free up some much-needed resources? We could ask Ed, but oh yeah... that's right, Ed's on vacation for a couple of weeks. Will we forget to ask him when he gets back?

You get the picture. It can get really nasty trying to keep up with everyone and every project that requires use of your ESX infrastructure. For this reason, and after some recommendations from coworkers, I started looking into VMware's vCenter Lifecycle Manager. It appears that it gives IT the ability to automate much of the VM lifecycle process, and it hooks right into vCenter 2.x. I'm hoping to evaluate it in depth soon, but you can check it out on VMware's page here:

https://www.vmware.com/products/lcm/

And check out the demo video Calfo posted at the beginning of the year.

http://calfo.wordpress.com/2009/01/11/vmware-lifecycle-manager-demo-video/

Sunday, July 19, 2009

Unregistered hosts with CLARiiON iSCSI

I ran into an issue recently where I was unable to fully register W2K8 hosts in Navisphere with the naviagent. The host would communicate with Navisphere and auto-register the IQNs, but the host itself would show a "U" (unregistered) status. Also, Navisphere would not show host info such as drive mappings. I spent a lot of time going over settings and rechecking configurations. It was really bizarre and an issue I had never encountered in a Fibre Channel environment. After working with EMC support, the solution finally came out: when using iSCSI on Windows servers to connect to a CLARiiON iSCSI storage system, the iSCSI NICs on the hosts cannot be the first bound NIC.

This solution is elaborated upon in the EMC solution emc191748.

You can check the binding order in a number of ways.

Use the netsh interface
  1. Go to "Network and Dial-up Connections." (For Windows 2008, select "Manage Network Connections.")
  2. From the menu bar, select Advanced > Advanced Settings.
  3. In the "Adapters and Bindings" tab, ensure that the NIC used for normal, non-iSCSI traffic is at the top of the list, followed by the iSCSI NICs.
  4. If you need to change this order, a reboot is required, or you can use the following two commands to turn each NIC off and back on.

    To disable:

    netsh interface set interface name="<interface name>" admin=DISABLED


    To re-enable:

    netsh interface set interface name="<interface name>" admin=ENABLED

    Run these two commands for each NIC.
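
    For example, assuming the iSCSI NIC shows up in the GUI as "Local Area Connection 2" (a hypothetical name; yours will differ), bouncing it so the new binding order takes effect without a reboot looks like this:

    netsh interface set interface name="Local Area Connection 2" admin=DISABLED
    netsh interface set interface name="Local Area Connection 2" admin=ENABLED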

Use the ipconfig /all command

You can use the ipconfig /all command from a command prompt. For Windows 2000 and 2003, the NICs display in reverse binding order; that is, the first NIC listed is the lowest NIC in the binding order. For Windows 2008, the NICs display in binding order; that is, the first NIC listed is the first NIC bound.
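
For illustration, here's a trimmed ipconfig /all from a hypothetical Windows 2008 host (the adapter names and MAC addresses are made up to match the netstat example below). Since this is Windows 2008, "Local Area Connection" being listed first means it is the first bound physical NIC:

C:\> ipconfig /all

Ethernet adapter Local Area Connection:
   Description . . . . . . . . . . . : Intel(R) PRO/1000 MB Dual Port Server Connection
   Physical Address. . . . . . . . . : 00-14-22-B1-7B-AE

Ethernet adapter Local Area Connection 2:
   Description . . . . . . . . . . . : Intel(R) PRO/1000 MB Dual Port Server Connection #2
   Physical Address. . . . . . . . . : 00-14-22-B1-7B-AF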

Use the netstat -rn command

For Windows 2008 servers, you can use the netstat -rn command.

The numbers listed in the left column reflect the binding order, with the lowest number being the first NIC bound. For Windows 2000 and 2003, it is the opposite.

C:\Users\Administrator>netstat -rn
===========================================================================
Interface List
 10 ...00 14 22 b1 7b ae ...... Intel(R) PRO/1000 MB Dual Port Server Connection * (See note below.)
 11 ...00 14 22 b1 7b af ...... Intel(R) PRO/1000 MB Dual Port Server Connection #2 ** (See note below.)
  1 ........................... Software Loopback Interface 1 *** (See note below.)
 12 ...02 00 54 55 4e 01 ...... Teredo Tunneling Pseudo-Interface
 13 ...00 00 00 00 00 00 00 e0  isatap.{14388A07-03E6-48AE-A713-D835413A72A5}
 14 ...00 00 00 00 00 00 00 e0  6TO4 Adapter
 16 ...00 00 00 00 00 00 00 e0  isatap.{6838E21C-4151-41EB-89E6-7C005E8E58A2}

* Second bound NIC. This is the first real NIC; it does show up in the GUI.

** Third bound NIC. This is the second real NIC; it does show up in the GUI.

*** First bound NIC. This is the localhost interface and will not show up in the GUI list above.



Note: See solution emc159428.

Sunday, July 12, 2009

Adding a Backup URL into Citrix Web Interface

Creating a backup URL is one of several strategies for designing a redundant Web Interface in your XenApp farm if you are using the XenApp plugin for hosted apps (PN Agent). There is always the option of creating multiple Web Interface servers and using load balancing appliances. There is also the option of a round-robin approach with DNS (poor man's load balancing). However, appliances may be "over the top" in an environment without heavy utilization, and the DNS approach isn't intelligent enough to detect a failed server, so it would require manual removal of the failed server's entry.

Adding a backup URL to Web Interface for your hosted apps plugin allows a seamless failover to a second site. The process starts by specifying the backup URL in the Web Interface configuration. After that, the URL is pushed out to all of the hosted apps plugins on their next successful connection. In the event that your primary Web Interface server fails, the hosted apps plugin will detect the failure and automatically attempt to connect to the backup Web Interface server specified in the backup URL configuration. Instructions on how to add the backup site can be found here:

http://support.citrix.com/proddocs/index.jsp?topic=/web-interface/wi-specify-backup-urls.html

Sunday, July 5, 2009

Enhanced VMotion

EVC (Enhanced VMotion Compatibility) is a feature of ESX clusters that allows you to VMotion among different processors of the same family. To enable it, you need to start with a new cluster and add your hosts in after verifying they have the correct settings turned on in the BIOS.

It's common practice to turn on virtualization support (Intel VT or AMD-V) in the advanced CPU settings before building a new host. On some hosts, such as HP servers, the "No Execute" bit (also referred to as NX or XD) needs to be enabled in the BIOS as well.

There's a lot of potential for using EVC between hosts of the same processor family. Following are some great links with more detail about EVC, its use cases, and some Intel compatibility specifications.

http://www.itworld.com/virtualization/56292/understanding-vmware-evc

http://www.vmguru.nl/wordpress/2009/06/vmware-evc-cluster-what-is-that/

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991