In some ways we are fortunate that VMware continues to work even when it is configured incorrectly… and in other ways we are not. At CVM we often take over the administration of a VMware platform in production use. Some platforms are configured correctly, while others function but suffer configuration shortcomings that greatly increase risk and significantly reduce performance. Below is a short description of some of the most common configuration errors that we encounter.
1) DNS
VMware’s high availability functionality relies on proper domain name service configuration. Not only does the network configuration of each VMware host need to be set up with an FQDN and proper network settings, but your local network DNS must also have hostname records for your VMware hosts. If something is not working in VMware, this is a great place to look for errors.
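Before digging deeper, it is worth confirming that both forward and reverse lookups resolve for every host. Here is a minimal sketch using Python’s standard library that you can run from any admin workstation; the host names are hypothetical placeholders for your own ESXi FQDNs.

```python
import socket

# Hypothetical ESXi host FQDNs -- replace with your own.
ESXI_HOSTS = ["esx01.example.local", "esx02.example.local"]

for fqdn in ESXI_HOSTS:
    try:
        # Forward lookup: FQDN -> IP address.
        ip = socket.gethostbyname(fqdn)
        # Reverse lookup: IP -> hostname, which should match the FQDN.
        rname, _, _ = socket.gethostbyaddr(ip)
        status = "OK" if rname.lower() == fqdn.lower() else f"MISMATCH (reverse returned {rname})"
        print(f"{fqdn} -> {ip} -> {rname}: {status}")
    except socket.gaierror as exc:
        print(f"{fqdn}: forward lookup FAILED ({exc})")
    except socket.herror as exc:
        print(f"{fqdn}: reverse lookup FAILED ({exc})")
```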
2) Time
VMware hiccups if a host’s system time starts to drift away from its peers’. NTP should be properly configured on every host, using the same set of reliable time servers, to ensure that all system times are accurate and in sync. A quick online search will reveal various public time servers that can be used.
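As a sanity check, you can measure how far a clock has drifted from a reference server. This is a minimal sketch using the third-party ntplib package (pip install ntplib); the server names and the one-second threshold are assumptions, so substitute whatever your environment standardizes on.

```python
import ntplib  # third-party: pip install ntplib

# Hypothetical time servers -- use whichever your environment standardizes on.
TIME_SERVERS = ["0.pool.ntp.org", "1.pool.ntp.org"]
MAX_DRIFT_SECONDS = 1.0  # assumed alert threshold

client = ntplib.NTPClient()
for server in TIME_SERVERS:
    try:
        response = client.request(server, version=3, timeout=5)
        # response.offset is the estimated difference (in seconds)
        # between the local clock and the NTP server.
        drift = abs(response.offset)
        flag = "OK" if drift <= MAX_DRIFT_SECONDS else "DRIFTING"
        print(f"{server}: offset {response.offset:+.3f}s [{flag}]")
    except ntplib.NTPException as exc:
        print(f"{server}: query failed ({exc})")
```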
3) NIC Redundancy
Most of today’s hosts arrive in the box with four NICs. In VMware, these can easily be used to provide both “front-side” and “back-side” network redundancy. At CVM we consider the front-side to be the networking that services the virtual machines, and the back-side to be the network that services the iSCSI storage. In both cases, at least two physical network cards should be allocated to a VMware vSwitch. This is done through the vSwitch properties: add the additional NIC to the vSwitch and then adjust the NIC teaming properties on the named network. We tend to leave both NICs active under NIC Teaming on Virtual Machine Port Groups. On the back-side network, the configuration for VMkernel Ports is more complicated.
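If you manage many hosts, auditing uplink counts by hand gets tedious. Below is a minimal sketch using VMware’s pyVmomi Python SDK that counts the physical uplinks on every standard vSwitch across your hosts; the vCenter address and credentials are hypothetical placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details -- substitute your own vCenter (or host) and credentials.
ctx = ssl._create_unverified_context()  # lab convenience only; validate certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ctx)
content = si.RetrieveContent()

# Walk every ESXi host and count the physical uplinks on each standard vSwitch.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for vswitch in host.config.network.vswitch:
        uplinks = len(vswitch.pnic or [])  # physical NICs bound to this vSwitch
        flag = "OK" if uplinks >= 2 else "NO REDUNDANCY"
        print(f"{host.name} / {vswitch.name}: {uplinks} uplink(s) [{flag}]")

view.Destroy()
Disconnect(si)
```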
4) Jumbo Frames
When you are using NICs for iSCSI storage, you must set the MTU size of your storage NICs (VMkernel Ports) to 9000 if your iSCSI storage vendor requires it, which it almost certainly does. This is done both on the vSwitch and on the VMkernel port itself, and the physical switches in the storage path must support jumbo frames as well. iSCSI may work without these settings, but the performance will be far less than what is possible.
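To verify the settings across a host, the same pyVmomi session can report the MTU on every vSwitch and VMkernel port. A minimal sketch, assuming a connected host object obtained as in the previous example:

```python
from pyVmomi import vim  # connect and obtain hosts as in the previous sketch

JUMBO_MTU = 9000

def check_jumbo_frames(host: vim.HostSystem) -> None:
    """Report the MTU on every standard vSwitch and VMkernel port of a host."""
    net = host.config.network
    for vswitch in net.vswitch:
        mtu = vswitch.mtu or 1500  # unset MTU defaults to the standard 1500
        flag = "OK" if mtu >= JUMBO_MTU else "NOT JUMBO"
        print(f"{host.name} vSwitch {vswitch.name}: MTU {mtu} [{flag}]")
    for vnic in net.vnic:
        # vnic.spec.mtu is the MTU configured on the VMkernel port (e.g. vmk1)
        mtu = vnic.spec.mtu or 1500
        flag = "OK" if mtu >= JUMBO_MTU else "NOT JUMBO"
        print(f"{host.name} {vnic.device}: MTU {mtu} [{flag}]")
```

For an end-to-end test from the ESXi shell, vmkping -I vmk1 -d -s 8972 followed by your storage IP sends a don’t-fragment ping whose 8972-byte payload plus headers exercises the full 9000-byte frame; the vmk1 interface name here is just an example.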
5) iSCSI NIC Bindings
When the VMware iSCSI Software Adapter is used, the VMkernel Port Bindings must be set in the adapter properties under the Network Configuration tab. Surprisingly, iSCSI will function without this being set properly. Unfortunately, the addition of a new iSCSI datastore may in fact break iSCSI on the host if VMware is left to set its own iSCSI bindings. You will want to use at least two NICs for iSCSI. The configuration of these NICs, their teaming, and their pathing will all affect performance; however, that discussion is outside the scope of this summary.
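To see which VMkernel ports are actually bound to the software adapter, the vSphere API exposes an IscsiManager on each host. A minimal sketch with pyVmomi; the adapter name vmhba33 is a hypothetical placeholder, since the software adapter’s name varies from host to host.

```python
from pyVmomi import vim  # connect and obtain hosts as in the earlier sketch

def check_iscsi_bindings(host: vim.HostSystem, hba_name: str = "vmhba33") -> None:
    """List the VMkernel ports bound to the given software iSCSI adapter.

    hba_name is a placeholder; look up the real adapter name on each host.
    """
    iscsi_mgr = host.configManager.iscsiManager
    bound = iscsi_mgr.QueryBoundVnics(iScsiHbaName=hba_name)
    devices = [port.vnicDevice for port in bound]
    flag = "OK" if len(devices) >= 2 else "INSUFFICIENT BINDINGS"
    print(f"{host.name} {hba_name}: bound vmk ports {devices or 'none'} [{flag}]")
```

The same manager’s BindVnic call adds a binding, though the teaming on the corresponding port groups must be compliant first, which is part of the pathing discussion deferred above.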
Have other questions? Drop us a line at info@cvm.com; we’d be happy to take a look.