Posts tagged NCU

Windows Server 2008 R2 SP1 and HP Network Teaming testing results

This is beginning to look like a serial story: Hyper-V and NIC Teaming. Ever since I wrote a blog on using native VLANs with HP NIC Teaming and Hyper-V, I have been reluctant to try new versions. Now that Windows Server 2008 R2 SP1 has been available for almost a month, I have evaluated the consequences for this lovely combination.

Let me begin by referring to my blogs on NIC Teaming:
http://www.hyper-v.nu/blogs/hans/?s=NIC+teaming

As of HP Network Configuration Utility (NCU) 10.10.0.x it is possible to use native VLANs in combination with Hyper-V without going through the hassle of manually adding multiple VLANs to the team, creating Hyper-V virtual networks and stringing the two together. By using a team setting called Promiscuous Mode, we only had to make sure the VLANs were known on the trunk and we could then apply a VLAN ID directly to the virtual network adapter in the Hyper-V guest.
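If you want to verify afterwards which VLAN IDs your guests actually ended up with, a quick WMI query will do. Below is a minimal Python sketch, assuming the v1 Hyper-V WMI namespace (root\virtualization) of Windows Server 2008 R2, the Msvm_VLANEndpointSettingData class with its AccessVLAN property, and the third-party wmi package; treat it as an illustration of the idea, not a supported HP or Microsoft tool.

# Minimal sketch: list the access VLAN IDs configured on Hyper-V R2 switch ports.
# Assumes the v1 namespace (root\virtualization) and the Msvm_VLANEndpointSettingData
# class with an AccessVLAN property; adjust if your WMI schema differs.
import wmi  # third-party package (pip install wmi); requires pywin32

conn = wmi.WMI(namespace=r"root\virtualization")

for setting in conn.Msvm_VLANEndpointSettingData():
    # AccessVLAN is 0 when no VLAN ID has been assigned to the port.
    if setting.AccessVLAN:
        print("%s -> access VLAN %s" % (setting.ElementName, setting.AccessVLAN))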

I have tested a couple of combinations:

TESTED CONFIGURATION 1

Config: Windows Server 2008 R2 SP1 with NCU 10.10.0.0 and Broadcom 5.2.22.x drivers from HP PSP 8.60 but without teaming configured. VLAN configured on adapter itself.

Result: Guest can communicate with network. VLANs are passed through correctly.

Conclusion: Although you have no network redundancy in this configuration, it is easy to add a VLAN ID to the vNIC in the guest. Promiscuous mode is not applicable here.

TESTED CONFIGURATION 2


READ MORE »

Update HP ProLiant Network Teaming HOWTO

If you are not the typical RTFM installer, you probably install your Hyper-V server by throwing in an HP SmartStart DVD and doing the guided install that takes care of the system configuration, the OS installation and the HP ProLiant Support Pack. Well, if you are that kind of person: be careful!

If you are not careful, you can end up with problems caused by a wrong installation order. If you want HP ProLiant NIC Teaming to work properly, you always have to enable Hyper-V and apply the latest updates and hotfixes before you install the HP ProLiant teaming software. If you don’t, the network team may stop passing traffic.
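To make the ordering rule a bit more tangible, here is a rough Python sketch that refuses to launch the teaming software installer unless DISM reports the Hyper-V role as enabled. The feature name and the installer path are assumptions for the example, and the output parsing assumes English DISM output.

# Rough sketch: only launch the teaming software installer when the Hyper-V role
# is already enabled, mirroring the required install order described above.
import subprocess
import sys

FEATURE = "Microsoft-Hyper-V"           # assumed DISM feature name for the Hyper-V role
INSTALLER = r"C:\Install\HP\setup.exe"   # hypothetical path to the HP teaming/PSP installer

# Ask DISM whether the Hyper-V role is enabled (parsing assumes English output).
info = subprocess.run(
    ["dism", "/online", "/Get-FeatureInfo", "/FeatureName:" + FEATURE],
    capture_output=True, text=True)

if "State : Enabled" not in info.stdout:
    sys.exit("Enable the Hyper-V role and install the latest updates and hotfixes "
             "before installing the HP teaming software.")

# Only now run the HP installer, respecting the required order.
subprocess.run([INSTALLER], check=True)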

I read a tweet from Virtualization MVP Kurt Roggen; in his blog Kurt mentioned that HP had published an updated HOWTO document on the proper use of HP NIC Teaming.


The HOWTO document explains what happens if you create or dissolve a team while a Hyper-V virtual network is already configured and, more importantly, how to recover from that situation.

The other topic discussed is the use of VLANs with Hyper-V. Last year I wrote a couple of blogs on native Hyper-V VLANs and the new promiscuous mode introduced in NCU 10.10.x.x. I suggest you use the newer 10.20.x.x versions for reasons I mentioned in one of my previous blogs.

The 4th edition of the HP document dated December 2010 can be found here.

Related blogs about HP NIC Teaming:
http://hyper-v.nu/blogs/hans/?s=nic+teaming

 

Improving network performance for Hyper-V R2 virtual machines on HP blade servers

How can we dramatically improve the network communication between two Hyper-V R2 virtual machines in HP blade servers?

Many of our customers have started using HP BladeSystem c7000 enclosures with ProLiant G6 or G7 blade servers. For the network interconnect they use HP Virtual Connect Flex-10 or FlexFabric blade modules which are ideally connected to the outside world via 10Gb Ethernet network switches. In a less ideal world multiple 1Gb connections can be combined to form a fatter trunk to redundant 1Gb Ethernet core switches.

So much for the cabling! As soon as we dive into the blade enclosure, all network communication stays within the confines of the blade enclosure with its multi-terabit signal backplane.

So how on earth can two virtual machines on two different physical Hyper-V R2 blade servers within this same enclosure communicate at a speed of only 20 to 30 MB per second? And more importantly, how can we get them back to a much more acceptable speed? If you want to find out, I invite you to read on.

Let me first explain how the different components work together.

In the following diagram we see an HP c7000 Blade Enclosure with several blade servers and two Virtual Connect Flex-10 or FlexFabric blade modules, each connected to a core Ethernet switch.

A Hyper-V R2 server can never have enough network adapters. With the latest generation of blade servers we don’t need to fill up the enclosure with a switch pair for each network module or mezzanine. The dual-port 10Gb Ethernet onboard adapter can be split into 2 x 4 FlexNICs, and speeds can be dynamically assigned in 100Mb increments from 100Mb to 10Gb.

[Diagram: HP c7000 blade enclosure with blade servers and two Virtual Connect Flex-10/FlexFabric modules, each uplinked to a core Ethernet switch]
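To make that bandwidth arithmetic concrete, here is a small Python sketch that sanity-checks a hypothetical FlexNIC plan against these rules: four FlexNICs per 10Gb port, speeds assigned in 100Mb increments, and the four FlexNICs of a port together never exceeding its 10Gb. The numbers in the plan are just an example, not a recommendation.

# Sketch: sanity-check a hypothetical FlexNIC bandwidth plan for one blade,
# i.e. a dual-port 10Gb LOM split into 2 x 4 FlexNICs (speeds in Mb).
PORT_CAPACITY_MB = 10000  # 10Gb per physical port

plan = {  # hypothetical allocation, not a recommendation
    "LOM 1": {"a": 1000, "b": 4000, "c": 4000, "d": 1000},
    "LOM 2": {"a": 1000, "b": 4000, "c": 4000, "d": 1000},
}

for port, flexnics in plan.items():
    for name, speed in flexnics.items():
        assert 100 <= speed <= PORT_CAPACITY_MB and speed % 100 == 0, \
            "%s:%s must be 100Mb-10Gb in 100Mb steps" % (port, name)
    total = sum(flexnics.values())
    assert total <= PORT_CAPACITY_MB, "%s is oversubscribed (%d Mb)" % (port, total)
    print("%s: %d Mb allocated across %d FlexNICs" % (port, total, len(flexnics)))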

So the Parent Partition on the Hyper-V R2 server sees eight network adapters, each at the speed set in the Virtual Connect server profile. We set up at least three NIC teams: for management, for virtual machines and for cluster communication (heartbeat, Live Migration, Cluster Shared Volumes). The onboard network adapters in the blade server are Broadcom, and the teaming software used is the HP Network Configuration Utility (NCU).

Until last week we used the NCU 10.10.0.x teaming software, which allowed us to use native VLANs (see my previous blogs). What these older versions have in common is the ultra-low speed of VM-to-VM communication with this particular combination of hardware.

Because we wanted to find out what was going on, we set up a test environment with the configuration described above. The Hyper-V servers ran Hyper-V Server 2008 R2 with SP1 (release candidate).

The network performance tests were conducted with NTttcp. This is a multi-threaded, asynchronous application that sends and receives data between two or more endpoints and reports the network performance for the duration of the transfer.
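The result blocks below list both the total megabytes copied and the throughput in Mbps; the small sketch that follows shows how the one relates to the other, assuming 1 MB equals 10^6 bytes (NTttcp's own reporting convention may differ slightly, so treat it as illustrative only).

# Sketch: convert an NTttcp-style result (total MB copied in a given time)
# into throughput in Mbps. Assumes 1 MB = 1,000,000 bytes.
def throughput_mbps(total_mb, seconds):
    bits_transferred = total_mb * 1_000_000 * 8
    return bits_transferred / seconds / 1_000_000

# Example: 1342 MB (the amount used in the tests below) moved in a
# hypothetical 3.1 seconds gives roughly 3463 Mbps.
print(round(throughput_mbps(1342, 3.1)))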

We set up two Windows Server 2008 R2 virtual machines, each with 1 vCPU, 1GB of memory, one Virtual Machine Bus Network Adapter (with Integration Components 6.1.7601.17105) and two VHDs: one dynamic VHD for the OS and one fixed-size VHD for the test. No changes were made to the physical or virtual network adapters in terms of TCP and other hardware offloads; we simply kept the defaults.

Test 1: VM to VM on same Hyper-V R2 host in same blade enclosure

Broadcom driver on host: 5.2.22.0
Teaming software on host: NCU 10.10.0.x
Teaming type: Network Fault Tolerance with Preference Order
Network speed: 2 x 4Gb (effectively 4Gb)
Total MB copied: 1342
Throughput in Mbps: 3436
Result: Excellent
Method: Average of 10 tests

Test 2: VM to VM on two different Hyper-V R2 hosts in same blade enclosure

Broadcom driver on host: 5.2.22.0
Teaming software on host: NCU 10.10.0.x
Teaming type: Network Fault Tolerance with Preference Order
Network speed: 2 x 4Gb (effectively 4Gb)
Total MB copied: 1342
Throughput in Mbps: 319
Result: Awful
Method: Average of 10 tests

Through my HP contacts we learned that HP had been working on improving network performance specifically for HP BladeSystem, Virtual Connect and the FlexNIC network adapters. It turned out that the slow speeds occurred only on the b, c and d FlexNICs of each LAN on Motherboard (LOM) port; LOM 1:a and LOM 2:a performed well. If you don’t split your networks into multiple FlexNICs, you won’t notice any degradation. However, in a Hyper-V cluster environment you need many more networks.

In this iSCSI-connected server we use three teams:

  1. LOM 1:a + 2:a Management_Team (domain access; managing the Hyper-V hosts)
  2. LOM 1:b + 2:b VM_Team (virtual machine communication)
  3. LOM 1:c + 2:c Cluster_Team (Live Migration, Cluster Shared Volumes)

The remaining two ports are used to create an MPIO bundle for the connection to the iSCSI network; the resulting FlexNIC layout is sketched below.
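For reference, the sketch below captures this FlexNIC-to-team mapping as a small data structure and checks that every team pairs the same FlexNIC letter on both LOMs, which is what gives each network a path through both Virtual Connect modules. The use of the "d" FlexNICs for the iSCSI/MPIO bundle is my reading of "the remaining two ports".

# Sketch of the FlexNIC-to-team mapping on this iSCSI-connected host. Each team
# pairs the same FlexNIC letter on LOM 1 and LOM 2, so every network has a path
# through both Virtual Connect modules.
teams = {
    "Management_Team": ("LOM1:a", "LOM2:a"),
    "VM_Team":         ("LOM1:b", "LOM2:b"),
    "Cluster_Team":    ("LOM1:c", "LOM2:c"),
    "iSCSI_MPIO":      ("LOM1:d", "LOM2:d"),  # not an NCU team: MPIO handles these paths
}

for name, members in teams.items():
    loms = {m.split(":")[0] for m in members}
    letters = {m.split(":")[1] for m in members}
    assert loms == {"LOM1", "LOM2"} and len(letters) == 1, name
    print("%s uses FlexNIC '%s' on both LOMs" % (name, letters.pop()))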


Because the VM_Team is on the b FlexNICs (LOM 1:b + 2:b), we suffered very low performance whenever two VMs living on different Hyper-V hosts communicated with each other.

To see if the newly released drivers and teaming software (published on December 19th) made a difference, we updated the Broadcom drivers and the NIC teaming software and ran the same tests again.

Test 1: VM to VM on same Hyper-V R2 host in same blade enclosure

Broadcom driver on host: 6.0.60.0
Teaming software on host: NCU 10.20.0.x
Teaming type: Network Fault Tolerance with Preference Order
Network speed: 2 x 4Gb (effectively 4Gb)
Total MB copied: 1342
Throughput in Mbps: 4139 (+21.7%)
Result: Excellent
Method: Average of 10 tests

Test 2: VM to VM on two different Hyper-V R2 hosts in same blade enclosure

Broadcom driver on host: 6.0.60.0
Teaming software on host: NCU 10.20.0.x
Teaming type: Network Fault Tolerance with Preference Order
Network speed: 2 x 4Gb (effectively 4Gb)
Total MB copied: 1342
Throughput in Mbps: 1363 (+426%)
Result: Good
Method: Average of 10 tests

Although we haven’t tested and compared all LOMs, we feel quite confident that network bandwidth is now distributed more evenly across the different FlexNICs.

Configuring Native Hyper-V VLANs with HP NIC Teams and HP Virtual Connect

In my previous blog I promised to publish my findings on implementing native Hyper-V VLANs with HP NIC teams created by HP Network Configuration Utility v10.10.0.x, which was released in September.

First of all I must warn you not to take it lightly if you have to reconfigure your existing BladeSystem and Hyper-V cluster. I had the luxury of trying all this out in a research environment. The only ones who might have wondered why their research VMs were unavailable were our consultants, specialists and technicians. As a high availability / clustering MVP I can of course appreciate the requirement for uptime; with Live Migration this has been a very easy job, and until now nobody had wondered why the research BladeSystem was up all the time.

Now if you are considering reconfiguring your production systems, think twice. Design, plan, change! That, of course, is how we should approach major changes like this.

If you don’t have spare hardware to practice on, then read this very carefully and plan for some downtime.

I have documented the steps in this document:
http://www.hyper-v.nu/blogs/hans/wp-content/uploads/2010/09/Transparent-Hyper-V-VLANS-with-HP-Network-Configuration-Utility-and-Virtual-Connect-Flex-10.pdf


 
