Windows Server 2008 R2 SP1 and HP Network Teaming testing results

This begins to look like a serial story: Hyper-V and NIC Teaming. Ever since I wrote a blog on using native VLANs with HP NIC Teaming and Hyper-V, I have been reluctant to try new versions. Now that Windows Server 2008 R2 SP1 has been available for almost a month, I have evaluated the consequences for this lovely combination.

Let me begin by referring to my earlier blogs on NIC Teaming:
http://www.hyper-v.nu/blogs/hans/?s=NIC+teaming

As of HP Network Configuration Utility (NCU) 10.10.0.x it is possible to use native VLANs in combination with Hyper-V without going through the hassle of manually adding multiple VLANs to the team, creating Hyper-V virtual networks and stringing the two together. By using a team setting called Promiscuous Mode, we only have to make sure the VLANs are known on the trunk; we can then simply apply a VLAN ID to the virtual network adapter in the Hyper-V guest.
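
The VLAN ID itself is applied to the vNIC in the virtual machine's settings; if you want to inspect the configured VLAN IDs from PowerShell, the Hyper-V WMI provider that ships with Windows Server 2008 R2 exposes them. A minimal, read-only sketch, assuming the documented root\virtualization namespace and its Msvm_VLANEndpointSettingData class (run on the host, in an elevated prompt):

    # Read-only sketch: list the access-mode VLAN ID configured on the
    # Hyper-V switch ports, using the v1 WMI namespace on the host.
    $vlanSettings = Get-WmiObject -Namespace "root\virtualization" -Class Msvm_VLANEndpointSettingData

    foreach ($setting in $vlanSettings) {
        # AccessVLAN is the VLAN ID the port uses in access mode
        "{0,-50} AccessVLAN = {1}" -f $setting.ElementName, $setting.AccessVLAN
    }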

I have tested a couple of combinations:

TESTED CONFIGURATION 1

Config: Windows Server 2008 R2 SP1 with NCU 10.10.0.0 and Broadcom 5.2.22.x drivers from HP PSP 8.60 but without teaming configured. VLAN configured on adapter itself.

Result: Guest can communicate with network. VLANs are passed through correctly.

Conclusion: Although you have no network redundancy in this configuration, it is easy to add a VLAN ID to the vNIC in the guest. Promiscuous Mode is not applicable here.

TESTED CONFIGURATION 2


Config: Windows Server 2008 R2 SP1 with NCU 10.10.0.x and Broadcom 5.2.22.x drivers from HP PSP 8.60 with teaming configured.

Result: Guest can communicate with network. VLAN set on vNIC works correctly. I had to disable and enable the vNIC inside the guest to make it aware of the configuration change (see the sketch below). I could change to any other VLAN known on the trunk without rebooting the guest.

Conclusion: In this configuration we have network redundancy for the Hyper-V guests with great flexibility, and it is a lot easier to configure than the classic method of adding VLANs to the NIC team.
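
A quick way to perform the disable/enable step from the result above without clicking through the GUI is via WMI inside the guest. A minimal sketch, assuming the connection is still called "Local Area Connection" (substitute whatever name ncpa.cpl shows); run it in an elevated PowerShell prompt in the guest:

    # Bounce the vNIC inside the guest so it picks up the changed VLAN ID.
    # "Local Area Connection" is a placeholder for the actual connection name.
    $nic = Get-WmiObject Win32_NetworkAdapter -Filter "NetConnectionID = 'Local Area Connection'"
    $nic.Disable() | Out-Null     # same as right-click > Disable in ncpa.cpl
    Start-Sleep -Seconds 5        # give the stack a moment to tear down
    $nic.Enable()  | Out-Null     # same as right-click > Enable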

TESTED CONFIGURATION 3


Config: Windows Server 2008 R2 SP1 with drivers from HP PSP 8.60 but with updated Broadcom drivers to 6.0.60.x and NCU upgraded to 10.20.0.x with teaming configured.

Result: Guest can communicate with network. VLAN set on vNIC works correctly. I could change to any other VLAN known on the trunk without rebooting the guest.

Conclusion: In this configuration we have network redundancy for the Hyper-V guests with great flexibility, and it is a lot easier to configure than the classic method of adding VLANs to the NIC team. The biggest side benefit is the performance gain from the updated NIC Teaming software (and drivers), which I described earlier in http://www.hyper-v.nu/blogs/hans/?p=383

TESTED CONFIGURATION 4


Config: Windows Server 2008 R2 SP1 with drivers from HP PSP 8.60 but with updated Broadcom drivers to 6.2.16.x and NCU remaining at 10.20.0.x with teaming configured.

Result: VLANs could be swapped easily.

Conclusion: Just updating the network driver to the most current version left the functionality unchanged.

TESTED CONFIGURATION 5


Config: Windows Server 2008 R2 SP1 with drivers from HP PSP 8.60 with Broadcom drivers at 6.2.16.x and NCU upgraded to 10.30.0.x with teaming configured.

Result: Guest can communicate with network. VLAN set on vNIC works correctly. I could change to any other VLAN known on the trunk without rebooting the guest.

Conclusion: All three Broadcom driver versions and the latest three HP Network Configuration Utility versions work with Windows Server 2008 R2 SP1.

FAST STEP UPGRADES

On one of the other nodes in the cluster I upgraded the Broadcom drivers from 5.2.22.x directly to 6.2.16.x with NCU remaining on 10.10.0.x which was installed with the ProLiant Support Pack 8.60. No problems emerged.

Then I directly upgraded from NCU 10.10.0.x to NCU 10.30.0.x and again connectivity on different VLANs in the guest kept working.

Be aware that if you change a Hyper-V guest configuration in a cluster, Live Migration might fail because Failover Cluster Manager is not yet aware of the changes. Don’t forget to refresh the virtual machine configuration.
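
If you prefer to script that refresh, the Failover Clustering PowerShell module offers a cmdlet for exactly this. A minimal sketch, with "VM01" as a placeholder for the clustered virtual machine name:

    # Refresh the clustered VM configuration so Failover Cluster Manager
    # (and Live Migration) pick up the changed network settings.
    Import-Module FailoverClusters
    Update-ClusterVirtualMachineConfiguration -Name "VM01"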


Final note

I have found no problems with HP NIC Teaming in the three latest versions of the HP Network Configuration Utility and the Broadcom network drivers, so NIC Teaming should not be an impediment to upgrading to Windows Server 2008 R2 Service Pack 1.


5 Comments

  1. Gaston
    March 22, 2011

    Great post! Thanks a lot, it is very useful for me.

  2. Peter Häcker
    April 7, 2011

    Thanks for your great work, Hans.

    Just for info: HP ProLiant Tools 9.7.0 is out now (05.04.2011)

    In our company, we tried to implement Hyper-V on Server Core (Windows Server 2008 R2 and SP1).
    Hardware: HP ProLiant DL380 G7

    Although we followed the HP & Microsoft whitepapers and “best practices” on several forums, we were not successful and had a lot of problems with HP NCU (Network Configuration Utility).
    Therefore, we decided to change to Windows Server 2008 R2 incl. SP1 and activate the Hyper-V role.
    This should work better… We’re ongoing ;-)

    Are there any tips, or known customers who work successfully with Hyper-V on Server Core 2008 R2 on HP ProLiant servers (incl. Teaming & VLAN configuration)?

    Kind regards.
    Peter

    • April 7, 2011    

      Hi Peter,

      Thanks for the update. I hadn’t spotted HP PSP 8.70 yet and will certainly start testing that. I have learnt not to simply install these new versions in production as they can lead to all sorts of trouble.

      Cheerz, H/

  3. May 8, 2011    

    Hi Hans.
    We run HP DL380 G6 servers with all the latest HP drivers, NIC teaming and Hyper-V clustering. However, we find the NIC performance of the VMs poor.
    Our standard setup is to have a 4×1Gb (364T) card for the VMs, with two pairs of two teamed up. Cluster management is all done on the onboard NICs.
    Have you any advice on tweaking the performance?
    Do you think TCP offload could be related?
    Cheers
    Dan

    • September 12, 2011    

      Hi Dan, we’re experiencing the same problem. Did you find a solution?
      Thanks, Joost

  1. Windows Server 2008 R2 SP1 and HP Network Teaming testing results … | Windows (7) Affinity on March 19, 2011 at 13:21
  2. Compare Hyper-V legacy networking with state-of-the-art vSphere vSwitches | VCritical on May 26, 2011 at 19:51
  3. Hyper-V : Network Design, Configuration and Prioritization : Guidance « Virtualisation & Management Blog on July 8, 2011 at 08:31
