System Center VMM 2012 R2 Bare Metal Deployment with Converged Fabric and Network Virtualization – Part 2 Components

This blog series is divided into the following parts:

  • Part 1 – Intro
  • Part 2 – Components (this part)
  • Part 3 – Logical Switch
  • Part 4 – Deployment

In this part we will prepare the individual components in System Center VMM 2012 R2 that form the building blocks for the logical switch. There are a lot of screenshots in this series, but I am trying to explain everything in as much detail as possible. Many system administrators have difficulty getting this configured, so bear with me.

Host Groups

In this example I use a dedicated management cluster. The management cluster hosts all the virtual machines for managing the environment (including System Center VMM 2012 R2 in an HA installation). New hosts added by the bare metal deployment process will be separated from the management hosts by defining two host groups in VMM. In this example there is a host group called System Management and a host group called Tenants. To create a host group, open the VMs and Services tab in VMM and select Create Host Group.

01 Host Groups

When you have configured the host groups, move the management cluster to the System Management host group.
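
The host groups can also be created with the VMM PowerShell module. A minimal sketch, assuming a console session that is already connected to the VMM server:

    # Create the two host groups used throughout this series
    New-SCVMHostGroup -Name "System Management"
    New-SCVMHostGroup -Name "Tenants"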

Logical Networks and IP Pools

The logical network allows us to document the physical networking infrastructure in System Center VMM 2012 R2. The IP scheme and the physical network design in this blog series form the basis of this documentation. In this example we will create three Logical Networks:

  • Management
  • Cloud network
  • Tenants VLANs

The Logical network Management will be used for Management Hosts, Management VMs, Live Migration and Csv. The logical network Cloud Network is used by tenant workloads for Network Virtualization. The logical network Tenants VLANs is used by tenant workloads with traditional VLANs.

To create a logical network, open the Fabric tab and select Create Logical Network. For the logical network Management, select VLAN-based independent networks.

02 Logical Network Management 1

The next piece is important. Most hosts (blade or rack) will issue their PXE request untagged on the physical switch infrastructure. In this example we use VLAN 50 for management.

The hosts in the management cluster have the Host Management adapter in VLAN 50 untagged. Management virtual machines in the management cluster are configured tagged in VLAN 50 on a separate vSwitch.

Management Cluster

The hosts in the tenant cluster have the Host Management adapter in VLAN 50 untagged. There are no management virtual machines in the tenant cluster.

Tenant Cluster

To match this configuration we create a network site called Management_Hosts and a network site called Management_VMs.

Scope the network site Management_Hosts to the host group System Management and the host group Tenants. Enter the IP subnet of the Management network and leave the VLAN field empty. This forces untagged traffic for this network; it will be used for the host management adapter by the cluster nodes in both host groups.

03 Scope Management_Hosts

Add a second network site called Management_VMs. Scope this network site to the host group System Management (management workloads will only run on the management cluster). Specify the same management IP subnet. For this network site we enter VLAN 50. This forces tagged traffic on VLAN 50; it will be used by management virtual machines running on top of the management cluster.

04 Scope Management_VMs

Add a third network site called Live Migration. Scope this network site to the host group System Management and the host group Tenants (Live migration will be used by cluster nodes in both host groups). Specify the Live Migration IP subnet. For this network site we enter VLAN 51. This will force tagged traffic on VLAN 51.

05 Scope Live Migration

Add the fourth network site called Csv. Scope this network site to the host group System Management and the host group Tenants (Csv will be used by cluster nodes in both host groups). Specify the Csv IP subnet. For this network site we enter VLAN 52. This will force tagged traffic on VLAN 52.

06 Scope Csv
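
For reference, the logical network Management and its four network sites can also be scripted with the VMM PowerShell module. A minimal sketch, assuming the host group and network site names from this example; in VMM a VLAN ID of 0 means untagged:

    # VLAN-based independent logical network for the management traffic
    $logicalNetwork = New-SCLogicalNetwork -Name "Management" -LogicalNetworkDefinitionIsolation $true -EnableNetworkVirtualization $false
    $allHostGroups = @((Get-SCVMHostGroup -Name "System Management"), (Get-SCVMHostGroup -Name "Tenants"))

    # Untagged site for the host management adapters, scoped to both host groups
    $subnetVlan = New-SCSubnetVLan -Subnet "172.30.50.0/24" -VLanID 0
    New-SCLogicalNetworkDefinition -Name "Management_Hosts" -LogicalNetwork $logicalNetwork -VMHostGroup $allHostGroups -SubnetVLan $subnetVlan

    # Tagged site for the management VMs, scoped to the management host group only
    $subnetVlan = New-SCSubnetVLan -Subnet "172.30.50.0/24" -VLanID 50
    New-SCLogicalNetworkDefinition -Name "Management_VMs" -LogicalNetwork $logicalNetwork -VMHostGroup (Get-SCVMHostGroup -Name "System Management") -SubnetVLan $subnetVlan

    # Tagged sites for Live Migration (VLAN 51) and Csv (VLAN 52), scoped to both host groups
    $subnetVlan = New-SCSubnetVLan -Subnet "172.30.51.0/24" -VLanID 51
    New-SCLogicalNetworkDefinition -Name "Live Migration" -LogicalNetwork $logicalNetwork -VMHostGroup $allHostGroups -SubnetVLan $subnetVlan
    $subnetVlan = New-SCSubnetVLan -Subnet "172.30.52.0/24" -VLanID 52
    New-SCLogicalNetworkDefinition -Name "Csv" -LogicalNetwork $logicalNetwork -VMHostGroup $allHostGroups -SubnetVLan $subnetVlan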

After completing the first logical network wizard, we will define IP Pools for this logical network. IP Pools are used in VMM to assign and keep track of IP addresses. During the Bare Metal Deployment process, IP addresses are assigned to the servers from the relevant IP Pools.

The first IP Pool is created for management hosts. Open the Fabric tab and select Create IP Pool from the menu. Specify a name (in this example Management_Hosts) and select the logical network Management.

07 Management_Hosts IP Pool

In the next screen select the appropriate network site for the IP Pool. In this example select the network site Management_Hosts.

08 Management_Hosts Network Site

The network site for Management is used by physical hosts on VLAN 50 untagged and by management VMs on VLAN 50 tagged. This configuration should be reflected in the IP Pools. The IP Pools in the network site cannot overlap. Designate an appropriate number of IP addresses for your physical hosts. For this environment we will define the IP Pool with nine IP addresses for the management hosts.

09 Management_Hosts Ip Address range

Management hosts use the management network to route traffic destined outside of their own subnets. For this reason a default gateway is specified for this IP Pool.

Name resolution is also performed on the management network. Enter the IP addresses of the DNS servers that are accessible in or from the Management network site.

Skip the WINS screen and complete the wizard. Instead of showing all the screenshots for creating the additional IP Pools for the logical network Management, the following table displays the necessary settings.

Name              Logical Network  Network Site      IP Range                     Default Gateway  DNS Servers                 DNS Suffix
Management_Hosts  Management       Management_Hosts  172.30.50.31-172.30.50.39    172.30.50.254    172.30.50.51, 172.30.50.52  res.local
Management_VMs    Management       Management_VMs    172.30.50.111-172.30.50.150  172.30.50.254    172.30.50.51, 172.30.50.52  res.local
Live Migration    Management       Live Migration    172.30.51.31-172.30.51.39    -                -                           -
Csv               Management       Csv               172.30.52.31-172.30.52.39    -                -                           -
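
As an illustration, the first pool from the table can be created with the VMM PowerShell module. A sketch, assuming the logical network and network site created earlier in this part:

    $logicalNetwork = Get-SCLogicalNetwork -Name "Management"
    $networkSite = Get-SCLogicalNetworkDefinition -LogicalNetwork $logicalNetwork -Name "Management_Hosts"
    $gateway = New-SCDefaultGateway -IPAddress "172.30.50.254" -Automatic

    # Static IP pool that bare metal deployment draws host management addresses from
    New-SCStaticIPAddressPool -Name "Management_Hosts" -LogicalNetworkDefinition $networkSite `
        -Subnet "172.30.50.0/24" -IPAddressRangeStart "172.30.50.31" -IPAddressRangeEnd "172.30.50.39" `
        -DefaultGateway $gateway -DNSServer @("172.30.50.51", "172.30.50.52") -DNSSuffix "res.local"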

After you have created the IP Pools for Management, your Logical Networks entry in VMM should look like this.

10 Logical Switch Management and IP Pools

For isolating the tenant workloads you can choose to use Network Virtualization or traditional VLANs. This blog creates a logical network for both, resulting in a mixed environment. The mixed environment is especially useful in a transition period when you are moving your workloads from traditional VLANs to Network Virtualization. If you do not wish to use Network Virtualization, or are configuring a greenfield scenario with Network Virtualization, you only need to create the logical network specific to your environment.

Logical Network for Network Virtualization

In a previous blog on network virtualization you can read about the difference between the Provider Address Space (PA Space) and the Customer Address Space (CA Space). For network virtualization a logical network in the PA Space must be created.

With network virtualization it is possible to delegate the creation of a virtual subnet (CA Space) to the tenant. A great example of this functionality is Windows Azure Pack. This solution gives the tenant an intuitive portal to create all kinds of services. Windows Azure Pack can be layered on top of VMM to give the tenant the possibility to create virtual machines and virtual subnets. When a tenant creates a virtual subnet from Windows Azure Pack, the name of the logical network site (PA Space) from VMM is displayed.

Therefore you should choose a descriptive name for the logical network for the PA Space. In this example Cloud Network is used. Create a new Logical Network, specify the descriptive name, select One connected network and enable the checkmark for Allow new VM networks created on this logical network to use network virtualization.

11 Logical Network Cloud Network

In the next screen create a network site for the PA Space. Only hosts that will run virtual machines participating in network virtualization need access to this network; in this example those are the hosts in the Tenants host group. Scope the network site to the Tenants host group and specify the correct VLAN and IP subnet for the PA Space.

12 Scope Cloud Network
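
The equivalent VMM PowerShell, as a sketch; the VLAN ID 55 for the PA Space is an assumption for illustration, so use the VLAN that matches your physical network:

    # Logical network with network virtualization (NVGRE) enabled for the PA Space
    $logicalNetwork = New-SCLogicalNetwork -Name "Cloud Network" -LogicalNetworkDefinitionIsolation $false -EnableNetworkVirtualization $true -UseGRE $true

    # PA network site, scoped to the hosts that will run NVGRE workloads
    $subnetVlan = New-SCSubnetVLan -Subnet "172.30.55.0/24" -VLanID 55   # VLAN 55 is assumed
    New-SCLogicalNetworkDefinition -Name "Cloud network" -LogicalNetwork $logicalNetwork -VMHostGroup (Get-SCVMHostGroup -Name "Tenants") -SubnetVLan $subnetVlan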

Complete the Create Logical Network wizard. For management of network virtualization from VMM, an IP Pool must exist in the PA Space. From the Fabric tab select Create IP Pool, give it a descriptive name (in this example Provider Address Space) and select the logical network Cloud Network.

13 Cloud Network IP Pool

Select the network site that we created in the logical network and verify that the correct subnet and VLAN are specified.

14 Cloud Network network site

Specify the IP range that VMM will use to assign IP addresses from the PA Space.

15 Cloud Network IP Address range

The hosts will use the management network for their default gateway, so the IP Pool for the Cloud Network does not need a default gateway or DNS servers. Complete the IP Pool wizard.

The following table shows the settings for the IP Pool used for Network Virtualization PA Space.

Name                    Logical Network  Network Site   IP Range                   Default Gateway  DNS Servers  DNS Suffix
Provider Address Space  Cloud Network    Cloud network  172.30.55.31-172.30.55.39  -                -            -

Logical Network for Tenant VLANs

Most environments currently use VLANs to isolate tenant or department workloads. To prepare System Center VMM for VLANs we need to define the VLANs in a logical network. We will create one logical network and use network sites within this logical network to define the VLANs.

First create a new logical network. In this example we will call it Tenants VLANs. Select VLAN-based independent networks.

16 Logical Network Tenants VLANs

Add each VLAN that you want to use for tenant or department workloads as a network site. Define the VLAN ID and IP subnet for each VLAN. The logical network is a layout of your physical switch configuration, so give the network sites names that represent the physical network configuration. Later in this blog we will create VM Networks that form the abstraction presented to the tenant. If a tenant leaves and a new tenant is assigned to the VLAN, you only have to create a new VM Network and attach it to the VLAN. If your physical network does not change, your logical network should also remain the same. As an example I created two VLANs (101 and 102).

17 Scope Tenants VLANs
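
Because every tenant VLAN follows the same pattern, a short loop keeps the equivalent VMM PowerShell compact. A sketch, assuming the VLAN ID matches the third octet of the subnet as in this example:

    $logicalNetwork = New-SCLogicalNetwork -Name "Tenants VLANs" -LogicalNetworkDefinitionIsolation $true -EnableNetworkVirtualization $false
    $tenantGroup = Get-SCVMHostGroup -Name "Tenants"

    foreach ($vlan in 101, 102) {
        # One network site per tenant VLAN, named after the VLAN ID
        $subnetVlan = New-SCSubnetVLan -Subnet "172.30.$vlan.0/24" -VLanID $vlan
        New-SCLogicalNetworkDefinition -Name "$vlan" -LogicalNetwork $logicalNetwork -VMHostGroup $tenantGroup -SubnetVLan $subnetVlan
    }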

IP Pools are not required for the configuration in this blog, but virtual machines deployed for a tenant from a VM template or a service template can be assigned an IP address from an IP Pool. The steps for creating an IP Pool for VLANs are similar to those for the IP Pools we created before. As an example I created the IP Pools shown in the following table. One thing to note is the IP address used for the gateway. Network Virtualization uses the first available IP address of a virtual subnet as the default gateway. If you plan to move your workloads from VLANs to Network Virtualization at a future point in time, it is a good idea to use the first available IP address of the subnet as the default gateway.

Name  Logical Network  Network Site  IP Range                     Default Gateway  DNS Servers       DNS Suffix
101   Tenants VLANs    101           172.30.101.2-172.30.101.200  172.30.101.1     4.2.2.1, 4.2.2.2  -
102   Tenants VLANs    102           172.30.102.2-172.30.102.200  172.30.102.1     4.2.2.1, 4.2.2.2  -

The logical network and IP Pool creation is complete. The logical networks entry in System Center VMM 2012 R2 should look something like this.

18 Logical Networks overview

VM Networks

In System Center VMM 2012 SP1, VM Networks were introduced. VM Networks form the abstraction from the logical network. The only way to provide network connectivity to a virtual machine is to attach a VM Network to it. A VM Network is mapped to a logical network. The type of logical network defines the possible choices when you create a new VM Network.

The following table displays the VM Networks that we will create.

Name Logical Network Isolation Options Network site Subnet VLAN
Management_Hosts Management Specify a VLAN Management_Hosts 172.30.50.0/24
Management_VMs Management Specify a VLAN Management_VMs 172.30.50.0/24-50
Live Migration Management Specify a VLAN Live Migration 172.30.51.0/24-51
Csv Management Specify a VLAN Csv 172.30.52.0/24-52
TenantA_Network Tenants VLANs Specify a VLAN 101 172.30.101.0/24-101
TenantB_Network Tenants VLANs Specify a VLAN 102 172.30.102.0/24-101
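
As an example of scripting one of these, the following sketch creates the TenantA_Network VM Network on VLAN 101, assuming the logical network and network site created above:

    $logicalNetwork = Get-SCLogicalNetwork -Name "Tenants VLANs"
    $networkSite = Get-SCLogicalNetworkDefinition -LogicalNetwork $logicalNetwork -Name "101"

    # VM Network that maps tenant A onto VLAN 101
    $vmNetwork = New-SCVMNetwork -Name "TenantA_Network" -LogicalNetwork $logicalNetwork -IsolationType "VLANNetwork"
    $subnetVlan = New-SCSubnetVLan -Subnet "172.30.101.0/24" -VLanID 101
    New-SCVMSubnet -Name "TenantA_Network" -LogicalNetworkDefinition $networkSite -SubnetVLan $subnetVlan -VMNetwork $vmNetwork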

The VM Networks created on the logical network Management are used by the logical switch to create vNICs. Each deployed host will require a vNIC for Management, Live Migration and Csv. Within the logical network Management the network sites Management_Hosts and Management_VMs were created. The network site Management_Hosts is configured untagged. The VM Network on top of the network site Management_Hosts will be used to create the vNICs for the hosts that are rolled out by bare metal deployment.

19 VM Network Hosts

The network site Management_VMs is configured with a VLAN. The VM Network on top of the network site Management_VMs will be used by management virtual machines in the management cluster.

20 VM Network Management_VMs

The network sites in the logical network Tenants VLANs are configured with a VLAN. The VM Networks on top of these network sites will be used by tenant virtual machines in the tenant cluster.

22 VM Network Tenant VLAN

To create the VM Networks, open the VMs and Services tab in System Center VMM 2012 R2 and select Create VM Network. After creating the VM Networks as described in the table, the VM Networks entry in VMM should look something like this.

23 VM Network overview

Uplink Port Profiles

The logical switch in System Center VMM 2012 R2 allows you to create a virtual switch on top of a NIC Team. NIC Teaming in Windows Server 2012 enabled a lot of great possibilities, and these are enhanced further in Windows Server 2012 R2. In System Center VMM 2012 R2 these settings are configured in Uplink Port Profiles.

Select the Fabric tab, open the Networking entry and select Port Profiles. System Center VMM 2012 R2 provides a set of preconfigured Virtual Port Profiles out of the box that we will use later in this blog. There are no preconfigured Uplink Port Profiles. Each host in the management cluster and in the tenant cluster has two NIC Teams, each with a virtual switch on top. Only one NIC Team per host will be enabled for Network Virtualization. This setting is configured in the Uplink Port Profile. Therefore we will create two Uplink Port Profiles, as shown in the following table.

Name           Port Profile Type  Load Balancing Algorithm  Teaming Mode        Network Sites                          Enable Hyper-V Network Virtualization
SI – HD        Uplink             Host Default              Switch Independent  Management_Hosts, Live Migration, Csv  No
SI – HD – HNV  Uplink             Host Default              Switch Independent  Cloud Network, 101, 102                Yes

To create an Uplink Port Profile, select Create Hyper-V Port Profile. The first Uplink Port Profile will be used for the management traffic. Give the Uplink Port Profile a descriptive name (in this example SI – HD, meaning Switch Independent – Host Default) and select Uplink port profile as the type. Windows Server 2012 R2 introduces a new load balancing algorithm called Dynamic. This algorithm distributes traffic among the physical NICs in a team in the best possible way, even when a single VM generates only one TCP stream. Dynamic mode is only available in Windows Server 2012 R2. If you want to reuse this Uplink Port Profile with a mixed set of Windows Server 2012 and Windows Server 2012 R2 hosts, System Center VMM 2012 R2 provides a setting called Host Default that uses the Dynamic algorithm on hosts that support it and the Hyper-V Port algorithm on hosts that do not. Select Host Default as the load balancing algorithm and Switch Independent as the teaming mode.

24 Uplink Port Profile name

Select Management_Hosts, Live Migration and Csv as the network sites that are supported by the uplink port profile. Do not enable Hyper-V Network Virtualization for this Uplink Port Profile.

25 Scope Uplink Port Profile

The second uplink port profile will be used for NIC Teams that will handle the tenant workloads. Create an additional Uplink Port Profile with a descriptive name (in this example SI – HD – HNV, meaning Switch Independent – Host Default – Hyper-V Network Virtualization). Select Uplink port profile as the type, Host Default as the Load balancing algorithm and Switch Independent as the Teaming mode.

26 Uplink Port Profile name HNV

Select Cloud Network, 101 and 102 as the network sites that are supported by this uplink port profile. Enable the checkmark for Hyper-V Network Virtualization.

27 Scope Uplink Port Profile HNV
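
Both uplink port profiles can also be created from PowerShell. A sketch, assuming the network sites defined earlier in this part:

    $mgmtSites = Get-SCLogicalNetworkDefinition | Where-Object { $_.Name -in "Management_Hosts", "Live Migration", "Csv" }
    $hnvSites = Get-SCLogicalNetworkDefinition | Where-Object { $_.Name -in "Cloud network", "101", "102" }

    # Management uplink: Switch Independent / Host Default, no network virtualization
    New-SCNativeUplinkPortProfile -Name "SI – HD" -LogicalNetworkDefinition $mgmtSites `
        -LBFOLoadBalancingAlgorithm "HostDefault" -LBFOTeamMode "SwitchIndependent" -EnableNetworkVirtualization $false

    # Tenant uplink: same teaming settings, network virtualization enabled
    New-SCNativeUplinkPortProfile -Name "SI – HD – HNV" -LogicalNetworkDefinition $hnvSites `
        -LBFOLoadBalancingAlgorithm "HostDefault" -LBFOTeamMode "SwitchIndependent" -EnableNetworkVirtualization $true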

Virtual Port Profiles

All network interfaces that the host will use are created as virtual NICs. System Center VMM 2012 R2 uses Virtual Port Profiles to define the settings for these virtual NICs. Out of the box System Center provides you with a set of preconfigured virtual port profiles that are suitable for most environments. A virtual port profile defines offload settings, security settings and bandwidth settings. Virtual Port Profiles are abstracted by port classifications.

The bandwidth settings require some attention. It is common practice to use a weighted configuration for virtual switches. Microsoft advises that the weights of all adapters and the virtual switch default flow should total 100. It is not easy to see the preconfigured minimum bandwidth weight of the virtual port profiles, but our dear friend PowerShell is ready to help out with the following System Center VMM cmdlet.

    Get-SCVirtualNetworkAdapterNativePortProfile | Format-Table Name, MinimumBandwidthWeight

28 Verify Weight

The default settings for the out of the box virtual port profiles that we will use for the converged networking configuration must be adjusted to sum to 100.

Name             Minimum Bandwidth Weight (Original)  Minimum Bandwidth Weight (New)
Host Management  10                                   20
Live Migration   40                                   60
Cluster          10                                   20
Total            60                                   100

To change the Minimum Bandwidth Weight value, open the properties of a virtual network adapter port profile and select the Bandwidth Settings tab. Change the value of the minimum bandwidth weight.
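
The same change can be scripted. A sketch, assuming the default built-in profile names, which may differ from the table's shorthand; verify the exact names against the output of the Get cmdlet above in your environment:

    # Raise the weights so the three management vNIC profiles total 100
    Get-SCVirtualNetworkAdapterNativePortProfile -Name "Host management" |
        Set-SCVirtualNetworkAdapterNativePortProfile -MinimumBandwidthWeight 20
    Get-SCVirtualNetworkAdapterNativePortProfile -Name "Live migration workload" |
        Set-SCVirtualNetworkAdapterNativePortProfile -MinimumBandwidthWeight 60
    Get-SCVirtualNetworkAdapterNativePortProfile -Name "Cluster workload" |
        Set-SCVirtualNetworkAdapterNativePortProfile -MinimumBandwidthWeight 20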

29 Change weight

In the next part of this series we will create the logical switch and the host profile.

9 Comments

  1. September 18, 2013    

Many thanks for this article, which shows all the IP-stuff and the networking. I really appreciate that and I'm excited about the next parts you have already announced.
    kind regards
    Buschu

    • September 18, 2013    

Hi Buschu,

      You’re welcome. Part 3 should be published by the end of this week.

      Regards,
      Marc

  2. December 17, 2013    

    You state that the “Cloud Network” network site should have both hosts allocated and it is shown in the picture. Yet 2 pictures later it is shown having only the “Tenants” host.

    Also the “Tenants VLANs” is also showing the “Tenants” host only.

    Should the “Cloud Network” have both hosts?

    • December 17, 2013    

      Hi Nate,
      Some sharp observation there ;)
The only hosts that require access to the Cloud Network are the hosts that will participate in the NVGRE network. In other words, only hosts that will run VMs with NVGRE. So in this example the Tenants host group is sufficient.
      This statement is also true for the Tenants VLANs.
      I’ll update the screenshot.

      Marc

      • December 19, 2013    

        Thanks for the update. These guides have been very helpful and I appreciate them greatly!

  3. April 24, 2014    

    In your example, you create two network sites – Management_Hosts and Management_VMs – that specify the same subnet with one tagged and the other untagged. I’m trying to do the same thing in 2012 R2, and I can create the first site. When I try to add the second, I get error 13638 which says that it cannot assign a subnet because it overlaps with another.

    I’m curious how you were able to do this, when it seems the product will not allow that configuration.

    • April 25, 2014    

      As I said on Twitter, you can disregard the question. I figured out I needed to select VLAN-based independent networks when I created my logical network

  4. Daniel Clarke, May 7, 2014

    Virtual Port Profiles weight
    In the above you recommend setting the weights to total 100; however, does this take into account the weight of each vmNIC adapter that will also be connected to the switch?
    10 – Management
    40 – Live Migration
    10 – Cluster
    This totals 60, which leaves 40 for the VMs that are also attached to the virtual/logical switch (I'm assuming it's a converged production/management switch). That would allow up to 40 vmNICs that were assigned the 'Low Bandwidth' port profile (which would then bring the total up to 100).
    Let me know your thoughts.

    • May 9, 2014    

      We’ve learned to separate the management networks from the VM networks in two separate teams with 4 network ports, preferably 10Gb with only a vSwitch connected to the VM team (so 100% for VM traffic) and several tNICs for management, Live Migration, CSV, SMB1 and SMB2 each in their own subnet and VLAN. This gives you high throughput benefitting from RSS for the management networks and maximum performance for your VMs (if you can use VMQ/vRSS), unless you have Emulex adapters of course.

