Another Update on the VMQ Issue with Emulex and HP

I wrote my last blog on this topic six months ago. Since then I have seen several firmware and driver updates from Emulex, usually followed by HP several weeks later. I’m still talking about the ongoing VMQ problems with HP/Emulex 554FLB CNAs in HP BL460c Gen8 blade servers in c7000 blade enclosures. Meanwhile I have tested several incarnations of this firmware/driver combination in our own Azure Pack cloud environment. I already found out in previous attempts that a switch to new Emulex firmware and HP/Emulex drivers, including the switch from VMQ disabled to enabled, can be disruptive, and that hosts and VMs need to be restarted if things turn bad.
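For anyone toggling VMQ while testing a new firmware/driver combination, the in-box NetAdapter cmdlets make this scriptable. A minimal sketch; the adapter name is an example:

```powershell
# Show which adapters currently have VMQ enabled
Get-NetAdapterVmq | Format-Table Name, Enabled

# Disable VMQ on a CNA port (example adapter name) while the drivers are suspect
Disable-NetAdapterVmq -Name "Ethernet 1"

# Re-enable VMQ once a trusted firmware/driver combination is in place
Enable-NetAdapterVmq -Name "Ethernet 1"
```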

And it did turn bad on all four occasions I have tried so far. Our hopes were first raised when Mark Jones from Emulex posted an update on July 23, 2014. I tried it out, but not until HP had released their OEM-specific driver for the CNA. It didn’t take me very long to find out that my test guest cluster quickly got disconnected during a live migration, with one node being evicted from the cluster during the blackout. First opportunity: failed.

On August 4 a special release was issued for non-OEM, Emulex-branded cards. I decided not to try this out on our HP/Emulex CNAs. I later found out that some people who did had to call in HP to replace their servers. Second opportunity: ignored.

Just over a month later, another update appeared on the Emulex blog, letting us know that HP versions of the firmware/drivers had been made publicly available. We considered this version, but then found a mystery insider stating in the comments of our blog:

Without revealing NDA information, the HP driver released in September is not really a fix, it is also a workaround, with some caveats that don’t appear in the published notes. My understanding is that the fix delivered by Emulex was considered unstable by HP, and a truly “fixed” driver won’t be released until sometime in Q1 2015.

I didn’t have to think long. Third opportunity: ignored.


On October 21st 2014, another update appeared on the Emulex blog. I had become very suspicious of yet another update and lacked the time and drive to once more spend many hours testing this version. What changed my mind was a message from fellow Hyper-V MVP Patrick Lownds to one of the MVP distribution lists, letting us know that HP had released their OEM version of the HP/Emulex 554FLB firmware and driver. READ MORE »

Windows Azure Pack Remote Console – Create the RD Gateway Farm with PowerShell

 

The community is all about sharing knowledge and helping each other. One of those super-motivated community members is Carsten Rachfahl. I finally met him at the MVP Summit. Somewhere during that week we had to walk from one building to another, and I noticed he was dragging along a mobile office. Carsten explained that it contained his complete datacenter. Or, to be more precise, a laptop with some crazy specs that contained the complete Cloud OS. He did a lot of work creating a completely automated installation of all the Cloud OS components with HA, performing functional configuration to end up with an environment that is demo-ready or (if it weren’t for the hardware) even production-ready. Not a single click is needed after the deployment process starts. There was one piece missing in his complete puzzle.


He had asked me a couple of times if I had a solution to complete his masterwork. But that is another thing about the community: time. Somehow you never have enough of it. This week another reminder popped up in a DL and I forced it to the top of my priority list. His question was:

I want to automate the configuration of a highly available RD Gateway for Windows Azure Pack Remote Console. How can I set the RD Gateway server farm members with PowerShell?

Carsten is a smart man. He had been struggling with this issue for a couple of months, and solving it would complete his masterwork. He had already looked at it from all possible angles.
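I won’t spoil the full answer here, but one way to manage RD Gateway farm membership from PowerShell is through the RDS: drive exposed by the RemoteDesktopServices module. A minimal sketch, assuming that provider path and with hypothetical server names; you would typically repeat this on every member of the farm:

```powershell
# The RemoteDesktopServices module exposes RD Gateway settings as the RDS: drive
Import-Module RemoteDesktopServices

# Hypothetical farm member FQDNs
$farmMembers = 'RDGW01.contoso.local', 'RDGW02.contoso.local'

# Add each member to the RD Gateway server farm on this gateway
foreach ($server in $farmMembers) {
    New-Item -Path 'RDS:\GatewayServer\GatewayFarm\Servers' -Name $server
}
```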

READ MORE »

Scale-Out File Servers – DNS Settings

Scale-Out File Server (SOFS) is a feature that is designed to provide scale-out file shares that are continuously available for file-based server application storage such as Hyper-V. Scale-out file shares provide the ability to share the same folder from multiple nodes of the same cluster.

In this blog we assume you have already played around with SOFS and know the basics.

 

In this example I’m using a 2-node Windows Server 2012 R2 cluster with 4x 10GbE adapters.
NIC1 and NIC2 are dedicated to iSCSI traffic for the shared storage that presents LUNs to the SOFS cluster. These LUNs are added as CSVs, which is where the SMB3 shares land.
NIC3 and NIC4 are converged using native NIC teaming, with team NICs for Management, CSV, and SMB traffic.


When creating your Client Access Point (CAP) for the SOFS, the cluster service registers in DNS for the CAP all network interface IPs that have the “Allow clients to connect through this network” setting enabled.
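That client-access setting maps to the Role property of each cluster network, so you can review and adjust it with the FailoverClusters module. A quick sketch; the network names are examples:

```powershell
Import-Module FailoverClusters

# Role 3 = cluster and client, Role 1 = cluster only, Role 0 = none
Get-ClusterNetwork | Format-Table Name, Role, Address

# Example: take the iSCSI networks out of client access so their IPs
# are not registered in DNS for the CAP
(Get-ClusterNetwork -Name 'iSCSI1').Role = 1
(Get-ClusterNetwork -Name 'iSCSI2').Role = 1
```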


READ MORE »

New blood for Hyper-V.nu

I’m very happy to announce that three very talented young men have agreed to officially start blogging for Hyper-V.nu. This time not as guest bloggers, but as official Hyper-V.nu bloggers.

If you check Hyper-V.nu on a regular basis, you will have noticed that in recent weeks very few blogs have appeared on the site. This is largely due to the enormous success of Windows Azure Pack which is more or less keeping us fully occupied.

Apart from the many Cloud OS-related projects that Peter Noorderijk, Marc van Eijk and I run on a daily basis, we also maintain the Azure Pack Wiki and some of the Hyper-V hotfix lists, present at IT events, write books, blog for the Microsoft Building Clouds blog, evangelize hybrid cloud with Azure Pack, and, as MVPs, meet very regularly with Microsoft product teams. In other words, there are not enough hours in the day to make this all work.

So that’s enough explanation of why we need more bloggers to help fill the pages of Hyper-V.nu. Over the past year you may already have seen several guest blogs by Darryl van der Peijl, Ben Gelens and Mark Scholman. All three also happen to be colleagues at INOVATIV, but that is not why they are joining Hyper-V.nu. It is their real-world experience with Azure and Azure Pack technology that makes them special and why we asked them to join as bloggers.

Let me quickly introduce Darryl, Ben and Mark.

Darryl van der Peijl


Darryl was working for a service provider when I met him during the deployment of what was then called Windows Azure Services for Windows Server. Darryl is a very clever young man and quickly came up to speed with the Microsoft System Center and cloud offering. He is also very proficient in PowerShell, which is of course must-have knowledge these days. Darryl has been implementing Azure Pack ever since, often sharing scripts he developed, such as the Azure Pack Tool and the Windows Azure Pack Update Script on the TechNet Gallery. After several guest blogs, he just submitted his first blog on Scale-Out File Servers.

Darryl tweets at @DarrylvdPeijl and has his own blog at http://www.darrylvanderpeijl.nl/

Ben Gelens


I met Ben virtually via Twitter and was amazed at the quality of his blogs on VMM, storage and bare metal deployment. I praised his blogs a couple of times and got to know more about what Ben was doing. He also happens to be very well versed in PowerShell and PowerShell Workflow, which is, as you might know, the focus of Service Management Automation (SMA), first exposed via the Windows Azure Pack admin portal. We then arranged several guest blogs about Bare Metal Post-Deployment using SMA and the VM Role.

Ben tweets at https://twitter.com/bgelens and blogs at http://mssecbyben.wordpress.com/

Mark Scholman


Mark also quickly made a name for himself promoting his blogs on networking, Azure Pack, NVGRE and network virtualization via Twitter. These are all skills that are highly desirable when you start implementing Windows Azure Pack in the real world. Mark recently started investigating Azure Pack Websites for one of the projects we are currently engaged in. For Mark, learning and writing always end in a great blog, and the Installing and Configuring HA Azure Pack Websites series is just one example.

Mark tweets at https://twitter.com/markscholman and blogs at http://sysctr.nl/

Let me finish by saying that these three guys are worth following and hopefully they’ll share many blogs on Hyper-V.nu.

Hans Vredevoort
@hvredevoort

Windows Azure Pack authentication signing certificate expired

The Cloud OS was implemented in our lab environment directly after the release of the 2012 R2 bits. That was a little over a year ago. The Windows Azure Pack installer creates multiple self-signed certificates that are used for different websites. In a simple Windows Azure Pack express installation you will get fourteen self-signed certificates. Looking at these certificates you will notice two different types. Most are web server certificates assigned to a Windows Azure Pack website in IIS. There are also two signing certificates, which are used by the Windows Azure Pack authentication sites.


I’d like to point out that one of the post deployment tasks for every environment should be to replace the default self-signed certificates with trusted certificates. This is possible for all default certificates but not for the two signing certificates used for the authentication sites.

All self-signed certificates created by the Windows Azure Pack installer have an expiration date of one year after deployment. If you are still using the self-signed certificates and they have expired, you can simply delete the expired certificates from the computer’s Personal store with a Certificates snap-in in an MMC and then rerun the Windows Azure Pack configuration wizard. My fellow MVP Stanislav Zhelyazkov has already blogged about this here.
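The MMC steps can also be scripted. A hedged sketch that finds and removes expired certificates from the computer’s Personal store; in a real environment, first filter the list down to the WAP-created certificates and verify it before deleting anything:

```powershell
# Find expired certificates in the computer's Personal store
$expired = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.NotAfter -lt (Get-Date) }

# Review before you delete!
$expired | Format-Table Subject, NotAfter

# Remove them, then rerun the Windows Azure Pack configuration wizard
$expired | Remove-Item
```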

Unfortunately, the self-signed authentication signing certificate is recreated from the information stored in the Windows Azure Pack database, including the original expiration date. Deleting the authentication signing certificate from the computer’s Personal store and recreating it by running the Windows Azure Pack configuration wizard therefore results in the same issue: an expired self-signed authentication signing certificate.


After making some changes in the database I was able to recreate the certificate with a new expiration date. But as you might know, hacking the database is not supported.

Working with some smart folks from the WAP product group, we were able to convert my unsupported database hacking and slashing into a supported procedure using the following PowerShell script.

Win free tickets for Experts Live 2014!

Only forty days to go until the next Experts Live event! This year we have an amazing line-up again, with tracks on:

  • Windows Server
  • Azure
  • Hyper-V
  • System Center
  • Office 365
  • PowerShell
  • SQL

More than forty sessions! This time the event will be hosted at Cinemec in Ede.

We at Hyper-V.nu may give away two tickets that give you free entrance to this event. If you want to win one of these tickets, write a tweet explaining why anyone should attend Experts Live, using the hashtags #Hypervnu and #ExpertsLive. The tweet must be posted between October 8th and 15th.

Designing VM Roles with SMA in mind

Blog by Ben Gelens who blogs at http://mssecbyben.wordpress.com 


SMA runbooks can be triggered by Windows Azure Pack events like VM Role creation. You can configure this on the VM Clouds Automation tab by linking a runbook to a known Azure Pack object like “Microsoft VMRole“.


The objects have actionable events like Create, Scale, Repair and Delete. You can link one runbook per actionable event per object.


A runbook is only available to link up with an event if it is tagged with the SPF tag.


With this information in mind, you can see that if you are going to build a dependency between actionable events and SMA runbooks, you need to develop and commit to some standards. In this blog post I’ll show you some examples of what you could think of while developing your SMA Windows Azure Pack integration strategy for VM Roles.
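To make this concrete, here is roughly what a runbook linked to the Create event could look like. This is a sketch: the runbook name is hypothetical, and the $ResourceObject parameter follows the convention SPF uses to pass the triggering object in, so verify it against your own environment. Remember to tag the runbook with SPF so it can be linked in the portal.

```powershell
workflow Create-VMRolePost
{
    param (
        # SPF passes in the object that triggered the event (here: the VM Role)
        [object]$ResourceObject
    )

    # Pull identifying information from the triggering object
    $vmRoleName = $ResourceObject.Name

    Write-Output "VM Role '$vmRoleName' created; starting post-deployment tasks"

    # ... your standardized post-deployment logic goes here ...
}
```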

Background information

In my daily job I see VM Roles being used by enterprise customers as a replacement for SCVMM service templates and VM templates. Although the VM Role is meant to be disposable, scalable, recyclable and easily replaceable, it is actually being used for VMs that will have long life spans and take advantage of some of the VM Role benefits, like scaling and resource extension configuration.
READ MORE »

Hyper-V Configuration Toolkit

Guest blog by Mark Scholman

I have been working on a new script project to configure Hyper-V hosts. Every time I deploy new Hyper-V servers in a new customer environment, there are many things that require configuration, including little things that are easily forgotten. Let’s see how we can automate that process. Of course I realize that Bare Metal Deployment is available functionality in VMM, but because of the chicken-and-egg problem, and for customers who cannot use Bare Metal Deployment, I decided to create this tool. It uses the converged network setup as described in this blog post (a sketch of the underlying teaming commands follows the list below). What it does is the following:

  • Rename Adapters
  • Create Teams
  • Create Team Nics (tNics)
  • Set Network Configuration (MGT, LM, CSV)
  • Join Server to the Domain
  • Create a server-local administrators group in the domain
  • Allows you to create a new or join an existing cluster
  • Configure Cluster network names
  • Configure Cluster Live Migration subnet
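
The converged part boils down to native (LBFO) NIC teaming plus team NICs. A standalone sketch of those steps; adapter names, VLAN IDs and IP addresses are examples:

```powershell
# Team the two converged adapters with native (LBFO) NIC teaming
New-NetLbfoTeam -Name 'ConvergedTeam' -TeamMembers 'NIC3', 'NIC4' `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false

# Add team NICs (tNICs) for the separate traffic types, each on its own VLAN
Add-NetLbfoTeamNic -Team 'ConvergedTeam' -VlanID 10 -Name 'MGT'
Add-NetLbfoTeamNic -Team 'ConvergedTeam' -VlanID 20 -Name 'LM'
Add-NetLbfoTeamNic -Team 'ConvergedTeam' -VlanID 30 -Name 'CSV'

# Give the management tNIC its IP configuration
New-NetIPAddress -InterfaceAlias 'MGT' -IPAddress 192.168.10.11 `
    -PrefixLength 24 -DefaultGateway 192.168.10.1
```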

The following items are on the to-do list and will be added in upcoming releases:

  • Configure Storage network (iSCSI & SMB3)
  • Use of different topologies for converged networking as described here
  • Using JEA (Just Enough Admin) or PSCustomSessionConfiguration for deployment of Hyper-V hosts

How to use the toolkit

Download the latest version from the TechNet Gallery. If you feel like helping me optimize and extend the tool, fork it on GitHub.

On the newly provisioned Hyper-V server, start Deploy-HyperVHost.ps1. On the Configure NICs tab, select the adapters you want to use for Management (MGT / LM / CSV) and click Set Management Adapters:

Notice that the list box refreshes with the new names for the adapters. Next, select the adapters you want to use for the VM network and click Set VMNet Adapters.

READ MORE »

Hyper-V Networking NVGRE do’s and don’ts

This blog is written by Mark Scholman (@markscholman).

This post will explain the do’s and don’ts of Hyper-V Network Virtualization, especially where we want to bring our next solution/example to Windows Azure Pack and/or System Center Virtual Machine Manager. First things first: for those who need to understand the basics of Hyper-V Network Virtualization, I recommend starting with the article here.

This blogpost is based on the following use case:

A customer wants to host their infrastructure at a service provider. The service provider uses Hyper-V Network Virtualization, managed with System Center Virtual Machine Manager and optionally Windows Azure Pack. The customer currently has the following networks:

  • Production Network
  • DMZ Network

The customer prefers to bring their own Linux firewall and use it as the default gateway for their networks. The customer network consists of the following subnets:

Production subnet: 10.10.0.0/24
DMZ subnet: 10.11.0.0/24

For each subnet, the first possible IP address (normally x.x.x.1) is automatically provisioned as the default gateway. The FW/gateway (MS-TEST-A01) is configured with x.x.x.254 on each NIC. The default gateway in the firewall is set to 10.10.0.1, and this VNet is enabled with an Internet connection and NAT.


Not a very exciting network configuration, you might think. We will change the default gateway in each machine to x.x.x.254 (the IP of the virtual firewall).
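For reference, this layout can be provisioned through the VMM PowerShell module. A rough sketch, assuming a logical network that is already enabled for network virtualization; all names are examples:

```powershell
# Assumes an existing logical network enabled for network virtualization
$logicalNetwork = Get-SCLogicalNetwork -Name 'NVGRE-LN'

# Create the customer's VM network with NVGRE isolation
$vmNetwork = New-SCVMNetwork -Name 'VNET-A' -LogicalNetwork $logicalNetwork `
    -IsolationType 'WindowsNetworkVirtualization'

# Add the two customer subnets
$prodSubnet = New-SCSubnetVLan -Subnet '10.10.0.0/24'
New-SCVMSubnet -Name 'Production' -VMNetwork $vmNetwork -SubnetVLan $prodSubnet

$dmzSubnet = New-SCSubnetVLan -Subnet '10.11.0.0/24'
New-SCVMSubnet -Name 'DMZ' -VMNetwork $vmNetwork -SubnetVLan $dmzSubnet
```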

The following image displays two provisioned virtual networks. VNET-A is configured with a gateway:

[Image: the two provisioned virtual networks]

The virtual machines use the following IP configuration:

READ MORE »

Bare Metal Post-Deployment – Running the Post-Deployment Automation and Bonus – Part 5

Earlier this week I introduced Ben Gelens and his blog series on Hyper-V Bare Metal Post-Deployment. Here is part 5, continuing the topic of constructing the post-deployment automation. This blog post series consists of 5 parts:

  1. Part 1: Introduction and Scenario
  2. Part 2: Pre-Conditions
  3. Part 3: Constructing the Post-Deployment automation
  4. Part 4: Constructing the Post-Deployment automation continued
  5. Part 5: Running the Post-Deployment automation and Bonus (this blog post)

__________________________________________________________

Running the Post-Deployment automation

So you have arrived at the last blog post in this series. I hope you have enjoyed everything you have read and, most importantly, have learned what you were seeking to learn. As a little extra, I’ve put in a bonus script to finalize a Hyper-V cluster configuration.

When everything is in place we simply run the “Run-HyperVPostdeployment” runbook.

Output of the child runbooks is returned to the master runbook by using return statements and sending the results back as strings. You only have to check the job summary of the master runbook to get a view of how things went.
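In PowerShell Workflow terms, the pattern is as simple as it sounds. A stripped-down illustration; the child runbook name is hypothetical:

```powershell
workflow Install-LogicalSwitch
{
    # ... the actual work against VMM goes here ...

    # Hand a status string back to the caller
    return 'Install-LogicalSwitch: OK'
}

workflow Run-HyperVPostdeployment
{
    # Call the child runbook inline and surface its status in the job summary
    $status = Install-LogicalSwitch
    Write-Output $status
}
```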

When things don’t go well, you can see at which stage the failure occurred and start troubleshooting from there. Check the VMM job log for more detailed information.

I didn’t include cleanup steps, but it’s really easy to restart the process by removing all logical switches from the host and restarting the master runbook.

Bonus Post Cluster Deployment Configuration script

Run the following script on one of the cluster nodes (it only has to run once per cluster); a sketch of a few of these steps follows the list below. The script will:

  • Configure the cluster to register its PTR records into the reverse lookup zone.
  • Rename the cluster networks
  • Configure the correct cluster network metrics
  • Configure the networks allowed for live migration
  • Configure 512MB RAM as CSV block cache
  • Configure the SameSubnetThreshold for 20 seconds
  • Configure the cluster service shutdown timeout to 30 minutes
  • Rename the CSV mount point to reflect the volume name
  • Remove the quorum drive letter
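
The full script is linked below; to give a flavor, here is a sketch of a few of these steps using the FailoverClusters module (this is not the complete script):

```powershell
Import-Module FailoverClusters
$cluster = Get-Cluster

# Register PTR records for the cluster name in the reverse lookup zone
Get-ClusterResource 'Cluster Name' |
    Set-ClusterParameter -Name PublishPTRRecords -Value 1

# 512 MB of CSV block cache
$cluster.BlockCacheSize = 512

# Tolerate 20 missed heartbeats (~20 seconds at the default 1-second interval)
$cluster.SameSubnetThreshold = 20

# Give the cluster service 30 minutes on shutdown
$cluster.ShutdownTimeoutInMinutes = 30
```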

Please find a link to all script files here.