Powered by System Center
Archive for September, 2010
If you are looking for a very nice set of Hyper-V Visio Stencils you can go to Jonathan Cusson’s site: http://www.jonathancusson.com/tag/virtualization-stencil/
Extending certificate validity to avoid mouse/video refresh issues with the Hyper-V Virtual Machine Connection
Unfortunately, there is a bug in the annual certificate renewal process that can affect the refresh of mouse/video connections. The bug only applies to certain use cases with VMConnect (i.e. Remote Desktop connections are unaffected) and there are two possible workarounds …
Mark Wilson explains this bug on his blog: http://www.markwilson.co.uk/blog/2010/09/extending-certificate-validity-to-avoid-mousevideo-refresh-issues-with-the-hyper-v-virtual-machine-connection.htm
In my previous blog I promised to publish my findings of implementing Native Hyper-V VLANs with HP NIC Teams created by HP Network Configuration Utility v10.10.0.x which was released in September.
First of all, I must warn you not to take this lightly if you have to reconfigure an existing BladeSystem and Hyper-V cluster. I had the luxury of trying all this out in a research environment. The only people who might have wondered why their research VMs were unavailable were our consultants, specialists and technicians. As a high availability / cluster MVP I can of course appreciate the requirement for uptime. With Live Migration this has been a very easy job, and until now nobody had any reason to wonder whether the research BladeSystem was up all the time.
Now if you consider reconfiguring your production systems, think twice. Design, Plan, Change! Of course this is how we should go about major changes like this.
If you don’t have the equipment to practice on, then read this very carefully and plan for some downtime.
I have documented the steps in this document:
Using VLANs in Hyper-V virtual machines has always been a bit messy with HP’s teaming software, aka HP Network Configuration Utility for Windows Server 2008 R2. Until now a lot of steps had to be taken:
- Prepare trunk/channel between Hyper-V host and core network switch(es) and add all possible VLANs
- Create NIC Team with HP NCU
- Create VLANs on the NIC Team
- In Network Connections, additional NICs are created, each representing one VLAN.
- Create a Hyper-V Virtual Network connected to the external network adapter that represents a specific VLAN, and untick Allow management operating system to share this network adapter, because we don’t want to see even more NICs under Network Connections.
- Add the VLAN ID to the virtual network adapter in the Virtual Machine, start the machine and assign the virtual network adapter an IP address that belongs to that VLAN.
A NIC Team with VLANs is recognized by a V in front of the team name.
Recently HP upgraded NCU to version 10.10.0.0 (9 Sep 2010). Apart from additional support for a new Converged Network Adapter (CNA), one of the notable improvements is that it now formally supports VLANs created in Hyper-V Virtual Machines.
The help file that ships with the new NCU version states that a new VLAN Promiscuous property allows a team to pass VLAN-tagged packets between virtual machines and external networks, but only when no VLAN has been created on that team in the host operating system. If a team is assigned to a virtual machine, the NCU disables the VLAN button to prevent VLANs from being created on the team in the host operating system. This property is available only on Windows 2008 x64/R2 and only when the Hyper-V role is installed.
VLAN Promiscuous is disabled by default.
The VLAN Promiscuous property and the VLAN button on the NCU GUI are mutually exclusive. If one is selected or configured, the other is hidden or disabled.
If Hyper-V is installed and VLANs are created on the team in the host operating system, the NCU either hides the VLAN Promiscuous property or disables it.
If we interpret this correctly, a NIC Team is now transparent to VLAN tags. That allows us to use more than 64 VLANs when Virtual Connect is switched into tunneling mode. All that remains is to create a team from multiple network ports in a ProLiant server, use the teamed NIC as the external adapter for a Hyper-V virtual network, and add a VLAN tag to the virtual network adapter in a virtual machine.
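To make the “transparent to VLAN tags” behaviour concrete, here is a toy Python model of the forwarding rule as I understand it from the help file. This is purely my own illustration; the function and parameter names are invented and have nothing to do with HP’s actual driver code:

```python
def team_forwards(frame_vlan, promiscuous, host_vlans):
    """Toy model of the VLAN Promiscuous rule described above.

    With VLAN Promiscuous enabled, the team passes VLAN-tagged frames
    through untouched; the tag is then matched against the VLAN ID set
    on the virtual network adapter inside the VM. Without it, the team
    only handles VLANs explicitly created on it in the host OS.
    """
    if promiscuous:
        return True                      # tag travels through to the virtual network
    return frame_vlan in host_vlans      # classic mode: a host-side VLAN must exist

# A frame tagged with VLAN 120 reaches the virtual network when promiscuous
# mode is on, but a classic team that only knows VLANs 10 and 20 drops it:
print(team_forwards(120, True, set()))      # True
print(team_forwards(120, False, {10, 20}))  # False
```

In other words, with the new property the team stops being a VLAN gatekeeper and the tagging decision moves entirely to the virtual network adapter in the VM.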
As soon as I have been able to test this setup, I will write an extra piece to this blog. After all, the proof is in the pudding!
HP ProLiant Servers have been around for ages and ages. For twenty years, in fact. I can’t even remember when Integrated Lights-Out was introduced. After iLO came iLO 2, and now we are at iLO 3.
At HP’s website iLO is described as a remote management technology that
delivers precise control and web-based remote management that is always available. ProLiant iLO products… make it easier to remotely manage your servers from just about anywhere in the world. HP iLO solutions use a common management interface for both Integrity and ProLiant servers which means you can use similar tools and processes to manage all your HP Servers.
It is exactly the phrase precise control that still puzzles many server admins. The first thing they do is turn on Remote Desktop to remote control the server via Windows, because during remote management with iLO they often see an irritating distance between the two mouse pointers.
If you recognize this problem, this blog is for you.
Because I come across quite a few people who don’t know how to solve this problem, I will explain the steps:
Either in the HP Blade Onboard Administrator console or in the iLO console, go to Web Administration.
Log in with an administrator account or accept the automatic pass-through from the blade enclosure administrator.
Select the Remote Console tab and then Settings from the menu on the left. Just change High Performance Mouse to Enabled and you are done! Enjoy a synchronized pointer in your iLO management screens.
Somebody was twittering his TweetCloud the other day. This inspired me to do the same. Just register at http://tweetcloud.com/ and see what you have been up to in Twitterland. Since I use Twitter primarily for business use there are not many words in it that I have to be sorry for. Ok, maybe a few
At the following location you can find a complete collection of System Center Virtual Machine Manager 2008 R2 updates. The list is divided into several sections for different configurations:
However, it does not mention the latest hotfix rollup, which appeared recently:
One of the discussions when designing a Hyper-V server is about the pagefile settings.
The pagefile is used for:
- Supplying virtual memory to the operating system (i.e. the parent partition). Traditional guidance states ~1.5x the amount of physical memory, although this no longer makes sense on a >64 GB host;
- Crash dump purposes; but on a host with lots of memory (more than 64 GB), do you really want a full memory dump? Look at the number of hours such a dump would take and you will be convinced that in 98% of all cases a kernel dump is sufficient.
Normally the parent partition uses around 2 GB of memory (the recommended amount), so I usually recommend a manually managed pagefile of approximately 4 to 6 GB. Why not a system-managed pagefile? Because it will grow toward the amount of physical memory.
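The difference is easy to put in numbers. Here is a small sketch of the arithmetic above (purely illustrative, not a sizing tool):

```python
def pagefile_sizing_gb(host_ram_gb):
    """Contrast the traditional ~1.5x rule with the fixed, manually
    managed 4-6 GB pagefile recommended above for big Hyper-V hosts."""
    traditional = 1.5 * host_ram_gb    # classic rule of thumb
    manual_range = (4, 6)              # fixed manual pagefile, in GB
    return traditional, manual_range

traditional, manual = pagefile_sizing_gb(96)
print(traditional, manual)  # 144.0 (4, 6)
# On a 96 GB host the 1.5x rule asks for a 144 GB pagefile, while the
# recommendation above stays at 4-6 GB, since the parent partition
# itself only uses around 2 GB.
```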
If you are interested in virtualizing SharePoint 2010, come to my presentation on September 23rd 2010 at C)Solutions in Veenendaal:
A beautiful poster about Virtualizing SharePoint 2010 can be found here:
Click on poster for other SharePoint posters (Visio, PDF, XPS)