On August 12, three MVPs (Aidan Finn, Damian Flynn and Hans Vredevoort) launched their first worldwide Hyper-V survey, dubbed The Great Big Hyper-V Survey of 2011. Our goal was to independently learn why businesses, small, medium and large, have decided to implement Hyper-V as their primary hypervisor; to understand their infrastructure choices; to learn how they manage their Hyper-V servers and clusters; to look at cloud as the future of virtualization; and to ask them to look one year ahead.
There were five sections representing those goals:
Why Hyper-V? [covered in this blog]
Hyper-V Implementation [covered in this blog]
The Year is 2012
Let’s begin with the statistics. We had 612 people who responded, and 192 of those did not complete all 80 questions. Perhaps that number of questions was a little daunting for some. Nevertheless we are very content with the way this survey was received. After all, the three of us only used our blogs, Twitter and maybe Facebook to promote the survey. The multiplication effect of social media helped us get the message across, and quite a few very respectable people in the IT industry took note and retweeted our intentions or even wrote a promotional blog. Thank you!
I decided not to dismiss the incomplete surveys because the answers to individual questions are quite interesting. In the results you will see different totals for the number of replies. That is partly because of incomplete answers, but also because several questions allowed multiple answers. So we have to look at the percentages of the answers, which I sorted from high to low for each question.
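To make that counting concrete, here is a small PowerShell sketch of how the percentages for a multiple-answer question can be computed and sorted. The answer options and numbers below are made up for illustration; they are not survey data.

```shell
# Made-up responses to one multiple-answer question; an empty entry
# represents a respondent who skipped the question entirely.
$responses = @(
    @('Cost reduction', 'Flexibility'),
    @('Cost reduction'),
    @('Flexibility', 'Cloud strategy'),
    @()
)

# Flatten to individual answers; skipped questions contribute nothing.
$answers = $responses | Where-Object { $_.Count -gt 0 } | ForEach-Object { $_ }
$total   = $answers.Count   # total answers given, not total respondents

# Percentage per option, sorted from high to low as in the charts.
$answers | Group-Object | Sort-Object Count -Descending | ForEach-Object {
    '{0}: {1:P0}' -f $_.Name, ($_.Count / $total)
}
```

This is also why the percentages for a multiple-answer question can add up to 100% of answers rather than 100% of respondents.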
If you want to help us … please retweet with hashtags #hyperv #survey or use any other medium to promote this survey.
So much for the introduction. Let’s look at the results!
The first thing to note is that the vast majority use Hyper-V at work. Just 6% use it solely for other purposes. That means we have reached the intended audience.
At over 68%, the two most prominent reasons for deploying a virtualized infrastructure are of course cost reduction and greater flexibility. When people started with virtualization, not many linked this to the concept of cloud, except for the 7% who probably started when the cloud hype had begun.
64% of the responders use Hyper-V for production, although 14% don’t use Hyper-V for the heaviest workloads. Could this be related to the maximum of 4 vCPUs per VM in the current version of Hyper-V R2? In that case, Microsoft has already promised much more horsepower in the next version of Hyper-V: at least 16 vCPUs per VM. Naturally, testing and development are good candidates for any hypervisor, hence the high score for flexibility in question 2.
Of course we wanted to know if people considered other hypervisors. The natural winner here is vSphere. More remarkable is that almost one third did not even think of looking at any other hypervisor. XenServer takes an honorable third place (as usual).
Needless to say, vSphere was also the winner among companies using more than one hypervisor next to Hyper-V. If we total the big three hypervisors, they account for over 90%, which makes the upcoming version of System Center Virtual Machine Manager the perfect choice. VMM 2012 treats all three hypervisors as equals in terms of management and deploying VMs & services. The majority use Hyper-V as their only hypervisor. Good job!
The Hyper-V-only people chose Hyper-V primarily because they have a Microsoft-centric network. They often already owned a license for an operating system that includes Hyper-V. Low cost and familiarity with the GUI and with how Windows is managed contributed to this choice. Home run!
Another reason that contributed to choosing Hyper-V was the relatively high number of companies with software subscriptions with update rights. Next is the free Hyper-V Server, which has all the cool features that are in the paid Hyper-V Core or Windows versions (Memory to the Max, Cores to the Max, Clustering, Live Migration, Dynamic Memory, RemoteFX to name a few).
Very remarkable is the answer to who deployed the Hyper-V infrastructure. The vast majority were able to complete their deployment using their own internal IT force. I wonder why Microsoft didn’t make installing Hyper-V a lot more complicated, just to serve some of us consultants and system integrators ;-)
Clearly some kind of expertise was required, because almost the same number of people answered that there was somebody with enough expertise to oversee the design and implementation. What we didn’t ask was whether that person was a colleague or an external consultant.
The final three questions of the first section, Why Hyper-V?, revealed that 65% of the virtualization projects were complete. This is partly because 15% had only just started, and only a small minority had technical issues. More than 75% never even had to consult Microsoft for their virtualization project.
The number of deployed Hyper-V hosts varied widely but about 50% had only deployed 1 to 5 hosts. About 10% have deployed very large numbers between 32 and 150+ hosts.
Not surprisingly, two thirds have deployed clusters for all or some of their Hyper-V hosts. One third did not have clustered Hyper-V hosts. They either only have 1 host or might not have learnt about the very easy-to-implement Failover Clustering feature in Windows Server 2008 R2, Server Core and, heck yes, also in the free Hyper-V Server R2.
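For anyone who hasn’t tried it yet, here is a minimal sketch of standing up such a cluster with the built-in PowerShell cmdlets on a full installation. The server names, cluster name and IP address are placeholders, not anything from the survey.

```shell
# Enable the Failover Clustering feature (on Server Core and the free
# Hyper-V Server, enable it with DISM /Online /Enable-Feature instead).
Import-Module ServerManager
Add-WindowsFeature Failover-Clustering

# Validate the candidate nodes first, then form the cluster.
Import-Module FailoverClusters
Test-Cluster -Node HV01, HV02
New-Cluster -Name HVCLUSTER -Node HV01, HV02 -StaticAddress 10.0.0.50
```

Running Test-Cluster before New-Cluster is worth the few minutes: a validated configuration is also what Microsoft expects for a supported cluster.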
The number of nodes in one cluster (currently a maximum of 16) is between 3 and 12 for 33% of the responders. The majority (36%) have one Hyper-V cluster, and another 30% have anywhere from 2 to more than 5 clusters in their facility.
If we take into account that over 70% of Hyper-V hosts are clustered, nearly all of them have implemented Cluster Shared Volumes (CSV). That gave us a good number of replies to the question of how they had designed their CSVs. There are quite a few interesting design considerations, which can also be found in Aidan Finn’s whitepaper Planning Hyper-V CSV and Backup. The single one that needs special mention is the 5% that create a new CSV when the moon is full :-)
Naturally, storage is an important topic, especially with clusters. If we discard the 32% that don’t use clusters, the majority use either iSCSI or Fibre Channel. Even without adding up the separate answers for specific iSCSI implementations, iSCSI is the clear winner. If we relate that to the requirement for low cost, this is easy to understand.
Several of the following questions relate to disaster recovery (DR). Less than one quarter have access to a disaster recovery site, and if they have one, it is one of the branch offices. About 10% rent space in a data center.
If we look at those who have implemented some sort of DR, the two most used scenarios are SAN-based replication and using backup software to replicate to a secondary site. DPM-2-DPM is an example of the latter. Less than 5% consider tape shipment to another location a DR solution.
Now we come to the challenges of deploying Hyper-V. Even though responders say they never had to call in Microsoft Support, they may have solved their issues by way of Bing, ending up on some of the cool Hyper-V blogs that are beginning to flourish. The responders were allowed up to 3 issues, so this made for a nice and very representative list. Who’d be surprised to learn that most issues dealt with backup, networking and NIC teaming? NIC teaming is not supported by Microsoft, so they would have to call the OEM. Could these be the topics that Microsoft has addressed in the next version of Hyper-V? We’ll soon know! An odd last place is taken by security. Could it be that administrators still don’t care about this? I sincerely hope not. On the other hand, maybe this is because Hyper-V was designed with security in mind. Surprisingly, snapshots only account for 3% of all issues. CSV for less than 5%? Also note the answer “Everything went perfectly”, which takes almost 7%.
Many of these topics are covered well by our blogs.
The most popular new Hyper-V feature is of course Dynamic Memory. Even on my 16GB mobile cloud laptop I use Dynamic Memory extensively. It helps me easily run a dozen guests for demo, educational and testing purposes. Try that with any of the other hypervisors (on your laptop). Third-party NIC teaming has improved and now takes second place. Virtual Machine Queue (VMQ) is obviously a lesser-known new technology.
Another debatable configuration option is the installation of anti-virus in the Hyper-V parent partition. More than 55% don’t have anti-virus in the root, and a good number have no anti-virus at all. A third do protect the parent partition and know how to configure it correctly. Some tried it out but removed it after bad experiences; others found out about the correct exclusions. Nevertheless, this is a topic that receives neither a full Yes nor a full No. It depends on the authority of the security officer, the absence of one, or the tenacity of the consultant.
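If anti-virus does go on the parent partition, the usual guidance is to exclude the Hyper-V worker processes and the VM storage locations from scanning. A sketch of that list as a PowerShell fragment; the paths shown are the Windows defaults, so adjust them to wherever your VM configuration files and VHDs actually live, and feed the lists into your own AV product’s exclusion settings.

```shell
# Typical Hyper-V exclusions for anti-virus on the parent partition.
# Default paths shown; move them along with your VM storage.
$excludePaths = @(
    'C:\ProgramData\Microsoft\Windows\Hyper-V',              # VM configuration files
    'C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks',  # default VHD store
    '*.vhd', '*.avhd'                                        # virtual disks and snapshot diffs
)
$excludeProcs = @('Vmms.exe', 'Vmwp.exe')                    # management service, VM worker process

# Every AV product has its own way to configure exclusions;
# these lists are the input, not the configuration itself.
```

Skipping these exclusions is exactly how people end up in the “tried it but removed it after bad experiences” group.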
Furthermore, passthrough disks are largely ignored (74%), only 22% never use dynamic VHDs, and a large majority (84%) of Hyper-V hosts are in the same Active Directory as the virtual machines. If I may guess, only the larger private cloud or hosted cloud types of implementation have separate management domains for Hyper-V hosts and clusters. The number of multi-tenant implementations of Hyper-V is still only 15%, but we may assume that number will grow in the coming years.
Another conclusion we might draw is that security is not an underexposed topic: 77% of responders say that only a small group of people in IT hold full administrative rights on the Hyper-V hosts, and Microsoft security updates are installed every month (55%) or once per quarter (27%).
A minority of 20% never use snapshots, or checkpoints as they are called in VMM. Could it be that this feature has improved a lot in R2? In my experience, snapshot-related problems can be very serious when they happen and take a long time to fix. This is certainly a subject for improvement. Again, we will soon know!
Networking answers may hold a few surprises: 41% never use VLANs, 72% don’t use firewalls to isolate guests from each other, 69% don’t use guest clusters and 70% don’t implement network load balancing between virtual machines. Apart from external virtual networks, both the internal and private virtual networks are used by about half of all respondents. Not surprising if we recall that Hyper-V is used a lot for testing and development too.
We have now come to the end of the second section. In the next blog we will examine the last three sections:
The Year is 2012
Hopefully you have enjoyed this first analysis of The Great Big Hyper-V Survey of 2011. Please help us make the results known by tweeting and blogging about this survey and its results. At the end I will publish a document with all questions and answers so you get access to all replies.
Let me finish by stressing that this survey was organized independently of vendors. Microsoft had no knowledge of this survey being held. It is simply the result of three very curious MVPs.