Powered by System Center
Especially for those who want to manage their Windows Server 2012 servers and clusters, an important moment has come. System Center 2012 SP1 is not only RTM but also available for download on TechNet. Several of our projects were delayed because we needed the SP1 versions of Virtual Machine Manager 2012, Data Protection Manager 2012 and several other components to manage Windows Server 2012 Hyper-V and Hyper-V Server 2012 clusters. The long wait is over!
Because the new Windows operating system introduced such a shipload of new features, you can imagine what this meant for developing SP1.
It is good to know that some of the provided ISOs contain the full product including SP1, which makes a completely fresh install easier.
Release notes and documentation SCVMM 2012 SP1: http://technet.microsoft.com/library/gg702209.aspx
Installation Guide: http://technet.microsoft.com/library/gg610669.aspx
Release notes and documentation SCOM 2012 SP1: http://technet.microsoft.com/library/hh561709.aspx
Installation Guide: http://technet.microsoft.com/library/hh298609.aspx
[Updated: see ]
HP bought 3PAR several years ago and, as I expected, the recently announced HP 3PAR StoreServ 7000 series is intended to replace the HP EVA. There is even a data migration solution to make the transition as easy as possible. Of course I looked at what 3PAR has to offer in terms of Windows Server 2012 features and integration with Hyper-V and System Center Virtual Machine Manager 2012.
The 3PAR StoreServ 7000 datasheet tells us this:
In Windows® Server 2012 environments, built-in, fine-grained virtualization, system-wide striping, and support for multi-tenancy give you the ability to consolidate mixed workloads onto a single HP 3PAR StoreServ 7000 system. With Windows Server 2012 Offload Data Transfer (ODX) and the HP 3PAR StoreServ 7000, you can migrate large files—such as databases or video files—up to seven times faster with near-zero network impact due to zero-detection capability integrated into the HP 3PAR ASIC. HP 3PAR Thin Technologies and Windows Server 2012 automatic reclamation of storage automate storage growth and shrinkage, while HP 3PAR Adaptive Optimization Software delivers the right QoS to the right data at the right time in Windows Server 2012 environments.
Not only does the new 3PAR StoreServ 7000 support Offloaded Data Transfer (ODX), it also supports the Storage Management Initiative Specification (SMI-S) and a new Web Services API. SMI-S is a standard that Microsoft has promoted vigorously while hardly anybody seemed to believe in it. At the end of the day, most storage vendors now comply with this SNIA standard. In our Microsoft Private Cloud Computing book, a good part of the storage fabric chapter is devoted to SMI-S integration in System Center Virtual Machine Manager 2012.
Lower-end and non-comparable storage systems such as Dell’s EqualLogic only promise a 2x improvement on file copy with ODX. The 7x promise of 3PAR’s ODX capability sounds good, but of course we have to verify this in practice. At TechEd we were shown videos comparing a 10GB regular copy (about 3 minutes at 80-90MB/sec) with a 10-second copy with ODX. I’m not sure whether Microsoft used NetApp or EMC for the demo.
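To put the demo numbers in perspective, here is a small sketch that turns copy times into effective throughput and a speedup factor. The 10 GB size and the 3-minute/10-second timings are the figures quoted above; everything else is just arithmetic:

```python
def copy_speedup(size_gb: float, regular_seconds: float, odx_seconds: float):
    """Compare effective throughput of a regular copy versus an ODX copy."""
    size_mb = size_gb * 1024
    regular_mbps = size_mb / regular_seconds  # effective MB/s without offload
    odx_mbps = size_mb / odx_seconds          # effective MB/s with ODX
    return regular_mbps, odx_mbps, regular_seconds / odx_seconds

# 10 GB in ~3 minutes versus 10 seconds with ODX, as shown at TechEd
regular, odx, speedup = copy_speedup(10, 180, 10)
print(f"{regular:.0f} MB/s vs {odx:.0f} MB/s -> {speedup:.0f}x faster")
```

Note that a full 3 minutes for 10 GB works out to roughly 57 MB/s, somewhat below the 80-90 MB/s quoted for the demo, which is why these marketing multipliers always deserve verification on your own hardware.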
HP’s P4500 requires SANiQ v10.0 which is finally available and at a minimum will support Windows Server 2012. Many customers who invested in this platform and plan to upgrade to Windows Server 2012 Hyper-V will no doubt be very disappointed to learn that there is no ODX, no SMI-S, no UNMAP, no Dedupe. At least I cannot find any references to these important integrations for Windows Server 2012, Hyper-V and System Center. A 60-day trial for HP StoreVirtual VSA and documentation can be obtained here:
@WorkingHardInIT pointed out that the Dell EqualLogic number for ODX might even be 5x the speed of a regular non-offloaded copy. I based my number on a demo I had seen. In a Dell TechCenter blog on Windows Server 2012 and EqualLogic, a 5x improvement in copy speed is mentioned. As I stated for the 3PAR number, it is always best to verify these numbers in your own practice.
During the last two months hundreds of Hyper-V users have taken the time and effort to participate in The Great Big Hyper-V Survey of 2012 which closed last week. We cannot share any details about the results just yet as we are still counting and analyzing the results. We’ll let you know as soon as we can. Promise!
As an incentive we offered a couple of Microsoft Private Cloud Computing books. Because Aidan Finn said he would select a winner for his book based on the responses he got on his own blog and on Twitter, I still have two books to give away. One is offered by co-author Damian Flynn and the other by me.
It was quite interesting to see how much attention has been raised on the web, on Twitter, on blogs, and perhaps also on Facebook (I can’t tell, because I got fed up with FB and closed down my account about a year ago). I searched Bing for “The Great Big Hyper-V Survey” and got 15,100 results; Google returned 72,600 results. Of course Google also placed an ad on top, but please rest assured that this survey has no connection with Microsoft whatsoever. They are probably just as curious as you and me, for that matter.
All right, now it is time to pick a winner… (please click on More)
Altaro, developers of Altaro Hyper-V Backup, are giving something back this holiday season: every Microsoft Hyper-V admin gets 50 licenses of their desktop backup software, up to $2,000 worth of software.
There’s no catch! All you need to do is send them a screenshot of Hyper-V that proves that you use Hyper-V and they will send you the licenses which you can use on your own machines or give out to friends, family or colleagues, to use at work or home. The giveaway ends on Dec 24th 2012.
To claim your 50 licenses check out http://www.altaro.com/hyper-v/50-free-pc-backup-licenses-for-all-hyper-v-admins
To visit their Hyper-V portal go to http://www.altaro.com/hyper-v/
For more info about Altaro Hyper-V Backup check out http://www.altaro.com/hyper-v-backup/?LP=Xmas
As most of our readers know by now, we are currently asking users of Hyper-V to complete the second edition of The Great Big Hyper-V Survey. Last year several hundred people took part in the survey.
Our original plan was to close the survey after one month, but after some consultation with Aidan Finn and Damian Flynn, we decided to keep the survey open until December 1st.
You can enter the survey here:
Meanwhile I had also offered a free copy of our Microsoft Private Cloud Computing book, which is a great resource on several System Center 2012 products and deployment of Hyper-V in a Private Cloud scenario. Because we have kept you waiting for another month, I am pleased to triple the number of books offered as a prize.
Both Aidan Finn and Damian Flynn were so kind as to each offer an extra copy, so now we have three Microsoft Private Cloud Computing books available.
So here is another opportunity for you to win one of the three books, or to enter the survey if you have not been able to do so yet. We stress that this survey is read by Microsoft and that we are still able to influence what is important for the next release of Hyper-V. Be our guest!
All you need to do is tweet or blog with a reference to this blog:
Many businesses are changing their perspective on cloud services. Many of the arguments that prevented acceptance of the cloud are slowly starting to crumble. Internet connectivity is a common good, bandwidths are increasing and even the legal aspects are being dealt with. Instead of being a threat, the cloud now opens up possibilities. Microsoft anticipated this change years ago with numerous game-changing developments. They have set the bar with public cloud offerings like Office365.com, Outlook.com and Bing.com, just to name a few globally deployed services. They have raised the bar even further with their cloud operating system, Windows Azure. Providing these services to millions of people has given Microsoft great insight.
Leveraging the knowledge learned through their public cloud offerings, Microsoft has created a great platform for the private cloud with Windows Server 2012 and System Center 2012. This scalable solution provides businesses an elastic environment of pooled resources with self-service capabilities and usage based metering.
Microsoft now takes it to the next level by bringing Windows Azure to Windows Server. This solution mainly focuses on service providers, but if you do not consider yourself one, please continue reading. The new service also has immense potential for enterprise organizations and might even trigger IT organizations to think about providing hosted services.
Windows Azure for Windows Server consists of a Service Management Portal and a Service Management API. Microsoft has released a beta of Windows Azure for Windows Server with two services and will continue to add services over time. The first two services are high-density website hosting and virtual machine provisioning and management. I will focus on the virtual machine provisioning service in this blog.
Microsoft provides an end-to-end solution enabling service providers to create an IaaS (Infrastructure as a Service) offering. This end-to-end solution consists of four layers.
- Windows Server 2012: The virtualization layer
- System Center VMM 2012 SP1: The management layer
- Service Provider Foundation for SCVMM: Multitenancy for SCVMM and a REST OData API
- Windows Azure For Windows Server (WA4WS): The Service Management Portal and API
Windows Server 2012 is the most cloud-capable operating system ever released. Combined with System Center Virtual Machine Manager 2012 as the management tool, they form the building blocks of the modern private cloud, providing scalability and availability features that meet the requirements of even the most demanding environments. Service providers running hundreds or even thousands of workloads for their customers have these requirements. Leveraging the power of the private cloud, Microsoft enables multitenancy with the Service Provider Foundation (SPF). SPF also exposes an industry-standard RESTful OData web service that developers can use to program against System Center VMM 2012. This means that service providers can retain their current portal solutions and still benefit from the private cloud platform provided by Microsoft.
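Because the SPF web service speaks plain OData over HTTPS, a portal can talk to it from any language. As a rough sketch, this builds an OData query URL for listing virtual machines; the port, service path and entity names below are illustrative assumptions, so check your own SPF installation for the actual service address:

```python
from urllib.parse import urlencode

def spf_vm_query_url(spf_server: str, stamp_id: str) -> str:
    """Build an OData query URL against an SPF web service.
    The service path and entity set name are illustrative placeholders."""
    base = f"https://{spf_server}:8090/SC2012/VMM/Microsoft.Management.Odata.svc"
    # $filter narrows results to one VMM stamp; $top limits the page size
    query = urlencode({"$filter": f"StampId eq guid'{stamp_id}'", "$top": "10"})
    return f"{base}/VirtualMachines?{query}"

print(spf_vm_query_url("spf.contoso.com", "00000000-0000-0000-0000-000000000000"))
```

Any HTTP client that can attach the right credentials can then issue a GET against such a URL and receive the results as an OData feed.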
Windows Azure hosts tens of thousands of customers, and hundreds are added to the platform each day. As you can imagine, the management portal, the primary tool for this platform, is used heavily. Since the launch of Windows Azure on February 1, 2010, the management portal has evolved to fulfill a diversity of requirements, from browser independence to easy administration, monitoring and diagnostics. Microsoft has taken these developments to complete their end-to-end offering for service providers and created a Service Management Portal combined with a Service Management API for Windows Server, both now available in beta.
Project Virtual Reality Check is an initiative by Ruben Spruijt and Jeroen van de Kamp to investigate VDI infrastructure compared to a virtualized SBC infrastructure (among other things). You can read more about VRC here:
So far, Project Virtual Reality Check has been a massive undertaking which has generated much interest over the years. The results have been presented at all the big technology events around the world, and their findings and best practices have been published on different occasions. However, one thing was clear: many discussions in the VDI and SBC space are not just about performance best practices and product comparisons.
As a result the VRC team decided to boot up the first edition of the Project Virtual Reality Check “State of the VDI and SBC union” survey. http://www.brianmadden.com/blogs/jeroenvandekamp/archive/2012/09/26/announcing-project-vrc-s-quot-state-of-the-vdi-and-sbc-union-quot-survey.aspx
It was the VRC team’s aim to ask all the relevant questions, both functional and technical. These questions range from “What are the most important design goals set for this environment?” to “Which storage is used?” to “How are the VMs configured?”. The questions are comprehensive and relevant to everyone involved in building VDI and SBC environments. The aim of Project VRC is to repeat the survey at least once a year. This will allow us to see how our industry is changing in practice.
Within a couple of weeks more than 600(!) people started the survey:
The VRC team asked us to help promote this survey, which we are glad to do. So if you are in any way involved in the Hyper-V and VDI community, please go ahead and participate in this survey!
If you want to contact the guys behind VRC:
Jeroen van de Kamp | @TheJeroen
Ruben Spruijt | @Rspruijt
If you install Windows Server 2008 R2 with Hyper-V, the pagefile will be automatically managed, which usually means a 1:1 ratio with physical memory. However, VMs use their own pagefile and do not use the host paging mechanism. If you have lots of memory, chances are your pagefile will be way too big.
If we compare two servers with 16GB of memory each, one installed with Hyper-V R2 and the other installed with Windows Server 2012 Hyper-V we clearly see different numbers. Both servers have run for several days.
In the figure on the left you see that Windows has 16GB allocated but recommends 24GB (1:1.5).
In the figure on the right you see that Windows Server 2012 is much more intelligent and allocates considerably less memory for paging. It is fairly consistent with the best practice in R2 of setting a fixed maximum for the pagefile between 4 and 6GB.
Although I haven’t found any best practices yet for the pagefile in Windows Server 2012 Hyper-V, this might suggest that you can leave virtual memory on autopilot in the latest server edition of Windows. Of course we need a bit more field experience before we can really call this a best practice.
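The gap between the two behaviors described above is easy to quantify. This sketch applies the roughly 1.5x-RAM recommendation observed on R2 to a 16GB host (the ratio is taken from the figures above, not from any official sizing formula):

```python
def r2_recommended_pagefile_mb(ram_mb: int) -> int:
    # Automatically managed pagefile on R2: roughly 1.5x physical RAM,
    # the ratio observed in the screenshot of the 16GB host
    return int(ram_mb * 1.5)

RAM_MB = 16 * 1024  # a 16GB Hyper-V host
print(r2_recommended_pagefile_mb(RAM_MB))  # 24576 MB, i.e. the 24GB recommendation
```

On that same host, the manual best practice of a fixed 4-6GB maximum saves roughly 18-20GB of disk compared with the auto-managed recommendation.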
If you are looking for a very thorough blog on pagefile settings in R2, please visit this blog.
Here is another one on technet:
And finally a post about the Hyper-V Dynamic Memory and Host Memory Setting in R2:
Host Memory Reserve
The Host Memory Reserve, which reserves memory for the processes in the parent partition, can be found in the following registry key:
Registry Key: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization
Value Name: MemoryReserve
Value Type: REG_DWORD
According to this post, the Host Memory Reserve in R2 is calculated in MB as follows: 384MB + (Memory in GB * 64)
So a 16GB host in R2 will have a default Host Memory Reserve of 384MB + (16 * 64) = 1,408MB
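The R2 formula can be captured in a few lines; this is only a sketch of the documented R2 heuristic, not something that reads or writes the registry:

```python
def r2_host_memory_reserve_mb(physical_memory_gb: int) -> int:
    """Approximate default Host Memory Reserve on Hyper-V R2:
    a 384MB base plus 64MB per GB of physical memory."""
    return 384 + physical_memory_gb * 64

# A 16GB R2 host: 384 + (16 * 64) = 1408 MB
print(r2_host_memory_reserve_mb(16))
```

Note that this formula front-loads a fixed cost, so small hosts reserve proportionally more: a 4GB host still reserves 640MB, over 15% of its RAM.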
In two 16GB Windows Server 2012 hosts I looked at, the default Memory Reserve was 2048MB. We have to conclude that the formula used for Hyper-V R2 does not apply to Windows Server 2012.
I also looked at a Windows Server 2012 Hyper-V host with only 4GB, which had a currently allocated pagefile size of 704MB and a recommended pagefile size of 3,582MB. This small host did not have a Host MemoryReserve entry at all, probably because a default 2GB Memory Reserve would be disproportionate to the physical memory.
I had an interesting discussion about MemoryReserve with Michel Luescher who works for Microsoft Consultancy Services at Microsoft Schweiz. Michel is writing the chapters in Windows Server 2012 Hyper-V Installation and Configuration which deal with this subject.
IMPORTANT NOTE: Do not manually configure the Host MemoryReserve entry; leave this to Windows Server 2012, which will auto-configure the value on an as-needed basis.
In fact, any newly installed Windows Server 2012 server with the Hyper-V role does not have a Host MemoryReserve entry in the registry, and we still need to figure out under what circumstances this entry is added.
Take a look at Michel’s blog (German) on the subject:
A few days ago I promised to send someone a free Microsoft Private Cloud Computing book, provided they tweeted or blogged about The Great Big Hyper-V Survey of 2012.
Of course we want as many people to participate in this survey as possible. That’s why we have been rather persistent on Twitter and we are glad so many of you have already given attention to our survey. Several hundred Hyper-V users from all sizes of companies including several Fortune-500 companies have already contributed.
If you haven’t completed the survey yet, let me offer the book as an incentive.
But I haven’t explained the rules yet:
Of course fellow authors and colleagues of mine are not eligible for this prize. Sorry Aidan, Patrick, Damian!
You must have retweeted my tweets on TGBHSof2012, or you must have mentioned us in your blog
Mentions on Facebook are not valid (as I abandoned Facebook some time ago). Not sorry, Facebook :-)
If you retweeted I must be able to find you in this list which keeps track of the retweets:
If you blogged and mentioned this page you are eligible:
I will announce the winner on the day the survey closes: October 31st 2012, 19:00 CET