Hyper-V Tips and Tricks – NIC teaming – Easier in Windows 8!

For the remaining days of our 12 Days of Hyper-V Tips and Tricks, I’ll be focusing on new features that are coming in Windows 8. I’ve been using Hyper-V since it first shipped, and with each release, more and more of the “must haves” and “nice to haves” have been filling in, to the point that with Windows 8, I’m not looking for much more in my Virtualization solution. Some of my favorite things that are new in Windows 8 are:

  • Cluster Shared Volume (CSV) 2.0
  • In-box NIC Teaming
  • Storage Migration
  • Concurrent Live Migration
  • Hyper-V Cmdlets
  • Hyper-V Replica

Today, we’ll focus on NIC Teaming.

In last year’s MMS/Tech-Ed Hyper-V FAQ Tips and Tricks sessions, we had a few questions about NIC teaming, and Nathan Lasnoski wrote up this response regarding NIC teaming in Windows 2008 R2 SP1, posted here:

“How do I enable Hyper-V NIC teaming?”

Although Microsoft has offloaded this capability to the network card manufacturers, it is a capability that works, assuming you’ve configured the teaming software properly. There are several different types of load-balancing configurations (in Broadcom BASP and Intel’s teaming software):

  • Smart Load Balancing with Failover: This implementation is sort of like multicast, where all the switch ports have different MAC addresses, and it can theoretically be implemented without any switch changes. We’ve found it relatively easy to configure, but prone to network integration issues.
  • Link Aggregation (802.3ad): This implementation aligns with the IEEE 802.3ad (LACP) specification. In this configuration, all adapters receive traffic on the same MAC address, and you’ll need a switch that supports LACP. I’ve seen people have a lot of success with this option.
  • Generic Trunking (FEC/GEC) / 802.3ad-Draft Static: This implementation is similar to 802.3ad link aggregation, but instead of integrating with LACP, it uses a trunking mechanism at the switch level, such as EtherChannel. We’ve had success with this on Cisco, HP, and Dell switches. It has been the predominant option we’ve used because of its ease of configuration and because we’ve experienced very few issues with it. It should be noted that Intel NICs call this configuration “Static Link Aggregation”, as opposed to “IEEE 802.3ad Dynamic Link Aggregation”.

To configure the NIC teaming integration with Hyper-V follow these configuration steps:

  1. Install the Hyper-V role and clear the virtual networks
  2. Install and configure the teaming software
  3. Connect the team to a Hyper-V virtual network

Additional Tips:

  • We’ve found it useful to enable “VLAN Promiscuous Mode” if the feature is available, as that allows VLAN tagging to work properly.
  • Make sure to fully test your configuration before moving into production. This is especially true for live migration and access to teams from other networks or VLANs. Also, if you run into issues with virtual machine networking, make sure you aren’t running into an IC or hotfix issue that is unrelated to teaming.
  • We have tended to be very careful with offload features, often disabling them completely.

Teaming in Windows 8

In Windows 8, the story changes pretty dramatically. Microsoft is looking to support NIC Teaming natively in a totally vendor-agnostic way. Regardless of your NIC brand, you just run a simple cmdlet (or make a few clicks in a GUI if you prefer) and you have a team. Virtualization MVP Alessandro Cardoso just wrote a great post on NIC teaming here, and Virtualization MVP Didier Van Hoye just wrote a great post on NIC teaming here, so rather than re-post that content, I’ll direct you their way. They’re quick and easy reads, and I recommend taking a look.

Microsoft also just released a 34-page whitepaper this past week that goes deep into the new feature; you can find it here.

In my early testing, especially on the Beta, I’ve absolutely loved the new feature, and it’s now part of our default build process. You can set it up a couple different ways, depending on what you’re trying to achieve (teaming the guest NICs and host NICs separately, teaming all your NICs together and sharing between the host and guests, et cetera).
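To give a flavor of how simple it is, here’s roughly what the new in-box teaming looks like from PowerShell. This is a sketch based on pre-release bits, so cmdlet names and parameters may change before shipping, and the adapter names are placeholders for whatever Get-NetAdapter shows on your host:

```powershell
# Create a switch-independent team from two physical NICs
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Bind a Hyper-V virtual switch to the new team interface
New-VMSwitch -Name "Guest" -NetAdapterName "VMTeam" -AllowManagementOS $false
```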

There will be more to come on this soon, but for today, know that if you’ve avoided NIC teaming with Hyper-V to date due to the complexities of the 3rd Party implementations, take a fresh look at the new in-box LBFO feature. It may be just what you’re looking for!

Good Luck, and Happy Virtualizing!


Hyper-V Tips and Tricks – Inbox drivers are generally a bad idea

Day 6 in our continuing series of Hyper-V FAQs, Tips, and Tricks deals with inbox drivers.
Some might say that this one is obvious and goes without saying, but I’ve run into lots of people that have just installed Hyper-V as is, and begun to deploy workloads, so I think it’s important to bring this one up in case there are those out there that might skip this important step.

There are two places where I’ve personally seen inbox drivers wreak havoc on a deployment: Network Interface Cards (NICs) and Host Bus Adapters (HBAs). Especially with NICs, I’ve regularly seen a variety of issues, ranging from poor network performance in the guest and/or the parent partition to intermittent network failures in either. A lot of these issues involve RSS, Offload, Chimney, and other advanced networking features, and it’s almost a guarantee that you’ll see improvements if you update to the latest network drivers. I’ve found this to be especially true with 10 Gigabit network cards like the Intel X520.

We’ve also found issues with the inbox drivers on our HBAs in early Windows 8 testing, in both the Developer Preview and the Beta, and the first thing we do on a new install now is replace the inbox HBA drivers.

Historically, we’ve seen these issues in Windows 2008, Windows 2008 R2, and Windows 8, and I see no reason that this will change.

To reiterate, though it’s generally a good idea to always update inbox drivers on any physical server you deploy, I’ve found that it can be more critical on Hyper-V workloads due to the extra things that are going on around virtualized NICs (and in Windows 8, virtual HBAs as well).
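If you want a quick way to see which driver versions a host is actually running before and after an update, a sketch like this (using the standard Win32_PnPSignedDriver WMI class) does the trick:

```powershell
# List the driver version and date for every network device on the host
Get-WmiObject Win32_PnPSignedDriver |
    Where-Object { $_.DeviceClass -eq "NET" } |
    Sort-Object DeviceName |
    Select-Object DeviceName, DriverVersion, DriverDate
```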

This concludes Day 6 of the series. I’ll be posting the rest from sunny Las Vegas. If you’re heading there this week for MMS 2012, feel free to give me a shout, and don’t forget to check out SV-B313 “Hyper-V FAQs, Tips, and Tricks!” on Wednesday at 4PM.

Good Luck, and Happy Virtualizing!


Dynamic VHDs – Yes, you can use them in production (most of the time)!

For those following along, yes we missed a couple days there in our 12 days of Hyper-V FAQs, Tips, and Tricks. In my defense, I was building out a new scale-out production Hyper-V Cluster on Win8, and it was all so exciting, I lost all track of time for a day or two. 🙂

More on Windows 8 soon.

But back to the focus – Tips and Tricks.

The Dynamic VHD topic is one that’s near and dear to my heart, and also another one that’s a bit controversial. Depending on who you ask, the answers can be “it’s fine – use Dynamic VHDs and don’t think twice” to “don’t use them at all in production“. I lean toward the former statement (with some caveats), and I’ll do my best to explain why.

First, I want to state that I strongly disagree with the statement “You should use fixed disks in production, but dynamic disks are OK for test and dev”. This blanket statement, though great as a type of CYA for someone who either doesn’t know the details of a deployment or doesn’t have time to go into them, is far from accurate in many (if not most) of the deployments I’ve come across.

I also want to start out by saying that a fixed VHD will always outperform a dynamic VHD (sometimes by 10%, sometimes more), just like a Corvette will always outperform a Ford Fiesta. However, I drive a Ford Fiesta, because it’s a heck of a lot cheaper, and it provides 100% of what I need to get there, and when I drive to work with mostly 40 mph speed limits along the way, I’m never the least bit aware of any limitation in performance. I can take the difference in money saved and apply it to whatever else I want.

Likewise with Dynamic VHD, when you look at the cost of storage, and balance that against performance requirements for your application, more often than not, I think you’ll find that the money you can save by going with Dynamic VHD can significantly outweigh the benefits of using Fixed VHD for the workload.

In my role with Indiana University’s Auxiliary Information Technology Infrastructure team, I support about 300 virtual machines running a variety of workloads, from File, Print, and IIS to SQL, Oracle, SharePoint, and System Center. We run all of these workloads in Dynamic VHD, all on Cluster Shared Volumes, and our performance is acceptable. Could it be 10% (or more) faster? Yes. Would that potentially cost us tens or hundreds of thousands of dollars more in storage (or force us to constantly re-evaluate disk size and try to grow the disk via scripts)? Yes. From a cost/benefit analysis, we chose Dynamic VHD.

From a tips and tricks perspective though, make sure you monitor the disk space where the VHDs live! The number one issue people hit with Dynamic VHD (as well as with snapshots) is that they fail to watch the disk or LUN from the parent, and after humming along for a year, find themselves out of disk space with a bunch of crashed VMs. Make sure you stay ahead of your space requirements on the Hyper-V side. We personally have a thin-provisioned SAN (Dell Compellent), and I set all our LUNs to be 2TB, so we never run into issues from that perspective, but if you aren’t thin provisioning your LUN, take extra care there to monitor your space.
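As a sketch of the kind of monitoring I mean, here’s a simple check you could schedule on the parent that warns when any volume drops below 10% free (the threshold and the reporting are placeholders; use whatever monitoring system you already have if you can):

```powershell
# Warn on any local volume (including CSV mount points) under 10% free
Get-WmiObject Win32_Volume -Filter "DriveType=3" | ForEach-Object {
    if ($_.Capacity -gt 0 -and ($_.FreeSpace / $_.Capacity) -lt 0.10) {
        Write-Warning ("{0} is down to {1:N1} GB free" -f $_.Name, ($_.FreeSpace / 1GB))
    }
}
```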

I’ll close with my CYA caveat as well – for some scenarios, as Aidan described in his post, you can hit perf issues. It depends a lot on the kind of storage underneath what you’re deploying to. I can safely say that we’ve not seen the fragmentation issue Aidan describes, and I don’t actually think VHDs would grow like that on disk to begin with, for a few reasons: one is that Windows doesn’t just write chunks of data to the disk from the beginning to the end, for reasons like the one he mentions; another is that if you’re using a big virtual SAN array, those blocks are spread across tens of disks anyway.

So, as I’ve said before, I’ll say it again. Test dynamic VHD in your environment. If it performs well, go for it. There are lots of guidance docs recommending against it depending on the scenario, but most of those are for CYA reasons. Choose wisely, but don’t be afraid to try these out and use them in production. I do, and am happy we made the leap.

Good Luck, and Happy Virtualizing!


Hyper-V Scripting in Windows 8

For last year’s FAQs, Tips, and Tricks, I detailed one way you can quickly spin up new VMs via scripting, using System Center Virtual Machine Manager’s “Rapid Provisioning”. In day 3 of our 12 days of Tips and Tricks, I’m going to show you what this might look like in Windows 8, and this can all be done using the in-box Hyper-V and Failover Clustering cmdlets without any need for System Center.

First of all, a quick look at the new Hyper-V cmdlets.

Warning – this example is based on pre-release Windows 8 code, and is subject to change before shipping. This is just a point in time example that may not work in 6 months. Use at your own risk, and validate in the lab.

New-VM, Set-VM

At the most basic level, you only really need a couple cmdlets to get your VM up and running (and one additional cmdlet to cluster it): New-VM and Set-VM.

$newvm = New-VM -Name $VMNAME -Path C:\ClusterStorage\Volume1\ -VHDPath "C:\ClusterStorage\Volume1\$VMNAME\c.vhdx" -SwitchName "Guest"
Set-VM $newvm -ProcessorCount 4 -DynamicMemory -MemoryMinimumBytes 1GB -MemoryStartupBytes 2GB -MemoryMaximumBytes 8GB
Set-VMNetworkAdapterVlan -VMName $VMName -VlanId 123 -VMNetworkAdapterName "Network Adapter" -Access
Add-ClusterVirtualMachineRole -Cluster MYCLUSTER -VMName $VMName

As always, the beauty of PowerShell is that you can take this little snippet and bring it into something much more wondrous and magical with just a little bit of work. In my case, I have a workflow that will do the following:

  1. Create a PS-Session to a host within my cluster
  2. Create a new folder on my Cluster Shared Volume based on the VM parameter
  3. Copy the Gold Image VHD to the Clustered Shared Volume
  4. Mount the VHD on the parent partition
  5. Inject the Computer Name and IP Address, and “IP of the Provisioning Server” information into the VHDx (to create a firewall rule in real-time)
  6. Unmount the VHD
  7. Create the VM (using above code)
  8. Power on the VM
  9. Wait a bit of time for the VM to run through its autounattend and come online
  10. Remotely connect to the VM through WS-MAN and WMI to finish build (domain join, Windows Updates, role/feature installation)

With about 25 lines of PowerShell, you can have your own highly flexible Scripted Cloud solution without installing any additional software! (My script is actually about 300 lines, due to lots of debugging code, error handling, et cetera, but the work is done in about 25 lines.)

An example of how it all works:

  • Create a PS-Session to a host within my cluster
$s = New-PSSession -ComputerName MYHYPERVBOX
Invoke-Command -Session $s -ScriptBlock {
param ($VMName, $IP,$ProvIP)
}  -ArgumentList $VMName,$IP,$ProvIP

The following steps happen within the PS-Session braces above.

  • Create a new folder on my Cluster Shared Volume based on the VM parameter
mkdir C:\ClusterStorage\Volume1\$VMNAME
  • Copy the Gold Image VHD to the Clustered Shared Volume
COPY D:\Win8_Gold.vhdx C:\ClusterStorage\Volume1\$VMNAME\C.VHDx
  • Mount the VHD on the parent partition
$mountedvhd = Mount-VHD C:\ClusterStorage\Volume1\$VMNAME\C.VHDx -Passthru -NoDriveLetter
Add-PartitionAccessPath -DiskNumber $mountedvhd.DiskNumber -PartitionNumber 2 -AccessPath C:\VHDMountPoint
  • Inject the Computer Name and IP Address, and “IP of the Provisioning Server” information into the VHDx
ac  C:\VHDMountPoint\Build\computername.txt $VMName
ac  C:\VHDMountPoint\Build\ip.txt $IP
ac  C:\VHDMountPoint\Build\expresssetup1.ps1 "netsh advfirewall firewall add rule name=`"Temp-InProvisioning`" action=allow direction=in profile=All remoteip=$ProvIP"
ac  C:\VHDMountPoint\Build\expresssetup1.ps1 'shutdown -r -t 1'
  • Unmount the VHD
Remove-PartitionAccessPath -DiskNumber $mountedvhd.DiskNumber -PartitionNumber 2 -AccessPath C:\VHDMountPoint
Dismount-VHD -DiskNumber $mountedvhd.DiskNumber
  • Create the VM
$newvm = New-VM -Name $VMNAME -Path C:\ClusterStorage\Volume1\ -VHDPath "C:\ClusterStorage\Volume1\$VMNAME\c.vhdx" -SwitchName "Guest"
Set-VM $newvm -ProcessorCount 4 -DynamicMemory -MemoryMinimumBytes 1GB -MemoryStartupBytes 2GB -MemoryMaximumBytes 8GB
Set-VMNetworkAdapterVlan -VMName $VMName -VlanId 123 -VMNetworkAdapterName "Network Adapter" -Access
Add-ClusterVirtualMachineRole -Cluster MYCLUSTER -VMName $VMName
  • Power on the VM
Invoke-Command -Session $s -ScriptBlock {param ($VMName) Start-VM $vmname } -ArgumentList $VMName
  • Wait a bit of time for the VM to run through its autounattend and come online
while (!(Test-Connection -ComputerName $IP -Count 1 -ErrorAction SilentlyContinue)) {
    Write-Host "Waiting for machine to come online"
    sleep 60
}
  • Remotely connect to the VM through WS-MAN and WMI to finish build (domain join, Windows Updates, role/feature installation)
    This part is a bit more complicated and the subject of a later blog post. 🙂

I hope this snippet of PowerShell gets you excited about the flexibility and endless possibilities of setting up your own scripted cloud in Hyper-V with Windows 8.

Good Luck, and Happy Virtualizing!


Hyper-V Tips and Tricks – C-States – Still not all they’re cracked up to be.

It’s been almost a full year since I wrote this post on C-States, but I wanted to post an update and state that it still applies. Though most of the issues have been ironed out these days, in early testing of Windows 8, a few issues have crept up around C-States again, and I’d recommend that if you ever run into odd unexplained performance issues on your deployment, this would be the first thing to double-check. One additional comment I wanted to add that I didn’t really mention in the post last year is that there might be multiple settings in your BIOS, such as one for C1E. You’ll want to disable C1E in addition to C-States.

I had an interesting chat a month or two back with an engineer from a certain related technology company who shall remain nameless, in which he argued that having C-States enabled on your server is like buying a high-performance race car but only driving below 55 MPH to save gasoline (or some metaphor like that, it doesn’t really matter), and that if you wanted performance, you should disable it. The irritating thing about this is that there’s a ton of press about energy efficiency and green initiatives and blah blah blah, going on about how great these Intel procs are at managing power, and how they can sleep or power down unused cores, etc., but in the end, that’s all total marketing: you just disable all of it and run with your amp turned up to 11.

In the end, the best way to conserve power is to load each host as full as you can load it, and then power off the spare hosts, rather than spreading your workloads across all your hosts and letting them run at 50%.

This concludes day 2 of our “12 days of Hyper-V Tips and Tricks”. If you’re coming to MMS 2012, be sure to check out SV-B313 -Hyper-V FAQs, Tips, and Tricks!

Have any questions? Feel free to contact me directly, or leave a comment below.

Good Luck, and Happy Virtualizing!


Hyper-V Dynamic Memory Tips and Tricks

One of the great additions of Windows 2008 R2 SP1 was the introduction of Dynamic Memory (DM) support for virtual machines. Since memory is generally the limiting factor in most stock Hyper-V deployments, Dynamic Memory can help to significantly increase VM density in many cases, especially those where you always configure a VM’s memory based on the “recommended configuration” from a vendor, only to find that the server generally uses about 40-50% of that memory.

Dynamic memory is very simple to use. You just go into the VM settings and flip the radio button from “Static” to “Dynamic”, and then you can adjust “Startup RAM”, “Maximum RAM”, and “Memory Buffer”. Rather than go into those details here, I’ll simply reference this guide.
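As an aside for those testing Windows 8, the same settings can also be flipped from PowerShell with the new Hyper-V cmdlets (pre-release, subject to change, and “MYVM” is a placeholder name):

```powershell
# Enable Dynamic Memory on a stopped VM and set startup/min/max plus a 20% buffer
Set-VMMemory -VMName "MYVM" -DynamicMemoryEnabled $true `
    -StartupBytes 2GB -MinimumBytes 1GB -MaximumBytes 8GB -Buffer 20
```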

The main thing I want to point out from a “FAQs, Tips, and Tricks” perspective is that there are a few cases where you might encounter unexpected behavior:

  1. Software Installation Doesn’t Meet Minimum Memory Requirements
  2. Applications that perform their own memory management
  3. Windows 2008 Standard and Web Edition
  4. Available memory in the parent partition

Software Installation Minimum Memory Requirements

Because of the way Hyper-V sets up the “Startup RAM” on a VM, one scenario you run into on occasion is that software won’t install because the prereq checker doesn’t think it has enough memory to complete the installation (since Windows doesn’t allocate the memory until it’s needed). The quickest and easiest way around this issue is to crank the memory buffer up in the Hyper-V settings to something like 200%, run the installer, and then set the memory buffer back down to the default. This quick workaround doesn’t require a reboot and takes less than 10 seconds. Your other options are to temporarily disable DM or to artificially use up some extra memory in the guest, but I find the first option the quickest and easiest.
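On Windows 8, you can even script the buffer bump rather than clicking through the GUI; a rough sketch with the pre-release cmdlets (“MYVM” being a placeholder):

```powershell
# Temporarily crank the memory buffer so the installer's prereq check passes
Set-VMMemory -VMName "MYVM" -Buffer 200

# ...run the installer inside the guest, then drop back to the default...
Set-VMMemory -VMName "MYVM" -Buffer 20
```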

Applications that perform their own memory management

You’ll find over time that some applications and some databases (e.g. Oracle and SQL) don’t necessarily seem to perform the way you might expect once you enable DM. Generally, this behavior stems from the fact that memory is configured from within the app or DB, so the app will just keep pushing until it finds it can’t get any more memory or it reaches the max level that’s been set.

In such applications, you can leave DM enabled and still receive some benefit, but you’ll want to go in and adjust the in-guest memory settings to line up with the Hyper-V settings. For example, say you have a multi-instance SQL Server on a VM. With a bit of testing, you find that a particular database performs fine with 1 GB of RAM, but left to its own devices, the SQL instance will consume 3 GB of RAM. Turning on DM here doesn’t necessarily have the desired effect from an “optimal usage” perspective, so you’ll want to go into each instance and configure a memory cap. Once SQL is capped appropriately, the OS will continue to use what it needs from a DM perspective, and all will play nicely.
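To illustrate the kind of in-guest cap I mean, here’s what capping a SQL instance at 1 GB might look like. This is just a sketch: it assumes the SQL PowerShell module that provides Invoke-Sqlcmd is available in the guest, “MYVM\INSTANCE1” is a placeholder instance name, and you could just as easily set this through Management Studio:

```powershell
# Cap this SQL instance's max server memory at 1 GB (the value is in MB)
Invoke-Sqlcmd -ServerInstance "MYVM\INSTANCE1" -Query @"
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 1024; RECONFIGURE;
"@
```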

Windows 2008 Standard and Web Edition

Windows 2008 Standard Edition and Web Server Edition require a special hotfix to enable Dynamic Memory. For some reason, as of the time of this writing, it’s still a bit of a pain to get this hotfix. You have to contact Microsoft Customer Support Services (CSS) rather than download it directly, but it’s worth the headache if you still have quite a few Windows 2008 Standard VMs deployed.

Available memory in the parent partition

In our testing, we found some scenarios where, under heavy load, the parent partition became starved of resources and havoc began to ensue. This generally happens if you have “extras” running in the parent partition: Antivirus, System Center agents (OM, VMM, DPM), hardware monitoring (OpenManage), et cetera. If you have a lot of this stuff, you might find that you need to up the memory reserve in the parent partition. For more information on this topic, see the “Troubleshooting” section here.
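For reference, upping the parent partition reserve comes down to a registry value on the host. A sketch is below; the value is in MB, a reboot is required for it to take effect, and you should double-check the linked documentation before relying on this:

```powershell
# Reserve 2 GB (2048 MB) of RAM for the parent partition (reboot required)
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization" `
    -Name "MemoryReserve" -Value 2048 -Type DWord
```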

Hopefully, these tips will help you find your way to a successful deployment with Dynamic Memory and Hyper-V. Have any questions? Feel free to contact me directly, or leave a comment below.

Good Luck, and Happy Virtualizing!



References:

  • Hyper-V Dynamic Memory Configuration Guide
  • Implementing and Configuring Dynamic Memory
  • Dynamic Memory with SQL Server

12 Days of Tips and Tricks – Countdown to MMS

It’s that time of year again! April 16 marks the beginning of this year’s sold-out Microsoft Management Summit. If you’re one of the lucky ones with tickets, I’ll be presenting Hyper-V FAQs, Tips, and Tricks again this year on Wednesday, April 18 at 4PM in Murano 3301 with fellow Virtualization MVP Nathan Lasnoski, and if it’s anything like last year, we’ll probably have a few special guest MVPs join us as well.

In order to ramp up for the big day, I’ll be doing a post a day highlighting Hyper-V FAQs, Tips, and Tricks, and will also begin to talk about some of the exciting things coming in Windows 8 Hyper-V, and how a lot of the tips and tricks when deploying Hyper-V in Windows 2008 R2 become less important or unnecessary in Windows 8. If you have questions, feel free to post them here or contact me directly, and we can either write up a post or discuss them at MMS.