Hyper-V Failover Cluster: Basic Setup

16 February 2017 · Technical · 4 Comments

I have previously shared a slightly more complex method for configuring a simple 2-node failover cluster, involving a converged Hyper-V virtual switch, with QoS policies and so forth to manage traffic on the Management, CSV and Live Migration networks.

However, there is another config I wanted to share, which in some cases is preferable in an SMB environment. Reason being: not every SMB administrator or generalist knows (or wants to learn) PowerShell, applying VLAN and QoS settings to their servers’ virtual NICs, and so forth, especially where it may not be necessary, such as when they only have a few 1 Gb connections anyway.

Therefore the question is: what is the “easiest” config that will take us from zero to clustered? We can accomplish this pretty easily within the GUI, and by using simple cross-over cabling for the cluster networking:

In the diagram above, we can see two physical Hyper-V hosts connected to each other, and to two independent physical switches (the switches do not need to be stacked unless that is a requirement in your environment). We also have a dual-controller shared storage device (SAS-connected). You can use any shared storage for your cluster–I tend to recommend SAS to my clients for its affordability and ease of setup. Note that we will not configure storage in the following steps.

  • Step 1: Install Hyper-V Role without a Virtual Switch
  • Step 2: Setup Physical Network Connections
  • Step 3: Create the NIC Team
  • Step 4: Create Hyper-V Switch
  • Step 5: Join the Domain
  • Step 6: Create the Cluster
  • Step 7: Optional Tuning
  • Summary of PowerShell cmdlets (script it!)

Step 1: Install Hyper-V Role without a Virtual Switch

Hopefully you have already done the basic setup/install of Windows Server 2012 R2 or later. To add the necessary roles & features, go to Server Manager > Manage > Add Roles and Features.

In addition to selecting the Hyper-V role, you’ll also want to install Failover Clustering and Multipath IO, which can be found on the Features page in the Wizard.
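If you prefer PowerShell, something along these lines should install the same role and features (a sketch; run it on each host and expect a reboot):

Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools -Restart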

Step 2: Setup Physical Network Connections

We want at minimum two, but preferably three or even four, physical NICs available in each host server–this should not be a problem for most modern servers. Here is how you will connect them:

  • String at least one cross-over cable between the hosts; this will be the CSV/heartbeat network for the cluster
  • Optionally, a second crossover can be dedicated for Live migrations
  • Connect two (or more) cables to each host; one going to each switch from each server

Open Control Panel > Network & Internet > Network Connections. To begin, open the Properties of each NIC and click Configure to make a few adjustments.

First, we cannot have you using Virtual Machine Queues (VMQ) on 1 Gb NICs–there is just no need for it, and on certain NICs it is known to cause issues. Find this setting on the Advanced tab and disable it. Do this for all NICs.
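This can also be scripted; for example, the following (a sketch) disables VMQ on every adapter at once:

Disable-NetAdapterVmq -Name "*"     #Adjust the -Name filter if you only want specific adapters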

Next, on the NICs used for your crossover connections, enable Jumbo Packets (or Jumbo Frames–this description will vary by vendor).  It will look something like the below. Sometimes the options are just “Enabled” or “Disabled” (choose Enabled), but it might also ask you to select a value such as 9000 or 9014 like we see here:

[Screenshot: cluster-net-2]
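If you would rather script this setting, it can usually be done with Set-NetAdapterAdvancedProperty (a sketch; "Ethernet 3" is an example adapter name, and the exact value can vary by NIC vendor):

Set-NetAdapterAdvancedProperty -Name "Ethernet 3" -RegistryKeyword "*JumboPacket" -RegistryValue 9014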

Next, for each host, assign the TCP/IP settings on the CSV network adapter. I usually choose an IP scheme that will not be in use elsewhere in your network (e.g. here I chose 10.127.127.1 & 10.127.127.2 for HOST1 & HOST2 respectively). Optionally, you can also rename the adapter to something more user-friendly (e.g. “CSV” or “Heartbeat” instead of “Ethernet 3”).

[Screenshot: cluster-net-1]
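Scripted, the rename and IP assignment could look like this (a sketch; "Ethernet 3" is just the example adapter name, and you would use 10.127.127.2 on HOST2):

Rename-NetAdapter -Name "Ethernet 3" -NewName "CSV"
New-NetIPAddress -InterfaceAlias "CSV" -IPAddress 10.127.127.1 -PrefixLength 24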

Test the result–you should be able to ping the opposite host with a large packet. Open a command prompt and use ping <ipaddress> -f -l 8000:

[Screenshot: cluster-net-3]

You may optionally repeat this step for another crossover-connected pair of NICs, assigning them IPs in a different subnet such as 10.127.128.x.

Step 3: Create the NIC Team for VM traffic

Go to Server Manager > Local Server and click where it says Disabled next to NIC Teaming. Find Tasks > New Team.

Just name the team, select the adapters (not including the one used for CSV/heartbeat), and click OK.

[Screenshot: cluster-net-5]

Now you have a switch-independent team, meaning there is no need to configure anything special on your switching–the Windows Server OS will do all the work for you.

In PowerShell, you could accomplish this same exact configuration by entering the following command:
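For example, something like this (a sketch; substitute your own team name and the actual adapter names in your hosts):

New-NetLbfoTeam -Name "VMTeam" -TeamMembers "Ethernet 1","Ethernet 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic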

The next step is attaching a Hyper-V switch to this team.  Before we proceed, take a peek in your Network Connections settings again. You will see a new object for the NIC Team. Notice its Device Name is Microsoft Network Adapter Multiplexor Driver. This is the name of the device we will attach our Hyper-V virtual switch to in the next step.

[Screenshot: cluster-net-5a]

Step 4: Create Hyper-V Switch

Open Hyper-V Manager. From the right-hand Action pane, find Virtual Switch Manager. Make sure you select External as the virtual switch type and click Create Virtual Switch. Name the switch, select the team’s Microsoft Network Adapter Multiplexor Driver as the external network, and be sure to Allow management operating system to share this network adapter.

[Screenshot: cluster-net-6]

You can accomplish this same task in PowerShell with the following command:
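For example (a sketch; this assumes the team interface from the previous step is named "VMTeam"):

New-VMSwitch -Name "HVSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $true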

See? PowerShell isn’t so hard after all. Let’s go check out the results by returning to our Network Connections in the Control Panel.

[Screenshot: cluster-net-7]

The new “vEthernet (HVSwitch)” adapter is a virtual Ethernet adapter assigned to the Management OS (that is, your Hyper-V host server). It is attached to the Hyper-V switch in the same way any virtual machine gets its virtual Ethernet connection. You can also see this object in PowerShell:
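For example, the management OS adapters can be listed with:

Get-VMNetworkAdapter -ManagementOS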

[Screenshot: cluster-net-7a]

Be sure to assign this virtual NIC a static IP on your data network if you haven’t already! Also define the default gateway & DNS servers on this adapter.
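In PowerShell, that might look like this (a sketch; the addresses here are placeholders for your own data network):

New-NetIPAddress -InterfaceAlias "vEthernet (HVSwitch)" -IPAddress 192.168.1.21 -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (HVSwitch)" -ServerAddresses 192.168.1.10,192.168.1.11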

Step 5: Join the Domain

Now that we have our networking configured, with an IP on the network, we can join the domain, which is a prerequisite for creating a failover cluster. You can join the domain from Server Manager > Local Server > click on the Computer name.

Or in PowerShell:
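For example (a sketch; substitute your own domain name, and you will be prompted for credentials):

Add-Computer -DomainName "contoso.local" -Credential (Get-Credential) -Restart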

Step 6: Create the Cluster

Our last step! Open Failover Cluster Manager. From the Action pane on the right, click Create Cluster. Step through the wizard–you just have to add both hosts’ names, and skip validation for now (you will want to run it eventually in order to be eligible for Microsoft support). Finally, add a cluster name & IP address (pick an address that is not in use on your data network, and make sure it is excluded from the DHCP range).

[Screenshot: cluster-net-9]

In PowerShell, this is even easier:
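For example (a sketch; the cluster name and IP address are placeholders):

New-Cluster -Name "HVCLUSTER" -Node HOST1,HOST2 -StaticAddress 192.168.1.20 -NoStorage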

Here we have used the -NoStorage switch since we plan to add shared storage later. If your shared storage is already connected properly and provisioned with Multipath I/O enabled, you can omit the -NoStorage switch, and your cluster should be ready to go.

Step 7: Optional Tuning

If you want to dial it in a little more, open Failover Cluster Manager and check out the Networks. You can edit the properties of each network to give it a “friendly” name as I have done here–check which IP scheme belongs to which network under Network Connections so that you name them correctly.

[Screenshot: cluster-net-A]
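If you prefer to script the renaming, cluster networks can be matched by subnet, something like this (a sketch using the example CSV subnet from earlier):

(Get-ClusterNetwork | Where-Object {$_.Address -eq "10.127.127.0"}).Name = "CSV"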

One more item to check is Live Migration settings. Right-click on Networks on the left, then Live Migration Settings… This shows you which order of preference will be used for Live Migration traffic (or whether a network will be included at all). I usually like to leave the CSV network deprioritized in here.

[Screenshot: cluster-net-B]

This configuration is perfectly reasonable for many SMB organizations with a small number of virtual machines (fewer than, say, a dozen, and often fewer than half a dozen). These days, it is not uncommon to see only these roles in an SMB (one per virtual machine):

  • DC (or maybe two)
  • FILE
  • PRINT
  • RDS
  • SQL

Basic rule of thumb: if you only require Windows Server Standard licensing (2 VMs per host per license), go with the basic solution. If you need Datacenter (unlimited VMs per host), go with the advanced (converged) solution.

Summary of PowerShell cmdlets

Consider scripting this to save time! In the following, edit the variables to suit your environment, then execute the script. Anywhere you see $VariableName="<Something>", simply replace the <Something> between the quotes with the value you want–I have examples/suggestions included behind the hashtags at the end of those lines.
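Below is a sketch of what such a script might look like, reusing the example names from this article (HOST1/HOST2, the 10.127.127.x CSV subnet, a team called "VMTeam" and a switch called "HVSwitch"); treat every value as a placeholder for your own environment.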

#To begin setup of the server (modify and run for each server):
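$NewComputerName="HOST1"                           #Example: HOST1 (use HOST2 on the second server)
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools
Disable-NetAdapterVmq -Name "*"                    #No need for VMQ on 1 Gb NICs
Rename-Computer -NewName $NewComputerName -Restart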

#For the CSV Network:
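$CSVAdapter="Ethernet 3"                           #Example: the NIC connected by crossover cable
$CSVIPAddress="10.127.127.1"                       #Example: use 10.127.127.2 on the second host
Set-NetAdapterAdvancedProperty -Name $CSVAdapter -RegistryKeyword "*JumboPacket" -RegistryValue 9014
Rename-NetAdapter -Name $CSVAdapter -NewName "CSV"
New-NetIPAddress -InterfaceAlias "CSV" -IPAddress $CSVIPAddress -PrefixLength 24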

#Setup the NIC Team & Virtual Switch:
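$TeamName="VMTeam"                                 #Example: VMTeam
$TeamMembers="Ethernet 1","Ethernet 2"             #Example: the NICs cabled to your switches
$SwitchName="HVSwitch"                             #Example: HVSwitch
New-NetLbfoTeam -Name $TeamName -TeamMembers $TeamMembers -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
New-VMSwitch -Name $SwitchName -NetAdapterName $TeamName -AllowManagementOS $true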

#Configure the Management Network:
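$MgmtIPAddress="192.168.1.21"                      #Example: a free IP on your data network
$DefaultGateway="192.168.1.1"                      #Example: your router/gateway
$DNSServers="192.168.1.10","192.168.1.11"          #Example: your DNS servers
New-NetIPAddress -InterfaceAlias "vEthernet ($SwitchName)" -IPAddress $MgmtIPAddress -PrefixLength 24 -DefaultGateway $DefaultGateway
Set-DnsClientServerAddress -InterfaceAlias "vEthernet ($SwitchName)" -ServerAddresses $DNSServers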

#Join the Domain:
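$DomainName="contoso.local"                        #Example: your Active Directory domain
Add-Computer -DomainName $DomainName -Credential (Get-Credential) -Restart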

#Setup the cluster:
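$ClusterName="HVCLUSTER"                           #Example: HVCLUSTER
$ClusterIP="192.168.1.20"                          #Example: a free IP on the data network, excluded from DHCP
New-Cluster -Name $ClusterName -Node HOST1,HOST2 -StaticAddress $ClusterIP -NoStorage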

 


4 thoughts on “Hyper-V Failover Cluster: Basic Setup”

  • Craig Franklin on February 16, 2017

    Many thanks for preparing this – it’s very helpful. However I don’t understand where the cluster storage fits into this topology – the CSV cluster cable linking the two nodes suggests you are using local storage, but I thought it needs to be a shared storage resource.

    • Alexander on February 16, 2017

      Actually, the CSV cable does not suggest local disks. Cluster Shared Volume traffic is sometimes redirected over this link, but each host has a connection to some type of external storage which it uses normally–iSCSI, SAS, Fibre Channel, or whatever. The CSV link is only used to redirect I/O at certain events, such as during a failover event, or if a host loses its connection to storage–then it is able to communicate with the storage through this other path, using the other host over the CSV link. So yes, you are correct–shared storage is a requirement of failover clusters. The CSV network is recommended as well: even though cluster traffic can also co-exist on the management subnet, it is best practice to dedicate a link (or two) for this purpose.

  • Craig Franklin on February 16, 2017

    Of course, that makes sense. Many thanks for the prompt response.

  • Alan Osborne on August 14, 2017

    You mentioned the PING command for testing jumbo packets:

    “ping -l 8000”

    Note that, without the -f switch, PING would fragment the packets to fit the default 1500 byte MTU. So, you need to add the -f (don’t fragment) switch to get a valid result.
