Five Easy Steps to Configure Windows Server 2008 R2 Failover Cluster using StarWind iSCSI SAN

December 12, 2010 at 9:02 pm | Posted in Cluster, Windows Server 2008 R2 | 18 Comments


When there's no direct business requirement, people tend to avoid the term "cluster" in their platforms, mostly out of partial ignorance of the technology. For a long time, no matter whether you were using open source platforms, Windows or anything else, the common belief was that installing, configuring and maintaining a cluster is simply a hard thing to do. The idea of this post is to show you, in a few simple steps and with no complex requirements, how to create a Windows Server 2008 R2 Failover Cluster using another simple and effective solution: StarWind iSCSI SAN.

StarWind iSCSI SAN software is one of the most popular solutions on the market for creating your own iSCSI shared storage (or Storage Area Network) without the need to acquire expensive hardware solutions. StarWind iSCSI SAN also provides one of the fastest ways to create, configure and maintain this type of storage, giving you the chance to make LUNs available to any operating system capable of using an iSCSI initiator.

Let's take a look at this step-by-step guide to creating and configuring a Windows Server 2008 R2 Failover Cluster. Here are the steps involved:

1. Review and complete pre-requisites for the environment.

2. Install StarWind iSCSI SAN software.

3. Configure and create LUNs using StarWind iSCSI SAN.

4. Install Failover Cluster feature and run cluster validation.

5. Create Windows Server 2008 R2 Failover Cluster.

1. Review and complete pre-requisites for the environment

Requirements for clustering in Windows Server 2008 R2 changed significantly. You no longer need complex, specially certified hardware to be compatible with Failover Cluster:

Requirements for Windows Server 2008 R2 Failover Cluster

Here’s a review of the minimum requirements to create a Windows Server 2008 R2 Cluster:

  • Two or more compatible servers: You need hardware that is compatible with each other; it is highly recommended to always use the same type of hardware when you are creating a cluster.
  • A shared storage: This is where we can use StarWind iSCSI SAN software.
  • Two network cards on each server: one for the public network (from which we usually access Active Directory) and a private one for the heartbeat between servers. This is actually an optional requirement, since using only one network card is possible, but it is not suitable for almost any environment.
    When we are using the iSCSI protocol for our shared storage, Microsoft recommends three network cards on each host: one for the public network, one for the private network, and one dedicated to iSCSI communication from the servers to the storage, which in our case will be a server running the StarWind iSCSI software.
  • Windows Server 2008 R2 Enterprise or Datacenter Edition for the hosts that will be part of the cluster. Always keep in mind that clustering is not supported on Standard Edition.
  • All hosts must be members of an Active Directory domain. To install and configure a cluster we don't need a Domain Admin account, but we do need a domain account that is included in the local Administrators group on each host.
Requirements for StarWind iSCSI SAN Software

Here are the requirements for installing the component which will be in charge of receiving the iSCSI connections:

  • Windows Server 2008 or Windows Server 2008 R2
  • 10 GB of disk space for StarWind application data and log files
  • [Highly Recommended] 4 GB of RAM
  • 1 Gigabit Ethernet or 10 Gigabit Ethernet.

You can download the StarWind iSCSI SAN software using this link; prior registration is required.

Optimize TCP/IP stack to improve iSCSI performance

Before using StarWind as an iSCSI target, it's recommended to "accelerate" the TCP/IP stack to make sure it runs at full speed.

1. Enable 9K Jumbo frames for your GbE network adapter.

2. Change the following TCP parameters in the registry: [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]

GlobalMaxTcpWindowSize = 0x01400000 (DWORD)
TcpWindowSize = 0x01400000 (DWORD)
Tcp1323Opts = 3 (DWORD)
SackOpts = 1 (DWORD)

3. Reboot.
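The registry changes above can be scripted instead of edited by hand; here is a minimal sketch using the built-in `reg add` tool, run from an elevated command prompt. The key path and values are taken directly from the list above. (Note that on Windows Server 2008 R2 the autotuning TCP stack largely ignores the legacy window-size parameters, so treat these tweaks as StarWind's recommendation rather than a guaranteed speedup.)

```shell
:: Apply the recommended TCP tuning values (elevated command prompt).
set KEY=HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

reg add %KEY% /v GlobalMaxTcpWindowSize /t REG_DWORD /d 0x01400000 /f
reg add %KEY% /v TcpWindowSize /t REG_DWORD /d 0x01400000 /f
reg add %KEY% /v Tcp1323Opts /t REG_DWORD /d 3 /f
reg add %KEY% /v SackOpts /t REG_DWORD /d 1 /f

:: A reboot is still required for the changes to take effect.
shutdown /r /t 0
```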

2. Install StarWind iSCSI SAN Software

Ok, after reviewing and completing the requirements for the environment, we can start installing the StarWind iSCSI SAN software.

The product is available for download at this link; you only need to register first, which will also generate the license key you need to activate the product.

Installing StarWind iSCSI software is probably the easiest in all of these five steps, since you only need to complete a wizard to accomplish it.

2.1 After you’ve downloaded the installation file, just double click it and the wizard will start.


2.2 Follow the wizard as with any normal installation. In the process you will find one of the interesting features of the product: you can install the service separately from the console from which you administer StarWind iSCSI.


This way you can install the console on any compatible machine to access the server or servers running StarWind iSCSI and manage storage, permissions, etc. In this case, I'll be selecting the full installation.

The next steps are pretty straightforward, so you won't have any problems. Once the final steps are completed you'll get a warning that the Microsoft iSCSI Initiator service must be running before the StarWind iSCSI service is installed.


You just need to access the "Services" console, start the service and set its startup type to Automatic.
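The same change can be made from the command line; a quick sketch (the `MSiSCSI` service name is the Microsoft iSCSI Initiator service):

```shell
:: Set the Microsoft iSCSI Initiator service to start automatically, then start it now.
sc config MSiSCSI start= auto
net start MSiSCSI
```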


After you click install, the process takes only a few seconds. You will additionally see some drivers that will be installed on the operating system; click "Install".


3. Configure and create LUNs using StarWind iSCSI SAN

With the program installed, using and configuring it won’t give us any trouble.

The StarWind iSCSI console is similar to any other console you may already use. On the "General" screen we'll find the summary information, plus how to connect to a local or remote StarWind host.


In the "Configuration" section we find the common parameters for configuring StarWind iSCSI, for example the "Network" options, which enable iSCSI communications (port 3260) on any of the network adapters identified.


If we are using a dedicated LAN/VLAN to separate our iSCSI traffic, as recommended, then we should only enable the IP address used for that purpose.

Now let’s get started with the StarWind configuration.

Configuring StarWind iSCSI

We are going to review the basic steps to configure StarWind iSCSI to start hosting LUNs for our cluster; the initial task is to add the host:

3.1 Select the “Connect” option and type in the credentials to manage the iSCSI host. The defaults used by StarWind are: User “root”; Password “starwind”.

3.2 With the host added, we can start creating the storage that will be published through iSCSI: Right-click the server and select “Add target” and a new wizard will appear.


3.3 Select the "Target alias" by which we'll identify the LUN we are about to create and later configure for clustering. In my case I'm using a simple name, "w2k8r2-clstr"; click on "Next".


3.4 Since we are going to be using hard drives to present our storage, in “Storage Type” select “Hard Disk”, click on “Next”.


3.5 In "Device Type" note that we can use physical as well as virtual drives to present to our clients using iSCSI. We are going to select "Basic Virtual", from which we'll create a file (.img) that will represent the LUN; click on "Next".


3.6 Select “Image File device” and click on “Next”.


3.7 Since we are creating a new one, select “Create new virtual disk” and click on “Next”.


3.8 In the following screen, select the destination and size for the file we are creating. In my case, I’m using a separate drive where I’m going to save all of my LUNs.


3.9 In the following options, leave "Asynchronous mode" selected for the LUN, which will enable multithreaded disk operations (recommended for the NTFS file system), and check "Allow multiple concurrent iSCSI connections (clustering)", which, of course, makes it possible for several hosts to connect to this image file; click on "Next".


3.10 In the cache parameters, leave the default option "Normal (no caching)" selected; click on "Next".


3.11 In the last screen, just click on “Finish” and we’ll have our LUN ready.

As an optional but recommended step, review the options for "CHAP permissions" and "Access Rights". Within these options we can configure all the parameters needed for secure environments.

Once we’ve completed this, we can access this file from a Windows Server 2008 R2 host.

Configure Windows Server 2008 R2 iSCSI Initiator

Each host must have access to the file we’ve just created in order to be able to create our Failover Cluster. On each host, execute the following:

3.12 Access “Administrative Tools”, “iSCSI Initiator”.

3.13 In the "Target" pane, type in the IP address of the target host (our iSCSI server) that will receive the connections.


In my case, I’ve created two LUNs available for the cluster.

3.14 Click on “Connect” to be authorized by the host to use these files.

Once we've connected to the files, access "Disk Management" to verify that we can now use them as storage attached to the operating system.


3.15 As a final step, on the first host in the cluster only, bring the storage online and select "Initialize Disk". Since these LUNs are treated as normal hard disks, initializing one is no different from initializing a physical, local hard drive in the server.
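The initiator steps above can also be done from the command line with the built-in `iscsicli` tool; a rough sketch, where the portal address `192.168.1.10` and the target IQN are example values for the StarWind server:

```shell
:: Register the StarWind server as a target portal (example address).
iscsicli QAddTargetPortal 192.168.1.10

:: List the targets published by the portal.
iscsicli ListTargets

:: Log in to a discovered target (replace with the IQN shown by ListTargets).
iscsicli QLoginTarget iqn.2008-08.com.starwindsoftware:w2k8r2-clstr
```

After logging in, the new disk appears in "Disk Management", where it can be brought online and initialized as described above.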

Now, let's take a look at the Failover Cluster feature.

4. Install Failover Cluster feature and run cluster validation

Prior to configuring the cluster, we need to enable the "Failover Clustering" feature on all hosts in the cluster, and we'll also run the validation tool provided by Microsoft to check the consistency and compatibility of our scenario.

4.1 In "Server Manager", access "Features" and select "Failover Clustering". This feature does not need a reboot to complete.
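The feature can also be installed from PowerShell; a minimal sketch for Windows Server 2008 R2, run in an elevated PowerShell session on each node:

```shell
# Load the Server Manager cmdlets (required on 2008 R2) and add the feature.
Import-Module ServerManager
Add-WindowsFeature Failover-Clustering
```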


4.2 Once installed, access the console from “Administrative Tools”. Within the console, the option we are interested in this stage is “Validate a Configuration”.


4.3 In the new wizard, we are going to add the hosts that will form the Failover Cluster in order to validate the configuration. Type in the servers' FQDNs or browse for their names; click on "Next".


4.4 Select “Run all tests (recommended)” and click on “Next”.


4.5 The following screen shows a detailed list of all the tests that will be executed; note that the storage tests take some time. Click on "Next".

If we've fulfilled the requirements reviewed earlier, the tests will complete with no warnings.


We can also get a detailed report of the results of each test.
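Validation can also be run from PowerShell with the FailoverClusters module; a sketch, where the node names are examples for your own hosts:

```shell
# Run the full set of cluster validation tests against both prospective nodes.
Import-Module FailoverClusters
Test-Cluster -Node node1.contoso.com, node2.contoso.com
```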


5. Create Windows Server 2008 R2 Failover Cluster

At this stage, we've completed all the requirements and validated our configuration successfully. In the following steps, we are going to see the simple procedure to configure our Windows Server 2008 R2 Failover Cluster.

5.1 In the “Failover Cluster” console, select the option for “Create a cluster”.


5.2 A wizard similar to the validation tool's will appear. The first thing to do is add the servers we would like to cluster; click on "Next".

5.3 On the next screen we have to select the cluster name and the IP address assigned to it. Remember that in a cluster, all machines are represented by one name and one IP.


5.4 In the summary page click on “Next”.

After a few seconds the cluster will be created, and we can also review the report of the process.
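The creation step can be scripted as well; a sketch using the FailoverClusters PowerShell module, where the cluster name, node names and address are example values:

```shell
# Create the cluster with a name, the validated nodes and a static IP address.
Import-Module FailoverClusters
New-Cluster -Name w2k8r2-clstr -Node node1.contoso.com, node2.contoso.com -StaticAddress 192.168.1.50
```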


Now, in our Failover Cluster console, we get the complete picture of the cluster we've created: the nodes involved, the storage associated with the cluster, the networks, and the events related to the cluster.


We will close our step-by-step guide here, and open the clustering series for perhaps a detailed explanation of the types of Failover Clusters, including the Multi-Site cluster, which I had the chance to present at a Microsoft event here in Buenos Aires.


After reviewing the process to create clusters using StarWind iSCSI SAN software, here are some of the things I’ve noticed:

  • StarWind iSCSI software is a simple tool to install and even simpler to use for administering shared storage; not only for a Failover Cluster, but for every scenario where we need shared storage.
  • If we have the proper environment for iSCSI, StarWind iSCSI can save us a lot of money compared to the cost of an enterprise hardware iSCSI solution.
  • The tool also provides important differentiators from similar products on the market, for example the granularity of permissions we can achieve to guarantee a secure environment.
  • Setting up the right environment for iSCSI using StarWind can be complex. There is no golden rule; this "right environment" depends on proper sizing and planning of the scenario and the services we'll be providing. For a scalable and powerful solution you would probably need SAS hard drives, 10 Gb network cards, possibly NIC teaming, and other configurations.
    Even though we are mentioning this as a negative aspect, it is still common sense to think of this complex scenario as a trade-off against an expensive hardware solution.
  • The price seems very accessible for most companies, but StarWind removed the free version of this tool. I hope they bring it back soon; it fits perfectly when we want to build simple labs of our own or, as in my case, to use in presentations :)

More Resources

Here are more resources to look into for Windows Server 2008 R2 Failover Cluster and StarWind iSCSI Software:

Windows Server 2008 R2 and Windows 7: BranchCache

July 26, 2009 at 12:55 pm | Posted in BranchCache, Windows 7, Windows Server 2008 R2 | 6 Comments


The arrival of Windows Server 2008 R2 and Windows 7 is just around the corner, and I don't have to tell you that expectations are high. Regular users are concentrating almost all their attention on the client operating system, but I can assure you that having both new platforms, Windows Server 2008 R2 and Windows 7, will give a new perspective to all users and IT guys.

One of the highlights you get from having these two together is BranchCache, focused mainly on optimizing your WAN bandwidth using special caching options.

As the name says, BranchCache works in scenarios with branch offices where clients request files from headquarters. A common, current example is accessing an internal website whose servers are located in the main office: each branch office client requests the files directly from headquarters every time a user opens the site, significantly loading the WAN link with the same data transmitted over and over.

BranchCache is a simple idea: every piece of content downloaded from the main office is cached on a server or on other branch clients, so when a second client tries to download the same content, the request is handled directly within the branch office, optimizing the WAN link and download time.

How Does It Work?

There are no complex configurations, and you can even use an option that does not require a server. There are two BranchCache deployment options: Distributed Cache (no server) and Hosted Cache (a Windows Server 2008 R2 machine involved as the cache server).

Keep in mind that the environment will only work with Windows Server 2008 R2 and Windows 7 clients.

Distributed Cache

Windows 7 branch office clients store a copy of the content that is downloaded from the main office and make it available to other clients in the branch office whenever they try to retrieve the same files.

Hosted Cache

In this scenario, all the cached content is stored and controlled on a Windows Server 2008 R2 machine that receives all the requests made by branch clients and keeps the data locally to answer further requests for the same content.


Microsoft recommends using this mode in branch offices with more than 10 clients.

What About Cache Authorization and Updates?

These are common questions that you may be asking yourself right now:

Q: If the files are stored in a local cache within the branch office (distributed among clients or on a server), does that mean that all branch users will have access to these files?

A: No. There is an authorization phase that the requestor must complete before receiving the file. In Distributed Cache mode, when a client requests the data, the main office server decides whether the cached content may be delivered to the branch office client. In Hosted Cache mode, the cache server keeps identifiers with the permissions for each piece of cached content, giving access only to authorized clients.

Q: What happens if the file changes after it has already been cached by clients or a server? Is the file distributed out of date to branch clients?

A: No. Whenever a change is made in a folder that is distributed with BranchCache, a new identifier (the same one used for access authorization) is sent to branch cache clients (if the mode is Distributed Cache), or directly to the cache server (if the mode is Hosted Cache).

Configuring BranchCache

In this section I'll give you a small step-by-step BranchCache procedure. There are basically three steps to complete the environment:

1. Configure the headquarters Windows Server 2008 R2 that contains the data that must be cached.

2. Configure the Windows 7 branch clients that will use the cached content.

3. Configure the Windows Server 2008 R2 as Hosted Cache server, if that’s the option you selected for your environment.

The complete reference for this deployment can be found in the BranchCache Early Adopter's Guide.

1. Configuring the File/Web Server

a. Add the feature from Server Manager: BranchCache.


Remember, it’s a feature not a role.

b. If this is going to be a file server, you must add the “File Services” role and the service “BranchCache for remote files”.


c. Configure the Group Policy to enable BranchCache.

Active Directory is not a requirement for BranchCache, but it is certainly recommended for centralized management. You can use an Active Directory or local policy to apply this setting to the server.

The GPO can be located in Computer Configuration > Policies > Administrative Templates > Network > Lanman Server > Hash Publication for BranchCache


The options when you enable this GPO are self-explanatory: allow hash publication for all shares, only for tagged file shares, or disallow hash publication.


2. Client Configuration

Ok, now you have the server configured to distribute the BranchCache-enabled shares. Now it's time to configure the clients to understand this type of cache. This is easily done with Group Policy, and again, it can be done in a domain environment by linking GPOs or just by using Local Group Policy.

a. Access GPOs editing MMC: Computer Configuration > Policies > Administrative Templates > Network > Turn on BranchCache > Enabled.


b. On the same GPO list, you’ll find the rest of the necessary configurations according to the chosen model.

If you are using Distributed Cache, enable "Turn on BranchCache – Distributed Caching Mode"; likewise for Hosted Cache, enable "Turn on BranchCache – Hosted Cache mode".

c. [optional] You can also set other interesting values using this set of GPOs, like latency values or setting a percentage of your disk space dedicated to this cache.

d. Ensure that you have configured the firewall inbound policies to allow BranchCache connections. More info about this on the document mentioned above: BranchCache Early Adopter’s Guide.
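On a standalone client, the same modes can be set with `netsh` instead of Group Policy; a sketch, where the hosted cache server name is an example value (setting the service mode this way also enables the matching firewall rules):

```shell
:: Distributed Cache mode.
netsh branchcache set service mode=DISTRIBUTED

:: Or, for Hosted Cache mode, point the client at the cache server.
netsh branchcache set service mode=HOSTEDCLIENT location=cacheserver.contoso.com

:: Verify the current state.
netsh branchcache show status
```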

3. Configure the Cache Server

For obvious reasons, the communication between the parties involved must be secured, and the data served must be guaranteed to be up to date and correct. That's why, if you are using Hosted Cache mode, a certificate must be present to achieve SSL communication and guarantee that data is not modified by an attacker.

It is important to note that the presence of a Certification Authority (CA) server is not a requirement; the certificate can be prepared directly on the file/web server and then imported to the Hosted Cache server.

a. First, enable the BranchCache feature from Server Manager.

b. Deploy the certificate inside Certificates (Local Computer) > Personal.


c. Access the certificate properties; the Details tab will show you the "Thumbprint" field. Copy it to the clipboard.

d. Link the certificate to BranchCache with “netsh”:

NETSH HTTP ADD SSLCERT IPPORT=0.0.0.0:443 CERTHASH=<thumbprint> APPID={d673f5ee-a714-454d-8de2-492e4c1bd8f8}
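As a usage sketch, the binding plus enabling the server-side cache mode could look like this (the thumbprint is a placeholder; paste your own with the spaces removed):

```shell
:: Bind the SSL certificate to all addresses on port 443 (placeholder thumbprint).
netsh http add sslcert ipport=0.0.0.0:443 certhash=<thumbprint-without-spaces> appid={d673f5ee-a714-454d-8de2-492e4c1bd8f8}

:: Put the server into Hosted Cache server mode.
netsh branchcache set service mode=HOSTEDSERVER
```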

More Resources

Here are some other guides and interesting links you can find about this feature.

That's pretty much it for this BranchCache overview and walkthrough.



Windows Server 2008 R2 Live Migration: “Overview & Architecture” and “Step-by-Step Guide” Documents Released

January 30, 2009 at 4:55 pm | Posted in Documentation, Hyper-V, Virtualization, Windows Server 2008, Windows Server 2008 R2 | 1 Comment

Microsoft has released two more documents in the last few days about one of the most anticipated technologies in Windows Server 2008 R2: Live Migration. This new technology allows you to move any running virtual machine using Hyper-V from Windows Server 2008 R2 or Hyper-V Server 2008 R2 (the free hypervisor offered by Microsoft) to another machine with either of those operating systems, without any downtime or disruption of service.

Here are the two links for the new articles:

Windows Server 2008 R2 & Microsoft Hyper-V Server 2008 R2 – Hyper-V Live Migration Overview & Architecture

Step-by-Step Guide to Using Live Migration in Windows Server 2008 R2

Here’s an example graphic of how Live Migration setup handles Configuration Files of the virtual machines:

It is important for you to note that Live Migration requires Failover Clustering to be configured on all hosts, access to shared storage (as in NAS or SAN environments), and a dedicated network configured between them to be used only for the Live Migration feature.

For more information about Hyper-V Failover Clustering check this guide: Hyper-V Step-by-Step Guide: Hyper-V and Failover Clustering

Complete list of requirements for Live Migration:

  • Windows Server 2008 R2 x64 Enterprise Edition
  • Windows Server 2008 R2 x64 Datacenter Edition
  • Live migration is also supported on Microsoft® Hyper-V™ Server 2008 R2.
  • Microsoft Failover Clustering must be configured on all physical hosts that will use live migration
  • Failover Clustering supports up to 16 nodes per cluster
  • The cluster should be configured with a dedicated network for the live migration traffic
  • Physical host servers must use a processor or processors from the same manufacturer
  • Physical hosts must be configured on the same TCP/IP subnet
  • Physical hosts must have access to shared storage
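Once the cluster exists, you can check which network is available to dedicate to live migration traffic; a small PowerShell sketch using the FailoverClusters module, run on a cluster node:

```shell
# List the cluster networks with their roles to pick one for live migration.
Import-Module FailoverClusters
Get-ClusterNetwork | Format-Table Name, Role, Address
```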

Other interesting links about Hyper-V, Hyper-V Server and Failover Cluster:

Hyper-V Planning and Deployment Guide
Failover Cluster Deployment Guide
Failover Cluster Step-by-Step Guide: Validating Hardware for a Failover Cluster
The Microsoft Support Policy for Windows Server 2008 Failover Clusters
Hyper-V Server 2008 R2 Beta Available for Download
Hyper-V Server: Installing, configuring and troubleshooting

