[Step-by-Step] Creating a Windows Server 2012 R2 Failover Cluster using StarWind iSCSI SAN v8

March 27, 2014 at 10:27 pm | Posted in Cluster, Windows Server 2012, Windows Server 2012 R2

If you don’t know the StarWind iSCSI SAN product and you currently handle clusters that require shared storage (not necessarily Windows), I highly recommend taking a look at the platform. To summarize, StarWind iSCSI SAN is software that allows you to create your own shared storage platform without requiring any additional hardware.

I wrote a post a while ago, “Five Easy Steps to Configure Windows Server 2008 R2 Failover Cluster using StarWind iSCSI SAN”, to explain how a Failover Cluster can be easily configured with the help of StarWind iSCSI SAN. Since there have been some changes in the latest releases of Windows Server, and StarWind has released a brand new v8 of its platform, I thought it would be a good idea to write a new article on an easy way to create our own cluster.

As with the previous post, the main idea of this article is to show a simple step-by-step process to get a Windows Server 2012 R2 Failover Cluster up and running, without requiring an expensive shared storage platform to complete it. The steps involved are:

  1. Review and complete pre-requisites for the environment.
  2. Install StarWind iSCSI SAN software.
  3. Configure and create LUNs using StarWind iSCSI SAN.
  4. Install Failover Cluster feature and run cluster validation.
  5. Create Windows Server 2012 R2 Failover Cluster.

1. Review and Complete Pre-Requisites for the Environment

Windows Server 2012 introduced some changes to the Failover Cluster scenarios; even though those are important and welcome changes, the basic rules of Failover Clustering have not changed.

Requirements for Windows Server 2012 R2 Failover Cluster

Here are the requirements in Windows Server 2012 R2 for Failover Clusters:

  • Two or more compatible servers: You need hardware that is compatible, and it is highly recommended to always use the same type of hardware when you are creating a cluster. Microsoft requires the hardware involved to meet the qualification for the “Certified for Windows Server 2012” logo; the information can be retrieved from the Windows Server Catalog.
  • Shared storage: This is where we can use the StarWind iSCSI SAN software.
  • [Optional] Three network cards on each server: one for the public network (from which we usually access Active Directory), a private one for the heartbeat between servers, and one dedicated to iSCSI storage communication. This is optional since using one network card is possible, but that is not suitable for almost any environment.
  • All hosts must be members of an Active Directory domain. To install and configure a cluster we don’t need a Domain Admin account, but we do need a domain account which is included in the local Administrators group of each host.

Here are some notes about changes introduced in Windows Server 2012 and 2012 R2 regarding requirements:

We can implement Failover Clustering on all Windows Server 2012 and Windows Server 2012 R2 editions, including of course Core installations. Previously, on Windows Server 2008 R2, the Enterprise or Datacenter edition was necessary.

Also, the concept of the “Active Directory-detached cluster” appears in Windows Server 2012 R2, which means that a Failover Cluster does not require a computer object in Active Directory; access is performed through a registration in DNS. The cluster nodes, however, must still be joined to AD. A PowerShell sketch of this option follows below.
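
As a reference, creating a detached cluster from PowerShell differs from a regular cluster creation only in the administrative access point. A minimal sketch, assuming the Failover Clustering module on Windows Server 2012 R2 (the cluster name, node names and IP address are placeholders):

# Creates a cluster registered only in DNS, with no computer object in AD.
New-Cluster -Name CLUSTER1 -Node NODE1,NODE2 -StaticAddress 192.168.1.50 -AdministrativeAccessPoint Dns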

Requirements for StarWind iSCSI SAN Software

Here are the requirements for installing the component which will be in charge of receiving the iSCSI connections:

  • Windows Server 2008 R2 or Windows Server 2012
  • Intel Xeon E5620 (or higher)
  • 4 GB of RAM (or higher)
  • 10 GB of disk space for StarWind application data and log files
  • Storage available for iSCSI LUNs: SATA/SAS/SSD drive-based arrays are supported; software-based arrays are not supported for iSCSI.
  • 1 Gigabit Ethernet or 10 Gigabit Ethernet.
  • iSCSI ports open between the hosts and the StarWind iSCSI SAN server: port 3260 for iSCSI traffic and port 3261 for the management console. A sketch for opening them with Windows Firewall follows this list.
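
If Windows Firewall is enabled on the StarWind server, these ports can be opened from PowerShell. A minimal sketch, assuming the NetSecurity cmdlets available on Windows Server 2012 and later (the rule name is arbitrary):

# Allow inbound iSCSI (3260) and StarWind management (3261) traffic.
New-NetFirewallRule -DisplayName "StarWind iSCSI and Management" -Direction Inbound -Protocol TCP -LocalPort 3260,3261 -Action Allow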

General Recommendations for the Environment

In this scenario, there are several Microsoft and StarWind recommendations we must fulfill in order to get the best supportability and results. Keep in mind that each scenario could require different recommendations.

To mention some of the general recommendations:

  • NIC teaming for all adapters except iSCSI. Windows Server 2012 significantly improved the performance and supportability of network adapter teaming, and it is highly recommended to use that option for improved performance and high availability. But we must avoid configuring teaming on iSCSI network adapters.

Microsoft offers a very detailed document about handling NIC teaming in Windows Server 2012, “Windows Server 2012 NIC Teaming (LBFO) Deployment and Management”; also check the article “NIC Teaming Overview”.

  • Multipath I/O for iSCSI network adapters. iSCSI network adapters should use MPIO instead of NIC teaming, because in most scenarios teaming does not improve adapter throughput and can even increase response times. The recommendation is MPIO with a round-robin policy (see the sketch after this list).
  • Isolate network traffic on the Failover Cluster. It is almost mandatory to separate iSCSI traffic from the rest of the networks, and highly recommended to isolate the remaining traffic types from each other, for example: Live Migration in Hyper-V clusters, the management network, the public network, or Hyper-V Replica traffic (if the feature is enabled in Windows Server 2012).
  • Drivers and firmware updated: Most hardware vendors require, before starting any configuration such as a Failover Cluster, that all drivers and firmware components be updated to the latest version. Keep in mind that having different drivers or firmware between hosts in a Failover Cluster will cause the validation tool to fail, and therefore the cluster won’t be supported by Microsoft.
  • Leave one extra LUN empty in the environment for future validations. The Failover Cluster Validation Tool is a great resource for retrieving a detailed status of each cluster component; we can run the tool whenever we want and it will not cause any disruption. But to run the full storage validation, the cluster needs at least one LUN that is available but not used by any service or application.
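
As a sketch of the first two recommendations above, assuming Windows Server 2012 or later (the adapter and team names are examples; the iSCSI adapter stays out of the team):

# Team the non-iSCSI adapters; names and teaming mode are examples.
New-NetLbfoTeam -Name "PublicTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent
# Use MPIO (not teaming) for the iSCSI path, with a round-robin policy.
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI          # claim iSCSI-attached devices for MPIO
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR # round-robin load balancing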

For more information about best practices, review the following link: “StarWind High Availability Best Practices”.

One important new feature introduced by StarWind iSCSI SAN v8 is the use of Log-Structured File System (LSFS). LSFS is a specialized file system that stores multiple files of virtual devices and ensures high performance during writing operations with a random access pattern. This file system resolves the problem of slow disk operation and writes data at the speed that can be achieved by the underlying storage during sequential writes.

At this moment LSFS is experimental in v8; use it carefully, and validate your cluster services in a lab scenario if you are planning to deploy LSFS.

2. Install StarWind iSCSI SAN software

After we have reviewed and verified the requirements, we can start installing the StarWind iSCSI SAN software, which can be downloaded in trial mode. This is the simplest step on our list, since the installation does not have any complex steps.

During the process, the installer will require the Microsoft iSCSI service to be added to the server, along with the driver for the software.

After the installation is complete we can access the console, where we will see that the first necessary step is to configure the “Storage pool”.

We must select the path on the hard drive where we are going to store the LUNs used in our shared storage scenario.

3. Configure and create LUNs in StarWind iSCSI SAN

Once we have the program installed, we can start managing it from the console, and we will see that the options are quite intuitive.

We are going to split the configuration section in two parts: Hosting iSCSI LUNs with StarWind iSCSI SAN and configuring our iSCSI initiator on each Windows Server 2012 R2 host in the cluster.

Hosting iSCSI LUNs with StarWind iSCSI SAN

We are going to review the basic steps to configure the StarWind iSCSI to start hosting LUNs for our cluster; the initial task is to add the host:

3.1 Select the “Connect” option for our local server.

3.2 With the host added, we can start creating the storage that will be published through iSCSI: Right-click the server and select “Add target” and a new wizard will appear.

3.3 Select the “Target alias” by which we’ll identify the LUN we are about to create, and configure it to be able to cluster. The name below shows how we can identify this particular target from our iSCSI clients. Click on “Next” and then “Create”.

3.4 With our target created we can start creating “devices” or LUNs within that target. Click on “Add Device”.

3.5 Select “Hard Disk Device”.

3.6 Select “Virtual Disk”. There are two other possibilities here: “Physical Disk”, from which we can select a hard drive and work in a “pass-through” model, and “RAM Disk”, described below.

“RAM Disk” is a very interesting option that uses a block of RAM treated as a hard drive (a LUN, in this case). Because RAM is much faster than most other types of storage, files on a RAM disk can be accessed more quickly. But because the storage is actually in RAM, it is volatile memory and the contents will be lost when the computer powers off.

3.7 In the next section we can select the disk location and size. In my case I’m using the E:\ drive and 1 GB.

3.8 Since this is a virtual disk, we can select either thick provisioning (space is allocated in advance) or thin provisioning (space is allocated as required). Thick provisioning can be, for some applications, a little faster than thin provisioning.

The LSFS options we have available in this case are “Deduplication enabled” (a procedure to save space, since only unique data is stored and duplicated data is stored as links) and “Auto defragmentation” (which helps reclaim space when old data is overwritten or snapshots are deleted).

3.9 In the next section we can select whether we are going to use disk caching to improve read and write performance on this disk. The first option works with the memory cache, where we can select write-back (asynchronous; better performance but more risk of inconsistencies), write-through (synchronous; slower performance but no risk of data inconsistency), or no cache at all.

Using caching can significantly increase the performance of some applications, particularly databases, that perform large amounts of disk I/O. High-speed caching operates on the principle that server memory is faster than disk. The memory cache stores the data that is most likely to be required by applications. When a program turns to the disk for data, a search is first made for the relevant block in the cache. If the block is found, the program uses it; otherwise the data from the disk is loaded into a new block of the memory cache. The toy sketch below illustrates that lookup order.
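
A toy PowerShell illustration of that read-cache principle only (not StarWind internals; the disk read is simulated):

# In-memory block cache: block number -> data.
$cache = @{}
function Read-Block {
    param([int]$Number)
    if ($cache.ContainsKey($Number)) { return $cache[$Number] }  # cache hit: no disk access
    $data = "block-$Number-from-disk"  # stand-in for the slow disk read
    $cache[$Number] = $data            # populate the cache for the next request
    return $data
}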

3.10 StarWind v8 adds a new layer to the caching concept: L2 cache. This type of cache is a virtual file intended to be placed on SSD drives for high performance. In this section we have the opportunity to create an L2 cache file, for which again we can select write-back or write-through mode.

3.11 Also, we will need to select a path for the L2 cache file.

3.12 Click on “Finish” and the device will be ready to be used.

3.13 In my case I’ve also created a second device in the same target.

Configure Windows Server 2012 R2 iSCSI Initiator

Each host must have access to the file we’ve just created in order to be able to create our Failover Cluster. On each host, execute the following:

3.14 Access “Administrative Tools”, “iSCSI Initiator”.

We will also receive a notification that “The Microsoft iSCSI service is not running”; click “Yes” to start the service.

3.15 In the “Target” pane, type in the IP address the target host (our iSCSI server) uses to receive connections. Remember to use the IP address dedicated to iSCSI connections; if the StarWind iSCSI SAN server also has a public connection we could use that one, but then the traffic would be directed through that network adapter.

3.16 Click on “Quick Connect” to be authorized by the host to use these files.

Once we’ve connected to the files, access “Disk Management” to verify we can now use these files as storage attached to the operating system.

3.17 As a final step, on the first host in the cluster only, bring the storage “Online” and select “Initialize Disk”. Since these LUNs are treated as normal hard disks, the process of initializing one is no different from initializing a physical, local hard drive in the server. The whole initiator-side procedure can also be scripted, as sketched below.
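
For reference, steps 3.14 through 3.17 can also be scripted on Windows Server 2012 R2. A hedged sketch using the built-in iSCSI and Storage cmdlets (the portal address is an example):

# Start the Microsoft iSCSI Initiator service and make it automatic.
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
# Register the StarWind server as a target portal (example address).
New-IscsiTargetPortal -TargetPortalAddress 192.168.2.10
# Connect to every discovered target and persist the connection across reboots.
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
# On the first node only: bring the new raw disks online and initialize them.
Get-Disk | Where-Object PartitionStyle -eq 'RAW' | ForEach-Object {
    Set-Disk -Number $_.Number -IsOffline $false
    Initialize-Disk -Number $_.Number -PartitionStyle GPT
}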

Now, let’s take a look at the Failover Cluster feature.

4. Install Failover Cluster feature and Run Cluster Validation

Prior to configuring the cluster, we need to enable the “Failover Clustering” feature on all hosts in the cluster, and we’ll also run the verification tool provided by Microsoft to validate the consistency and compatibility of our scenario.

4.1 In “Server Manager”, access the option “Add Roles and Features”.

4.2 Start the wizard and do not add any role in “Server Roles”; in “Features”, enable the “Failover Clustering” option. The PowerShell equivalent is shown below.
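
The same feature can also be enabled from an elevated PowerShell prompt on each node:

# Installs Failover Clustering plus the management tools (console and module).
Install-WindowsFeature Failover-Clustering -IncludeManagementTools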

4.3 Once installed, access the console from “Administrative Tools”. Within the console, the option we are interested in at this stage is “Validate a Configuration”.

4.4 In the new wizard, we are going to add the hosts that will represent the Failover Cluster in order to validate the configuration. Type in the server’s FQDN names or browse for their names; click on “Next”.

4.5 Select “Run all tests (recommended)” and click on “Next”.

4.6 In the following screen we can see a detailed list about all the tests that will be executed, take note that the storage tests take some time; click on “Next”.

If we’ve fulfilled the requirements reviewed earlier, the test will complete successfully. In my case the report generated a warning, but the configuration is supported for clustering.

Accessing the report, we can get detailed information. In this scenario the “Network” section generated a warning: “Node <1> is reachable from Node <2> by only one pair of network interfaces. It is possible that this network path is a single point of failure for communication within the cluster. Please verify that this single path is highly available, or consider adding additional networks to the cluster”. This is not a critical error and can easily be solved by adding at least one new adapter to the cluster configuration. The validation can also be run from PowerShell, as sketched below.
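
A minimal sketch of the same validation from PowerShell (node names are placeholders; the cmdlet writes an HTML report with the same content):

# Runs all cluster validation tests against the future cluster nodes.
Test-Cluster -Node NODE1,NODE2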

4.7 Leaving the option “Create the cluster now using the validated nodes” enabled will start the “Create Cluster” wizard as soon as we click “Finish”.

5. Create Windows Server 2012 R2 Failover Cluster

At this stage, we’ve completed all the requirements and validated our configuration successfully. In the following steps, we are going to see the simple procedure for configuring our Windows Server 2012 R2 Failover Cluster.

5.1 In the “Failover Cluster” console, select the option for “Create a cluster”.

5.2 A similar wizard will appear as in the validation tool. The first thing to do is add the servers we would like to cluster; click on “Next”.

5.3 In the next screen we have to select the cluster name and the IP address assigned. Remember that in a cluster, all machines are represented by one name and one IP.

5.4 In the summary page click on “Next”.

After a few seconds the cluster will be created, and we can also review the report for the process. The PowerShell equivalent of this wizard is a single cmdlet, sketched below.
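
For reference, a one-line sketch with placeholder values:

# Creates the cluster with its name and static administrative IP address.
New-Cluster -Name CLUSTER1 -Node NODE1,NODE2 -StaticAddress 192.168.1.50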

Now, in our Failover Cluster console, we’ll get the complete picture of the cluster we’ve created: the nodes involved, the storage associated with the cluster, the networks, and the events related to the cluster.

The default option for a two-node cluster is to use a disk witness to manage the cluster quorum. This is usually a disk to which we assign the letter Q:\, and it does not store a large amount of data. The quorum disk stores a very small amount of information containing the cluster configuration; its main purpose is cluster voting.

To back up the Failover Cluster configuration we only need to back up the Q:\ drive. This, of course, does not back up the services configured in the Failover Cluster.

Cluster voting is used to determine, in case of a disconnection, which nodes and services will stay online. For example, if one node is disconnected from the cluster and the shared storage, the remaining node (one vote) plus the quorum disk (one more vote) hold the majority, and the cluster and its services remain online.

This voting model is used by default, but it can be modified in the Failover Cluster console, and changing it is recommended in various scenarios: with an odd number of nodes, a “Node Majority” quorum should be used; and for a cluster stretched across different geographic locations, the recommendation is an even number of nodes plus a file share witness in a third site. The quorum configuration can also be viewed and changed from PowerShell, as sketched below.
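
A hedged sketch with the FailoverClusters PowerShell module (the witness disk and share names are examples):

# Show the current quorum configuration.
Get-ClusterQuorum
# Two-node cluster: node majority plus a disk witness (the default described above).
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"
# Odd number of nodes: node majority with no witness.
Set-ClusterQuorum -NodeMajority
# Stretched cluster: node majority plus a file share witness in a third site.
Set-ClusterQuorum -NodeAndFileShareMajority "\\witness-server\quorum"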

For more information about quorums in Windows Failover clusters, review the following Microsoft TechNet article: “Configure and Manage the Quorum in a Windows Server 2012 Failover Cluster”.

More Resources

For more information about Windows Server 2012 R2 clusters and StarWind iSCSI SAN, check the following links and articles:

Starwind iSCSI SAN 5.7 Available

August 6, 2011 at 8:11 pm | Posted in Cluster

Starwind recently released a new version of their iSCSI SAN solution, Starwind 5.7. It includes several new features that scale up this already great SAN solution, providing important improvements in performance, monitoring, and usability for IT administrators.

Some of the improvements included:

  • A completely re-worked and re-designed, all-new HA (high availability) engine, 2x–3x faster compared to previous versions.
  • Quality of Service (QoS) options added.
  • Data de-duplication with variable block size (512 bytes – 256 KB) to save storage, especially in hypervisor scenarios.
  • Performance monitor included in Starwind console.
  • Snapshot manager.
  • Targets and servers can be arranged in groups.
  • Event notification in system tray.

Within this post we’ll review some of the newest features included, testing them in some scenarios. Here’s what we are going to do:

1. Reviewing Starwind iSCSI SAN 5.7 installation

2. Reviewing improvements in Starwind management console.

3. Reviewing usability and GUI new features.

4. Configuring HA and synchronization priority.

You can download the Starwind iSCSI SAN software using this link; previous registration is required.

For a detailed step-by-step of creating and configuring a Windows Server 2008 R2 cluster using Starwind check my previous article: Five Easy Steps to Configure Windows Server 2008 R2 Failover Cluster using Starwind iSCSI SAN.

Reviewing Starwind iSCSI SAN 5.7 Installation

As with any other version of this solution, the installation steps are pretty simple: just complete the wizard and we’ll have it ready to use.

We can also install the components separately, in case we want to remotely manage the iSCSI platform.

Once installed, to get started we need to use the “Connect” option. As a reminder, the default credentials used in Starwind are:

User: root
Password: starwind

Reviewing Improvements in Starwind

The management console looks pretty much the same, but adds some interesting tweaks that will be very helpful for IT admins.

The first one is among the most important: the new “Performance” tab. Within this window we can monitor, in real time, the current load of the server and targets.

We can retrieve the following graphs: CPU/RAM load (for a quick comparison), CPU load, RAM load, total IOPS, and total bandwidth.

The second one is really simple but very helpful: the possibility to create target groups, which lets us easily identify the right collection of targets on our servers.

The third usability improvement in the management console is event notifications in the system tray; with this tweak we can receive a notification as soon as there is a change in the configuration or availability of our servers and targets.

Configuring HA and Synchronization Priorities

As mentioned earlier, the main performance-related change is the addition of an asynchronous mode for HA targets.

If you are taking your initial steps with Starwind you are probably wondering about HA targets and asynchronous options, so let’s take a quick look at the definition of each of these concepts:

What are HA (High Availability) Targets?

As we reviewed in my previous chapter, creating clusters and providing high availability for a Windows Server service using Starwind is one of the main purposes of this solution, and a very simple task. But Starwind also includes the possibility of applying high availability to the shared storage itself.

When we create our Starwind device (basically the LUN we are going to share among hosts) we can configure it as a “High Availability device”, which can be present on two different Starwind servers. In case the primary server fails, the second one (the partner server) can still offer the device without affecting availability.

What about synchronization of the device?

When we use HA devices there are two shared disks: one on the primary server and the other on the partner server. These are two different devices, meaning that they must be replicated and synchronized.

This is where the synchronous/asynchronous mode comes in.

If the replication is synchronous, every change (“write”) on the device must be replicated to the partner device before the operation completes. If we don’t have a good design, or the bandwidth between the two servers is poor, the performance of HA targets drops significantly.

[Image: synchronous replication example, taken from cisco.com]

If we are using asynchronous replication, the “write” operation does not have to wait for the replication between the two devices to be completed, improving performance.

[Image: asynchronous replication example, taken from cisco.com]

The choice between synchronous and asynchronous replication must be studied carefully, analyzing all factors; the sketch below illustrates the client-visible difference.
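
A conceptual PowerShell sketch only (this is not Starwind code; Send-ToPartner is a made-up stand-in for the replication I/O):

# Simulated replication link; the sleep stands in for network latency.
function Send-ToPartner { param($Block) Start-Sleep -Milliseconds 50 }

# Synchronous: the client is acknowledged only after the partner copy exists.
function Write-Synchronous {
    param($Block)
    # ...local write happens here...
    Send-ToPartner $Block   # wait for the partner before acknowledging
    return 'ack'
}

# Asynchronous: the write is queued for later replication; the client is
# acknowledged immediately, at the cost of a window where the copies diverge.
$script:ReplicationQueue = New-Object System.Collections.Queue
function Write-Asynchronous {
    param($Block)
    # ...local write happens here...
    $script:ReplicationQueue.Enqueue($Block)
    return 'ack'
}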

Using HA devices and synchronization options

Creating a target requires only running a simple wizard (as seen in my previous chapter); using HA devices only differs in a few options:

1. Select the “Add target” option.

2. Select a “Target Alias” and use “Hard Disk” to create our HA device.

3. Select “Advanced Virtual”.

4. Select “High Availability device” and click on “Next”.

5. Specify the partner server options by providing IP address (or FQDN), port, type of authentication and credentials.

6. In “Virtual Disk Parameters” complete the path for these two devices, primary and partner.

7. In “Data synchronization channel parameters” configure the synchronization interface, heartbeat interface and priority of each server.

8. Select the option “Clear virtual disks” if we are using devices we’ve just created.

Note that the options available here can be very helpful if we created a non-HA device and later decide to convert it to HA.

9. Select the desired option in “HA device cache parameters”.

10. Complete the wizard and we’ll have our high available device ready.

Once the wizard is completed, we get the chance to configure the synchronization options.

Here we can set the synchronization priorities: “Faster synchronization” represents the synchronous mode (waiting for replication to complete the operation), while “Faster client requests processing” represents the asynchronous mode.

The performance improvement will vary depending on the environment, but it can be 2 or 3 times faster than previous implementations. Cheers to that!

Note:

There are some reports of slow performance in cluster environments using iSCSI and pass-through disks on Hyper-V hosts; this is a known issue in Windows Server 2008 R2, and there’s a Microsoft KB article available to solve this problem: http://support.microsoft.com/kb/2020559

More Resources

Here are more resources to take a deep dive into Starwind and Windows Server cluster solutions:

Five Easy Steps to Configure Windows Server 2008 R2 Failover Cluster using StarWind iSCSI SAN

December 12, 2010 at 9:02 pm | Posted in Cluster, Windows Server 2008 R2 | 18 Comments

When there is no direct business requirement, people usually avoid the term “cluster” in their platform, mostly out of partial ignorance of the technology. For a long time, no matter if you were using open source platforms, Windows, or anything else, there was a belief that installing, configuring, and maintaining a cluster is just a hard thing to do. The idea of this post is to show you, in a few simple steps and with no complex requirements, how to create a Windows Server 2008 R2 Failover Cluster using another simple and effective solution: StarWind iSCSI SAN.

StarWind iSCSI SAN software represents one of the most popular solutions available in the market for creating your own iSCSI shared storage (or Storage Area Network) without the need to acquire expensive hardware. StarWind iSCSI SAN also provides one of the fastest ways to create, configure, and maintain this type of storage, making LUNs available to any operating system capable of using an iSCSI initiator.

Let’s take a look at this step-by-step guide for creating and configuring a Windows Server 2008 R2 Failover Cluster; here are the steps involved:

1. Review and complete pre-requisites for the environment.

2. Install StarWind iSCSI SAN software.

3. Configure and create LUNs using StarWind iSCSI SAN.

4. Install Failover Cluster feature and run cluster validation.

5. Create Windows Server 2008 R2 Failover Cluster.

1. Review and complete pre-requisites for the environment

Requirements for clustering changed significantly in Windows Server 2008 R2. You no longer need complex hardware configurations to be compatible with Failover Clustering:

Requirements for Windows Server 2008 R2 Failover Cluster

Here’s a review of the minimum requirements to create a Windows Server 2008 R2 Cluster:

  • Two or more compatible servers: You need hardware that is compatible, and it is highly recommended to always use the same type of hardware when you are creating a cluster.
  • A shared storage: This is where we can use the StarWind iSCSI SAN software.
  • Two network cards on each server: one for the public network (from which we usually access Active Directory) and a private one for the heartbeat between servers. This is optional since using one network card is possible, but that is not suitable for almost any environment.
    When we are using the iSCSI protocol for our shared storage, Microsoft recommends three network cards on each host: public network, private network, and one dedicated to iSCSI communication from the servers to the storage, which in our case will be a server running the StarWind iSCSI software.
  • Windows Server 2008 R2 Enterprise or Datacenter edition for the hosts which will be part of the cluster. Always keep in mind that clustering is not supported in the Standard edition.
  • All hosts must be members of an Active Directory domain. To install and configure a cluster we don’t need a Domain Admin account, but we do need a domain account which is included in the local Administrators group of each host.

Requirements for StarWind iSCSI SAN Software

Here are the requirements for installing the component which will be in charge of receiving the iSCSI connections:

  • Windows Server 2008 or Windows Server 2008 R2
  • 10 GB of disk space for StarWind application data and log files
  • [Highly Recommended] 4 GB of RAM
  • 1 Gigabit Ethernet or 10 Gigabit Ethernet.

You can download the StarWind iSCSI SAN software using this link; previous registration is required.

Optimize TCP/IP stack to improve iSCSI performance

Before using StarWind as an iSCSI target, it is recommended to “accelerate” the TCP/IP stack to make sure it runs at full speed.

1. Enable 9K Jumbo frames for your GbE network adapter.

2. Change the following TCP parameters in the registry: [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]

GlobalMaxTcpWindowSize = 0x01400000 (DWORD)
TcpWindowSize = 0x01400000 (DWORD)
Tcp1323Opts = 3 (DWORD)
SackOpts = 1 (DWORD)

3. Reboot.
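
Step 2 can be scripted; a hedged PowerShell sketch that writes exactly those four values (run elevated, then reboot as in step 3):

# TCP tuning values from step 2, written as DWORD registry entries.
$tcp = 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters'
New-ItemProperty -Path $tcp -Name GlobalMaxTcpWindowSize -PropertyType DWord -Value 0x01400000 -Force
New-ItemProperty -Path $tcp -Name TcpWindowSize -PropertyType DWord -Value 0x01400000 -Force
New-ItemProperty -Path $tcp -Name Tcp1323Opts -PropertyType DWord -Value 3 -Force
New-ItemProperty -Path $tcp -Name SackOpts -PropertyType DWord -Value 1 -Force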

2. Install StarWind iSCSI SAN Software

Ok, after reviewing and completing the requirements for the environment we should start installing the StarWind iSCSI SAN software.

The product is available for download at this link; you only need to register first, which will also generate the license key you need to register the product.

Installing the StarWind iSCSI software is probably the easiest of these five steps, since you only need to complete a wizard.

2.1 After you’ve downloaded the installation file, just double click it and the wizard will start.

2.2 Follow the wizard as you would any installation. In the process you will find one of the interesting features of the product: you can install the service separately from the console from which you administer StarWind iSCSI.

This way you can install the console on any compatible machine to access the server or servers running StarWind iSCSI and manage storage, permissions, etc. In this case, I’ll be selecting the full installation.

The next steps are pretty straightforward, so you won’t have any problems. Once the final steps are completed you’ll get a warning that the Microsoft iSCSI Service is needed before installing the StarWind iSCSI Service.

You just need to access the “Services” console and set the service to started and automatic; this can also be scripted, as shown below.
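
Or, equivalently, from an elevated PowerShell prompt (MSiSCSI is the service name of the Microsoft iSCSI Initiator):

# Set the Microsoft iSCSI Initiator service to automatic and start it.
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI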

After you click install, the process only takes a few seconds, and you will additionally see some drivers that will be installed on the operating system; click “Install”.

3. Configure and create LUNs using StarWind iSCSI SAN

With the program installed, using and configuring it won’t give us any trouble.

The StarWind iSCSI console is similar to any other console you may already use. On the “General” screen we’ll find the summary information, plus how to connect to a local or remote StarWind host.

In the “Configuration” section we can find the common parameters for configuring StarWind iSCSI, for example the “Network” options, which enable iSCSI communications (port 3260) on any of the identified network adapters.

If we are using a dedicated LAN/VLAN to separate our iSCSI traffic, as recommended, then we should only enable the IP address used for that purpose.

Now let’s get started with the StarWind configuration.

Configuring StarWind iSCSI

We are going to review the basic steps to configure the StarWind iSCSI to start hosting LUNs for our cluster; the initial task is to add the host:

3.1 Select the “Connect” option and type in the credentials to manage the iSCSI host. The defaults used by StarWind are: User “root”; Password “starwind”.

3.2 With the host added, we can start creating the storage that will be published through iSCSI: Right-click the server and select “Add target” and a new wizard will appear.

3.3 Select the “Target alias” by which we’ll identify the LUN we are about to create, and configure it to be able to cluster. In my case I’m using a simple name, “w2k8r2-clstr”; click on “Next”.

3.4 Since we are going to use hard drives to present our storage, in “Storage Type” select “Hard Disk”; click on “Next”.

3.5 In “Device Type”, note that we can present both physical and virtual drives to our clients using iSCSI. We are going to select “Basic Virtual”, which creates a file (.img) that will represent the LUN; click on “Next”.

3.6 Select “Image File device” and click on “Next”.

3.7 Since we are creating a new one, select “Create new virtual disk” and click on “Next”.

3.8 In the following screen, select the destination and size for the file we are creating. In my case, I’m using a separate drive where I’m going to save all of my LUNs.

3.9 In the following options, leave “Asynchronous mode” selected for the LUN, which enables multithreaded disk operations (recommended for the NTFS file system), and check “Allow multiple concurrent iSCSI connections (clustering)”, which, of course, allows several hosts to connect to this image file; click on “Next”.

3.10 In the cache parameters, leave the default options selected “Normal (no caching)”; click on “Next”.

3.11 In the last screen, just click on “Finish” and we’ll have our LUN ready.

As an optional but recommended step, review the options for “CHAP permissions” and “Access Rights”. Within these options we can configure all the parameters needed for secure environments.

Once we’ve completed this, we can access this file from a Windows Server 2008 R2 host.

Configure Windows Server 2008 R2 iSCSI Initiator

Each host must have access to the file we’ve just created in order to be able to create our Failover Cluster. On each host, execute the following:

3.12 Access “Administrative Tools”, “iSCSI Initiator”.

3.13 In the “Target” pane, type in the IP address used for the target host, our iSCSI server, to receive the connections.

In my case, I’ve created two LUNs available for the cluster.

3.14 Click on “Connect” to be authorized by the host to use these files.

Once we’ve connected to the files, access “Disk Management” to verify we can now use these files as storage attached to the operating system.

3.15 As a final step, on the first host in the cluster only, bring the storage “Online” and select “Initialize Disk”. Since these are treated as normal hard disks, the process of initializing a LUN is no different from initializing a physical, local hard drive in the server. On Windows Server 2008 R2 the initiator side can also be scripted, as sketched below.
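
Windows Server 2008 R2 predates the PowerShell iSCSI cmdlets, but steps 3.13 and 3.14 can be scripted with the built-in iscsicli tool; the portal address and target IQN below are examples only:

# Register the StarWind server as a target portal (example address).
iscsicli QAddTargetPortal 192.168.1.10
# List the discovered targets, then log in to the one created earlier (example IQN).
iscsicli ListTargets
iscsicli QLoginTarget iqn.2008-08.com.starwindsoftware:sanhost-w2k8r2-clstr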

Now, let’s take a look at the Failover Cluster feature.

4. Install Failover Cluster feature and run cluster validation

Prior to configuring the cluster, we need to enable the “Failover Clustering” feature on all hosts in the cluster, and we’ll also run the verification tool provided by Microsoft to validate the consistency and compatibility of our scenario.

4.1 In “Server Manager”, access “Features” and select “Failover Clustering”. This feature does not need a reboot to complete.

4.2 Once installed, access the console from “Administrative Tools”. Within the console, the option we are interested in at this stage is “Validate a Configuration”.

4.3 In the new wizard, we are going to add the hosts that will represent the Failover Cluster in order to validate the configuration. Type in the server’s FQDN names or browse for their names; click on “Next”.

4.4 Select “Run all tests (recommended)” and click on “Next”.

4.5 In the following screen we can see a detailed list about all the tests that will be executed, take note that the storage tests take some time; click on “Next”.

If we’ve fulfilled the requirements reviewed earlier, the test will complete with no warnings.

We can also get a detailed report about the results of each test.

5. Create Windows Server 2008 R2 Failover Cluster

At this stage, we’ve completed all the requirements and validated our configuration successfully. In the following steps, we are going to see the simple procedure for configuring our Windows Server 2008 R2 Failover Cluster.

5.1 In the “Failover Cluster” console, select the option for “Create a cluster”.

5.2 A similar wizard will appear as in the validation tool. The first thing to do is add the servers we would like to cluster; click on “Next”.

5.3 In the next screen we have to select the cluster name and the IP address assigned. Remember that in a cluster, all machines are represented by one name and one IP.

5.4 In the summary page click on “Next”.

After a few seconds the cluster will be created and we can also review the report for the process.

Now, in our Failover Cluster console, we’ll get the complete picture of the cluster we’ve created: the nodes involved, the storage associated with the cluster, the networks, and the events related to the cluster.

We will close our step-by-step guide here and leave the clustering series open for a more detailed explanation of the types of Failover Clusters, including the multi-site cluster, which I had the chance to present at a Microsoft event here in Buenos Aires.

Conclusions

After reviewing the process to create clusters using StarWind iSCSI SAN software, here are some of the things I’ve noticed:

Pros
  • StarWind iSCSI software is a simple tool to install and even simpler to use for administering shared storage; not only for a Failover Cluster, but for all scenarios where we need shared storage.
  • If we have the proper environment for iSCSI, StarWind iSCSI can save us a lot of money compared to the cost of an enterprise hardware iSCSI solution.
  • The tool also provides important differentiators from similar products in the market, for example the granularity of permissions we can achieve to guarantee a secure environment.
Cons
  • Setting up the right environment for iSCSI using StarWind can be complex. There is no golden rule; this “right environment” depends on proper sizing and planning of the scenario and the services we’ll be providing, but for a scalable and powerful solution you would probably need SAS hard drives, 10 Gb network cards, possibly NIC teaming, and other configurations.
    Even though we are discussing this as a negative aspect, it is still common sense to think of this complexity as a trade-off against an expensive hardware solution.
  • The price seems very accessible for most companies, but StarWind removed the free version of this tool. I hope they can bring it back soon; it fits perfectly when we want to build simple labs, or, as in my case, for use in presentations :)

More Resources

Here are more resources to dig deeper into Windows Server 2008 R2 Failover Clusters and the StarWind iSCSI software:
