Yada Yada Cloud: Azure Stack, what’s your story?

December 30, 2016 at 12:01 am | Posted in Azure, Yada Yada Cloud | Leave a comment


It’s been a while since my last post, a whole year to be exact. In my last post I shared some thoughts regarding “writer’s and blogger’s block” and how we usually get stuck without developing or writing any material, even though we should be able to.

I decided to create a new set of articles related to cloud, which is one of the main topics I’m currently working on in my day job. The “Yada Yada Cloud” concept represents the way I’ll try to simplify these subjects, avoiding all the “chitchat” and platitudes we sometimes get when researching a new cloud concept.

Five points if you already got the Seinfeld reference: “The Yada Yada”.



There has been a lot of buzz around Azure Stack (formerly called Windows Azure Pack vNext) over the last few months, and the trend clearly shows it’s not going away. So let’s start reviewing the concepts behind Azure Stack without getting lost in all the excitement.

In this post I’ll review the following topics:

  • Azure Stack Definition: As simple as you can get.
  • What can Azure Stack provide: A review of the services we can expect to use from Azure Stack.
  • How can I install Azure Stack?: A requirements review and the step-by-step to deploy Azure Stack.
  • Azure Stack Integrated Systems and the operational model: Implications for operations within Azure Stack and the integrated systems.
  • What’s the representation of an Azure Stack instance?: A simple definition of how an Azure Stack instance is represented.
  • Azure Stack cost: What we know so far about Azure Stack’s cost.
  • Where’s Windows Azure Pack (WAP) and Cloud Platform System (CPS) in all of the yada yada?: What the differences are, and how WAP and CPS fit in the Azure Stack world.

Azure Stack Definition

Put simply, Azure Stack is a way to run Azure in your datacenter: an Azure implementation on hardware you own, allowing you (and other companies) to offer the public cloud’s services from your own datacenter.

Two of Azure Stack’s key traits are extensibility and elasticity. Here are some of the important features around Azure Stack:

  • Azure Stack can be easily integrated with Azure (they use the same code and binaries), so customers can expand their resources as they need and still use the Azure Stack portal to provide their services.
  • Customers can build and offer their own types of applications (like PaaS platforms), services and customizations in the environment as their own catalog, with charge-back possibilities, same as in Azure.
  • Azure Stack will initially be offered on what are called “integrated systems”. Dell EMC, HPE and Lenovo will be the only vendors to offer the platform pre-installed on their hardware.

What Can Azure Stack Provide

Since we are talking about Azure Stack offering the same capabilities as Azure, here are the services we can consume using this platform:

  • Compute: Azure Virtual Machines (offering Windows and Linux VMs on demand) and VM Extensions (allowing VM customization), plus any customizations we can provide with these virtual machines.

Not all VM sizes will be available initially in Azure Stack (the very small and very large are excluded). The instances available will be A (A0 to A7), D (D1 to D4 and D11 to D13) and D v2 (D1 to D4 and D11 to D13).

  • Storage: Built on Windows Server 2016 SDS (software-defined storage), it offers Blobs (what we usually use as the OS or data disks in VMs), Tables (a NoSQL key/value store), and Queues (letting cloud software communicate via messages).

  • Networking: Built on Windows Server 2016 SDN (software-defined networking), it provides Virtual Networks (allowing the creation of isolated networks in the cloud, including integration with Azure), Load Balancers (layer-4, to balance load between different VMs), and VPN Gateway (allowing connections among virtual networks and more).

  • Platform as a Service (PaaS): Provides App Service (supporting Web Apps, Mobile Apps, and API Apps created using .NET, Java, PHP, or other technologies) and Service Fabric (offering a foundation for microservices applications).

  • Security: Key Vault (for securely storing encryption keys). This is basically the “secrets repository”, where all certificates and passwords are stored.

  • Azure Resource Manager (ARM): Capabilities for the automated deployment of a vast number of features and services. ARM exposes RESTful APIs to Azure Stack services and allows the creation of templates to automate the deployment of Azure resources.
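As a sketch of what template-driven deployment through ARM looks like from PowerShell (the resource group, deployment and file names below are hypothetical examples, and the AzureRM cmdlets shown are the ones used against public Azure; Azure Stack exposes the same ARM surface):

```powershell
# Sketch: deploying an ARM template with the AzureRM PowerShell module.
# "DemoRG", "DemoDeployment" and the template file names are made-up examples.
New-AzureRmResourceGroup -Name "DemoRG" -Location "local"

# The template file declaratively describes the resources (VMs, storage,
# networking); ARM resolves dependencies and creates them in order.
New-AzureRmResourceGroupDeployment -Name "DemoDeployment" `
    -ResourceGroupName "DemoRG" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json"
```

The same template can, in principle, be deployed unchanged to Azure or Azure Stack, which is the consistency story behind ARM.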

  • Management and extensibility: Azure Stack Portal and support via Azure Resource Manager (ARM) for other clients (Visual Studio, PowerShell, and a command-line interface (CLI) for Linux, Macintosh, and Windows).
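As an illustration of that last point, pointing the AzureRM PowerShell module at an Azure Stack instance amounts to registering its ARM endpoint as an environment. The endpoint URI below is the default one used by the POC, and the exact parameter set has changed between previews, so treat this as a sketch rather than the definitive procedure:

```powershell
# Sketch: register the Azure Stack POC ARM endpoint as an AzureRM environment.
# The management URI is the POC default; adjust it for your deployment.
Add-AzureRmEnvironment -Name "AzureStack" `
    -ARMEndpoint "https://management.local.azurestack.external"

# Sign in against that environment instead of public Azure.
Login-AzureRmAccount -EnvironmentName "AzureStack"
```

From there, the same cmdlets you would use against Azure operate against your Azure Stack instance.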


How Can I Install Azure Stack?

Azure Stack Technical Preview 2 (TP2) is currently available for download. General Availability (GA) does not have a definitive date just yet, but it is expected in mid-2017 (near the end of Microsoft’s fiscal year).

Microsoft only supports Azure Stack deployment in a single-node Proof-of-Concept (POC) mode. Stating the obvious: Microsoft will not support production deployments of Azure Stack at this stage.

Azure Stack Requirements

Here is the recommended configuration mentioned by Microsoft:


Unofficially, Microsoft has stated that Azure Stack Technical Preview 3 (TP3) will come with an integrated version of Active Directory.

Microsoft offers the Deployment Checker for Azure Stack Technical Preview 2 to confirm that your hardware meets all the requirements.

To get a full list of requirements and recommendations for the Azure Stack POC, access the following link: “Azure Stack deployment prerequisites”.

Azure Stack Deployment

Once you have all prerequisites in place, Microsoft’s recommended deployment steps are the following: “Deploy Azure Stack POC”.

Here’s an overview of the step-by-step process to implement Azure Stack (it may take 2 to 3 hours to complete):

1. Download the Azure Stack Technical Preview from this link.

2. Extract Azure Stack files and copy the CloudBuilder.vhdx file into the C:\ drive.

3. Download the Azure Stack TP2 support files using the following PowerShell script:

# Variables
$Uri = 'https://raw.githubusercontent.com/Azure/AzureStack-Tools/master/Deployment/'
$LocalPath = 'c:\AzureStack_TP2_SupportFiles'
# Create folder
New-Item $LocalPath -ItemType Directory
# Download files
('BootMenuNoKVM.ps1', 'PrepareBootFromVHD.ps1', 'Unattend.xml', 'unattend_NoKVM.xml') | ForEach-Object { Invoke-WebRequest ($Uri + $_) -OutFile ($LocalPath + '\' + $_) }

4. Run the PrepareBootFromVHD.ps1 script (confirm the required parameters). A reboot will be required, since the machine will boot into the VHDX.

.\PrepareBootFromVHD.ps1 -CloudBuilderDiskPath C:\CloudBuilder.vhdx -ApplyUnattend

5. Execute the “Install Azure Stack POC” PowerShell script. Here’s an example using Azure Active Directory:

cd C:\CloudDeployment\Configuration
$adminpass = ConvertTo-SecureString "<LOCAL ADMIN PASSWORD>" -AsPlainText -Force
$aadpass = ConvertTo-SecureString "<AAD GLOBAL ADMIN ACCOUNT PASSWORD>" -AsPlainText -Force
$aadcred = New-Object System.Management.Automation.PSCredential ("<AAD GLOBAL ADMIN ACCOUNT>", $aadpass)
.\InstallAzureStackPOC.ps1 -AdminPassword $adminpass -AADAdminCredential $aadcred

6. Connect to the Azure Stack POC using RDP or VPN, following the guideline: “Connect to Azure Stack”.


7. And now you are ready to start working with Azure Stack; you can try “Provision a virtual machine”.


Will the installation process change when Azure Stack reaches GA?

Yes, significantly: there’s not going to be one. Azure Stack will be offered through the “Integrated Systems” as an OEM product: it will come pre-installed on the hardware you buy, and those integrated systems will come from Dell EMC, HPE and Lenovo. There will be no option to buy it on different hardware, for now.

Curious fact: In 2010, at the Worldwide Partner Conference (WPC), Microsoft mentioned the existence of an “Azure appliance” that would be offered by Dell, HP and Fujitsu. They even stated that eBay was one of the early adopters, serving some of its public-facing web applications from that appliance.

Azure Stack Integrated Systems and the Operational Model

Microsoft has mentioned, without any guarantee, that in the future there could be a scenario where customers can install Azure Stack themselves, on the hardware they want, following specific hardware recommendations from Microsoft. The release of Azure Stack on these integrated systems comes down to two main reasons:

  1. Microsoft wants to offer Azure Stack on a platform that is guaranteed to function properly. The number of hardware vendors, components and combinations is far too large to bring a new product to market compatible with all of those variables.
  2. Customers need to focus on providing services and administering the Azure Stack platform, instead of worrying about compatibility issues.

Azure Stack will, of course, have updates released periodically for the platform. Microsoft will deliver these updates along with all the details of the operational model supported for Azure Stack.

Therefore, as they have for Azure, Microsoft and the integrated systems will have a detailed manual and processes defined for: Patching operating systems, disk controllers, drivers, and firmware; replacing hardware components; and any other operational task required.

The Azure Stack patching and update releases, as in Azure, will be pre-validated for software and firmware and designed not to disrupt tenant workloads.

Microsoft introduced its Patch and Update Framework (P&U) with the standard edition of its Cloud Platform System (CPS). CPS was designed to run Microsoft’s previous Azure Pack software, and has been re-engineered to be an on-ramp of sorts for Azure Stack.


What’s the Representation of an Azure Stack Instance?

We just discussed the integrated systems and Azure Stack deployments, but what exactly is Microsoft’s definition of an Azure Stack instance? There are several components to this definition, so let’s start:

An Azure Stack instance is defined by the following:

  • A single instance of Azure Resource Manager (ARM)
  • One or more Regions under the management of that ARM instance
  • One or more Scale Units within a Region
  • Four or more servers within a Scale Unit

Let’s break down these concepts.

Azure Stack Region
  • A set of Scale Units that share the same “physical location”
  • Under one physical and logical “administrator”
  • Networking requirements: high bandwidth / low latency

Azure Stack Scale Unit

  • Associated with a single Region; one or more Scale Units can exist in that Region
  • The unit of capacity expansion; the smallest scale unit will be 4 servers
  • A fault domain (Azure consistency)
  • Alignment of hardware SKU within the integrated system (homogeneous within a Scale Unit)
  • Servers must share a top-of-rack (ToR) switch
  • Servers are part of the same Failover Cluster
  • Different scale units can be of different hardware generations

With those definitions we can say then:

Azure Stack Scale = Servers per Scale Unit x Scale Units per Region x Number of Regions

Unofficially, Microsoft says the maximum number of servers in an Azure Stack instance will be 65,000, and that number is expected to increase over time.
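To make the formula concrete, here is a quick sketch; the unit counts are made up purely for illustration, with only the four-server minimum per scale unit taken from the definitions above:

```powershell
# Illustrative only: total servers in a hypothetical Azure Stack instance.
$serversPerScaleUnit = 4    # minimum scale unit size
$scaleUnitsPerRegion = 3    # hypothetical
$numberOfRegions     = 2    # hypothetical

$totalServers = $serversPerScaleUnit * $scaleUnitsPerRegion * $numberOfRegions
Write-Output $totalServers  # 24 servers under a single ARM instance
```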

Currently the Azure Stack POC in Technical Preview only supports single-node deployment, which of course is not supported for production environments (meaning you won’t be able to open a support ticket with Microsoft). The good news, stated by Microsoft, is that the single-node POC option will always be available for free after GA.

Azure Stack Cost

There is no official statement from Microsoft about the Azure Stack cost model, but there have been some hints here and there:

  • Microsoft wants to align the cost model to the hybrid cloud scenario for customers.
  • Pay-as-you-go is the preferred method for Azure Stack. The idea is that you won’t pay for an operating system license per server you buy; instead, the cost will be related to an ownership model around the hybrid cloud the customer is providing with Azure Stack.
  • Microsoft wants a unified billing for public and private cloud.
  • Microsoft stated that customers “ready to deploy an Azure-consistent cloud on their premises now should buy Microsoft Cloud Platform System (CPS)”. Customers will be able to use Azure Stack to manage CPS resources, thereby preserving investments in CPS.
  • The integrated systems vendors (Dell EMC, HPE and Lenovo) will still offer their portfolios (or at least some variations of them) of support contracts and subscriptions on the hardware where Azure Stack is deployed.

Where’s Windows Azure Pack (WAP) and Cloud Platform System (CPS) in all of the Yada Yada?

Windows Azure Pack (WAP), which appeared in 2012 alongside the Windows Server 2012 and System Center 2012 releases, is Microsoft’s software-defined hybrid cloud. It bundles Windows Server, System Center and more into a package that can run VMs created in Azure and then downloaded to your own datacenter.

Cloud Platform System (CPS) debuted in 2014 with the Azure appliance approach: hardware pre-installed and configured by the integrated systems vendors, offering automation and integration with Windows Azure. WAP also appears as one of the CPS components used to achieve this hybrid cloud scenario with Windows Azure. CPS’s overall objectives and features are very similar to what we can now find in Azure Stack, although all signs point to Azure Stack becoming the preferred Azure appliance or “Azure-in-a-box” offering.

WAP and CPS are meant to be maintained (at least for now) as a complement to the Azure Stack offering, even though there is going to be some significant overlap. Microsoft’s intent is to maintain them and encourage all customers that already have WAP and/or CPS to extend their capabilities with Azure Stack.

To that end, Microsoft will be releasing a WAP/CPS connector with Azure Stack. This connector will have the following features:

  • Connecting any existing WAP/CPS platform with Azure Stack, allowing customers to preserve any investment already made in these platforms.
  • Enabling tenants to access VMM IaaS resources from the Azure Stack portal through seamless integration.
  • It will be released through a WAP/CPS Update Rollup at Azure Stack General Availability (GA).



As you can see, Azure Stack involves a large number of concepts and features that extend far beyond this article. The idea is to establish “Yada Yada Cloud” as the place where I can share and discuss these types of topics.

We’ll see each other soon!

Taking a Quick Look to Software Defined Storage (SDS) – Part I

December 26, 2015 at 10:54 pm | Posted in Software Defined Storage (SDS) | Leave a comment


Do not be afraid; meet this new (or not so new) player in town: Software Defined Storage (SDS). Why is it important for you to know about its existence? Why should you start evaluating it in the solutions deployed within your organization or for your customers? Let us take a quick look at the technology, what it represents, and its main benefits.

Before getting into Software Defined Storage, let us recap the storage definitions we had before SDS:

  • DAS: Direct-attached storage, like what you have in your notebook, desktop or tablet: hard drives attached to a mainboard. No complications here; DAS types differ from each other in the way they connect to that mainboard.
  • SAN: Storage Area Networks are also connected to hosts (mainly servers), but via a backplane using adapters and cables. These connections vary among Fibre Channel, iSCSI (which is SCSI, the Small Computer Systems Interface, carried over IP networks) and Serial Attached SCSI (SAS). SANs are still by far the main storage deployment for servers we find in organizations.
  • NAS: Network Attached Storage is what we commonly understand as “file servers”: servers with storage attached (which can be DAS, SAN or SDS) prepared for file sharing. Accessing a NAS does not work the way DAS or SAN access does (using the SCSI protocol), but through a network file system such as Microsoft’s SMB, which works over TCP/IP.


So, where does RAID technology actually come into play?

RAID (Redundant Array of Independent Disks) appeared a long time ago as a way to achieve some protection and performance improvement for the data we store on hard drives, presenting a combination of several disks as a single drive.

RAIDs are configured using software installed on a server or within an array controller. Some people call the latter “hardware RAID”, but it is nevertheless software, installed on an array controller.

This software-dependent storage called RAID improved how we thought about storage, but at a large complexity cost. Specialized hardware is needed to achieve RAID, with vendor-specific interconnections and prepared arrays, plus software to integrate all those components. Each component has to be compatible with the others; if any problem appears, the data within that storage could become inaccessible.
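As a quick illustration of the “several disks shown as a single drive” idea, here is a sketch of the usable-capacity math for the common RAID levels (simplified on purpose: real arrays reserve additional space for metadata, hot spares and vendor specifics):

```powershell
# Simplified usable capacity for an array of identical disks.
function Get-RaidUsableCapacity {
    param([int]$DiskCount, [int]$DiskSizeGB, [string]$Level)
    switch ($Level) {
        "RAID0" { $DiskCount * $DiskSizeGB }        # striping, no redundancy
        "RAID1" { ($DiskCount / 2) * $DiskSizeGB }  # mirroring halves capacity
        "RAID5" { ($DiskCount - 1) * $DiskSizeGB }  # one disk's worth of parity
        "RAID6" { ($DiskCount - 2) * $DiskSizeGB }  # two disks' worth of parity
    }
}

Get-RaidUsableCapacity -DiskCount 6 -DiskSizeGB 1000 -Level "RAID5"  # 5000 GB
```

The trade-off is visible in the math: more redundancy always means less usable capacity from the same physical disks.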

Outlining Software Defined Storage

SDS introduced the concept of completely separating the hardware from the software that manages the storage. This idea of decoupling the hardware from the solution relates easily to the virtualization concept: SDS does for storage what virtualization does for servers, which is why SDS is also called a “virtual SAN”.

There are various experts who define storage virtualization as a concept distinct from Software Defined Storage. We will tackle the differences along the way in this series of posts.

SDS is nothing more than a software layer handling data access to the hardware. “But that’s exactly what RAID technology does!”, you might say, and you would be somewhat right. However, using software to transform several disks into one is just one characteristic we get from SDS. Using Software Defined Storage, we can define different features, ways of accessing the data, and policies like QoS, and integrate several different components (hardware and software), which is usually not possible with RAID.


Image used from http://headintotheclouds.com

Regarding the technologies and vendors offering SDS solutions, there are differences as well. StarWind’s concept of SDS differs from what VMware has to offer, and Microsoft’s concept differs from VMware’s. We will also take a closer look at those in the following articles.

Main Benefits about Using SDS

Before getting to know the alternatives, vendors and technologies within SDS, let us review the main benefits of using Software Defined Storage in our solutions:

  • Decoupling software from hardware gives you more possibilities when handling your storage solutions: As mentioned before, using different disks as one is just one aspect of SDS; we can define different ways of accessing data, policies applied to different workloads and hypervisors, automation possibilities, and several other features.
  • Heterogeneous storage is possible: Related to the benefit above, adding a software layer also allows us to use different types of storage, disks, arrays and even hardware vendors within the same SDS we define. You will no longer depend on a specific vendor when considering improving or expanding your storage solution.
  • Control over performance and capacity: It becomes much easier to control how the storage is used by the hosted solutions, managing the performance our workloads can access depending on our definitions. Dynamically expanding capacity and performance as our systems need it is one of the main benefits of Software Defined Storage.
  • SDS solutions can reduce storage costs immensely: The Software Defined Storage concept, depending on the vendor you select, works on commodity, inexpensive hardware. TCO (Total Cost of Ownership) is greatly reduced, since we cut costs not only on hardware but also on maintenance and operations.

As we can see, SDS represents the dynamic solution our customers require daily, in a market where 24x7x365 is no longer an “added value” but a necessity. Without Software Defined Storage, the solutions we implement in our datacenters will keep their dependencies and remain restricted in scaling out.

In the following articles, we will review the alternatives we have with SDS and their vendors, comparing solutions and so on.

To Write or not to Write… And Some Best Practices to Avoid Bloggers Block

April 24, 2015 at 8:43 pm | Posted in Blogging | 3 Comments


Or being able or not being able to write, those are the questions. It’s been a while since my last post, so I thought it could be a good idea to dedicate it to the “hard time” that writing represents for me. I’m pretty sure plenty of us experience the same issues.

In this post I would like to review (more like a retrospective for myself, I must say) the common issues that bloggers might find when we try to convince ourselves that we “should be writing/posting/blogging something”, along with some of the best practices I’m currently finding useful to avoid writer’s or blogger’s block.


Image used from article: “Unblocking Writer’s Block”

The Importance of Writing

For me, writing has always been an important part of my life and, being honest, having a blog gave me tons of opportunities and for sure made me a better professional and communicator. And that’s the keyword you should think of: it’s not about writing something you know, it’s about sharing and communicating.

Communication is a key element in every professional’s life; you can be the best at what you do, but if you are not able to communicate, then you will not get the recognition you deserve. And I’m not mentioning “recognition” as a narcissistic quality: if you work with customers (and I do believe everyone works with customers of different types, shapes and forms), effective communication is the one thing that will keep customers satisfied with your service, product or both.

I’m planning on having a set of posts regarding effective communication and other topics later on, but I just want to share one quick example on the professional differentiator for having a blog. A few years back, I had a small project for a customer, they wanted to try out a new and emerging technology as a proof-of-concept: App-V.

Back then, the current App-V version was 4.5 and there was not a lot of documentation available about it, just quick-start guides. So I started working, and of course a lot of troubleshooting came with installing and sequencing applications. Finally the proof-of-concept ended really well; the customer and I loved the technology. And since I had a “hard time” getting all the pieces right, I decided to write a set of posts to share what I’d learned (Parts I, II, III and IV of the App-V Step by Step set of articles).

For a long time, that set of posts was actually the first search result when someone typed “app-v step-by-step”. That led Packt Publishing to offer me the chance to write a book about App-V, the “Getting Started” guide, which also led to the “Advanced Guide” for App-V one year later. And a customer found my blog through a search and requested that I provide a specialized, customized App-V training course in the US; that was my first trip to New York City and the US.

Writing those books or teaching a course was never a goal of mine; I just wanted to share what I’d learned about the technology and improve my communication in the process. Eventually I ended up gaining a whole lot more than that.

The issues I found over the last couple of months in getting back on track with writing were many. Here’s a list; I’m sure there will be lots of you with the same problems:

  • I can never find time: Writing takes time, for some of us much more than for others, and with a demanding day job and trying to use the weekends to get some rest, it sometimes seems just impossible.
  • Words and ideas just don’t flow out of my system: The simplest way to explain it is “blogger’s block”, and the more you delay your writing, the harder it gets.
  • Procrastinating as soon as I sit down and start writing: Since it’s always harder to get back into the writing motion, as soon as I start trying, I start thinking about and doing other things not related to my main goal.
  • There’s too much information for me to review it all and write about it: Like most of you geeks out there, I have several interests in the IT world that I try to keep up with, generating my own opinions and content along the way. But sometimes it seems like too much information to read everything available.

10 Tips and Best Practices to Avoid Bloggers Block

Again, these are just a few I’m trying to apply in my own personal life, hopefully 2015 will work out better than the year before regarding writing and publishing.

1. In order to write, you need to read

And no, this is not a catch-phrase using the same method as the Sphinx in “Mystery Men”: “He who questions training only trains himself at asking questions” (?)

Enhancing your ability to elaborate more easily requires training your mind in the right direction. Reading blogs, articles, forums and comments gives your mind some traction to think in terms of opinions and perspectives, which will let your writing flow without much trouble.

All of you should already know Feedly for organizing your RSS feeds; it is the best tool available since Google Reader disappeared. I also personally use Nextgen Reader on my desktop and Windows Phone to keep up with the reading.

2. Tweet, Comment and Interact

This is also a great way to put your mind into perspective and motion for writing; the more you interact, the easier it will be to start writing. Putting your opinions in motion will help you find the things you want to write about. Twitter, as the “micro-blogging” network, is a great place to start.

Interacting in the comments of other people’s blogs is really useful as well. I always try to ensure that my profile shows up in every comment I make, so people interacting with me in comments can review my blog and offer ideas on the things I’ve written.

3. Free writing. Write along

Free your mind from any structure; forget about titles and subtitles. Never mind trying to get the right introduction or a catchy phrase that can lure readers. Just write, write along.

This is a great technique I’ve been trying lately when the ideas don’t come into my head: I just start writing about anything that comes to mind. This post actually started like that, thinking, “I don’t know what to write about. I need time to read and understand something new, but it always takes me too much time”.

And if you are already working on an article, write down the things you have in your head without giving them too much formality or shape. The idea is to make the writing flow; the good ideas will come eventually.

4. Start the article backwards or start from the middle

This is related to the previous recommendation about free writing. Once you have a solid idea of what you want to write about, don’t force yourself to get the entire format of the article at the beginning. Start with the part that suits you best, the “low-hanging fruit”.


It’s quite common for me to get an idea of what I want to write but get trapped right at the start because I can’t find the right introduction. It happened quite often when I wrote the books; starting a chapter was too hard.

When that happens, I try to start with the easy part. Sometimes it could be the ending, sometimes the middle of the article. For example, if it’s something technical, start by writing the step-by-step process; the rest will come along later.

5. Commit to consistency and deadlines, but be realistic

When you start writing, commit yourself to two things: consistency/periodicity in your writing, and deadlines; and only do it if it’s going to be a realistic commitment.

If you have blogger’s block, don’t start by saying “I’ll dedicate two hours every day and publish one article per week”. It sounds like a great plan, but being honest with yourself, if you didn’t write at all for weeks (or even months, like me), don’t jump into something that is pretty unlikely to be completed and will only frustrate you.

Maybe a good place to start would be writing on Saturdays and publishing one post a month. Once you get the consistency, you can move on to stronger commitments.

6. Write your thoughts at any place and any time

I often find myself during the day, in any type of activity, with ideas that come into my head, and if I don’t write them down just as they are, raw and simple, the thought is gone. I must admit that I do have memory problems, so this is a big issue for me.

What I try to do is write those ideas down on my phone (try Evernote if you haven’t already), or in a notebook. Later on I review them to pick up a train of thought I had hours or days earlier.


“The moment a man sets his thoughts down on paper, however secretly, he is in a sense writing for publication” – Raymond Chandler

7. Short posts work as well as any other

If what you want to write is concise, simple and short, then so be it. A short post can have the same or even better impact than a longer one.

This is something that’s hard for me even to this day; my mind is already set on the idea that when I want to write something, it should be an extended topic. But if you want to share an opinion or something you learned recently, and it doesn’t take much writing, then post it.

Doing this will also help your focus, and you will be practicing your writing. Sharing frequently on your blog will also encourage readers to interact more often.

8. Maintain a healthy environment while you are writing

This is a broad topic and could easily be a post on its own. A healthy environment refers to several things; here’s just a summary of my best practices:

  • Keep your desk neat, clean and organized. Order inspires order; an organized workplace will keep your mind focused.

Can you actually be motivated to write if your desk looks like this?

  • Your writing place should be a distraction-free space. It might sound obvious, but sometimes we don’t notice that a nearby TV, or street sounds if you sit by the window, can be quite distracting.


  • Always keep a bottle of water on your desk and have healthy snacks. Drinking water regularly helps your digestion and circulation, helps you absorb nutrients more easily, and maintains your body temperature, all of which are important for keeping your mind sharp. Healthy snacks such as fruit also give your body and brain the components they need to function properly; eating junk meals and snacks will do the exact opposite.


  • Coffee is not my main option while writing. Even though caffeine is a must for many technology folks, I try to stay away from it during writing hours. Coffee forces the body to eliminate water, and nutrients are harder to absorb while you are drinking it.


  • Keep an ergonomic position while you are in front of the monitor. Not only sit correctly, but also keep your keyboard, mouse and monitor in the right position. You usually don’t notice it, but getting used to an unhealthy position can lead to back, neck and wrist pain and chronic problems. Here’s a nice and complete overview of best practices for sitting correctly: “How to Sit at a Computer”.


  • Take regular, scheduled breaks. And don't stay in front of the computer during lunch or dinner; leave the working area.


  • Try to be fully dressed while you are writing. It doesn't matter if you write in the early morning or late at night: if you are wearing pajamas or just your underwear, you will most likely be thinking about your bed or lying on your couch instead of focusing on writing.
9. Use Microsoft Word and the thesaurus within

I always start writing my articles in Microsoft Word, and not only to review the spelling and grammar; the other important tool I rely on is the thesaurus, which provides synonyms and related concepts.

My native language is not English, so finding the right words can sometimes be difficult, and it can be a blocker when you are in the middle of an idea and cannot find the right words. Press Shift+F7 on any written word and you'll get, in the right panel, the synonyms you need for that thing you wanted to express differently.

10. Sometimes it's just going to take more time than you thought

Don’t get frustrated, please don’t. And don’t be too much of a perfectionist: once you start reading your article again and again, you will always find something you want to change, add, or delete. Don’t get yourself into an infinite loop chasing the perfect article.

Writing sometimes takes time, for some of us longer than for others; it’s just the way it is. And one thing is certain: practice is the only medicine for that.

That’s pretty much it for now.

How do you handle your own bloggers block? What type of issues do you find the most while trying to write?

[Step-by-Step] Creating a Windows Server 2012 R2 Failover Cluster using StarWind iSCSI SAN v8

March 27, 2014 at 10:27 pm | Posted in Cluster, Windows Server 2012, Windows Server 2012 R2 | 1 Comment
Tags: , , , , , , ,


If you don’t know the StarWind iSCSI SAN product and you currently handle clusters that require shared storage (not necessarily Windows), I highly recommend taking a look at the platform. To summarize, StarWind iSCSI SAN is software that allows you to create your own shared storage platform without requiring any additional hardware.


I created a post a while ago, “Five Easy Steps to Configure Windows Server 2008 R2 Failover Cluster using StarWind iSCSI SAN”, to explain how a Failover Cluster can be easily configured with the help of StarWind iSCSI SAN. Since there have been some changes in the latest releases of Windows Server, and StarWind iSCSI SAN has a brand new v8 of its platform, I thought it would be a good idea to create a new article covering an easy way to create our own cluster.

As with the previous post, the main idea of this article is to show a simple step-by-step process to get a Windows Server 2012 R2 Failover Cluster up and running, without requiring an expensive shared storage platform. The steps involved are:

  1. Review and complete pre-requisites for the environment.
  2. Install StarWind iSCSI SAN software.
  3. Configure and create LUNs using StarWind iSCSI SAN.
  4. Install Failover Cluster feature and run cluster validation.
  5. Create Windows Server 2012 R2 Failover Cluster.

1. Review and Complete Pre-Requisites for the Environment

Windows Server 2012 introduced some changes to Failover Cluster scenarios; even though those are important improvements, the basic rules of Failover Clustering have not changed. Here are the requirements for a Windows Server 2012 R2 Failover Cluster.

Requirements for Windows Server 2012 R2 Failover Cluster

Here are the requirements in Windows Server 2012 R2 for Failover Clusters:

  • Two or more compatible servers: You need hardware components that are compatible with each other; it is highly recommended to always use the same type of hardware when creating a cluster. Microsoft requires the hardware involved to meet the qualification for the “Certified for Windows Server 2012” logo; this information can be retrieved from the Windows Server Catalog.
  • Shared storage: This is where we can use the StarWind iSCSI SAN software.
  • [Optional] Three network cards on each server: one for the public network (from which we usually access Active Directory), a private one for the heartbeat between servers, and one dedicated to iSCSI storage communication. This is optional, since using one network card is possible, but that is not suitable for almost any environment.
  • All hosts must be members of an Active Directory domain. To install and configure a cluster we don’t need a Domain Admin account, but we do need a domain account that is a member of the local Administrators group on each host.

Here are some notes about some changes introduced in Windows Server 2012 regarding requirements:

We can implement Failover Clustering on all Windows Server 2012 and Windows Server 2012 R2 editions, including of course Core installations. Previously, on Windows Server 2008 R2, the Enterprise or Datacenter edition was necessary.

Also, the concept of an “Active Directory-detached cluster” appears in Windows Server 2012 R2, which means that a Failover Cluster does not require a computer object in Active Directory; access is performed through a DNS registration. But the cluster nodes must still be joined to AD.

Requirements for StarWind iSCSI SAN Software

Here are the requirements for installing the component which will be in charge of receiving the iSCSI connections:

  • Windows Server 2008 R2 or Windows Server 2012
  • Intel Xeon E5620 (or higher)
  • 4 GB of RAM (or higher)
  • 10 GB of disk space for StarWind application data and log files
  • Storage available for iSCSI LUNs: SATA/SAS/SSD drive-based arrays are supported. Software-based arrays are not supported for iSCSI.
  • 1 Gigabit Ethernet or 10 Gigabit Ethernet.
  • iSCSI ports open between the hosts and the StarWind iSCSI SAN server: port 3260 for iSCSI traffic and 3261 for the management console.
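
Before installing, it can help to confirm from each prospective cluster node that these ports are reachable. Here is a minimal, illustrative Python sketch (the target host name is a placeholder, and this is not part of the StarWind tooling):

```python
import socket

# Default StarWind ports: 3260 for iSCSI traffic, 3261 for the management console.
STARWIND_PORTS = (3260, 3261)

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_starwind(host, ports=STARWIND_PORTS):
    """Map each required port to its reachability from this node."""
    return {port: port_open(host, port) for port in ports}
```

Run something like `check_starwind("starwind-server")` from each node; any `False` value points to a firewall or routing problem to fix before the cluster validation stage.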
General Recommendations for the Environment

In this scenario, there are several Microsoft and StarWind recommendations we must fulfill in order to get the best supportability and results. Keep in mind that each scenario could require different recommendations.

To mention some of the general recommendations:

  • NIC teaming for all adapters except iSCSI. Windows Server 2012 significantly improved the performance and supportability of network adapter teaming, and it is highly recommended to use that option for improved performance and high availability. But we must avoid configuring teaming on iSCSI network adapters.

Microsoft offers a very detailed document about handling NIC teaming in Windows Server 2012: “Windows Server 2012 NIC Teaming (LBFO) Deployment and Management” and also check this article “NIC Teaming Overview”.

  • Multipath for iSCSI network adapters. iSCSI network adapters should use MPIO instead of NIC teaming: with teaming, in most scenarios the throughput is not improved, and response times can even increase. The recommendation is MPIO with a round-robin policy.
  • Isolate network traffic on the Failover Cluster. It is almost mandatory to separate iSCSI traffic from the rest of the networks, and highly recommended to isolate the remaining kinds of traffic from each other. For example: Live Migration in Hyper-V clusters, the management network, the public network, or Hyper-V Replica traffic (if the feature is enabled in Windows Server 2012).
  • Drivers and firmware updated: Most hardware vendors require, before starting any configuration such as a Failover Cluster, that all driver and firmware components be updated to the latest version. Keep in mind that having different drivers or firmware between hosts in a Failover Cluster will cause the validation tool to fail, and therefore the cluster won’t be supported by Microsoft.
  • Leave one extra, empty LUN in the environment for future validations. The Failover Cluster Validation Tool is a great resource to retrieve a detailed status of the health of each cluster component; we can run the tool whenever we want, and it will not cause any disruption. But to run a full “Storage Validation”, at least one LUN must be available in the cluster and not used by any service or application.

For more information about best practices, review the following link: “StarWind High Availability Best Practices”.

One important new feature introduced by StarWind iSCSI SAN v8 is the use of Log-Structured File System (LSFS). LSFS is a specialized file system that stores multiple files of virtual devices and ensures high performance during writing operations with a random access pattern. This file system resolves the problem of slow disk operation and writes data at the speed that can be achieved by the underlying storage during sequential writes.

At this moment LSFS is experimental in v8, use it carefully and validate your cluster services in a lab scenario if you are planning to deploy LSFS.

2. Install StarWind iSCSI SAN software

After we have reviewed and verified the requirements, we can start installing the StarWind iSCSI SAN software, which can be downloaded in trial mode. This is the simplest step on our list, since the installation does not involve any complex steps.


During the process, the installer will add the Microsoft iSCSI service to the server, along with the driver for the software.


After the installation is complete we can access the console, where we will see that the first necessary step is to configure the “Storage pool”.

We must select the path for the hard drive where we are going to store the LUNs to be used in our shared storage scenario.


3. Configure and create LUNs in StarWind iSCSI SAN

When we have the program installed, we can start managing it from the console and we will see the options are quite intuitive.


We are going to split the configuration section in two parts: Hosting iSCSI LUNs with StarWind iSCSI SAN and configuring our iSCSI initiator on each Windows Server 2012 R2 host in the cluster.

Hosting iSCSI LUNs with StarWind iSCSI SAN

We are going to review the basic steps to configure StarWind iSCSI SAN to start hosting LUNs for our cluster; the initial task is to add the host:

3.1 Select the “Connect” option for our local server.

3.2 With the host added, we can start creating the storage that will be published through iSCSI: Right-click the server and select “Add target” and a new wizard will appear.

3.3 Select the “Target alias” that will identify the LUN we are about to create, and enable the target for clustering. The name below shows how we can identify this particular target from our iSCSI clients. Click on “Next” and then “Create”.


3.4 With our target created we can start creating “devices” or LUNs within that target. Click on “Add Device”.


3.5 Select “Hard Disk Device”.


3.6 Select “Virtual Disk”. There are two other possibilities here. One is “Physical Disk”, from which we can select a hard drive and work in a “pass-through” model.


The other is “RAM Disk”, a very interesting option that uses a block of RAM as a hard drive, or in this case a LUN. Because RAM is much faster than most other types of storage, files on a RAM disk can be accessed more quickly. But since the storage is actually in RAM, it is volatile, and its contents will be lost when the computer powers off.

3.7 In the next section we can select the disk location and size. In my case I’m using E:\ drive and 1GB.


3.8 Since this is a virtual disk, we can select either thick provisioning (space is allocated in advance) or thin provisioning (space is allocated as required). Thick provisioning can be, for some applications, a little faster than thin provisioning.


The LSFS options available in this case are “Deduplication enabled” (a procedure to save space: only unique data is stored, and duplicated data is stored as links) and “Auto defragmentation” (which helps reclaim space when old data is overwritten or snapshots are deleted).

3.9 In the next section we can choose whether to use disk caching to improve read and write performance for this disk. The first option works with a memory cache, where we can select write-back (asynchronous; better performance but more risk of inconsistencies), write-through (synchronous; slower performance but no risk of data inconsistency), or no cache at all.


Using caching can significantly increase the performance of some applications, particularly databases that perform large amounts of disk I/O. High-speed caching operates on the principle that server memory is faster than disk. The memory cache stores data that is most likely to be required by applications. When a program turns to the disk for data, the relevant block is first searched for in the cache. If the block is found, the program uses it; otherwise the data from the disk is loaded into a new block of the memory cache.

3.10 StarWind v8 adds a new layer to the caching concept: an L2 cache. This cache is a virtual file intended to be placed on SSD drives for high performance. In this section we have the opportunity to create an L2 cache file, again choosing between write-back and write-through.


3.11 Also, we will need to select a path for the L2 cache file.


3.12 Click on “Finish” and the device will be ready to be used.

3.13 In my case I’ve also created a second device in the same target.


Configure Windows Server 2012 R2 iSCSI Initiator

Each host must have access to the devices we’ve just created in order to build our Failover Cluster. On each host, execute the following:

3.14 Access “Administrative Tools”, “iSCSI Initiator”.

We will also receive a notification that “The Microsoft iSCSI service is not running”; click “Yes” to start the service.

3.15 In the “Target” pane, type in the IP address the target host (our iSCSI server) uses to receive connections. Remember to use the IP address dedicated to iSCSI connections; if the StarWind iSCSI SAN server also has a public connection we could use that one too, but the traffic would then be directed through that network adapter.

3.16 Click on “Quick Connect” to be authorized by the host to use these files.


Once we’ve connected, access “Disk Management” to verify that we can now use these devices as storage attached to the operating system.


3.17 As a final step, and only on the first host in the cluster, bring the disks “Online” and also select “Initialize Disk”. Since these are treated as normal hard disks, the process for initializing a LUN is no different from initializing a physical, local hard drive in the server.

Now, let’s take a look at the Failover Cluster feature.

4. Install Failover Cluster feature and Run Cluster Validation

Prior to configuring the cluster, we need to enable the “Failover Clustering” feature on all hosts in the cluster, and we’ll also run the validation tool provided by Microsoft to verify the consistency and compatibility of our scenario.

4.1 In “Server Manager”, access the option “Add Roles and Features”.

4.2 Start the wizard; do not add any role in “Server Roles”, and in “Features” enable the “Failover Clustering” option.


4.3 Once it is installed, access the console from “Administrative Tools”. Within the console, the option we are interested in at this stage is “Validate a Configuration”.


4.4 In the new wizard, add the hosts that will form the Failover Cluster in order to validate the configuration. Type in the servers’ FQDNs or browse for their names; click on “Next”.


4.5 Select “Run all tests (recommended)” and click on “Next”.


4.6 In the following screen we can see a detailed list about all the tests that will be executed, take note that the storage tests take some time; click on “Next”.

If we’ve fulfilled the requirements reviewed earlier, the tests will complete successfully. In my case the report generated a warning, but the configuration is supported for clustering.

Accessing the report we can get detailed information. In this scenario the “Network” section generated a warning: “Node <1> is reachable from Node <2> by only one pair of network interfaces. It is possible that this network path is a single point of failure for communication within the cluster. Please verify that this single path is highly available, or consider adding additional networks to the cluster”. This is not a critical error and can easily be solved by adding at least one new adapter to the cluster configuration.


4.7 Leaving the option “Create the cluster now using the validated nodes” enabled will start the “Create Cluster” wizard as soon as we click “Finish”.

5. Create Windows Server 2012 R2 Failover Cluster

At this stage, we’ve completed all the requirements and validated our configuration successfully. In the following steps, we’ll see the simple procedure to configure our Windows Server 2012 R2 Failover Cluster.

5.1 In the “Failover Cluster” console, select the option for “Create a cluster”.

5.2 A similar wizard will appear as in the validation tool. The first thing to do is add the servers we would like to cluster; click on “Next”.

5.3 In the next screen we have to select the cluster name and the IP address assigned. Remember that in a cluster, all machines are represented by one name and one IP.


5.4 In the summary page click on “Next”.


After a few seconds the cluster will be created and we can also review the report for the process.

Now in our Failover Cluster console, we get the complete picture of the cluster we’ve created: the nodes involved, the storage associated with the cluster, the networks, and cluster-related events.


The default option for a two-node cluster is to use a disk as a witness to manage the cluster quorum. This is usually a disk we assign the letter “Q:\”, and it does not store a large amount of data. The quorum disk stores a very small amount of information about the cluster configuration; its main purpose is cluster voting.

To back up the Failover Cluster configuration we only need to back up the Q:\ drive. This, of course, does not back up the services configured in the Failover Cluster.

Cluster voting is used to determine, in case of a disconnection, which nodes and services remain online. For example, if a node is disconnected from the cluster and the shared storage, the remaining node (with one vote) plus the quorum disk (also one vote) hold the majority, so the cluster and its services remain online.

This voting scheme is the default, but it can be modified in the Failover Cluster console. Modifying it is recommended in various scenarios: with an odd number of nodes, a “Node Majority” quorum should be used; for a cluster stretched across different geographic locations, the recommendation is an even number of nodes plus a file share witness in a third site.
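
The voting arithmetic described above can be sketched in a few lines. This is a simplified illustration only (real quorum behavior, especially dynamic quorum in Windows Server 2012 R2, is more nuanced):

```python
def has_quorum(partition_votes, total_votes):
    """A partition keeps the cluster online only if it holds a strict
    majority of the configured votes."""
    return partition_votes > total_votes / 2

# Two nodes (one vote each) plus a disk witness (one vote): total = 3.
TOTAL_VOTES = 2 + 1

# One node is cut off from the cluster and the shared storage. The
# surviving node still sees the witness disk: 1 + 1 = 2 of 3 votes.
surviving = has_quorum(1 + 1, TOTAL_VOTES)  # cluster stays online
isolated = has_quorum(1, TOTAL_VOTES)       # isolated node goes offline
```

The same arithmetic shows why an even number of voters without a witness is fragile: a 2-of-4 split leaves neither side with a strict majority.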

For more information about quorums in Windows Failover clusters, review the following Microsoft TechNet article: “Configure and Manage the Quorum in a Windows Server 2012 Failover Cluster”.

More Resources

To find more information about Windows Server 2012 R2 clusters and StarWind iSCSI SAN, review the following links and articles:

About Packaging (and not virtualizing) Applications – Part II: First Approach for Silent Installations

January 14, 2014 at 9:39 pm | Posted in Application Packaging | Leave a comment
Tags: , , ,


After the general overview of application packaging, its benefits, and best practices in Part I of this series, it is time to start understanding existing installation packages, handling some examples, and reviewing the step-by-step processes we will need to start packaging applications.

In this second post, the focus will be packaging applications without the use of any 3rd-party tool, basically achieving application deployment with silent and customized installations. Here’s what we will review in this article:

1. MSI Files are your New Best Friends
a) Reviewing the Windows Installer

2. Understanding and Identifying Installer Vendors
a) InstallShield Overview
b) Wise Installation System Overview
c) Inno Setup Overview
d) SetupBuilder and ActualInstaller Overview

3. Packaging Examples for Silent and Customized Installations: First Approach
a) Finding our MSI when there’s no MSI
b) Creating and Using “.iss” Answer Files
c) Troubleshooting InstallShield Deployments

MSI Files are your New Best Friends

So, after deciding that silent installations are going to be our first stop in packaging applications, we must know that MSI files provide the best way to achieve that.

An MSI file is a container that works with an engine embedded in the OS (the Windows Installer service) in order to facilitate application installation and uninstallation. This platform (MSI + Windows Installer service) contains all the necessary instruments to accomplish a customized, automated, and silent installation.

In the following posts we will discuss the components of an MSI file, but first we can take a quick overview of the Windows Installer platform, MSI files, and how we should interact with them.

Reviewing the Windows Installer

The Windows Installer service has shipped embedded in the Windows OS since Windows 2000, and it came as a solution to several inconsistencies in how software vendors built their application installers, typically as setup.exe files.

The Windows Installer service’s function is to support the application life cycle: installation and customization, maintenance (auto-repair and patching, for example), and retirement. The component that processes all of these steps is msiexec.exe. MSIEXEC takes all the information contained in the MSI file (an installation database, a summary information stream, and data streams for various parts of the installation), plus other files that are in most cases optional, to perform the installation or desired process.

These other files can be an MST (Windows Installer Transform), which contains customizations for the installation (for example, the features to be installed), or an MSP (Windows Installer Patch), representing an application update or patch.

The complete list for installation files available:


It is important to remember that MSI files support silent installations natively, meaning that every application that includes one is suited for this type of installation.

We will take a closer look at the MSI components in the following posts, when we start building our own. For now, we will focus on how to use them in silent installations.


As mentioned, msiexec.exe is the engine through which the Windows Installer performs each of the life-cycle tasks of an application. Using msiexec, if you haven’t before, is simple and should not present any problem.

Once we have our MSI file, a silent installation in most cases looks like the following example:

msiexec.exe /I <path_to_MSI.msi> /qn

  • The “/I” parameter at the beginning configures MSIEXEC to install or configure the application (also known as the “product”).
  • The “/qn” parameter requests a “quiet” (q) installation (no user interaction) with “no user interface” (n). This way, the application completes its installation without prompting the user.
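
When scripting these installations, building the msiexec argument list programmatically helps avoid quoting mistakes. A small, illustrative Python helper (the file names are placeholders; on Windows you would pass the resulting list to subprocess.run):

```python
def msiexec_silent_install(msi_path, properties=None, log_path=None):
    """Build an msiexec argument list for a quiet, no-UI install (/i ... /qn)."""
    cmd = ["msiexec.exe", "/i", msi_path, "/qn"]
    if log_path:
        cmd += ["/l*v", log_path]      # verbose log, useful for troubleshooting
    for name, value in (properties or {}).items():
        cmd.append(f"{name}={value}")  # public properties, e.g. REBOOT=ReallySuppress
    return cmd
```

Passing a list (rather than a concatenated string) avoids shell-quoting problems with paths that contain spaces.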

If everything about packaging and deploying applications were handled by MSI files, these articles wouldn’t be necessary. We’ll see that most of the time, we will have to deal with EXE files for deployment.

Also, these EXE files may require different methods depending on the vendor that created the installer. In the following section we will review how to handle those situations.

Understanding and Identifying Installer Vendors

Even though there are plenty of EXE files that contain MSI files within (reviewed later in this post), there are other files with no MSI available that still support a silent installation method. To identify how we can silently install the application, we must first understand which installer vendor (if any) was used for the app. Identifying it is not always easy; the installer’s icon can be helpful, and sometimes we will just have to guess.

The most common vendors involved in installation files are:

  • InstallShield, from Flexera Software
  • Wise Installation System (formerly InstallMaster), from Symantec (previously from Wise)
  • SetupBuilder, from LinderSoft
  • ActualInstaller, from Softeza Development
  • Inno Setup, from JR Software (open source project)
InstallShield Overview

This is probably the most common vendor involved for developing installation files. We can easily identify their installers by reviewing the logo involved.



There are also two ways to deploy InstallShield packages, depending on whether the MSI file was included in the installation package:


1. If the MSI is included in the package, the InstallShield installer should support a silent installation using this example (the /v parameter forwards the quoted arguments, in this case /qn, to msiexec):

setup.exe /s /v"/qn"

2. If the MSI is not included, some legacy installers support an interesting method using “answer files”; InstallShield calls this “Package for the Web”. We need to run an installation with specific parameters to generate our answer file (“.iss”), which we can later use to deploy the application silently and with specific settings. We will discuss the details later in this article.

“Package for the Web” has been discontinued by Flexera Software and will no longer be available for InstallShield packages. But in most organizations we can most likely still find this alternative in use as a valid method.

To review the complete list of parameters available for InstallShield, check this link: “Setup.exe Command Line Parameters”.
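
The /v quoting is easy to get wrong when scripting this: everything inside the quotes after /v is handed to the embedded msiexec as-is, with no space between /v and the opening quote. A small, hypothetical helper that builds the command string:

```python
def installshield_silent(setup_path, msiexec_args="/qn"):
    """Build an InstallShield silent command line: /s runs the wrapper
    silently, and /v"..." forwards the quoted arguments to the embedded
    msiexec. Note there is no space between /v and the opening quote."""
    return f'"{setup_path}" /s /v"{msiexec_args}"'
```

For example, `installshield_silent("setup.exe", "/qn /l*v log.txt")` adds a verbose msiexec log to the silent install.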

Wise Installation System Overview

You will find this installation file type in some legacy applications in your environment. Wise Solutions, which originally started with CompuServe, was the creator of this platform; it was later acquired by Altiris Inc., which in turn was acquired by Symantec in 2007. The last release of this suite was 9.0, and it has already been discontinued by Symantec.


The applications created with this platform support the classic “/s” parameter for silent installations.

setup.exe /s

Unfortunately, there are no other options available for customization, and the exit status of the process is inconsistent, so it can get a little confusing.

Inno Setup Overview


This open source project has received significant community contributions over the years it has been available (since 1997), and it supports all Windows versions, including Windows 8 and Windows Server 2012.

The command line for a silent installation with this suite is:

setup.exe /sp- /verysilent /norestart

  • “/sp-”: Disables the “This will install… Do you wish to continue?” prompt at the beginning of Setup.
  • “/verysilent”: Executes the installation process silently. The “/silent” parameter is also available; it does not require user intervention, but it shows a progress bar.
  • “/norestart”: Prevents automatic restarts, which can be very useful with the “/verysilent” parameter, since otherwise the OS could be restarted automatically without the user intervening.

For more information about Inno command lines available: “Setup Command Line Parameters”.

SetupBuilder and ActualInstaller Overview

SetupBuilder and ActualInstaller are a couple of relatively new installer vendors in the market. Like many recent suites, both platforms support Windows 8 and Windows Server 2012.

One of the most common conventions among recent installer builders is using the same type of parameters for silent and customized installations. SetupBuilder and ActualInstaller share the same silent installation method:

setup.exe /S

We will find the same parameters usable with other providers as well; just be aware that the parameters are sometimes case-sensitive.

Packaging Examples for Silent and Customized Deployments: First Approach

Let’s review some of the examples we can find when we want to package applications using silent and customized deployments.

Finding our MSI when there’s no MSI

One common situation is that the application we are trying to deploy does not contain an MSI file; instead we are provided with an EXE file that may not support silent installation. But, fortunately, most times there are some secrets hiding in plain sight.

When we handle EXE installer files (usually named setup.exe), the installer is often actually a packaged set of files containing an MSI, installation instructions, and other optional files that can be used during the installation.

Companies use this alternative to have more scalable installation procedures that cannot be achieved with one MSI file. For example: Adding several MSI files which can be activated depending on the type of installation selected by the user. One example for this situation is Apple’s iTunes.

There are a few ways to extract the needed MSI file(s). Let’s review them using Google Earth as our application:

In the particular case of Google Earth, we need to download the standalone installer for the application, available at this link.

1. Double-click the EXE installer file and, when the installation wizard starts, access the %TEMP% folder to locate our MSI. This process is called “expanding EXE packages”.

Most of the time, the files are located in a folder with a random name (in this case %TEMP%\._msige61).


2. Use 7-Zip File Manager to browse inside the EXE file. This free tool is quite useful; all we need to do is use the “Open Inside” option, and we can browse the installer and handle all the files within.


3. Some software vendors provide a command line to extract the files without double-clicking the installer or using 7-Zip File Manager.

Once we have our MSI, the process for a silent installation is simple:

msiexec /i “Google Earth.msi” /qn

Creating and Using “.iss” Answer Files

As we reviewed earlier in this article, InstallShield is one of the vendors commonly used by software companies to create their installers. When we handle this type of installer, we either get the simple option of using the /s /v"/qn" parameters, or we might need to create an answer file.

This answer, or response, file is our “.iss” file. It must be created by running a command line that installs the software normally with all the desired options, while creating the .iss file to be used in later deployments. Since we need to install the software every time we create an answer file, using virtual machines and snapshots is quite useful at this point.

InstallShield also supports compiled scripts (.ins files); these apply only to InstallScript and InstallScript MSI projects, in InstallShield version 12 and earlier.

The process for creating response files and then deploying the package is the following:

1. Locate your setup.exe (or the installer’s name) and a target machine to deploy the application (ideally a virtual machine).

2. Open a command line and execute the following to create the “setup.iss” response file:

setup.exe /r

The response file will be created in %WINDIR% (typically C:\Windows), but we can alternatively change the path by using the following command line:

setup.exe /r /f1"C:\setup.iss"

Note that there is no space between /f1 and the opening quote of the destination path.

3. Install the application with all the necessary customizations.

4. Once the installation is completed, locate the setup.iss file that was created.

5. To deploy the application: if we are using the .iss file with its default name (“setup.iss”) and it is located in the same folder as the installer, we can deploy our package using the following command line:

setup.exe /s /v/qn

If we changed the name of the response file or we have it located in a different folder, the command line should be the following:

setup.exe /s /f1"<.iss name and location>"

Keep in mind that there is no space between /f1 and the quoted file location.

6. [Optional] We can use /f2 as a parameter to create a log file for the deployment:

setup.exe /s /f1"<.iss name and location>" /f2"<.log name and location>"

If we don't set a log file name and location, a “setup.log” file will be generated automatically.
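Putting the record and deploy phases above together, the two command lines can be sketched as follows. This is only a sketch: the paths are illustrative, the helper names are made up for the example, and (as noted earlier) the exact switches depend on the InstallShield project type.

```shell
#!/bin/sh
# Sketch of the two phases described above, composed as strings since
# setup.exe runs on Windows. Note: no space between /f1 or /f2 and the
# quoted path, as the article points out.

# Phase 1 - record: install interactively once, writing the .iss file
record_cmd() {
  printf 'setup.exe /r /f1"%s"\n' "$1"
}

# Phase 2 - deploy: replay the response file silently, with a log file
deploy_cmd() {
  printf 'setup.exe /s /f1"%s" /f2"%s"\n' "$1" "$2"
}

record_cmd "C:\\Packages\\setup.iss"
deploy_cmd "C:\\Packages\\setup.iss" "C:\\Packages\\setup.log"
```

Running the record phase on a clean virtual machine snapshot, then the deploy phase on a second clean snapshot, is a quick way to validate the response file before rolling it out.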

Troubleshooting InstallShield Deployments

If there's a deployment issue with our package, creating the setup.log file is essential for finding out the exact problem.

The Setup.log file contains three sections.

  • [InstallShield Silent]: Identifies the version of InstallShield Silent used in the silent installation.
  • [Application]: Identifies the installed application's name and version, and the company name.
  • [ResponseResult]: Contains the result code indicating whether or not the silent installation succeeded.

The result codes can be one of the following:

  • 0: Success.
  • -1: General error.
  • -2: Invalid mode.
  • -3: Required data not found in the Setup.iss file.
  • -4: Not enough memory available.
  • -5: File does not exist.
  • -6: Cannot write to the response file.
  • -7: Unable to write to the log file.
  • -8: Invalid path to the InstallShield Silent response file.
  • -9: Not a valid list type (string or number).
  • -10: Data type is invalid.
  • -11: Unknown error during setup.
  • -12: Dialogs are out of order.
  • -51: Cannot create the specified folder.
  • -52: Cannot access the specified file or folder.
  • -53: Invalid option selected.
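To automate troubleshooting, the result code can be pulled out of setup.log by a script and checked after each deployment. A minimal sketch, assuming the INI-style layout described above (the helper name and the sample log contents are fabricated for illustration):

```shell
#!/bin/sh
# Extract the ResultCode value from an InstallShield setup.log.
get_result_code() {
  # The [ResponseResult] section holds a single ResultCode=<n> line;
  # tr -d '\r' copes with Windows line endings.
  sed -n 's/^ResultCode=//p' "$1" | tr -d '\r'
}

# Fabricated sample log matching the three sections described above.
cat > /tmp/sample-setup.log <<'EOF'
[InstallShield Silent]
Version=v7.00
File=Log File
[Application]
Name=Example App
Version=1.0
Company=Example Co
[ResponseResult]
ResultCode=0
EOF

code=$(get_result_code /tmp/sample-setup.log)
if [ "$code" -eq 0 ]; then
  echo "silent install succeeded"
else
  echo "silent install failed with code $code"
fi
```

A deployment tool can run a check like this after each package and flag only the machines whose code is non-zero.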

In the next article we will review more details about silent and customized deployment, including more examples in more complex scenarios.

About Packaging (and not virtualizing) Applications – Part I

August 15, 2013 at 1:57 pm | Posted in Application Packaging | 3 Comments


It's been a while since I've written new entries on my blog, so I decided to get back on track with a few articles about packaging applications. But the approach won't be from the virtual applications perspective; it will focus on that strange matter of building the right installation, configuration, and deployment process for your applications.

What We Will Cover

Packaging applications covers a very vast set of topics, even if we take virtualization off the table. Here's a short summary of what we will cover in this set of posts:

  1. Understanding Application Packaging
  2. Silent installations vs. MSI customized packages
  3. Benefits of Application Packaging
  4. Packaging Best Practices
  5. Packaging Applications without 3rd Party Tools (silent installations): Reviewing best practices; Packaging Examples; Troubleshooting.
  6. Packaging Application with 3rd Party Tools: Reviewing the Windows Installer; Reviewing best practices; Packaging Examples; Troubleshooting.

Understanding Application Packaging

Let's take a moment to talk about application packaging and what I mean by achieving it with or without 3rd party tools.

Application packaging can actually have several definitions by itself, and it can definitely vary depending on each situation and the outcome we are looking for. But, in a simple manner, I can say that application packaging represents the process from which we can get an automated method to deploy applications with the proper configurations for our environment.


To achieve that, basically, we have two alternatives: using silent and customized installations, or using 3rd party tools (like AdminStudio or Advanced Installer). This second alternative can “capture” a valid installation and generate a customized MSI file with the personalization we need for that application (keep in mind that MSI files are, by definition, installation files that can be installed silently).

Silent installations vs. MSI customized packages

Considering both silent installations and customized MSI files as valid forms of packaging, we need to establish when to use silent installations and when to capture MSI files for deployment.

The answer will depend on these two initial questions:

1. Does the software provider support silent installations and how many parameters can we introduce?

Many providers give us the opportunity to achieve silent installations with all the necessary configurations for our environment. Autodesk and Adobe are great examples, since they provide their own engine to generate “answer files” for the installation process and achieve automated and customized installations.

Many others include in their user guides the right parameters to get similar results in the installation process, and some simply do not provide any method for an automated process.

This is what Autodesk offers to create our own packages (it applies to any of their suites): a simple wizard to follow that creates a “deployment”, which we can later roll out automatically with all the necessary customizations.


2. How many customizations do I need to perform in my package in order to have it ready for deployment?

In some cases we can perform silent installations with personalized configurations, but sometimes these are not enough. There are several examples of licensed software that require entering licenses through a manual process; these are the kinds of applications that might need 3rd party tools for package creation.

Take a closer look at the documentation available for the application and review the parameters we can use to deploy it without generating a customized MSI.

In my personal opinion, I always try to use the silent installation method as the preferred one. If the software provider gives us the chance to install and customize the software automatically, then we have official support and a “guarantee” that the software will be installed properly.

If we decide to capture and create an MSI of our own, the package will depend on the software and the type of “capture” we use; and of course we run the risk of excluding files and registry entries that the application will require later.

Benefits of Application Packaging

Let’s take a quick moment to analyze why application packaging is highly recommended in almost any company:

1. Customizing the application so it fits our company's needs and policies: No matter if you are using silent installations or customized MSIs, you should be looking for a way to configure your applications consistently for all users.

2. Provides an easy, predictable and repeatable way to deploy our applications: Removing the complexity from deployments and the possibility of user misconfigurations saves us not only troubleshooting time but also the money we need to invest in admins for deployment and support.

3. Helps us standardize our users' desktops: Related to the previous benefit, having a standard method to deploy applications also simplifies the OS and application life-cycle, making both easier to maintain and update.

4. Gives you the possibility to limit the delegation of administration rights: When you are using non-packaged apps, you depend on users or delegated admins to install, reconfigure and/or uninstall applications. With packaged applications and a deployment tool to deliver them, you can avoid distributing admin rights to users or help desk personnel, for example.

First Approach to Packaging Best Practices

Before we get our hands into some examples of packaging applications, I want to take a moment to review some general best practices. When we get to the examples, we will take a deeper look into these, plus others specific to silent installations or to 3rd party packaging tools.

1. Understand the application and the environment you are using.

Review the application's requirements, its documentation (if any), and the scenario in which you will need to deploy it. Understand the purpose of the application and its targeted users. Understand the environment you will be deploying into: the distribution method available, the operating systems involved, and so on. We usually tend to package in a comfortable and predictable environment; this will most likely not be the environment you'll be deploying to.

2. Review and write down common use cases for each application so you can test it properly.

Once we have completed packaging an application, we need, of course, to test it. It is important to remember that a test is not complete just by verifying the installation finished; we need a small group of tests to run on each application. Consider talking early with each application owner to define the tests needed to validate the app's functionality.

3. Don’t start packaging until you have the final OS image ready.

The operating system plays an important part in the packaging process; application behavior can vary depending on the Service Pack level installed, the features implemented (a good example is .NET Framework), other base components and software installed, and the configurations used (for instance, UAC enabled/disabled). If we package against an assumed target operating system image, we could find several problems with packages failing to install or applications malfunctioning.

4. Prepare a virtual machine with OS “gold image” and use snapshots for packaging.

It is important to have a fresh OS image available whenever you are going to package a new application. Using the term “fresh” doesn't mean a Windows OS without any applications installed; it should be the “gold image” from which the applications will later be deployed. Having virtual machines and snapshots available will be crucial to facilitate the process.

5. Try using silent installations as the first option for packaging.

Like I mentioned earlier, silent installation is the preferred method to start packaging an application. With it, we ensure that our app is installed in a manner supported by the vendor. A customized MSI file can give us a higher level of personalization and even improve package size and deployment, but it could also force us to troubleshoot the application's functionality later because we excluded files it needs.

Also keep in mind that several applications are not suited to be re-packaged into a new MSI, and even though you can create a customized package, the deployment process won't succeed.

6. Try not to think of packaged applications as independent from their OS or other applications.

Since we are not virtualizing applications, these packages won't live in a “bubble”; they will have dependencies on OS components as well as on other applications. We usually package an application in the best-case scenario, such as a fresh installation from a virtual machine snapshot. We must check whether the application requires a reboot to complete; if we are deploying several applications, our process could stall waiting for a reboot from the previous package. Also, one application's requirement could conflict with another application's.

Try to think out-of-the-box and review any dependencies that could appear in production.

7. Be methodical in the application packaging process

Plan your packaging and deployment processes; document what you are going to do and the tests you perform; prepare checklists and anything else that can help you and others along the way. If any constraints are found in the packaging process (for instance, applications that require a reboot to complete deployment), write them down and use that information to adjust your deployment planning.

Having a predictable way to package applications will help you and colleagues in their jobs.


In the next post I’ll be reviewing some examples and hands-on processes for applications silent and customized installations.

Free eBook: Windows 8 for IT Professionals

October 23, 2012 at 11:25 pm | Posted in Books, Free Stuff | 1 Comment


I haven't had much time to publish new articles on my blog, but I found this brand new publication and thought it could be useful for a lot of people out there: “Introducing Windows 8, an Overview for IT Professionals” (preview version).


This book covers some quite important topics that every IT pro considering implementing Windows 8 in their company should read carefully. Here's a short summary of the topics included (I'm just naming a few; the entire list is available in the download):

  1. Overview
  2. Experiencing Windows 8
  3. Windows 8 for IT Pros
    • Customizing and configuring Windows 8
    • Client Hyper-V
    • Redesign NTFS
    • PowerShell 3.0
  4. Preparing for Deployment
    • Windows 8 SKUs
    • Application compatibility
    • User state migration
    • Windows To Go
  5. Deploying Windows 8
    • Windows Assessment and Deployment Kit
    • Deployment and Imaging
    • User state migration tool
    • MDT 2012 Update 1
    • SCCM 2012 with SP1
    • Desktop Virtualization
  6. Delivering Windows Apps
  7. Windows 8 Recovery
    • DaRT
  8. Windows 8 Management
    • Group Policy Improvements
    • Windows Intune
    • Mobile device support
  9. Windows 8 Security
  10. Internet Explorer 10
  11. Windows 8 virtualization
    • Virtual Desktop Infrastructure
    • Application Virtualization
    • User State virtualization

I have not read it completely, but from what I've seen so far, the content is not fully detailed with step-by-step guides; it does contain valuable information and guidance that should be read if you are implementing or managing Windows 8.


App-V Advanced Book Giveaway and the Happy Winner

July 31, 2012 at 7:44 pm | Posted in App-V, Books | Leave a comment


The giveaway contest for my latest book, Microsoft Application Virtualization Advanced Guide, ended June 30th. Now we can confirm that our happy winner has received the book.

Rajesh Attaluri from the UK is the winner of this contest and is sharing a pic with us. Thank you, Rajesh!


Thank you all for participating in this contest. As a reminder, the book is available at the following stores: Packt Publishing, Amazon.com, Amazon.co.uk, Barnes & Noble and Safari Books Online.

App-V Advanced Guide Book Giveaway!

June 11, 2012 at 12:48 pm | Posted in App-V, Books, Cool Stuff, Free Stuff | 1 Comment


As I did for my first book, to celebrate the publication of my second App-V book, Microsoft Application Virtualization Advanced Guide, I'm giving away a free paperback copy to one of my readers.


Here’s a short summary for those who want to participate:

  • Email me at augusto@augustoalvarez.com.ar with the subject: “App-V Advanced Book”.
  • Include in the email body your full name plus the address where you would like for us to send the copy.
  • I'll close the contest on June 30 (2012, just in case). All emails sent by that date will be included in the drawing, which will be completely random.
  • I’ll notify the winner in the following days and we’ll ship a free copy of “Microsoft Application Virtualization Advanced Guide”.

To avoid any problems, here are some disclaimers:

  • Only one email per person will be included. Do not use different mail accounts to participate several times.
  • Emails that don't include the person's full name and address will not be considered valid.
  • We'll cover the shipping expenses, but we are not responsible for extra fees or taxes other countries may apply to the package.
  • Please don't send any emails requesting exceptions to this contest (like asking for a digital copy of the book); I'm not allowed to do any of those.

Remember that the book is available in the following stores: Packt Publishing; Amazon.com; Amazon.co.uk; Barnes & Noble and Safari Books Online.

[Interview] Question and Answer Session with Rod Trent (CEO, myITforum.com)

May 21, 2012 at 12:42 pm | Posted in Interviews, System Center | 1 Comment


A while ago I started this Q&A series of posts by interviewing Aaron Parker (App-V MVP and reviewer of both of my App-V books); continuing the series, this time it's Rod Trent's turn (myITforum.com owner and CEO).

Rod has been contributing to the IT community in several ways: evangelizing technical communities, writing books and articles, and of course engaging with the large System Center community around the globe through myITforum.com.

You can find more about Rod following him on Twitter: @rodtrent.

Here’s the Q&A:

1. To start with the interview, can you give us a quick synopsis about yourself, your experience and myITforum.com?


I'm a father of four children, ranging from 3 years old to 21 years old, and my wife, Megan, and I just recently celebrated our 22nd wedding anniversary. I am a faithful, church-going Christian, an avid gadget fan, a die-hard old-television-show and cartoon buff, a health and exercise freak, a long-time evangelist of System Center products, and a part-time missionary to the Chinese people.

I have written many books, thousands of articles, and speak at various conferences and User Groups during the year, but my main professional focus is evangelizing technical communities on the web and in “real life”.

I have been in IT for over 25 years. I actually got my start as a computer salesman, moved on to managing a computer repair center, and then finally migrated to IT in the early 1990s, working for a large accounting firm. I have been blessed with a real aptitude for just being able to “fix things.” IT is the perfect industry for that. So, while I've kept the techie side, over the years, through working with myITforum.com, I've also become a marketing person. myITforum.com flourishes through the way we highlight and interact with our sponsors. We don't charge to be part of the community; instead, our sponsors help offset our costs, allowing everything on myITforum.com to be free. We don't believe that support, community, or content should ever have a price tag. So, while it was never an intention of mine to become a marketer, my experience over the last 10 years has been vast, and has served our sponsors well enough to continue building a strong and large community.

Way back in the SMS 1.x days, the product was barely supported by Microsoft itself. So a grass-roots community for SMS 1.x was started on an old BackOffice support site called Swynk.com. I ran the SMS section of the web site. I posted articles and tips for SMS since I worked with it daily in my job at a top accounting firm. SMS 1.x actually saved our lives at work, giving us the ability to manage desktops with only a handful of support people. So, I simply started sharing my knowledge online. After a few months of success, I was offered a section manager job that paid something like $15 per month and a free technology book.

Traffic and popularity continued to increase, and the community grew larger and larger. Then, at the SMS and Windows 2000 conference (what MMS was called prior to being the Microsoft Management Summit) I was told that Swynk.com was going to be sold to Internet.com. This created a huge issue, as the community cried out with the knowledge that it could, potentially, be broken up.

So, with a little ingenuity and a great set of industry contacts, myITforum.com was born. The community was offline for maybe 2 months, and then we unveiled myITforum.com 1.0. myITforum.com has gone through numerous changes over the years, but the basic premise has remained. We just help people – and give folks a central location for support, education, training, networking, and honest, direct feedback on System Center products.

There’s much, much more to it than that, but that’s the basics. And, the basics work. myITforum.com now receives a little over 145,000 visitors a day, all looking for help and all looking to learn how to manage their environments.

As many probably know, our success with myITforum.com and our great partnership with Microsoft have led to additional opportunities to provide community support services for various Microsoft conferences like TechEd and the Microsoft Management Summit. We always look forward to the great opportunity to “communitize” boring technical events and give them that something special to make them memorable and extra valuable.



2. You’ve been really close to System Center evolution over the past years, what do you think about the 2012 editions of this suite? Having all products with the 2012 editions will make a difference in the market?


Pulling the products together is a smart idea, and should really prove gains for Microsoft. ConfigMgr has been the real winner for so long, with very little uptake for the other products. Hopefully, bringing the products together, particularly from a licensing standpoint, will allow the other products to catch hold. It will be interesting to watch as customers try out products they would not have before, simply because the products are now part of the license agreement.


3. In the past there weren't many companies that decided to implement the full System Center suite (SCCM, SCVMM, SCDPM, and SCOM); they chose other technologies or simply did not use any tool for specific tasks. What are the challenges a company should take note of when considering implementing the complete suite (including Service Manager)?

Glad you mentioned Service Manager as part of the question. There are pieces of the System Center suite that still seem a bit convoluted and beta-esque. Service Manager is one of those components. But, whether it's Service Manager or some other product, education is the key. Those that have been familiar with specific products, and the way they work, will need to get up to speed quickly and understand how they can all work together. Orchestrator, in my opinion, is the glue between all of the products, and it may be the single most important piece to understand first.


4. Which products and features do you think CTOs and IT decision makers should pay the most attention to in the System Center 2012 suite?

From a CTO and decision maker standpoint, I believe the provisioning of the private cloud will be one of the most important aspects. As shown at MMS 2012, you can provision a private cloud in less than 30 seconds. This enables IT organizations to save time, money, and provide SLA and support at unimaginable levels.


5. In the System Center 2012 suite, is there a feature or product that you think might be improved or a missing functionality that should be included in a R2?

I hate to keep harping on Service Manager, but additional functionality is needed to make it easier to work with. Also, in ConfigMgr, Microsoft has to somehow get past the Exchange connector function when supporting mobile devices.


6. Regarding companies' investments in IT (hardware, services, products, human resources), how do you think the next few years will look? Is the cloud model taking over companies' strategies?

The Cloud is on the horizon, but from what I hear in real circles, Microsoft believes it’s closer than anyone else does. It makes sense that they believe this, though, since they are putting so much behind development and marketing of the Cloud from the Microsoft perspective. I believe Microsoft will be highly successful in the private cloud – as to when, that’s up to the customer to decide.


7. With thousands of companies using iPads, iPhones, Android phones and tablets, and so on, what is the best approach to client device management in order to maintain compliance, availability, and security on these devices?

In line with what I mentioned previously, ConfigMgr and even Windows Intune help to better manage devices like iOS and Android. In fact, Windows Intune may actually do it a bit better than ConfigMgr right now, but I fully expect those functions to show up in ConfigMgr over the life of the 2012 product. If a company wants to manage these devices right now from a full management perspective, they will still need to source a 3rd party; one that provides full integration with ConfigMgr. Odyssey Software (now part of Symantec) is still the best 3rd party solution available.


8. About virtualization and particularly VDI, do you think this trend regarding virtualizing desktops will increase significantly in the following years?

I definitely believe this will happen. In fact, I believe this will become so seamless and so common that it becomes part of normal operating procedure. Right now, VDI gives organizations the ability to not have to worry about compatibility issues with corporate software as they migrate to new technologies. In the very near future, possibly, this will lead to PC operating systems supplied solely from the cloud.

I hope you enjoyed this interview; I’ll get back soon enough with more Q&A sessions.
