How to set up VMFleet to stress test your Storage Spaces Direct deployment

26 May

Following Microsoft’s Storage Spaces Direct demos at the Ignite conference and the Intel Developer Forum, Microsoft recently published a collection of PowerShell scripts known as “VMFleet“.

VMFleet is basically a collection of PowerShell scripts that create a large number of VMs and run stress tests inside them (mostly with the DISKSPD tool) to test the performance of an underlying Storage Spaces Direct deployment, or simply for demonstration purposes.

After you have downloaded the files it is not quite obvious how to get started, as the included documentation does not give you simple step-by-step guidelines for getting everything up and running.

So I decided to write my own short guide with the steps needed to get VMFleet up and running.

Update 03.01.2017: Since I wrote this post in May 2016, Microsoft has apparently extended VMFleet with some new scripts to set up the environment, so I have updated the setup steps below. Thanks to my colleague Patrick Mutzner for pointing that out!

Prerequisites:

  • A VHDX file with Windows Server 2012 R2 / 2016 Core installed and the password of the local Administrator account set (sysprep must not be run!).
    A fixed VHDX file is recommended to eliminate “warm-up” effects when starting the test runs.
    Note: The VHDX should be at least 20GB in size because the VMFleet scripts will create a 10GB load test file inside each VM.
  • A functional Storage Spaces Direct Cluster (with existing Storage Pool and no configured Virtual Disks)

VMFleet Setup:

  1. First you need to create one volume/CSV per node to store the test VMs (a combined PowerShell sketch of steps 1 to 12 follows after this list).
  2. Then create an additional CSV to store the VMFleet scripts and for the collection of the test results from the VMs.
  3. Extract the ZIP file with the VMFleet scripts on one of the cluster nodes.
    (For example extract the downloaded ZIP file to C:\Source on the first cluster node)
  4. Run the install-vmfleet.ps1 Script. This will create the needed folder structure on the “collect” CSV. (The volume created at Step 2)
  5. Copy the DISKSPD.exe to C:\ClusterStorage\Collect\Control\Tools

    Note: Everything in this folder will later automatically be copied into every test VM. So you can also copy other files to this folder if you need them later inside the test VMs.

  6. Copy the template VHDX to C:\ClusterStorage\Collect 
  7. Run the update-csv.ps1 script (under C:\ClusterStorage\Collect\Control) to rename the mount points of the CSV volumes and to distribute the CSVs evenly across the cluster so that every node is the owner of one CSV.
  8. Now it’s time to create a bunch of test VMs on the per-node CSVs. This is done with the create-vmfleet.ps1 script.

    As parameters you must specify the path to the template VHDX file, how many VMs per node you would like to create, the Administrator password of the template VHDX, and a username and password with access to the cluster nodes (HostAdmin in this example). This account will be used inside the VMs to connect back to the cluster so that the scripts running inside the VMs get access to the collect CSV (C:\ClusterStorage\Collect).
  9. Because create-vmfleet.ps1 creates the VMs with the default values of the New-VM cmdlet, you should now run the set-vmfleet.ps1 script to change the vCPU count and memory size of the test VMs to your desired values.
  10. (Optional) Before you begin with the tests you can check your Storage Spaces Direct cluster configuration with the test-clusterhealth.ps1 script. It checks all nodes for RDMA configuration errors and performs some basic health checks of the Storage Spaces Direct setup.
  11. As the last preparation step, start the watch-cluster.ps1 script. This gives you a nice dashboard-like overview of the most interesting storage performance counters across all nodes in the cluster, so you can follow the storage load and performance at a glance.
  12. At this point you are ready to start your first test run.
    Simply start all test VMs manually or with the start-vmfleet.ps1 script. After the VMs have booted up they automatically look for the run.ps1 script on the collect CSV (at C:\ClusterStorage\Collect\Control). By default the test run is in a paused state, so to start the actual load test simply run the clear-pause.ps1 script. This kicks off diskspd.exe in every VM and you can watch how the numbers from watch-cluster.ps1 explode… 😉
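
For reference, here is roughly what steps 1 to 12 look like in PowerShell on the first cluster node. Treat it as a sketch, not a copy-and-paste recipe: the volume sizes, VM count, passwords and especially the parameter names of the VMFleet scripts are assumptions (the parameters have changed between VMFleet versions, so check the script headers before running them).

  # Sketch only - names, sizes and script parameters are assumptions
  # One CSV per node for the test VMs plus one 'collect' CSV (steps 1 and 2)
  $poolName = (Get-StoragePool -IsPrimordial $false).FriendlyName
  Get-ClusterNode | ForEach-Object {
      New-Volume -StoragePoolFriendlyName $poolName -FriendlyName $_.Name -FileSystem CSVFS_ReFS -Size 1TB
  }
  New-Volume -StoragePoolFriendlyName $poolName -FriendlyName collect -FileSystem CSVFS_ReFS -Size 200GB

  # Folder structure, DISKSPD and the template VHDX (steps 3 to 6)
  C:\Source\install-vmfleet.ps1 -source C:\Source
  Copy-Item C:\Source\diskspd.exe C:\ClusterStorage\Collect\Control\Tools
  Copy-Item C:\Source\template.vhdx C:\ClusterStorage\Collect

  # Rename and distribute the CSV mount points (step 7)
  C:\ClusterStorage\Collect\Control\update-csv.ps1

  # Create, size and check the test VMs (steps 8 to 10)
  C:\ClusterStorage\Collect\Control\create-vmfleet.ps1 -basevhd C:\ClusterStorage\Collect\template.vhdx `
      -vms 20 -adminpass 'P@ssw0rd' -connectuser 'HostAdmin' -connectpass 'P@ssw0rd'
  C:\ClusterStorage\Collect\Control\set-vmfleet.ps1 -ProcessorCount 2 -MemoryStartupBytes 2GB
  C:\ClusterStorage\Collect\Control\test-clusterhealth.ps1

  # Dashboard (step 11, best in a separate PowerShell window) and first run (step 12)
  # C:\ClusterStorage\Collect\Control\watch-cluster.ps1
  C:\ClusterStorage\Collect\Control\start-vmfleet.ps1
  C:\ClusterStorage\Collect\Control\clear-pause.ps1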

To change the test pattern, simply change the parameters in the run.ps1 script and either restart all test VMs (with the stop-vmfleet.ps1 and start-vmfleet.ps1 scripts) or pause and resume the tests with the set-pause.ps1 and clear-pause.ps1 scripts.
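
For example, a quick way to switch patterns without rebooting the whole fleet could look like this. Script names as above; which values you change inside run.ps1 (block size, write ratio, outstanding I/O, …) depends on the VMFleet version you have.

  # Pause the running fleet, adjust the I/O pattern in run.ps1, then resume
  C:\ClusterStorage\Collect\Control\set-pause.ps1
  notepad C:\ClusterStorage\Collect\Control\run.ps1
  C:\ClusterStorage\Collect\Control\clear-pause.ps1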

That’s it. VMFleet is ready. So have fun testing your S2D deployment and looking at (hopefully) some awesome IOPS numbers. 🙂

Replicate or migrate VMware VMs with a client OS to Azure with Azure Site Recovery

11 Apr

The official (not working 🙁 ) way:
To replicate VMware VMs to Azure you have to install the ASR Mobility Service in the VM. But what if the VM is running a client OS (Windows 7, 8.1, 10) instead of Windows Server? Officially this is not supported by Azure Site Recovery, and when you try to install the Mobility Service you get the following nice, or not so nice 😉, message:

[Screenshot: Mobility Service setup error on a client OS]

The unofficial but working way: 
However, apart from the fact that a single VM in Azure does not qualify for an SLA and may have downtime, there is technically no reason why you cannot run a client OS in an Azure VM, especially if the VMs are used for dev/test scenarios. So why should it not be possible to replicate or migrate these VMs to Azure with ASR, you may ask? And you know what? With a little trick (installing the MSI directly from the command line) it is actually possible. Here are the steps needed to get the Mobility Service running on a client OS:

  1. Get the Mobility Service .exe file from your ASR Process Server and copy it to a temporary location on the VM which you want to replicate to Azure. You can find the setup file in the installation folder of the Process Server under home\svsystems\pushinstallsvc\repository (e.g. D:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\pushinstallsvc\repository\Microsoft-ASR_UA_9.0.0.0_Windows_GA_31Dec2015_Release.exe)
  2. Run the .exe and make a note of the folder to which the installer extracts the files
  3. Keep the Setup Wizard open and copy the content of the folder from step 2 to a temporary location
  4. Now you can install the Mobility Service MSI directly with msiexec from the command line (a hedged example follows after this list).
  5. Finally start “C:\Program Files (x86)\Microsoft Azure Site Recovery\agent\hostconfigwxcommon.exe” and enter the passphrase of the ASR Process Server to connect the agent to the ASR server.
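
A minimal sketch of the msiexec call from step 4. The folder and MSI file name below are assumptions; point it at the MSI you actually find among the files copied in step 3. The standard /i, /qn and /L*V switches install silently and write a log.

  # MSI path and name are assumptions - use the MSI extracted by the setup wizard
  msiexec.exe /i "C:\Temp\MobSvc\MobilityService.msi" /qn /L*V "$env:TEMP\MobSvcInstall.log"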

That’s it. Now you can replicate and fail over the VM with ASR like any other Windows Server VM. Success! 🙂

Install Azure Stack POC into a VM

1 Feb

Last week Microsoft released a first preview of Microsoft Azure Stack, the software stack which allows you to run Azure in your own datacenter.

Officially, a physical server with quite a lot of CPU cores and memory is required to deploy the Azure Stack Technical Preview. Because I do not have any spare servers in my home lab to dedicate exclusively to it, I looked for an alternative and tried to deploy it in a VM. Here is a short walkthrough of how to do it, and yes, it actually works. 🙂

Requirements:
First of all you need the following:

  • A Hyper-V Host installed with Windows Server 2016 TP4
    (TP4 is needed for nested virtualization feature)
  • The “Microsoft Azure Stack Technical Preview.zip” file which you can get from here: https://azure.microsoft.com/en-us/overview/azure-stack/
  • At least 32GB of RAM and 150GB of free Disk space available

Preparation:
First, extract the Microsoft Azure Stack Technical Preview.zip onto the local hard drive of the Hyper-V host. This gives you a folder with an .exe and 6 .bin files.


Run the Microsoft Azure Stack POC.exe to extract the actual data needed to deploy the Azure Stack Preview. This creates the “Microsoft Azure Stack POC” folder.

Then copy the “WindowsServer2016Datacenter.vhdx” outside of the “Microsoft Azure Stack POC” folder and rename it to e.g. MicrosoftAzureStackPOCBoot.vhdx.


Mount (double click) the copied VHDX and copy the whole “Microsoft Azure Stack POC” folder into it.

Then dismount the VHDX through Explorer or with PowerShell (Dismount-VHD).
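
The preparation steps above could also be scripted roughly like this. It is only a sketch under assumptions: the working folder and the drive letter the mounted VHDX gets (E: here) will differ on your machine.

  # Paths and the mount drive letter are assumptions - adjust to your environment
  Copy-Item 'C:\MASPOC\Microsoft Azure Stack POC\WindowsServer2016Datacenter.vhdx' 'C:\MASPOC\MicrosoftAzureStackPOCBoot.vhdx'
  Mount-VHD -Path 'C:\MASPOC\MicrosoftAzureStackPOCBoot.vhdx'      # assume it comes up as E:
  Copy-Item 'C:\MASPOC\Microsoft Azure Stack POC' 'E:\' -Recurse
  Dismount-VHD -Path 'C:\MASPOC\MicrosoftAzureStackPOCBoot.vhdx'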

Now it’s time to create a “little” 😉 VM with at least 32GB of RAM and as many vCPUs as your hardware can handle.
Note: Dynamic Memory must be disabled on this VM!

Use the copied VHDX from above as the first disk (boot disk) of this VM and add 4 more empty data disks (min. 140GB each).
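
If you would rather script the VM creation, a sketch like the following should cover it. The VM name, paths, vCPU count, switch name and disk sizes are assumptions to adjust to your environment.

  # VM name, paths, switch name and sizes are assumptions
  New-VM -Name 'AzureStackPOC' -MemoryStartupBytes 32GB -VHDPath 'C:\MASPOC\MicrosoftAzureStackPOCBoot.vhdx' -SwitchName 'External'
  Set-VM -Name 'AzureStackPOC' -ProcessorCount 8 -StaticMemory     # Dynamic Memory off

  # Four additional empty data disks (min. 140GB each)
  1..4 | ForEach-Object {
      $vhd = "C:\MASPOC\DataDisk$_.vhdx"
      New-VHD -Path $vhd -SizeBytes 150GB -Dynamic | Out-Null
      Add-VMHardDiskDrive -VMName 'AzureStackPOC' -Path $vhd
  }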

Enable MAC address spoofing on the network adapter.
This is needed for network connectivity of the nested VMs which the Azure Stack setup will create.
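
The same setting via PowerShell would look roughly like this (VM name assumed as above):

  # Allow the MAC addresses of the nested VMs to pass the virtual switch
  Set-VMNetworkAdapter -VMName 'AzureStackPOC' -MacAddressSpoofing On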

Lastly, the nested virtualization feature (new in TP4) must be enabled on the vCPUs of the VM. Do this with the following PowerShell command:
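
(The VM name below is an assumption; run the command while the VM is powered off.)

  # Expose the virtualization extensions of the host CPU to the VM's vCPUs
  Set-VMProcessor -VMName 'AzureStackPOC' -ExposeVirtualizationExtensions $true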

Azure Stack Deployment:
Now start the VM, answer the questions of the Windows Setup and log in with the local Administrator account.

If you have less than 96GB RAM assigned to the VM you have to tweak the deployment script before you start the setup. Daniel Neumann has written an excellent blog post about the necessary modifications: http://www.danielstechblog.de/microsoft-azure-stack-technical-preview-on-lower-hardware/

Now, finally, you can run the PowerShell deployment script (Deploy Azure Stack.ps1) as described in the original documentation from Microsoft. The script will take several hours to finish, so better get yourself a cup of coffee or take a “little” break and hope everything goes well. 🙂 If it does, you will get a functional Azure Stack installation in a VM.

Update 09.03.2016:
Although the setup works fine in the VM and you can even provision subscriptions and tenant VMs, there are some serious issues with networking when using this nested setup. As soon as you connect to a fabric VM (with RDP or the VM console), the VM with the virtual Hyper-V host will crash.
Many thanks to Alain Vetier for pointing this out and sharing his finding here!
See also his comments below.

Configuring Hyper-V Hosts with converged networking through PowerShell DSC

10 Dec

Lately I had to rebuild the Hyper-V hosts in my home lab several times because of the release of the different TPs of Windows Server 2016. This circumstance (and the fact that I am a great fan of PowerShell DSC 😉) gave me the idea to do the whole base configuration of the Hyper-V hosts, including the LBFO NIC teaming, vSwitch and vNICs for the converged networking configuration, through PowerShell DSC.
But soon I realized that the DSC Resource Kit from Microsoft provides DSC resources only for a subset of the needed configuration steps. The result was some PowerShell modules with my custom DSC resources.

My custom DSC resources for the approach:

cLBFOTeam: To create and configure the Windows built-in NIC Teaming
cVNIC: To create and configure virtual network adapters for Hyper-V host management
cPowerPlan: To set a desired Power Plan in Windows (e.g. High Performance Plan)

You can get the modules from the PowerShell Gallery (Install-Module) or from GitHub. They will hopefully help everyone who has a similar intent. 😉
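
Getting them from the gallery and checking what they provide is a one-liner each; a minimal sketch (module names as published above, everything else standard PowerShell):

  # Pull the modules from the PowerShell Gallery and list the DSC resources they contain
  Install-Module -Name cLBFOTeam, cVNIC, cPowerPlan
  Get-DscResource -Module cLBFOTeam, cVNIC, cPowerPlan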

More to come:
Yes, I am not quite finished yet and I have more in the pipeline. 😉
Currently I am also working on a fork of the xHyperV module with an adapted xVMSwitch resource that has a parameter to specify the MinimumBandwidth mode of the switch.
Furthermore, I am also planning to add support for SET (Switch Embedded Teaming) in Windows Server 2016 to the xVMSwitch resource.

So you may soon read more about this topic here. In the meantime, happy DSCing! 🙂

Azure Backup: the future (replacement) of DPM?

9 Oct

As Aidan Finn (and probably many others) wrote on his blog, Microsoft has published a new version of the Azure Backup software. The new software now has the ability to back up workloads such as Hyper-V VMs, SQL Server, SharePoint and Exchange on-premises to disk (B2D) and to the cloud for long-term retention. All in all, it sounds very similar to a DPM installation with the Azure Backup agent. So it seems that DPM has been reborn, apart from the System Center suite, as Azure Backup. I decided to do a test installation and here is how it looks:

  1. First you need an Azure subscription with a Backup Vault. For my test I created a new vault.
  2. Once the Backup Vault is created you can download the new Azure Backup setup.
  3. In addition to the Azure Backup setup you must also download the vault credentials, which you will need later during the setup.
  4. After the download you need to extract the files and then start setup.exe, which launches the setup wizard. If you are familiar with DPM you will notice the remarkable resemblance. Note the links for the DPM Protection Agent and DPM Remote Administration on the first screen 😉

Finally, after setup you have a server with Azure Backup. The console still looks like a DPM clone. Except that the ability to back up to tape is missing, everything is very similar to the management console of DPM 2012 R2.

If MS really uses DPM as the basis for Azure Backup, I am very curious to see how it will tune the underlying DPM in the future to handle big data sources like file servers with multiple TBs of data, which is not unusual these days. That is where DPM has a really big drawback at the moment. We will see. 🙂

Build a Windows Server 2016 TP3 VHD with minimal GUI

6 Sep

In TP3 the installation options were changed. Therefore, when you create a VHD(X) directly from the ISO with the Convert-WindowsImage.ps1 script, you have the choice of creating a VHD with Core Server or the full GUI with Desktop Experience, but nothing in between. To create a VHD with the Minimal Server Interface (Core Server with Server Manager and MMC GUIs) or the Server Graphical Shell (without Desktop Experience), you have to add the corresponding features with DISM.

This is how you add the minimal server interface to a VHD with the core server installation:

  1. Create a Core Server VHD with the Convert-WindowsImage.ps1 Script
  2. Mount the Windows Server 2016 TP3 ISO (double click it)
  3. Mount the install.wim from the ISO with DISM
    (Index 4 for Datacenter, 2 for Standard Edition)
  4. Add the Server-Gui-Mgmt-Infra (for the Minimal Server Interface) or the Server-Gui-Shell (for the full Server Graphical Shell) feature to the VHD by specifying the mounted WIM as the source (see the sketch after this list).
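
A rough DISM sketch of steps 2 to 4. All paths, the drive letters of the mounted ISO and VHD, and the image index are assumptions; adjust them to your environment.

  # Mount the install.wim from the mounted ISO (D: assumed), Index 4 = Datacenter
  Dism /Mount-Image /ImageFile:D:\sources\install.wim /Index:4 /MountDir:C:\WimMount /ReadOnly

  # Mount the Core Server VHD created in step 1 (assume it comes up as E:)
  # and add the feature, using the mounted WIM as the source
  Mount-VHD -Path C:\VHDs\WS2016TP3.vhdx
  Dism /Image:E:\ /Enable-Feature /FeatureName:Server-Gui-Mgmt-Infra /All /Source:C:\WimMount\Windows\WinSxS

  # Clean up
  Dismount-VHD -Path C:\VHDs\WS2016TP3.vhdx
  Dism /Unmount-Image /MountDir:C:\WimMount /Discard
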
Update, 11/20/2015:
This does not work anymore with TP4, which is now publicly available, as the “Server-Gui-Mgmt-Infra” feature is gone. You can add the “Server-Gui-Mgmt” feature with DISM, which gives you a similar experience, but the feature is not even listed in PowerShell (Get-WindowsFeature), so I think this is probably far from supported.
In other words: no “Minimal Server Interface” in TP4 anymore.
 

Registry keys to tune the data source colocation in DPM 2012

23 Aug

By default, DPM creates two volumes for every data source (a replica volume and a shadow copy volume). For Hyper-V and SQL databases, DPM can colocate multiple data sources on a single replica and shadow copy volume. This is a relatively well-known setting. The option is especially useful for backing up large numbers of Hyper-V VMs.

What is less known is the possibility to tune the initial size of the replica volume which DPM chooses when a new protection group with colocation is created. Continue reading

How to Build a Source for Windows 10 Enterprise from the official Insider Preview ISO (Build 10162)

4 Jul
The official Windows 10 Preview ISO from Microsoft installs only the Pro or Core edition, so it cannot be used to install or upgrade the Enterprise edition. However, the sources can easily be “upgraded” to the Enterprise edition using DISM on an existing Windows 10 installation:

 

  1. Download the official ISO from the Windows Insider Preview Website in your desired Language
  2. Mount the ISO file and copy the content to a new folder. e.g. C:\temp\W10_10162
  3. Start PowerShell as admin and mount the install.wim from the sources folder (see the combined sketch after this list).
  4. Change the edition of the install.wim file with the Set-WindowsEdition command.
  5. After that, dismount the image and save the changes.
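
A sketch of steps 3 to 5 with the DISM PowerShell cmdlets. The mount folder and the image index are assumptions; adjust them to your copy of the sources.

  # Files copied from an ISO keep the read-only attribute - clear it first
  Set-ItemProperty C:\temp\W10_10162\sources\install.wim -Name IsReadOnly -Value $false

  # Mount the image (index 1 assumed), switch the edition, then save the changes
  New-Item -ItemType Directory -Path C:\temp\mount | Out-Null
  Mount-WindowsImage -ImagePath C:\temp\W10_10162\sources\install.wim -Index 1 -Path C:\temp\mount
  Set-WindowsEdition -Path C:\temp\mount -Edition Enterprise
  Dismount-WindowsImage -Path C:\temp\mount -Save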

Now you can directly run setup.exe from C:\temp\W10_10162 to do an in-place upgrade of an older Windows 10 Enterprise build. If you prefer a clean installation, copy the files to a bootable USB stick and reboot.

If you get asked for a product key, the one mentioned in this thread should work:
https://social.technet.microsoft.com/Forums/en-US/065a430e-a5c2-4536-b2d0-1c62134e4fa8/cant-activate-win-10-enterprise-after-upgrading-to-build-10162?forum=WinPreview2014Setup

PowerShell DSC resource to enable/disable Microsoft Update

16 Jun

Ever get tired of manually setting the check box for Microsoft Update in the Windows Update settings on a bunch of servers (e.g. in a new test lab setup)? Then this one is for you.

I recently wrote, mostly as an exercise, a PowerShell DSC module with a resource to enable (or disable) Microsoft Update.

I have then published the module on GitHub to get another exercise. 😉
So if you are interested you can get the module from here:

https://github.com/J0F3/cMicrosoftUpdate

After you get the module, enabling Microsoft Update will look like this:
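
A sketch of such a configuration. The resource name and its properties below are assumptions derived from the module name; check the examples in the GitHub repository for the exact schema.

  # Resource and property names are assumptions - see the module's examples for the real schema
  Configuration EnableMicrosoftUpdate
  {
      Import-DscResource -ModuleName cMicrosoftUpdate

      Node 'localhost'
      {
          cMicrosoftUpdate 'MicrosoftUpdate'
          {
              Ensure = 'Present'
          }
      }
  }

  EnableMicrosoftUpdate -OutputPath C:\DSC\EnableMicrosoftUpdate
  Start-DscConfiguration -Path C:\DSC\EnableMicrosoftUpdate -Wait -Verbose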

Happy DSCing! 🙂

The connection between Hyper-V Network Virtualization (NVGRE) and MTU Size (and Linux)

26 Apr

In a network with Hyper-V Network Virtualization (using NVGRE encapsulation) the MTU (Maximum Transmission Unit) size is 42 bytes smaller than in a traditional Ethernet network (where it is 1500 bytes). The reason for this is the NVGRE encapsulation, which needs the 42 bytes to store its additional GRE header in the packet. So the maximum MTU size with Hyper-V Network Virtualization is 1458 bytes.

The problem with Linux VMs:
For VMs running Windows Server 2008 or newer this should not be a problem, because Hyper-V has a mechanism which automatically lowers the MTU size of the VM’s NIC if needed (documented on the TechNet Wiki).
But with VMs running Linux you could run into a problem, because the automatic MTU size reduction does not seem to work correctly with Linux VMs:
https://support.microsoft.com/en-us/kb/3021753/
This has the effect that the MTU size in the Linux VMs stays at 1500 and therefore you can experience some very weird connection issues.

The Solution:
So there are a few options to resolve this issue:

  • Set the MTU size for the virtual NICs of all Linux VMs manually to 1458 Bytes
  • Enable jumbo frames on the physical NICs of the Hyper-V hosts. Then there is no need to lower the MTU size in the VMs (see the sketch after this list).
  • (or wait for kernel updates for your Linux distribution which implement the fix from KB3021753)
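
For the second option, the jumbo frame setting on the hosts can be checked and changed with the NetAdapter cmdlets. A sketch only: the adapter names and the supported jumbo packet value (9014 bytes here) depend on your NICs and drivers, so verify what your hardware offers first.

  # Adapter names and the jumbo packet value are assumptions - check your NIC/driver first
  Get-NetAdapterAdvancedProperty -Name 'NIC1','NIC2' -RegistryKeyword '*JumboPacket'
  Set-NetAdapterAdvancedProperty -Name 'NIC1','NIC2' -RegistryKeyword '*JumboPacket' -RegistryValue 9014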