Useful PowerShell snippets for Storage Spaces Direct (S2D) operations

30 May

This is a quick post to share some simple PowerShell snippets I have created or found elsewhere which I find very handy when it comes to troubleshooting, setting up or operating Storage Spaces Direct (S2D).

1. Get the count of physical disks in each node:
When S2D is enabled in a cluster, Get-PhysicalDisk returns the physical disks from all cluster nodes. But when you want the disk count per node, the following script can be quite useful:
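A minimal sketch of such a script, using Get-StorageNode to count the physically connected disks per node:

    # Count the physically connected disks per storage node
    Get-StorageNode | ForEach-Object {
        [pscustomobject]@{
            Node      = $_.Name
            DiskCount = ($_ | Get-PhysicalDisk -PhysicallyConnected).Count
        }
    }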

2. Get the processed and completed GBytes of running Storage Jobs (e.g. Rebuild Jobs):
This one comes from Jaromir Kaspar and his ws2016lab project on GitHub, which is by the way a very nice project to build a test lab in a highly automated way. So all credit goes to him for this script.
It shows you the amount of GBytes the running storage jobs have already processed and how much still has to be processed until the jobs are finished.
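A sketch of the idea, built on Get-StorageJob and its BytesProcessed / BytesTotal properties:

    # Show processed and total GBytes of all running storage jobs
    Get-StorageSubSystem -FriendlyName 'Clustered*' | Get-StorageJob |
        Where-Object JobState -eq 'Running' |
        Select-Object Name, JobState,
            @{ N = 'GBProcessed'; E = { [math]::Round($_.BytesProcessed / 1GB, 2) } },
            @{ N = 'GBTotal';     E = { [math]::Round($_.BytesTotal / 1GB, 2) } }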

(original source: ws2016lab project)

Speaking at Experts Live Switzerland in Bern

9 Apr

On the 3rd of May the one-day country event of the Experts Live conference series will take place at the “Workspace Welle 7” in Bern and I am really looking forward to joining this event as a speaker. Together with my colleague and fellow MVP Michael Rueefli I will have a talk about Infrastructure as Code and DevOps.

We will show you what the buzzword Infrastructure as Code actually means and how Infrastructure as Code can be used in the real world in real projects and environments. Furthermore we will give you some guidelines on how you can leverage DevOps principles for platform and infrastructure automation so you, as an IT Pro, will be ready for the new agile and fast-paced future of IT.

If you have never heard about the Experts Live Switzerland event, here are some more details why you should join us on May 3rd in Bern 🙂

  • One day Conference
  • Completely held in German
  • 17 Sessions
  • 3 parallel Tracks
  • Top Community Speakers
  • Max. 180 attendees
  • Exhibitor area for partners
  • Initiated and managed by the community
  • Modern and easily accessible location
  • Focus on Microsoft Cloud, Datacenter and Workplace topics

Ah! And if you have not already, register right away. Only a few tickets are left!

Simple and fast way to ensure a PowerShell script always runs “as Administrator”

22 Jan

Sooner or later when you are writing PowerShell scripts you will have the situation where you want to ensure that the script is running with elevated user rights (aka “run as Administrator”). Often this is the case when the script should make some configuration changes, or when some cmdlets used in the script work only with elevated user rights.

When you search the web you can find several solutions with functions or if statements to check the rights of the user under which the script is currently running, and then abort if they do not have admin rights.
But actually there is a simple, built-in way to ensure that the script runs only in a PowerShell session which was started with “run as Administrator”.
Simply add the following line (with the #) at the top of your script:
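    #Requires -RunAsAdministrator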


When the script is then started in a normal (not elevated) PowerShell session, it fails with the following, very clear, error message:

The script ‘youscriptname.ps1’ cannot be run because it contains a “#requires” statement for running as Administrator. The current Windows PowerShell session is not running as Administrator. Start Windows PowerShell by using the Run as Administrator option, and then try running the script again.

This works with PowerShell 4.0 and later, and there are also other ‘#Requires’ statements which can be used in scripts, for example to ensure a specific version of a PowerShell module is installed.
A full reference can be found in the online PowerShell documentation.
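For example, to require a minimum PowerShell version or a specific module version (the module name and version here are just placeholders):

    #Requires -Version 5.0
    #Requires -Modules @{ ModuleName = 'Hyper-V'; ModuleVersion = '1.1' }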

Azure Stack – The Azure in your data center

28 Aug

With this year’s Microsoft Inspire conference (formerly Microsoft Worldwide Partner Conference – WPC) the long-awaited Microsoft Azure Stack became GA and is now orderable from hardware vendors. But before you order your own Azure Stack instance it’s important to know what Azure Stack exactly is and whether it even makes sense for you.

The continued Cloud OS Vision

(image source: Microsoft)

A long, long time ago 😉, about 4 years ago, together with the release of Windows Server and System Center 2012 R2, Microsoft came up with the vision to give customers a platform which is consistent with Azure. The idea behind that is that regardless of whether your application is running in Azure, in your on-premises data center or in the data center of a local service provider, the same platform is always underneath. But when we look back, with the 2012 R2 suite and the Windows Azure Pack as the customer-facing self-service portal, this goal was not really reached. In the meantime, even the consistent experience of the self-service portal is gone. Azure Pack was based on the “old” Azure Management Portal which is now, in public Azure, mostly replaced by the new ARM-based Azure Portal.

Azure Stack – the successor of Windows Azure Pack?
Since the announcement of Azure Stack (which is now nearly two years ago, by the way) there has been ongoing confusion in the IT world. For many, Azure Stack seems to be the successor and replacement for System Center and Windows Azure Pack, or simply anything Microsoft has released for the data center before Azure Stack. But this is not what Azure Stack is supposed to be. Even when the Cloud OS vision is clearly still recognizable in Azure Stack, it is a completely new product category which Microsoft has never done in this form before. Furthermore, it is not an alternative to Azure or a replacement for traditional virtualization infrastructures (based on System Center and Hyper-V, VMware or whatever). Azure Stack is much more Azure itself, or part of your Azure strategy. And therefore, you must commit to Azure when you want to use Azure Stack.

(image source: Microsoft)

The integrated system experience – or the Azure Stack appliance
So, what does “a new product category” mean? It is relatively simple. Azure Stack is not delivered as software which you can set up on your own defined hardware and configure for your individual needs. Azure Stack is basically delivered as an appliance which is specified, built and updated by the hardware vendor of your choice and Microsoft. Or in other words, Azure Stack is a SAN-like system which provides not storage but Azure services in your data center. This means, for you as a customer, that you have more time to focus on running applications, providing value-added services to your customers and developing modern cloud applications instead of keeping your virtualization infrastructure up & running.

New IT roles for operating the “appliance”
In Microsoft’s eyes, running an IT infrastructure in this new “appliance” form leads to two new roles in IT: the Cloud Architect and the Cloud Administrator or Operator.

The Cloud Architect is the one who ensures that the Azure Stack “appliance” gets properly integrated into the existing IT infrastructure (network, monitoring systems, identity system etc.). He also plans the offerings on the Azure Stack for internal or external customers. These are short-term tasks which can also perfectly be done by an external partner.

After Azure Stack is integrated into your IT infrastructure, the Cloud Operator or Administrator is responsible for operating the Azure Stack. But this is not a very highly skilled role and probably also not a very time-intensive task. Because of the appliance approach, Azure Stack is operated through a simple management web interface (like the Azure Portal) and not through complicated administrator consoles which require deep knowledge of the whole system. The Cloud Operator will mainly monitor the integrated Azure Stack system, and when a red light comes up he will either take simple remediation actions (e.g. restarting a service or applying an update) or contact the support which is provided jointly by Microsoft and the hardware vendors.

(image source: Microsoft)

Do I need Azure Stack?
Azure Stack is not and will not be the almighty platform for everyone and every use case. Azure Stack is for you when you want to adopt the cloud model and develop and run modern cloud applications which depend at most partially on (IaaS) VMs, but for various reasons you cannot go directly to Azure. Such reasons can, for example, be requirements for low latency, laws and regulations which restrict storing data outside of a specific country, or bad or no internet connectivity. For all other use cases there is still Hyper-V, System Center and Windows Azure Pack. They will be fully supported and maintained by Microsoft for, at least, the next 5 years. Windows Azure Pack, for example, is compatible with Windows Server 2016 and will be supported until 2027.

So in short this means:

Azure Stack is for you when:

  • You want to adopt the cloud model and focus on delivering services instead of building and operating infrastructures (no DIY infrastructure)
  • You want to develop or run modern cloud applications based on Azure services
  • But you cannot go to Azure (because of regulations, latency, bad connectivity, etc.)

Azure Stack is not your platform when:

  • You need traditional virtualization or even physical servers
  • You do not want to, or cannot, adopt the cloud model and use public cloud or Azure at all
  • You have a lot of legacy applications which need old operating systems (2008, 2008 R2, 2012…)

So, I need one. Where can I get it and what does it cost?
First, you must select your preferred hardware vendor. Today you have the choice between HPE, Dell EMC and Lenovo. In the future, systems from Cisco and Huawei will also be available. When you have selected a hardware vendor you must decide which size of integrated system you need. Currently configurations with 4, 8 or 12 nodes are available, which cannot be extended in the first 6 months. After that, Microsoft promises to come up with an update which adds the functionality to extend Azure Stack integrated systems.

After you have chosen your preferred vendor and size you will order the integrated system (hardware) directly from the hardware vendor and the hardware pricing is defined by the hardware vendor.

When it comes to licensing costs, Azure Stack works the same as Azure, which means you only pay for what you use (pay-as-you-use). Every service and every VM you provision on Azure Stack will be billed on an hourly or transaction basis, exactly like in public Azure. However, the prices are a bit lower because you already paid for hardware, power, connectivity etc. For completely disconnected Azure Stack setups, Microsoft also offers a “capacity model”, which allows you to license the whole capacity at once. This way you pay a fixed yearly fee, based on the count of physical cores in your system. For more details about the prices, the pricing datasheet from Microsoft gives you a great overview.

(This blog post has also been posted under http://itnetx.ch/blog)

Get insights about the performance of your Windows systems with Grafana

12 May

Ever dreamed about some mission-control-like dashboards to get a quick insight into the performance of your Windows systems? 😊

If yes, then you probably like a view like this:

So here is how you get such a dashboard for your system in 6 simple steps in under an hour:

Install a VM with Ubuntu Linux 16.04.2 LTS

Even though it is Linux, no rocket science is needed here 😊. Just download the ISO image from the Ubuntu website, attach it to your VM and boot from it. After that you get asked some simple questions about time zone, keyboard and partition settings. Most of them you can accept with the defaults, or simply choose your preferred language etc. Quite easy.

Set time zone to UTC

Log in to your Ubuntu system and change the time zone to UTC. As InfluxDB (the backend) uses UTC time internally, it is a clever idea to set the time zone of the system to UTC as well.
To do so run the following command. Then choose “None of the above” > “UTC”.
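    sudo dpkg-reconfigure tzdata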

Install InfluxDB

InfluxDB is the backend of the solution where all data is stored. It is a database engine which is built from the ground up to store metric data and do real-time analytics.
To install InfluxDB run the following commands on the Linux VM:
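A sketch of the usual installation steps for Ubuntu 16.04 (“xenial”); the repository details are an assumption and may have changed since this was written:

    # Add the InfluxData package repository and install InfluxDB
    curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
    echo "deb https://repos.influxdata.com/ubuntu xenial stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
    sudo apt-get update && sudo apt-get install influxdb
    sudo systemctl start influxdb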

Install Grafana

Grafana is the frontend which will generate your nice-looking dashboards from the data stored in InfluxDB. To install Grafana run the following commands on the Linux VM:
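Again a sketch; the assumption here is the packagecloud repository through which Grafana was distributed at the time:

    # Add the Grafana package repository and install Grafana
    curl https://packagecloud.io/gpg.key | sudo apt-key add -
    echo "deb https://packagecloud.io/grafana/stable/debian/ jessie main" | sudo tee /etc/apt/sources.list.d/grafana.list
    sudo apt-get update && sudo apt-get install grafana
    sudo systemctl start grafana-server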

Install Telegraf on your Windows system

Now we are ready to collect data from our systems with Telegraf, a small agent which can collect data from many different sources. One of these sources is Windows Perfmon counters, which we will use here.

1. Download the Windows version of the Telegraf agent:
https://dl.influxdata.com/telegraf/releases/telegraf-1.2.1_windows_amd64.zip
2. Copy the content of the zip file to C:\Program Files\telegraf on your systems
3. Replace the telegraf.conf with this one. -> telegraf.conf
This way all the Perfmon counters needed for the example dashboard in the last step get collected.
4. Also in the telegraf.conf, update the urls parameter so it points to the IP address of your Linux VM (see the sketch after this list)
5. Install Telegraf as a service and start it (see the sketch after this list)
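A sketch for steps 4 and 5; the IP address is a placeholder for your Linux VM:

    # telegraf.conf - point the InfluxDB output to your Linux VM
    [[outputs.influxdb]]
      urls = ["http://192.168.1.50:8086"]

    # Then register and start the service from an elevated PowerShell prompt
    & 'C:\Program Files\telegraf\telegraf.exe' --service install
    Start-Service telegraf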

Create dashboards and have fun! 🙂

The last step is to create your nice dashboards in the Grafana web UI. A good starting point is the “Telegraf & Influx Windows Host Overview” dashboard which can be directly imported from the grafana.net repository.

Log in to the Grafana web UI -> http://<your linux VM IP>:3000 (Username: admin, Password: admin)

First, Grafana needs to know its data source. Click on the Grafana logo in the top left corner and select “Data Sources” in the menu. Then click on “+ Add data source“.

Define a name for the data source (e.g. InfluxDB-telegraf) and choose “InfluxDB” as the type.
The URL is http://localhost:8086 as we have installed InfluxDB locally. “Proxy” as the access type is correct.
The Telegraf agent will automatically create the database “telegraf”. So enter “telegraf” as the database name. As user you can enter anything. InfluxDB does not need any credentials by default, but the Grafana interface wants you to enter something (otherwise you cannot save the data source).

Now go ahead and import your first dashboard. Select Dashboard > Import in the menu.

Enter “1902” and click on “Load”.

Change the name if you like, select the data source just created in the step above (InfluxDB-telegraf) and then click on Import.

And tada! 🙂

Further steps

Now the Telegraf / InfluxDB setup is collecting performance data of your Windows machines. With Grafana the collected data can be visualized in a meaningful way, so determining the health of your systems gets easy.

To further customize the data and visualization to your specific needs you can, for example, extend the Perfmon counter list in telegraf.conf or build your own Grafana dashboards and panels.

Script to build a stretched file server cluster with Storage Replica

7 Mar

One possible scenario for the use of Storage Replica in Windows Server 2016 is to build a stretched file server cluster based on two VMs in two different sites. With this configuration you can build a highly available file server across two sites without the need for a replicated SAN or similar. Instead you can simply use the storage which is locally available at each site and leverage Storage Replica to replicate the data volumes inside the VMs. In case one of the sites fails, the file server role will automatically fail over to the second site and the end user will probably not even notice it.

Recently I did some tests with such a setup in my home lab, where I had the need to quickly rebuild the whole environment. Therefore I made a simple script with all the needed PowerShell commands.

You can get a copy of the Script at my GitHub Repository

The script is intended to run on a third machine, for example a management server which has the Windows Server 2016 RSAT tools installed. Especially the Hyper-V, Failover Cluster and Storage Replica cmdlets are required.

After you have set the correct parameter values and you are really sure everything is right 😉, you can run the script in one step. Or, probably the more interesting approach, you can open the script in the PowerShell ISE and run the individual steps one by one.
For this purpose the script has comments which mark the individual steps:
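As a hypothetical illustration of how such step markers look (the variable names are placeholders; the real commands are in the script on GitHub):

    #region Step 1: Create the failover cluster out of the two file server VMs
    New-Cluster -Name $ClusterName -Node $Node1, $Node2 -StaticAddress $ClusterIP
    #endregion

    #region Step 2: Set up the Storage Replica partnership for the data volume
    New-SRPartnership -SourceComputerName $Node1 -SourceRGName 'rg01' `
        -SourceVolumeName 'D:' -SourceLogVolumeName 'L:' `
        -DestinationComputerName $Node2 -DestinationRGName 'rg02' `
        -DestinationVolumeName 'D:' -DestinationLogVolumeName 'L:'
    #endregion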

So have fun with PowerShell and Storage Replica. A very nice combination! 🙂

SCVMM: When the deployment of a new VM template suddenly fails

2 Mar

Recently I ran into a very strange behavior when deploying a VM template with Server 2016 through VMM 2012 R2. First of all, to enable full support for Windows Server 2016-based VMs in VMM 2012 R2 you need at least Update Rollup 11 Hotfix 1 installed. But even after installing the latest UR (UR12 in my case), the deployment of a Server 2016 VM failed.

The Issue:
Every time a new VM is deployed from a Server 2016 VM template, the process fails in the specialize phase of sysprep. However, all other existing templates with Server 2012 were working as expected.

Because the domain join also happens in this phase, I decided to give it another try with a VM template which has no domain join configured. And tada, the VM was deployed successfully.

The root cause:
With this finding my assumption was that, when the VM template is configured for domain join, VMM adds something to the unattend.xml which Server 2016 does not like that much. So I inspected the unattend.xml file of a failed deployment and there I found the following section, which looked a little bit strange:
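The section looked roughly like this hypothetical reconstruction (the values are placeholders; the element names follow the standard unattend.xml domain join schema):

    <component name="Microsoft-Windows-UnattendedJoin" ...>
      <Identification>
        <Credentials>
          <Domain></Domain>  <!-- the domain is empty! -->
          <Password>********</Password>
          <Username>vm domain join</Username>
        </Credentials>
        <JoinDomain>yourdomain.local</JoinDomain>
      </Identification>
    </component>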

Somehow the domain of the domain join account was missing.

The Solution:
So I checked the VMM Run As Account which was specified as the domain join credential in the VM template. And there it was: the Run As Account also had no domain information; the username field contained just the account name.

After changing the username to “domain\vm domain join” the deployment went through smoothly, as it should. Inspecting the unattend.xml file showed that the domain is now also correctly filled in.

Conclusion:
When the deployment of a new VM template in VMM suddenly fails at the domain join step, double-check the Run As Account and make sure that the domain name is included in the username field.
In my case it was a template with Server 2016, but I think chances are good that the same could also happen with new VM templates with another guest OS.

Be aware of DSC pull server compatibility issues with WMF 5.0 and 5.1

20 Feb

Apparently, there are some incompatibilities when WMF 5.0 computers want to communicate with a DSC pull server running on WMF 5.1, or vice versa. This is especially the case when the “client” node and the pull server are not running the same OS version – for example, when you have a DSC pull server running on Server 2012 R2 (with WMF 5.0) and some DSC nodes running on Server 2016 (which has WMF 5.1 built in).

So far I have experienced two issues:

  1. A DSC pull client running on WMF 5.1 cannot send status reports when the DSC pull server is still running on WMF 5.0. This is because WMF 5.1 introduced the new “AdditionalData” field in the status report. I have also reported this bug on GitHub: https://github.com/PowerShell/PowerShell/issues/2921
  2. A DSC pull client running on WMF 5.0 cannot communicate at all with a DSC pull server running on WMF 5.1.

Solution / Workaround for issue 1:
As WMF 5.1 RTM is now (again) available, the simplest solution would be to upgrade the server and/or client to WMF 5.1. However, when you upgrade the DSC pull server you must create a new EDB file and re-register all clients. Otherwise the issue persists, because the “AdditionalData” field is still missing in the database.

Solution / Workaround for issue 2:
The root cause of this issue can be found in the release notes of WMF 5.1:
“Previously, the DSC pull client only supported SSL3.0 and TLS1.0 over HTTPS connections. When forced to use more secure protocols, the pull client would stop functioning. In WMF 5.1, the DSC pull client no longer supports SSL 3.0 and adds support for the more secure TLS 1.1 and TLS 1.2 protocols.”

So, starting with WMF 5.1 the DSC pull server does not support TLS 1.0 anymore, while a DSC pull client running on WMF 5.0 still uses TLS 1.0 and can therefore no longer connect to the DSC pull server.

The solution, without deploying WMF 5.1 to all pull clients, is to alter the behavior of the DSC pull server so that it accepts TLS 1.0 connections again. This can be done by changing the following registry key on the DSC pull server:
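Assuming the standard SCHANNEL mechanism for controlling server-side TLS protocol versions (the exact key here is an assumption), the change would look like this in PowerShell:

    # Assumed key: re-enable server-side TLS 1.0 so WMF 5.0 pull clients can connect again
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server' `
        -Name 'Enabled' -Value 1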

Change the value from 0x0 to 0x1 and reboot the DSC pull server.
Afterwards, DSC pull clients running on WMF 5.0 can connect to the DSC pull server again.

How to enable CredSSP for PowerShell Remoting through GPO

19 Oct

In a domain environment CredSSP can easily be enabled through a GPO. To do so, there are three GPO settings to configure (plus an optional fourth one):

  1. Computer Configuration > Administrative Templates > Windows Components > Windows Remote Management (WinRM) > WinRM Client > Allow CredSSP Authentication (Enable)
  2. Computer Configuration > Administrative Templates > Windows Components > Windows Remote Management (WinRM) > WinRM Service > Allow CredSSP Authentication (Enable)
  3. Computer Configuration > Administrative Templates > System > Credential Delegation > Allow delegation of fresh credentials (add wsman/*.<FQDN of your domain>)
  4. If there are computers in your environment in another, untrusted AD domain to which you want to connect using explicit credentials and CredSSP, you also have to enable the following GPO setting:
     Computer Configuration > Administrative Templates > System > Credential Delegation > Allow delegation of fresh credentials with NTLM-only server authentication (add wsman/*.<FQDN of your other domain>)

Now you are ready to use CredSSP within your PowerShell remote sessions.
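For example, to open a remote session which delegates your credentials via CredSSP (the computer name is a placeholder):

    # Open a remote session that delegates your credentials via CredSSP
    Enter-PSSession -ComputerName server01.yourdomain.com -Authentication Credssp -Credential (Get-Credential)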

And a final word of warning! 😉
When you are using CredSSP, your credentials are transferred to the remote system and your account then becomes a potential target for a pass-the-hash attack. Or in other words, an attacker can steal your credentials. So only use CredSSP in your PowerShell remote sessions if you really have a need for it!

Webinar “Azure Automation and PowerShell DSC” (German)

10 Oct

Tomorrow, on Tuesday, October 11 2016 at 2pm (CEST), I will do a webinar in German about Azure Automation and PowerShell DSC. I will explain the basic concepts of Azure Automation, Automation Runbooks and PowerShell DSC.

A main part of the webinar will be an example scenario to automatically deploy and configure a VM using Azure Automation Runbooks and Azure Automation DSC. I will configure the whole scenario live during the webinar.


If you are interested in the scripts which I am using to configure the scenario, you can get them here.

If you would like to attend the webinar, you can still register here for free.