As most of you already know, the "-ExpandProperty" parameter lets you enumerate the values of an incoming object's property as individual values.  For example, run the command below without -ExpandProperty:

Get-AzLocalNetworkGateway -ResourceGroupName "network-rg"

You will get output like this:

Name                     : my-lng
ResourceGroupName        : network-pd-rg
Location                 : southcentralus
Id                       : /subscriptions/GUID/resourceGroups/network-pd-rg/providers/Microsoft.Network/localNetworkGateways/qco-houdc-lng
Etag                     : W/"GUID"
ResourceGuid             : GUID
ProvisioningState        : Succeeded
Tags                     :
                           Name                     Value
                           =======================  =======
                           owner                    Network
                           external-facing          No
                           cost-center              IT
                           regulatory-data          no
                           project-name             PD
                           department               IT
                           critical-infrastructure  Yes
                           environment              PD

GatewayIpAddress         :
LocalNetworkAddressSpace : {
                             "AddressPrefixes": [
                               ...
BgpSettings              : null


But what if you just want a simple list of all the LocalNetworkAddressSpace prefixes?  You can use -ExpandProperty, but the trick is you have to use it twice!  See the sample below.  If you only pass it once you still won't get the desired list; because the expanded property is itself an object containing a list in this case, you must expand the properties two times.

Get-AzLocalNetworkGateway -ResourceGroupName "network-rg" | Select -ExpandProperty LocalNetworkAddressSpace | Select -ExpandProperty AddressPrefixes | FT

This will produce the desired list of address prefixes.
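As an aside, PowerShell 3.0+ member enumeration can do the same flattening with plain dot notation; a quick sketch using the same cmdlet and property names as above:

```powershell
# Member enumeration walks the nested property and flattens its list values
(Get-AzLocalNetworkGateway -ResourceGroupName "network-rg").LocalNetworkAddressSpace.AddressPrefixes
```

Either form works; the double -ExpandProperty version is more explicit about what is being expanded at each step.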


Hope this helps someone!


1. Create SSH Public/Private key pairs

2. Convert the public key to SSH2 format

ssh-keygen -e -f <path-to-public-key>


3. Verify permissions on the private key are 600.  If not, chmod it to 600:

 chmod 600 /root/.ssh/mykey.private


4. SSH to the VM via its public IP, or its private IP if you have a VPN tunnel.

 ssh -i /root/.ssh/mykey.private <user>@<vm-ip>


Azure also recently introduced the Azure Bastion service which allows RDP & SSH access to VMs via the portal using port 443.

I have been hearing about containers for some time now but have been too busy with work (we have not introduced containers yet...) to take a good look at the technology.  I have spent a few weekends over the past ~6 months reading up on it, including the different orchestration platforms like Docker Swarm and Kubernetes, and I have to say I'm not only impressed with the varying possibilities but believe this could be one of the ways to safely migrate workloads to the public cloud without fear of vendor lock-in.

Although I'm no expert in this field right now, I wanted to share a quick tutorial on getting Kubernetes installed and running in a vSphere environment.  There are many tutorials for AWS & Azure, but I did not find many for vSphere, which I think is important because it represents one way to have a private cloud presence.

Before jumping into the steps, let's take a look at a high-level conceptual diagram of what we are trying to accomplish.  We need to deploy a two-node Kubernetes cluster to our vSphere environment.  Next we will create a deployment using a pre-existing YAML file, which the cluster will run.  Finally we will expose this deployment using a Kubernetes service.


STEP 1 - k8s cluster on vSphere 6.x

The following guide is very good, and it's what I used to get a k8s cluster set up on vSphere 6.0.

Once you have the k8s portal up, you can test an nginx deployment and expose it externally with NodePort.  No SLB is required, although for production you would need some form of load balancing; unlike AWS or Azure, there is no built-in support for automatic load balancer provisioning.

STEP 2 - k8s NodePort

The following guide was very helpful in understanding how to expose your k8s cluster to an external network.  By default the k8s cluster is only available to the private network it resides in.

STEP 3 - k8s deployment

The code below will create a simple nginx deployment from a YAML file.  You can create the file and save it locally, or point kubectl directly at a URI.

kubectl create -f nginx-deployment.yaml #Create a new nginx deployment for testing
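The YAML file itself is not reproduced in this post, but a minimal nginx deployment manifest would look roughly like the following sketch (the apps/v1 API version, labels, and image tag are assumptions chosen to match the deployment name and replica count used in the commands here):

```yaml
# nginx-deployment.yaml -- minimal sketch, not the exact file from the post
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2                # matches the DESIRED count in the output later
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
```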

 To view/verify new deployment run the command below.  You should see output displaying the Name, Namespace, creation time, and other useful information.

kubectl describe deployment nginx-deployment #details of your deployment

Once we verify our deployment was successful, we can start to gather the information we need to expose our deployment externally.  Remember that by default our new deployment is only available inside the k8s internal network.  The command below will get the name we need to pass to the expose command.  We see two deployments; we want "nginx-deployment".

kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2         2         2            2           23d
rss-site           2         2         2            2           25d

STEP 4 - k8s service

Armed with the name we need to pass to the expose command, we are now ready to proceed.  The --name parameter is the name of your new "exposed" deployment, which is now a k8s service.

kubectl expose deployment nginx-deployment --type=NodePort --name=my-nginx
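Under the hood, kubectl expose just creates a Service object.  A hand-written equivalent would look roughly like this (a sketch; the app: nginx selector is an assumption matching the deployment labels used above):

```yaml
# my-nginx service -- rough equivalent of the expose command above
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: NodePort
  selector:
    app: nginx            # assumed to match the deployment's pod labels
  ports:
  - port: 80              # service port inside the cluster
    targetPort: 80        # container port
    # nodePort is auto-assigned from 30000-32767 unless specified
```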

Now we should have a new service called "my-nginx".  Let's verify by running the command below and checking that the new service is displayed.  You will see the cluster IP; this is an internal IP you don't need to worry about at this time.  Notice the EXTERNAL-IP is empty; this is normal.  You do need to capture the port mapping.  The first port is the service's internal port; the second is the NodePort opened on every node.  In this example the port mapping is 80:30575, and the second port, "30575", is the one we can use to access nginx from the external network.

kubectl get services
NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   <cluster-ip>   <none>        443/TCP        27d
my-nginx     <cluster-ip>   <nodes>       80:30575/TCP   23d
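If you want the NodePort programmatically rather than reading it off the table, it can be cut out of that output with standard text tools.  A sketch run against the sample line above (on a live cluster you would pipe the real kubectl output instead, or use kubectl's cleaner -o jsonpath='{.spec.ports[0].nodePort}' option):

```shell
# Pull the NodePort (the number after the colon) out of the PORT(S) column.
# The sample line is the my-nginx row from the output above.
sample='my-nginx       <nodes>       80:30575/TCP   23d'
node_port=$(printf '%s\n' "$sample" | awk '{split($3, p, "[:/]"); print p[2]}')
echo "NodePort: $node_port"
```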

 The next step is to figure out which pods are running our nginx-deployment.  The command below will do this for us.  The command will output the status of the pod, age, IP (Internal IP), and node.  At this time we are only interested in the Node, the IP is a private IP and not useful at this time.

STEP 5 - k8s external IP

kubectl get pods -o=wide

Let's find where nginx is running and note the name of each pod.

kubectl get pods --namespace=default

We take the names from the output of the above command and use the command below to find the node and external IP.

kubectl describe pod nginx-deployment-4234284026-nwr2h --namespace=default | grep Node

 The command below will also pull the IP Address.

kubectl describe node node3 | grep Address

With the external IP of the VM running the node, we can browse to http://<node-external-ip>:30575 and reach nginx.  You can follow step 5 to get the name of the second node and find its external IP.  The port will be the same on both nodes.  You can now easily add a NetScaler, A10, F5, or other load balancer in front of the two nodes and, along with DNS, provide a friendly name for your cluster VIP.
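The load-balancer pool described above is just every node's external IP paired with the shared NodePort.  A tiny sketch (the IPs below are placeholders, not values from this environment; substitute the external IPs discovered in step 5):

```shell
# Build load-balancer pool members from node IPs + the shared NodePort.
node_port=30575          # NodePort from the my-nginx service above
members=""
for node_ip in 10.0.0.11 10.0.0.12; do   # placeholder node external IPs
  members="$members ${node_ip}:${node_port}"
  echo "pool member: ${node_ip}:${node_port}"
done
```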


Helpful commands for docker newbies like myself.

# List all docker images you have.

PS C:\> docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
nginx               latest              46102226f2fd        12 days ago         109 MB
centos-rl           latest              647c13af08c7        2 weeks ago         302 MB
ubuntu              latest              6a2f32de169d        3 weeks ago         117 MB
centos              latest              a8493f5f50ff        4 weeks ago         192 MB
d4w/nsenter         latest              9e4f13a0901e        7 months ago        83.8 kB

# Pull a new docker image:

PS C:\> docker pull <image_name>

# List docker containers both active and not actively running

PS C:\> docker ps -a

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                     PORTS                  NAMES
c8ef567a8bdc        centos-rl           "/bin/bash"              3 hours ago         Exited (0) 4 minutes ago                          centos-rl-tools
7f9f3afa7a3f        nginx               "nginx -g 'daemon ..."   4 hours ago         Up 4 hours                 0.0.0.0:8080->80/tcp

# Rename docker container

PS C:\> docker container rename Old-Name New-Name

# Remove a container

PS C:\> docker rm container_Name


PS C:\> docker rm container_ID

# Delete image

PS C:\> docker rmi Image_ID

# Run docker container on Windows 10 with host volume

PS C:\> docker run -it -v D:/docker/centos-rl:/data centos-rl

# The "run" command should only be used the very first time you run the image.  This creates a new container.

PS C:\> docker start -i centos-rl-tools

# Use "start" for subsequent uses.  This starts the container that was previously created.  The image remains unmodified.  Your volume will still be mapped, along with any other parameters you used with "run" command.

# Run docker container in detached mode:

PS C:\>docker run --name centos-rl -p 8080:80 -e TERM=xterm -d nginx
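For repeatability, the same detached container can also be described declaratively.  A minimal docker-compose sketch of the run command above (the service name "web" is my assumption; recent Compose versions no longer require a version key):

```yaml
# docker-compose.yml -- rough equivalent of the docker run command above
services:
  web:
    image: nginx
    container_name: centos-rl   # container name used in the example above
    ports:
      - "8080:80"               # host 8080 -> container 80
    environment:
      - TERM=xterm
```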

# To access the container:

PS C:\>docker exec -it <CONTAINER_ID> bash

# Export/save a docker image to a tar file.

PS C:\> docker images  #to list image names.

PS C:\> docker save -o D:/Temp/centos-rl.tar centos-rl

 # Import/Load docker image

PS C:\> docker load -i D:/Temp/centos-rl.tar