Terraform and Active Directory

I have worked a lot with AD over the past years, mostly with PowerShell. This time I needed to create an AD computer object with Terraform, so I started to look into how to set up the Terraform AD provider.

First you need to configure the provider. I am running Terraform on my non-domain-joined laptop, so Terraform needs WinRM access to a domain-joined server with the Active Directory PowerShell module installed. It is important to use capital letters in all FQDNs, both in provider.tf and krb5.conf, for Kerberos to work.

provider.tf

provider "ad" {
  winrm_hostname         = "SERVER.HOMELAB.DOMAIN.COM"
  winrm_username         = var.aduser
  winrm_password         = var.adpassword
  winrm_port             = 5986
  winrm_proto            = "https"
  winrm_pass_credentials = true
  krb_realm              = "HOMELAB.DOMAIN.COM"
  krb_conf               = "krb5.conf"
  krb_spn                = "SERVER"
  winrm_insecure         = true
}

We also need to create krb5.conf in order to set up Kerberos authentication:

[libdefaults]
   default_realm = HOMELAB.DOMAIN.COM
   dns_lookup_realm = false
   dns_lookup_kdc = false


[realms]
    HOMELAB.DOMAIN.COM = {
        kdc     =   DC01.HOMELAB.DOMAIN.COM
        admin_server = DC01.HOMELAB.DOMAIN.COM
        default_domain = HOMELAB.DOMAIN.COM
        master_kdc = DC01.HOMELAB.DOMAIN.COM
    }

[domain_realm]
    .kerberos.server = HOMELAB.DOMAIN.COM
    .homelab.domain.com = HOMELAB.DOMAIN.COM
    homelab.domain.com = HOMELAB.DOMAIN.COM

Now the provider should be configured and ready to use. The next step is to create an AD computer object in main.tf.

resource "ad_computer" "c" {
  name        = "test01"
  container   = "OU=Servers,OU=Stockholm,OU=SWE,DC=homelab,DC=domain,DC=com"
  description = "My TF AD object"
}

To test the code, run the command below, wait for the output, and type yes if everything looks fine.

terraform apply -var aduser=adadmin -var adpassword=secretpw123

Now we have a new computer object in AD, managed with Terraform.

The next thing I wanted to test was a bit more complex: creating an OU structure with several sub-OUs for each office. In PowerShell you can solve it with a nested foreach loop.

$sites = ("Malmo", "Ystad", "Karlstad")
$subOU = ("Servers","Computers","Groups","Users")

foreach ($site in $sites){
    New-ADOrganizationalUnit -Name $site -Description "My office in $($site)" -Path "OU=SWE,DC=homelab,DC=domain,DC=com"
    foreach ($ou in $subOU){
        New-ADOrganizationalUnit -Name $ou -Description "OU for $($ou)" -Path "OU=$($site),OU=SWE,DC=homelab,DC=domain,DC=com"
    }
}

In Terraform we need to create two list variables: one containing the offices and one with our sub-OUs. We also use locals to combine them with the setproduct function.

variable "sites" {
  type = list
  default = ["Malmo", "Ystad", "Karlstad"]
}

variable "siteOUs" {
  type = list
  default = ["Servers", "Users", "Groups", "Computers"]
}

locals {
  ous = setproduct(var.sites, var.siteOUs)
}


In order to get this to work, we first create all office OUs with for_each over the sites variable. For the sub-OUs we use the local named ous as a source and loop through all the combinations created by setproduct; each element is a pair such as ["Malmo", "Servers"], so we take the OU name from the second item and build part of the path from the first item, which contains the office name. Note that we make sure all office OUs are created first with depends_on.

resource "ad_ou" "ou" { 
  for_each = toset(var.sites)
    name = each.value
    path = "OU=SWE,DC=homelab,DC=domain,DC=com"
    description = "OU for ${each.value} Office"
    protected = false
}

resource "ad_ou" "o" {
  for_each = {
    for o in local.ous : "${o[0]}-${o[1]}" => {
      name = o[1]
      path = "OU=${o[0]},OU=SWE,DC=homelab,DC=domain,DC=com"
      description = "OU for ${o[1]} in ${o[0]} Office"
    }

  }
  name        = each.value.name
  path        = each.value.path
  description = each.value.description
  protected   = false

    depends_on = [
    ad_ou.ou
  ]
}

This is the final result in the AD console. I learned a lot while figuring out how to solve this in Terraform.

Set up and run an Azure Automation runbook

A runbook can help you run scripts on a schedule or trigger them with a webhook. You can run them either in Azure or on your on-premises servers. This example shows how to run a script on an on-premises server that is connected to Azure Arc.


The first step is to create an Automation account in your favorite region. You might need to create a resource group as well.


Go to your newly created automation account and look for Hybrid worker groups.

Create a new hybrid worker group and select a name.


Now we have a Hybrid worker group without any hybrid workers. Click Hybrid workers on the left.


Click add


Select one or more servers and add them to your hybrid worker group. If your list is empty, you need to enable Azure Arc on at least one server.



Go back to the Automation account and press Runbooks.


Create a new runbook


Give the runbook a name and select a type. In this example, PowerShell and runtime version 5.1.

Now to the fun part: edit the new runbook and write or paste your script. Select Publish when done.
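As a minimal sketch (not the exact script used here), a runbook body that just prints a message and the name of the server it runs on could look like this:

# Minimal example runbook body - prints a message and the name of the executing server
Write-Output "Hello from an Azure Automation runbook"
Write-Output "Running on: $env:COMPUTERNAME"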


When pressing Start, a menu to the right lets you choose to start the runbook and run it on a server in your hybrid worker group.


When the runbook has finished we can view the output or errors. The last line shows the name of the server the script was executed on, which can be useful for troubleshooting if the hybrid worker group contains multiple servers.

To make more use of this capability to trigger scripts on a local server from Azure, start exploring schedules and webhooks.

Add your server to Azure Arc

Azure Arc helps you manage your on-prem servers from the Azure portal. To add a server to Azure Arc, just search for “Servers – Azure Arc” in the portal and press Add.


This time we will only add one server, so we can select the Generate script option.


Select your subscription and a new or existing resource group. You also need to select a location.


In this step you can add your desired tags.


The script is ready to be downloaded or copied to your server.


Start PowerShell as local admin and navigate to the folder where your onboarding script is stored.
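Assuming the generated script kept its default name, OnboardingScript.ps1 (adjust the path and filename to match your download), running it looks roughly like this:

# Assumed folder and filename - replace with wherever you saved the generated onboarding script
cd C:\Temp\AzureArc
.\OnboardingScript.ps1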


The script will download and install the Azure Connected Machine agent and open a web browser where you need to sign in to Azure.


After a couple of minutes our on-prem server is visible in the Azure portal.


We can now see some details like operating system and the tags we defined during setup. In a future post I will show what we can achieve with Azure Arc enabled servers.

Export subnets from Meraki to phpIPAM

In order to populate phpIPAM I needed to export all subnets that were already present in the Meraki Dashboard without typing all the information manually. I found the PSMeraki module on GitHub, which is a prerequisite for this script; the script also uses Get-Subnet, which is not built into PowerShell and needs to be installed separately. The script creates a CSV that can be imported into phpIPAM.

# Requires the PSMeraki module; Get-Subnet comes from a separate module
$networks = Get-MrkNetwork
$subnets  = @()

foreach ($network in $networks) {
    $vlans = Get-MrkNetworkvlan -networkId $network.id

    foreach ($vlan in $vlans) {
        # Skip VLANs without a configured subnet
        if (!$vlan.subnet) {
            continue
        }

        $sub = Get-Subnet $vlan.subnet

        $subnetname = $network.name + "_" + $vlan.name
        $net = @{}
        $net.Add('VLAN', $vlan.id)
        $net.Add('Section', 'Company')
        $net.Add('Subnet', $sub.IPAddress)
        $net.Add('Mask', $sub.MaskBits)
        $net.Add('Domain', $network.name)
        $net.Add('Description', $subnetname)

        $subnets += New-Object -TypeName psobject -Property $net
    }
}
$subnets | Export-Csv -Path subnets.csv -Delimiter "," -NoTypeInformation

PowerShell group export

I needed to get a list of people in some AD groups for an audit, so I wrote a quick script that exports each group matching my filter to a CSV file populated with name and samaccountname. Setting semicolon as the delimiter ensures that you can open the CSV file in Excel without any additional work to get the columns right.

$groups = Get-ADGroup -filter { name -like "Company-Fileserver-ACL*" }

foreach ($group in $groups)
{
	Write-Output $group.name
	$file=$group.name + ".csv"
	Get-ADGroupMember $group.name | Select-Object name, samaccountname | Export-Csv -path $file -NoTypeInformation -delimiter ";"
}

Set up AWX vCenter inventory with tags, part 1

I am new to AWX and had a goal to set up vCenter as an inventory source with groups based on VMware tags. I got that setup working with plain Ansible and started to investigate how to achieve the same result in AWX. After a couple of days of testing I got some hints on Reddit and was able to get it working as expected. Hopefully this guide can help someone (and me next time) set up an inventory with tags.

The first step, if not already done, is to install AWX. I will not cover the setup; it is already covered here: https://github.com/ansible/awx/blob/devel/INSTALL.md. I have chosen to install it on a standalone Docker host in my home lab running CentOS.


Open the inventory file install/inventory in your favorite editor.
Look for and uncomment custom_venv_dir=/opt/my-envs/

Create dir: mkdir /opt/my-envs

Run the playbook: ansible-playbook install.yml -i inventory

Create the folder and a Python virtual environment, and install all prerequisites:

mkdir /opt/my-envs/vm-tags
python3 -m venv /opt/my-envs/vm-tags/
source /opt/my-envs/vm-tags/bin/activate
yum install gcc
yum install python36-devel
pip3 install psutil
pip3 install ansible
pip3 install pyaml
pip3 install requests
pip3 install PyVmomi
pip3 install --upgrade pip setuptools
pip3 install --upgrade git+https://github.com/vmware/vsphere-automation-sdk-python.git
deactivate

Log on to AWX and navigate to Settings -> System.
Add /opt/my-envs/vm-tags to CUSTOM VIRTUAL ENVIRONMENT PATHS

The last step in this part is to verify our new custom env. Go to ORGANIZATIONS and press the pencil to edit the default organization.

Verify that you can see your new Ansible Environment.

Add users or groups to local admin group

Sometimes you need to add users or groups to the local Administrators group on a Windows server. This function helps accomplish that on one or more servers. Load a text or CSV file and pipe it to Add-AdminGroup; a usage sketch follows the function. All servers that do not respond are listed at the end for later follow-up.

function Add-AdminGroup
{
	Param (
		[parameter(Mandatory = $true,
				   ValueFromPipeline = $true,
				   position = 0)]
		[Alias('IPAddress', '__Server', 'CN', 'server')]
		[string[]]$Computername,
		[parameter(ValueFromPipelineByPropertyName)]
		[Alias('groupname', 'adgroup')]
		[string[]]$group
	)

	begin
	{
		# Collect servers that do not respond so they can be listed at the end
		$failed = @()
	}

	Process
	{
		if (Test-Connection -Quiet -ComputerName $Computername)
		{
			Write-Output "Adding $group to local administrators on $Computername"
			Invoke-Command -ComputerName $Computername -ScriptBlock {
				Add-LocalGroupMember -Group Administrators -Member $args[0]
			} -ArgumentList $group
		}
		else
		{
			Write-Output "No response from $Computername"
			$failed += $Computername
		}
	}

	end
	{
		foreach ($obj in $failed)
		{
			Write-Output $obj
		}
	}
}
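
A hedged usage sketch (the file name, server name, and group name here are made up):

# Hypothetical servers.txt with one server name per line
Get-Content .\servers.txt | Add-AdminGroup -group "Company-LocalAdmins"

# Or for a single server
Add-AdminGroup -Computername SRV01 -group "Company-LocalAdmins"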

List VMs according to memory and CPU usage

For internal billing purposes I needed a way to list all Windows VMs for a given subsidiary together with their CPU and memory configuration.

Connect-VIServer -Server vcenter.corp.lan

# Group all Windows VMs in the subsidiary by their CPU and memory configuration
$var = Get-VM -Location Subsidiary1 | Where-Object { $_.Guest.OSFullName -like '*windows*' } | Select-Object numcpu, memorygb | Group-Object numcpu, memorygb

function get-numOfVms
{
	param
	(
		[parameter(Mandatory = $true)]
		[pscustomobject]$VMs
	)

	$results = foreach ($row in $VMs)
	{
		# Group-Object names look like "4, 16" - split into CPU count and memory
		$cpu, $mem = $row.Name -split ',', 2
		[pscustomobject]@{
			NumOfVMs  = $row.Count
			NumOfCPUs = $cpu
			MemoryGB  = $mem.Trim()
		}
	}

	return $results
}
$total = get-numOfVms -VMs $var
$total | Export-Csv -Path totalvms.csv -NoTypeInformation

 

Example of totalvms.csv. It gives you the number of VMs for each specific CPU and memory configuration.

"NumOfVMs","NumOfCPUs","MemoryGB"
"1","2","8"
"12","1","4"
"4","4","8"
"2","4","4"
"9","1","8"
"5","4","12"
"22","4","32"
"6","4","16"
"2","1","12"
"1","4","24"
"1","1","16"
"1","4","6"
"1","1","6"
"1","24","32"
"1","4","25"
"3","2","16"
"1","8","6"
"1","1","3"

Create DHCP scopes from a CSV file

A fast way to import multiple DHCP scopes to a DHCP server. Some settings, such as DNS servers, need to be added at the server level.

Required header in CSV:
name;description;startrange;endrange;subnetmask;scopeid;router
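
A hypothetical example row (made-up values) could look like this:

Office-VLAN10;Office clients;10.0.10.50;10.0.10.200;255.255.255.0;10.0.10.0;10.0.10.1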

$dhcpserver = "1.1.1.1"
$scopes = Import-Csv -Path dhcp.csv -Delimiter ";"
foreach ($scope in $scopes)
{
	$name = $scope.name
	$description = $scope.description
Write-Output "Creating scope  $name"
Add-DhcpServerv4Scope -ComputerName $dhcpserver -Name "$name" -Description "$description" -StartRange $scope.startrange -EndRange $scope.endrange -SubnetMask $scope.subnetmask -State Active -LeaseDuration 1.00:00:00
Set-DhcpServerv4OptionValue -Router $scope.router -ScopeId $scope.scopeid -ComputerName $dhcpserver
}