Create A Test Environment With Terraform And KVM

by Christoph Stoettner


Photo by Charles Deluvio | Unsplash

I create a lot of virtual machines during the week to test deployments or to dig into deployment problems. In the past I used VMware Workstation, Oracle VirtualBox, or Microsoft Hyper-V on my desktops, and also VMware ESX. I tried Vagrant and Packer to prepare and distribute images, but wasn't satisfied at all.

The biggest issues came up when I tried to use WSL2 and Hyper-V on a Windows machine. There, even creating a separate virtual network was hard and unreliable, so I stopped trying.

What I want to achieve is automated, reliable distribution of virtual machines, easy management, and good performance.

I never had the need to do distribution and deployment in one step, so I split the process: distribution of the virtual machines with Terraform, then deployment with Ansible. I could combine both in one step, but as I said, there is no need. Additional environments use separate repositories or folders, so I can create or destroy multiple environments in parallel without affecting each other.

The advantage of Terraform is the large number of providers, so it is easy to adjust the definition and deploy it on a cloud provider, a VMware host, or whatever is around.

Install prerequisites

Libvirt

On my local Linux machine I use libvirt to create virtual machines through Terraform.

Terraform

I used this repository as a starting boilerplate for my Terraform projects with CentOS cloud images and libvirt virtualization. I uploaded one example to https://github.com/stoeps13/terraform-libvirt, so you can follow along in the repository.

Terraform Provider

Follow the README.md to set up the environment.

Download and configure cloud images

Currently I use the latest available CentOS GenericCloud image, CentOS-7-x86_64-GenericCloud-2009.qcow2, but just by exchanging the source in volume.tf to CentOS-8-GenericCloud-8.4.2105-20210603.0.x86_64.qcow2 I can build the same environment with a CentOS 8 image.

The cloud images are very handy, because you just need one file to configure the basics like hostname, IP address, DNS server, additional packages, and your SSH keys for easy access later with Ansible.

My cloud_config.cfg

hostname: ${hostname}
fqdn: ${fqdn}

Hostname and fqdn are set from variables, which we pass in from Terraform.

ssh_pwauth: true

Allow password authentication with SSH.

users:
  - name: root
    ssh_authorized_keys:
      - ${file("~/.ssh/id_rsa.pub")}
    shell: /bin/bash
  - name: sysadm
    ssh_authorized_keys:
      - ${file("~/.ssh/cnx6.pub")}
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    shell: /bin/bash
    groups: wheel

This part creates a user sysadm with the content of cnx6.pub as authorized key (key authentication with SSH) and /bin/bash as shell, adds him to the sudoers and the group wheel. The root user just gets an authorized key.

I add one more user which will be used to run Ansible. My users connect with SSH keys, but still have a password to manage and test things after deployment.

It’s easy to add more keys, users, or commands.
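The additional Ansible user mentioned above can be appended to the users: list in the same way. A sketch of such an entry (the key file ansible.pub is an assumption, not from my actual file):

```yaml
  # hypothetical entry for the Ansible service user
  - name: ansible
    ssh_authorized_keys:
      - ${file("~/.ssh/ansible.pub")}
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    shell: /bin/bash
```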

# change some passwords
#
# - create password:
#   makepasswd --minchars 20 --maxchars 20
# - hash the generated password with openssl:
#   (Note: passing -1 will generate an MD5 hash, -5 a SHA-256 and -6 a SHA-512 (recommended))
#   openssl passwd -6 -salt fdfngmklndfgbl   PASSWORD

chpasswd:
  list:
    - root:$6$fdfngmklndfgbl$PnuPSSecvXm3gW3WQPDTqoP7WeoqgmSSI2TYvK8XELp1IwidyJG4uM9TSkhWW/EAcC4XN08IdZ5OGvj87aIST/
    - sysadm:$6$fdfngmklndfgbl$PnuPSSecvXm3gW3WQPDTqoP7WeoqgmSSI2TYvK8XELp1IwidyJG4uM9TSkhWW/EAcC4XN08IdZ5OGvj87aIST/
    - ansible:$6$fdfngmklndfgbl$PnuPSSecvXm3gW3WQPDTqoP7WeoqgmSSI2TYvK8XELp1IwidyJG4uM9TSkhWW/EAcC4XN08IdZ5OGvj87aIST/
  expire: False

The comment describes the commands which can be used to generate the password hashes.

In my case I even copied the hash from user to user; normally, even with the same password, they would have different salted hashes. But as I already stated, I use this only for demos and to test updates or development stuff.
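For example, a fresh hash for the chpasswd list can be generated like this (requires OpenSSL 1.1.1 or newer for -6; password and salt here are placeholders):

```shell
# Placeholder password; in practice use the output of makepasswd
password='ExamplePassword123'

# Hash it with SHA-512 and a fixed salt for cloud-init's chpasswd list
hash=$(openssl passwd -6 -salt fdfngmklndfgbl "$password")
echo "$hash"
```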

packages:
  - bash-completion
  - vim
  - qemu-guest-agent
  - libselinux-python
  - policycoreutils-python
  - dnsmasq
  - python3

Install additional packages, like your favorite editor and python3, so Ansible can connect and work.

The rest of the file is pretty straightforward; you can add more packages and reboot the machine.

Here you can find my complete file, which I use for my environments.

Configure hostname, IP address and disks

In terraform-libvirt.tfvars is the definition of our servers; the example file defines three virtual machines:

servers = {
  "cnx-ns" = {                        # Name in virsh / kvm
    "memory"    = 2*1024              # Memory: 2GB
    "vcpu"      = 2                   # 2 cores
    "disk_size" = 20*1024*1024*1024   # 20 GB Disc size
    octetIP     = 2                   # last octet of ip
    "hostname"  = "cnx-ns"            # Hostname
  }
  "cnx-nfs" = {
    "memory"    = 2*1024
    "vcpu"      = 2
    "disk_size" = 200*1024*1024*1024
    octetIP     = 3
    "hostname"  = "cnx-nfs"        # Hostname
  }
  "cnx-ds" = {
    "memory"    = 2*1024
    "vcpu"      = 2
    "disk_size" = 20*1024*1024*1024
    octetIP     = 4
    "hostname"  = "cnx-ds"        # Hostname
  }
}

I added comments to the first server describing the definitions used.

So now we need to add a domain (fqdn = hostname + domain) and the first three octets of the network address.

These can be set in main.tf:

variable "domain"   { default = "stoeps.home" }
variable "prefixIP" { default = "10.0.22" }

So the domain is stoeps.home and all IP addresses start with 10.0.22.
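In main.tf these variables can then be combined with the per-server values from the tfvars file. A minimal sketch of the composition (this locals block is illustrative, not taken from the repository):

```hcl
locals {
  # Compose fqdn and IP address per server, e.g. for "cnx-ns":
  # fqdn = "cnx-ns.stoeps.home", ip = "10.0.22.2"
  hosts = {
    for name, srv in var.servers : name => {
      fqdn = "${srv.hostname}.${var.domain}"
      ip   = "${var.prefixIP}.${srv.octetIP}"
    }
  }
}
```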

Some of these things need to be repeated in network.tf, which holds the basic network configuration.

The file network.tf is only present in my very first set of virtual machines! I use that repository to build a DNS server, an NFS server and a directory server (LDAP), so there I define the network, and these servers run all the time. Additional machines use this network but don't define it, so I can create and destroy machines without affecting the network.

Additionally, I added at the end of main.tf:

  xml {
    xslt = "${file("volume.xsl")}"
  }

This applies volume.xsl to the disk definitions; without it, the storage access on my notebook was just too slow to create databases on DB2.

An article on Server Fault describes the details; one solution is to add cache='unsafe' to the QEMU driver, which is the approach I took.

So I had to write a matching XSL to add this to all disks which are created by Terraform.

  <xsl:template match="disk[@type='volume']/driver">
    <xsl:copy>
      <xsl:attribute name="cache">unsafe</xsl:attribute>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

You can change unsafe to writeback, which can save the filesystem on power outages. But I use the virtualization only for automatically deployed machines, so I can get them back within a few hours with Ansible and Terraform.
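Note that this template only works together with an identity transform that copies the rest of the libvirt domain XML unchanged. A minimal complete stylesheet could look like this (the volume.xsl in the repository may contain more rules):

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- identity template: copy everything as-is -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- add cache="unsafe" to the driver of every volume-backed disk -->
  <xsl:template match="disk[@type='volume']/driver">
    <xsl:copy>
      <xsl:attribute name="cache">unsafe</xsl:attribute>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>
```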

Create the machines

The creator of the original repository wrote a Makefile as a wrapper for the different Terraform commands; for my test environments this speeds up the execution, so I stayed with it.
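A minimal sketch of such a Makefile wrapper (the targets and the variable file name are assumptions; check the repository for the real one):

```makefile
.PHONY: apply destroy

# Wrap the usual Terraform workflow behind two short targets
apply:
	terraform init
	terraform apply -var-file=terraform-libvirt.tfvars -auto-approve

destroy:
	terraform destroy -var-file=terraform-libvirt.tfvars -auto-approve
```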

Create

make apply

This command creates three running machines, which can then be configured with Ansible. Test the connection with ssh root@10.0.22.2, for example.

Apply complete! Resources: 11 added, 0 changed, 0 destroyed.

All done.

ssh root@10.0.22.2
Warning: Permanently added '10.0.22.2' (ED25519) to the list of known hosts.
[root@cnx-ns ~]#

No additional password prompt, so we can start configuring with Ansible. In the next post I'll show how to deploy a DNS server for the virtual environment.

Destroy

To destroy the environment, just use the following command:

make destroy
