In the post Create A Test Environment with Terraform and KVM I created the first three virtual machines; now we configure a DNS server so that name resolution works as expected.
Since HCL Connections added Kubernetes to the stack, we need proper name resolution instead of just editing /etc/hosts
. That's a bit of an effort, but in the end it is far easier than checking several hosts to see whether their hosts files are up to date.
Clone example repository
git clone https://github.com/stoeps13/ansible-pb-infra-demo.git
cd ansible-pb-infra-demo
Adjust to your environment
ansible.cfg
Here are only the uncommented parts of the file.
It sets the default inventory and the roles path. Additionally, I set my remote user, which I had configured in my Terraform project before.
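The values themselves are not reproduced here, so as an illustration, a minimal ansible.cfg with these settings could look like this (the paths and the user name are assumptions; check the file in the cloned repository for the real values):

```ini
; Sketch of the uncommented parts of ansible.cfg -- the values here are
; assumptions; the actual file ships with the repository.
[defaults]
inventory   = environments/libvirt/cnx.ini
roles_path  = roles
remote_user = root   ; assumption: the user configured in the Terraform project
```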
environments/libvirt/cnx.ini
For this installation the DNS server is enough, but I have already added groups for LDAP and NFS.
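The inventory itself is not reproduced above; a sketch matching the hostnames from the DNS zone could look like this (the group membership of cnx-nfs and cnx-ds is an assumption based on the record names — the real file is in the repository):

```ini
; Sketch of environments/libvirt/cnx.ini -- the real file ships with the
; repository; group membership is an assumption derived from the hostnames
; defined in group_vars/dns.yml.
[dns]
cnx-ns.stoeps.home

[nfs]
cnx-nfs.stoeps.home

[ldap]
cnx-ds.stoeps.home
```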
environments/libvirt/group_vars/dns.yml
---
# The port to listen on.
dns_port: 53

# Should the DNS server be a caching DNS server?
dns_caching_dns: yes

dns_options_forwarders:
  - 1.1.1.1
  - 8.8.8.8

# A list of zones and properties per zone.
dns_zones:
  - name: localhost
    soa: localhost
    serial: 1
    refresh: 604800
    retry: 86400
    expire: 2419200
    ttl: 604800
    records:
      - name: "@"
        type: NS
        value: localhost.
      - name: "@"
        value: 127.0.0.1
      - name: "@"
        type: AAAA
        value: ::1
  - name: 127.in-addr.arpa
    ttl: 604800
    records:
      - name: "@"
        type: NS
        value: localhost.
      - name: 1.0.0
        type: PTR
        value: localhost.
  - name: 0.in-addr.arpa
    records:
      - name: "@"
        type: NS
        value: localhost.
  - name: 255.in-addr.arpa
    records:
      - name: "@"
        type: NS
        value: localhost.
  - name: stoeps.home
    ttl: 604800
    ns:
      - name: cnx-ns.stoeps.home.
    mx:
      - name: cnx-mail.stoeps.home.
        priority: 10
    records:
      - name: cnx-ns
        value: 10.0.22.2
      - name: cnx-nfs
        value: 10.0.22.3
      - name: cnx-ds
        value: 10.0.22.4

dns_options_listen_on:
  - any
dns_options_listen_on_v6:
  - any

dns_pid_file: /run/named/named.pid
The most important parts are the DNS forwarders, which are used to resolve hostnames outside my local zone, and the definition of stoeps.home
, the domain I use for my demo environment, with the DNS records for the first three hosts we created with Terraform.
Find and download roles
In requirements.yml
we just add all roles from https://galaxy.ansible.com that we want to use in our playbook. I decided to use https://galaxy.ansible.com/robertdebock/dns, which supports all major Linux distributions.
requirements.yml
# from galaxy:
- src: robertdebock.dns
  version: 3.1.0
Example requirements
https://docs.ansible.com/ansible/latest/galaxy/user_guide.html#installing-multiple-roles-from-a-file explains the different options to import roles: from Galaxy, from Git, using the newest version, or a specific branch.
Pinning versions has the advantage that you can test your playbook against a specific version and stick with it until you need to update or change something, so a role update won't break your entire playbook.
# from galaxy
- name: yatesr.timezone
# from locally cloned git repository (git+file:// requires full paths)
- src: git+file:///home/bennojoy/nginx
# from GitHub
- src: https://github.com/bennojoy/nginx
# from GitHub, overriding the name and specifying a specific tag
- name: nginx_role
  src: https://github.com/bennojoy/nginx
  version: main
This block is only for information, so you can see what's possible in a requirements.yml
file.
ansible-galaxy install -r requirements.yml
This downloads the role, which will install and configure bind
, and stores it in the subfolder roles
.
Playbooks
playbooks/dns.yml
Within playbooks
I create the individual playbooks to import, for example for DNS. With this playbook, everything needed to run DNS is installed and configured.
---
- hosts: dns
  become: yes
  roles:
    - robertdebock.dns
This applies the role robertdebock.dns
to all hosts in the group dns
.
site.yml
If there are multiple playbooks in playbooks
, I just include these into site.yml
, so I can run all defined playbooks of this repository in one step.
Imagine a second file playbooks/ldap.yml
which is also included into site.yml
.
---
- name: Install bind
  import_playbook: playbooks/dns.yml
So our folder looks like this:
├── environments
│ └── libvirt
│ ├── group_vars
│ │ ├── dns.yml
│ └── cnx.ini
├── playbooks
│ └── dns.yml
├── roles
│ ├── robertdebock.dns
│ │ ├── defaults
│ │ │ └── main.yml
│ │ ├── files
│ │ │ └── override.conf
│ │ ├── handlers
│ │ │ └── main.yml
│ │ ├── meta
│ │ │ ├── exception.yml
│ │ │ ├── main.yml
│ │ │ └── preferences.yml
│ │ ├── molecule
│ │ │ └── default
│ │ │ ├── collections.yml
│ │ │ ├── converge.yml
│ │ │ ├── molecule.yml
│ │ │ ├── prepare.yml
│ │ │ └── verify.yml
│ │ ├── tasks
│ │ │ ├── assert.yml
│ │ │ └── main.yml
│ │ ├── templates
│ │ │ ├── named.conf.j2
│ │ │ └── zone.j2
│ │ ├── vars
│ │ │ └── main.yml
│ │ ├── CODE_OF_CONDUCT.md
│ │ ├── CONTRIBUTING.md
│ │ ├── LICENSE
│ │ ├── README.md
│ │ ├── requirements.txt
│ │ ├── requirements.yml
│ │ ├── SECURITY.md
│ │ └── tox.ini
├── ansible.cfg
├── README.md
├── requirements.yml
├── run-playbook.sh
└── site.yml
If you used my Terraform repository to create the virtual machines, the default DNS server is set to 10.0.22.2 (more precisely, to host 2 of the configured IP range), but at this point no DNS server is running yet, so Ansible can't install additional software.
Connect to the machine with ssh root@10.0.22.2
and temporarily change /etc/resolv.conf
to nameserver 8.8.8.8
.
I normally add a shell script to run the entire playbook without typing a long command line. It sets the ANSIBLE_STDOUT_CALLBACK environment variable when the switch -v
or -vv
is used, which adds more information to the Ansible output and formats it in a more readable way. Very handy for troubleshooting.
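The script itself is not shown in the post; here is a minimal sketch of what such a wrapper could look like (the real run-playbook.sh in the repository may differ, and for illustration the sketch only prints the ansible-playbook command instead of executing it):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of run-playbook.sh -- the real script in the
# repository may differ. With -v or -vv, the "yaml" stdout callback
# is selected, which formats Ansible output in a more readable way.

run_playbook() {
  local verbosity=""
  case "${1:-}" in
    -v|-vv)
      # Selects the stdout callback plugin used by ansible-playbook
      export ANSIBLE_STDOUT_CALLBACK=yaml
      verbosity="$1"
      ;;
  esac
  # For illustration this sketch only prints the command; a real
  # wrapper would execute it instead.
  if [ -n "$verbosity" ]; then
    echo "ansible-playbook $verbosity site.yml"
  else
    echo "ansible-playbook site.yml"
  fi
}

run_playbook "$@"
```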
./run-playbook.sh
PLAY [dns] *********************************************************************
TASK [Gathering Facts] *********************************************************
ok: [cnx-ns.stoeps.home]
TASK [robertdebock.dns : Add default DNS Server to resolv.conf] ****************
changed: [cnx-ns.stoeps.home]
TASK [robertdebock.dns : test if dns_port is set correctly] ********************
ok: [cnx-ns.stoeps.home -> localhost]
TASK [robertdebock.dns : test if dns_caching_dns is set correctly] *************
ok: [cnx-ns.stoeps.home -> localhost]
TASK [robertdebock.dns : test if dns_zones is set correctly] *******************
ok: [cnx-ns.stoeps.home -> localhost]
TASK [robertdebock.dns : test if item in dns_zones is set correctly] ***********
ok: [cnx-ns.stoeps.home -> localhost] => (item=localhost)
ok: [cnx-ns.stoeps.home -> localhost] => (item=127.in-addr.arpa)
ok: [cnx-ns.stoeps.home -> localhost] => (item=0.in-addr.arpa)
ok: [cnx-ns.stoeps.home -> localhost] => (item=255.in-addr.arpa)
ok: [cnx-ns.stoeps.home -> localhost] => (item=stoeps.home)
...
RUNNING HANDLER [robertdebock.dns : rndc reload] *******************************
changed: [cnx-ns.stoeps.home]
PLAY RECAP *********************************************************************
cnx-ns.stoeps.home : ok=28 changed=13 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
A big advantage of Ansible is idempotency: when the role and playbook are written the right way, you can run them over and over again, and they will not change anything that is already configured.
Running the command a second time:
...
TASK [robertdebock.dns : start and enable dns] *********************************
ok: [cnx-ns.stoeps.home]
PLAY RECAP *********************************************************************
cnx-ns.stoeps.home : ok=28 changed=0 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
The handlers know that nothing has changed, so the service keeps running without a restart.
When I add records to the group_vars/dns.yml
file, the DNS server is updated and reloaded automatically to pick up the changes.
So adding new hosts to the DNS server is easy: just edit dns.yml
, add a record, and rerun Ansible in this repository.
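For example, adding a fourth host is a single new entry in the records list of the stoeps.home zone (cnx-web and 10.0.22.5 are hypothetical values for illustration):

```yaml
# Excerpt of the stoeps.home zone in environments/libvirt/group_vars/dns.yml;
# cnx-web / 10.0.22.5 is a hypothetical new host.
records:
  - name: cnx-ns
    value: 10.0.22.2
  - name: cnx-nfs
    value: 10.0.22.3
  - name: cnx-ds
    value: 10.0.22.4
  - name: cnx-web       # new record -- rerun Ansible afterwards
    value: 10.0.22.5
```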
Todo
- let Terraform add new hosts to dns.yml
, so the DNS server is always up-to-date
Resources
- Ansible Galaxy
- ansible-pb-infra-demo : GitHub project with files to follow this post
- terraform-libvirt : Terraform files to create the infrastructure for this post