
Ansible – Playbook Server Provisioning (5)

One of the many purposes of Ansible is to provision new server infrastructure easily, quickly, and efficiently. Configuration management tools can be essential in server provisioning, as they provide a very flexible way of deploying and managing new hosts.

This post goes through a very simple example of a playbook that uses Ansible roles to break up and organize the provisioning process. If you haven’t used Ansible to set up a server before, this is a good place to start. The idea can then be expanded upon to add more individual components or more specific configuration.

The playbook is intended for Linux hosts running Debian 8 (Jessie) and is tested using a suitable Vagrant VM. Towards the end of the post, the playbook is then deployed to several newly created Debian 8 droplets on Digital Ocean.


1 – Playbook Repository

The entirety of this post was tested on a Xubuntu 16.04 VM using the following versions of Ansible and Python:

[alert-announce]

ansible 2.3.1.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides
  python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]

[/alert-announce]

The layout of the files for this playbook looks like this:

[alert-announce]

├── playbook.yml
├── README.md
├── roles
│   ├── base
│   │   ├── files
│   │   │   └── motd
│   │   ├── handlers
│   │   │   └── main.yml
│   │   └── tasks
│   │       └── main.yml
│   ├── ntp
│   │   ├── defaults
│   │   │   └── main.yml
│   │   ├── handlers
│   │   │   └── main.yml
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       └── timezone
│   ├── ufw
│   │   ├── defaults
│   │   │   └── main.yml
│   │   └── tasks
│   │       └── main.yml
│   └── users
│       ├── defaults
│       │   └── main.yml
│       └── tasks
│           └── main.yml
└── vagrant
    ├── group_vars
    │   └── vagrant.yml
    ├── inventory.yml
    ├── README.md
    └── Vagrantfile

18 directories, 17 files

[/alert-announce]

2 – Main Playbook File

At the root level of the ansible-debian-provisioning repo is the main playbook.yml file, which calls and runs the subsequent roles and their tasks.

[alert-announce]

- name: provision debian 8 (jessie) droplets
  hosts: all
  gather_facts: yes
  roles:
    - base
    - users
    - ufw
    - ntp

[/alert-announce]

Several directives are defined in this core playbook file.

  • hosts – set to target all the hosts in the Ansible inventory file /etc/ansible/hosts.
  • gather_facts – when the playbook is run, it will gather facts about the operating system before executing any tasks.
  • roles – here are the four role names to be included when running the Ansible playbook (listed in order of execution).

From here on, once the user runs the playbook, the plays (tasks) inside each of the role directories are processed by Ansible.
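
For reference, a full run against the default inventory looks something like this, assuming the command is issued from the repository root and the target hosts are already listed in /etc/ansible/hosts:

[alert-announce]

$ ansible-playbook playbook.yml

[/alert-announce]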


3 – Roles: Base

The “base” role puts in place some sensible server defaults/groundwork, and holds three directories containing their relevant Ansible configuration files:

  • roles/base/files/motd
  • roles/base/handlers/main.yml
  • roles/base/tasks/main.yml

The “files” directory contains an ASCII-style message of the day (MOTD) file that, once in place, is shown to users upon connecting to the server. This can be changed freely to whatever message is suitable by altering the motd file contents.

[alert-announce]

*****************************************************************
*             This server is configured by Ansible.            *
*                                                               *
*  See https://github.com/5car1z/ansible-debian-provisioning   *
*****************************************************************

[/alert-announce]

The play (or tasks) for this role, carried out by Ansible on each target host, is found in the “tasks” directory’s configuration file. In here there are multiple tasks that together form the play. The description for each task explains its individual purpose.

[alert-announce]

- name: install some commonly used packages
  apt: pkg={{ item }} state=present
  with_items:
    - fail2ban
    - git
    - htop
    - tmux
    - vim
    - unattended-upgrades
    - cowsay

- name: set the server message of the day explaining ansible was the configuration management tool
  copy: src=motd dest=/etc/motd mode=644

- name: disable ssh root logins without the use of a valid ssh key
  lineinfile: dest=/etc/ssh/sshd_config state=present regexp='^PermitRootLogin ' line='PermitRootLogin without-password'
  notify: restart sshd

- name: disable ssh password logins for regular users
  lineinfile: dest=/etc/ssh/sshd_config state=present regexp='^PasswordAuthentication ' line='PasswordAuthentication no'
  notify: restart sshd

- name: enable unattended security updates option
  debconf: name=unattended-upgrades question='unattended-upgrades/enable_auto_updates' value='true' vtype='boolean'
  notify: reconfigure unattended-upgrades

[/alert-announce]

This is where the bulk of changes are actually made to the target hosts when running the playbook. Each change is described roughly in the name directives.

In this base role, several packages are to be installed, the message of the day is changed, SSH key usage is enforced and made mandatory, whilst automatic security updates are enabled.

Notice in the last code snippet the triggering of the two handlers when required via the usage of notify:.

Here’s how the handlers work.

The “handlers” directory’s configuration file lists two handlers. The first handler for this role ensures the SSH system daemon is restarted. The second handler runs the dpkg-reconfigure command for the unattended-upgrades package.

[alert-announce]

roles/base/handlers/main.yml

- name: restart sshd
  service: name=ssh state=restarted

- name: reconfigure unattended-upgrades
  command: dpkg-reconfigure -f noninteractive unattended-upgrades

[/alert-announce]

These handlers are triggered by the notify: lines appended to tasks found elsewhere in this role’s files – as we saw earlier.


4 – Roles: Users

The “users” role houses two Ansible directories.

  • roles/users/defaults/main.yml
  • roles/users/tasks/main.yml

The main.yml file in the defaults directory contains credentials for the Linux user accounts to be generated on the target host(s). The YAML used here begins as a list of dictionaries. Each user and their associated credentials make up one entry in this initial list, and the keys in each dictionary hold that user’s actual account values.

If you’re unsure on the YAML syntax and its usage, see Ansible’s explanations on YAML in Ansible.

[alert-announce]

provisioned_users:
  - name: user-one
    encrypted_password: $1$@YMgS-5Y$2lH.vkVmawJ810djjkGp70
    public_keys:
      - /home/$USER/.ssh/id_rsa.pub
    sudo: true
    adm: true
  - name: user-two
    encrypted_password: $1$@YMgS-5Y$2lH.vkVmawJ810djjkGp70
    public_keys:
      - /home/$USER/.ssh/id_rsa.pub
      - /home/$USER/.ssh/id_rsa.pub
    sudo: true
    adm: true
  - name: user-three
    encrypted_password: $1$@YMgS-5Y$2lH.vkVmawJ810djjkGp70
    public_keys:
      - /home/$USER/.ssh/id_rsa.pub
      - /home/$USER/.ssh/id_rsa.pub
      - /home/$USER/.ssh/id_rsa.pub
    sudo: false
    adm: false
  - name: user-four
    encrypted_password: $1$@YMgS-5Y$2lH.vkVmawJ810djjkGp70
    public_keys:
      - /home/$USER/.ssh/id_rsa.pub
      - /home/$USER/.ssh/id_rsa.pub
      - /home/$USER/.ssh/id_rsa.pub
      - /home/$USER/.ssh/id_rsa.pub
    sudo: false
    adm: false

[/alert-announce]

Note: It’s important to remember that the provided key values in the previous code snippet are mainly placeholders and imagined examples. They need replacing with real values.

The first key in the user dictionary is name: and defines the Linux account username.

The second is encrypted_password: which must be set to a hashed value and should not be plain-text. Later on in this section we’ll explain how to go about generating a hash for your own passwords.

The third is public_keys: which you’ll notice is plural. This is in case you want to add multiple SSH keys (from your localhost) to your remote new user account, to give multiple people/keys access when needed. Importantly here, a “nested” list is used to add these multiple entries when they’re required.
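
For instance, a user entry giving two different people access might carry a nested list like this (the second key path is purely hypothetical):

[alert-announce]

public_keys:
  - /home/$USER/.ssh/id_rsa.pub
  - /home/alice/.ssh/id_ed25519.pub

[/alert-announce]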

The next key sudo: is a Boolean and adds the user account to the sudo group when set to “true”.

The last key adm: works the same way as the previous key. It’s also a Boolean, and adds the user account to the adm or “admin” group when set to “true”.

Moving over to the second configuration file in the “tasks” directory, you can see the play/tasks that make use of the definitions in the previous file.

[alert-announce]

- name: add provisioned user accounts defined in defaults config
  user: name={{ item.name }} home=/home/{{ item.name }} shell=/bin/bash state=present password={{ item.encrypted_password }}
  with_items: "{{ provisioned_users }}"

- name: add public keys to authorized keys files
  authorized_key: user={{ item[0].name }} key="{{ lookup('file', item[1]) }}" state=present
  with_subelements:
    - "{{ provisioned_users }}"
    - public_keys

- name: add provisioned users to sudo group
  user: name={{ item.name }} groups=sudo append=yes
  with_items: "{{ provisioned_users }}"
  when: item.sudo

- name: add provisioned users to admin group
  user: name={{ item.name }} groups=adm append=yes
  with_items: "{{ provisioned_users }}"
  when: item.adm

[/alert-announce]

The descriptions of the tasks explain how things work here. The modules used for this are the user and authorized_key modules.
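
As a side note, the first task could equally be written in the longer YAML dictionary syntax that newer Ansible documentation favours – a sketch of the equivalent form:

[alert-announce]

- name: add provisioned user accounts defined in defaults config
  user:
    name: "{{ item.name }}"
    home: "/home/{{ item.name }}"
    shell: /bin/bash
    state: present
    password: "{{ item.encrypted_password }}"
  with_items: "{{ provisioned_users }}"

[/alert-announce]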


Generating Crypted Passwords

When it comes to generating the hashes (crypted values) for your user passwords, there are several different methods on offer.

The first involves using mkpasswd, a utility that is available on most Linux systems. If it’s not on your system, look for it in your package manager’s index – on Debian and Ubuntu it comes bundled inside the whois package.

[alert-announce]

$ sudo apt-get install whois
$ mkpasswd --method=sha-512

[/alert-announce]

A separate way of doing this is to use Python’s crypt module/library:

[alert-announce]

$ python -c 'import crypt; print crypt.crypt("EnterPasswordHere", "SomeSalt")'

[/alert-announce]

EnterPasswordHere is of course replaced by the plain-text password you want to use, whilst SomeSalt needs to be replaced by a salt – a random string of characters.
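
If Python 3 is available on your system, a similar one-liner (an alternative I’m suggesting here, not part of the original post’s toolset) generates the salt and a SHA-512 hash in one go:

[alert-announce]

$ python3 -c 'import crypt; print(crypt.crypt("EnterPasswordHere", crypt.mksalt(crypt.METHOD_SHA512)))'

[/alert-announce]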

You could use pwgen to generate salts, should you want to:

[alert-announce]

$ sudo apt-get install pwgen
$ pwgen

[/alert-announce]

A final alternative is via the openssl package.

[alert-announce]

$ openssl passwd -salt SomeSalt -1 EnterPasswordHere

[/alert-announce]

Use the resulting hash value from the output for your encrypted_password: keys.


5 – Roles: UFW

The “UFW” role configures the firewall settings for servers. There are only two configuration files to be aware of.

  • roles/ufw/defaults/main.yml
  • roles/ufw/tasks/main.yml

The first config file in the “defaults” directory defines which custom firewall ports you wish to remain open and accessible to outside connections. They are defined in a list format that can be added to as necessary.

[alert-announce]

ufw_open_ports: ['80', '443']

[/alert-announce]

Only two ports are set to open currently in this file – HTTP port 80 and HTTPS port 443.
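
To expose additional services, simply extend the list – for example (the extra port here is purely illustrative):

[alert-announce]

ufw_open_ports: ['80', '443', '8080']

[/alert-announce]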

More tinkering with the firewall is carried out in the other config file, which resides in the “tasks” directory:

[alert-announce]

- name: install ufw
  apt: pkg=ufw state=present

- name: disable and reset firewall
  ufw: state=reset

- name: open firewall for ssh
  ufw: rule=allow name=OpenSSH

- name: open firewall on specific ports
  ufw: rule=allow port={{ item }}
  with_items: "{{ ufw_open_ports }}"

- name: reload and enable firewall
  ufw: state=enabled policy=deny

[/alert-announce]

As you can see above in the last code snippet, Ansible is instructed to install the UFW package, reset the firewall, open port 22 for SSH access, then reload and enable the firewall to make it active. Our previous ports defined in the other file are also opened.

It’s worth noting that all of this is done via the inbuilt ufw: Ansible module, and not at any point manually through the shell.
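
The ufw module supports more than plain allow rules, too. As a hypothetical extension (not part of this playbook), SSH connections could be rate-limited rather than simply allowed:

[alert-announce]

# hypothetical extra task using the ufw module's limit rule
- name: rate limit ssh connections
  ufw: rule=limit port=22 proto=tcp

[/alert-announce]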


6 – Roles: NTP

The NTP role handles timezone settings for target hosts and has four directories with configuration files inside of them:

  • roles/ntp/defaults/main.yml
  • roles/ntp/handlers/main.yml
  • roles/ntp/tasks/main.yml
  • roles/ntp/templates/timezone

The “defaults” config file is where you set the timezone you wish to use for your server(s). Alter the Europe/London text to your own choice here.

[alert-announce]

timezone: Europe/London
ntp_server: 0.debian.pool.ntp.org

[/alert-announce]

There is no requirement to change the ntp_server: definition and this can be left as it is written.
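
Because these values live in the role’s defaults – the lowest-precedence variables in Ansible – they can also be overridden per inventory group without touching the role itself. For example, a hypothetical entry in a group_vars file might read:

[alert-announce]

# hypothetical group_vars override of the role default
timezone: America/New_York

[/alert-announce]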

Two handlers (both set in the “handlers” file) ensure the configuration takes effect once it is set up or updated. They are triggered as normal from the “tasks” file.

[alert-announce]

- name: reconfigure tzdata
  command: dpkg-reconfigure -f noninteractive tzdata

- name: restart ntp
  service: name=ntp state=restarted

[/alert-announce]

The NTP “tasks” carry out several actions. A template file containing the chosen timezone is rendered to the /etc/timezone file on the remote host, owned by the root user and group.

The tzdata and ntp packages are then installed. The ntp service is enabled as well as started using the service: module. The server lines in /etc/ntp.conf are then altered to point at the pool set by the “defaults” ntp_server: directive (Debian’s pool in this case), and any other server entries are removed.

[alert-announce]

roles/ntp/tasks/main.yml

- name: configure timezone
  template:
    src: timezone
    dest: /etc/timezone
    owner: root
    group: root
  notify: reconfigure tzdata

- name: install tzdata
  apt: pkg=tzdata state=installed

- name: install ntp
  apt: pkg=ntp state=installed

- name: start ntp
  service: name=ntp state=started enabled=true

- name: set ntp server
  lineinfile: dest=/etc/ntp.conf state=present regexp='^server ' line='server {{ ntp_server }} iburst'
  notify: restart ntp

# Note that there may be more than one 'server' line in this file (hence we
# cannot do this with just one regexp rule).
- name: remove all other ntp servers
  lineinfile: dest=/etc/ntp.conf state=absent regexp="^server\s+(?!{{ ntp_server }})"
  notify: restart ntp

[/alert-announce]

Like previously, note the use of the handlers to trigger the desired actions in this above NTP snippet.

Lastly for this role, you can see the very short one-line template file, which takes your timezone choice (e.g. timezone: Europe/London) from the defaults config file.

[alert-announce]

roles/ntp/templates/timezone

  1. {{ timezone }}

[/alert-announce]

That covers all the roles in the playbook. Here’s one method of testing it before deploying and utilising it on real hosts.


7 – Vagrant Local Testing

You can run through the playbook in a test environment to ensure it works as intended, before applying it to real servers. A containerisation/virtualisation tool like Vagrant or Docker is great for this purpose. In my example testing I’m going with Vagrant, but Docker is a more modern choice and certainly worth pursuing instead if preferred. Be aware, however, that containers are stripped-down, abstracted layers of a full OS image, so they may in fact be less well suited to this than a Vagrant VM.

How to Install and Get Started with Vagrant

If you haven’t already, you’ll need to clone the GitHub repository to carry out the testing in this step:

[alert-announce]

$ git clone https://github.com/5car1z/ansible-debian-provisioning.git

[/alert-announce]

Once you have the repo cloned and Vagrant up and running on your local system, change into the vagrant directory to begin the testing.

[alert-announce]

$ cd vagrant

[/alert-announce]

In this directory you’ll see several Vagrant files. The main file that does most of the work here is the Vagrantfile.

[alert-announce]

├── README.md
├── Vagrantfile
├── group_vars
│   └── vagrant.yml
└── inventory.yml

[/alert-announce]

The only portion of the Vagrantfile file we’ll take a look at here is the “provision” code block, so open up the file with an editor and find it, or just read it here:

[alert-announce]

# Provision using our Ansible playbook.
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "../playbook.yml"
  ansible.inventory_path = "inventory.yml"
  ansible.host_key_checking = false
end

[/alert-announce]

This piece of configuration tells Vagrant to use Ansible as the provisioner when it brings up the VM. It also points to where the target playbook is stored and which Ansible inventory file to use, and disables SSH host key checking. All of this applies only in the context of the Vagrant test VM.

So the provisioner is the entity that works through some pre-set tasks using the VM instance provided by Vagrant. The most common provisioners are: Puppet, Chef and Ansible. Shell Scripting is also still a very prevalent option.

The rest of the Vagrantfile is important but not crucial to be aware of, as it can be understood another time when learning how Vagrant itself works.

Exit this file and type in the next command to begin the testing process:

[alert-announce]

$ vagrant up

[/alert-announce]

This downloads the debian/jessie64 box specified in the Vagrantfile and creates an instance of that box as a Vagrant virtual machine (to test the Ansible playbook on).

Note: If your host is already a Linux VM (nested virtualisation) set your hypervisor network state to Bridged instead of NAT, to remove networking problems such as slow downloads, networking auth errors, proxy errors, etc inside of Vagrant boxes.

As the Ansible provision settings are already in place within the Vagrantfile, there’s no need to tell Vagrant to use Ansible with the new VM for us.

So just watch as the output messages show the playbook execution progress, until the final display reads:

[alert-announce]

< PLAY RECAP >
 --------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

default : ok=24 changed=16 unreachable=0 failed=0

[/alert-announce]

To further verify the changes have been made, or explore individual parts of the Vagrant test VM, SSH into it by typing:

[alert-announce]

$ vagrant ssh

[/alert-announce]

Exit from the Vagrant VM as you would any other remote host, e.g. with exit or CTRL + D.

When you make changes to the playbook content and its actions in the future, re-run the playbook on the Vagrant VM to test them.

To do this, use:

[alert-announce]

$ vagrant provision

[/alert-announce]

Manually running Ansible against the Vagrant VM is also possible – for example to carry out a dry run with --check, or to pass any other options you want to incorporate.

[alert-announce]

$ ansible-playbook --private-key=~/.vagrant.d/insecure_private_key -u vagrant -i inventory.yml --check ../playbook.yml

[/alert-announce]

To actually implement changes again, remove the --check flag or use the provision command as normal.

Finally here, let’s look at some housekeeping in terms of Vagrant usage.

Update your Vagrant Debian box to the latest version from the maintainer with:

[alert-announce]

$ vagrant box update --box debian/jessie64

[/alert-announce]

Destroy your test environment (Vagrant VM instance) using the next command, where ansible_server_provisioning is the machine name defined in the Vagrantfile.

[alert-announce]

$ vagrant destroy ansible_server_provisioning

[/alert-announce]

IDs for Vagrant virtual machines work in place of the machine name too; list them with:

[alert-announce]

$ vagrant global-status

[/alert-announce]

Finally, to remove your downloaded Vagrant Debian box completely, along with all of its updated versions, run:

[alert-announce]

$ vagrant box remove debian/jessie64 --all

[/alert-announce]

Now onto the real thing!


8 – Digital Ocean Droplet(s)

Create several new Debian droplets using the Digital Ocean control panel – copying your Ansible SSH key across to the new droplets during creation.

How To Configure SSH Key-Based Authentication on a Linux Server

Add the multiple new droplet IP addresses to your Ansible inventory file (default in /etc/ansible/hosts).

[alert-announce]

$ sudo vim /etc/ansible/hosts

[/alert-announce]

Here’s a bare-minimum example of the contents, assuming you created three droplets in total.

[alert-announce]

[testing]
ansible-test-1 ansible_host=your.droplet.ip.address
ansible-test-2 ansible_host=your.droplet.ip.address
ansible-test-3 ansible_host=your.droplet.ip.address

[/alert-announce]

Create this directory, and begin writing to a new file:

[alert-announce]

$ sudo mkdir /etc/ansible/group_vars/
$ sudo vim /etc/ansible/group_vars/testing

[/alert-announce]

Set up a group variable for the testing group, so its hosts use the root user for Ansible’s SSH operations.

[alert-announce]

/etc/ansible/group_vars/testing

ansible_user: root

[/alert-announce]
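
If you’d prefer not to connect as root, an alternative (an assumption on my part rather than what this post uses) is a sudo-capable user combined with privilege escalation:

[alert-announce]

# hypothetical alternative: connect as a regular sudo user and escalate
ansible_user: deploy
ansible_become: true

[/alert-announce]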

From the root level of the repository, run the provisioning Ansible playbook against your testing group’s live Digital Ocean droplets.

[alert-announce]

$ ansible-playbook -l testing playbook.yml

[/alert-announce]

Watch the output once again to confirm the playbook’s success.


That’s about it. You could log in to a droplet and take a look at what’s actually changed if you like, but the output from running the Ansible playbook is of course your verification of what’s been carried out.
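
For a quick spot check without logging in, one option is an ad-hoc Ansible command against the group – for instance, to confirm the ufw role took effect (the exact command to query is up to you):

[alert-announce]

$ ansible testing -m command -a 'ufw status verbose'

[/alert-announce]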

Links to subsequent Ansible posts can be found on the Trades page.