One of the many purposes of Ansible is to easily, quickly, and efficiently provision new server infrastructure. Configuration management tools are invaluable in server provisioning, as they provide a flexible way to deploy and manage new hosts.
This post goes through a simple example playbook that uses Ansible roles to break up and organise the provisioning process. If you haven't used Ansible to set up a server before, this is a good place to start. The idea can then be expanded upon to add more individual components or specific features.
The playbook is intended for Linux hosts running Debian 8 (Jessie) and is tested using a suitable Vagrant VM. After the testing, towards the end of the post, the playbook is deployed to several newly created Debian 8 droplets on Digital Ocean.
1 – Playbook Repository
The entirety of this post was tested on a Xubuntu 16.04 VM using the below version of Ansible and Python:
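The exact version strings were shown here in the original output; you can reproduce them on your own system with:

```shell
# Print the Ansible and Python versions used for testing.
ansible --version
python --version
```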
The layout of the files for this playbook look like this:
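A hedged reconstruction of the layout, based on the directories each role is described as holding later in the post:

```
ansible-debian-provisioning/
├── playbook.yml
├── roles/
│   ├── base/
│   │   ├── files/
│   │   ├── handlers/
│   │   └── tasks/
│   ├── users/
│   │   ├── defaults/
│   │   └── tasks/
│   ├── ufw/
│   │   ├── defaults/
│   │   └── tasks/
│   └── ntp/
│       ├── defaults/
│       ├── handlers/
│       ├── tasks/
│       └── templates/
└── vagrant/
    └── Vagrantfile
```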
2 – Main Playbook File
At the root level of the ansible-debian-provisioning repo is the main playbook.yml file, which calls and runs the subsequent roles and their tasks.
There are several variables defined here in this core playbook file.
- hosts – set to target all the hosts in the Ansible inventory file /etc/ansible/hosts.
- gather_facts – when the playbook is run, it will gather facts about the operating system before executing tasks.
- roles – here are the four role names to be included when running the Ansible playbook (listed in order of execution).
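Putting those pieces together, the core playbook.yml might look like this sketch (the become: line is an assumption, since provisioning tasks typically need root privileges):

```yaml
---
# Sketch of playbook.yml; role names are taken from the sections below.
- hosts: all
  become: true
  gather_facts: true
  roles:
    - base
    - users
    - ufw
    - ntp
```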
From here on, once the user runs the playbook, Ansible processes the plays (tasks) inside each role's directory.
3 – Roles: Base
The “base” role puts in place some sensible server defaults/groundwork, and holds three directories containing their relevant Ansible configuration files:
The “files” directory contains an ASCII-style message of the day (MOTD) file that, once in place, is shown to users upon connecting to the server. This can be changed freely to whatever message is suitable by altering the motd file contents.
The play (or tasks) for this role, carried out by Ansible on each target host, is found in the “tasks” directory’s configuration file. In here there are multiple tasks that together form the play. The description for each task explains its individual purpose.
This is where the bulk of changes are actually made to the target hosts when running the playbook. Each change is described roughly in the name directives.
In this base role, several packages are installed, the message of the day is changed, SSH key authentication is made mandatory, and automatic security updates are enabled.
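A hedged sketch of what the base role's tasks file could look like; the exact package list and file names are assumptions based on the description above:

```yaml
---
# Sketch of roles/base/tasks/main.yml; packages and paths are assumed.
- name: Install base packages
  apt:
    name: ['sudo', 'vim', 'curl', 'unattended-upgrades']
    state: present
    update_cache: yes

- name: Set the custom message of the day
  copy:
    src: motd
    dest: /etc/motd

- name: Enforce SSH key usage (disable password authentication)
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: '^#?PasswordAuthentication'
    line: 'PasswordAuthentication no'
  notify: restart ssh

- name: Enable automatic security updates
  copy:
    src: 20auto-upgrades
    dest: /etc/apt/apt.conf.d/20auto-upgrades
  notify: reconfigure unattended-upgrades
```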
Notice in the last code snippet how the two handlers are triggered, when required, via notify:.
Here’s how the handlers work.
The “handlers” directory’s configuration file lists two handlers. The first handler for this role ensures the SSH system daemon is restarted. The second handler runs the dpkg-reconfigure command for the unattended-upgrades package.
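The handlers file might look something like this sketch (handler names are assumptions and must match the notify: values used in the tasks):

```yaml
---
# Sketch of roles/base/handlers/main.yml.
- name: restart ssh
  service:
    name: ssh
    state: restarted

- name: reconfigure unattended-upgrades
  command: dpkg-reconfigure -f noninteractive unattended-upgrades
```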
These handlers are both triggered when appended to tasks found elsewhere in this role’s playbook files – like we saw earlier.
4 – Roles: Users
The “users” role houses two Ansible directories.
The main.yml file in the defaults directory contains credentials for the Linux user accounts to be generated on the target host(s). The YAML used here begins as a list of dictionaries: each user and their associated credentials make up one entry in the list, and the keys in each dictionary hold the actual values for that user.
If you’re unsure on the YAML syntax and its usage, see Ansible’s explanations on YAML in Ansible.
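A sketch of the structure this file takes; every value below is a placeholder:

```yaml
---
# Sketch of roles/users/defaults/main.yml; all values are placeholders.
users:
  - name: jane
    encrypted_password: $6$SomeSalt$replace-with-real-hash   # hashed, never plain-text
    public_keys:
      - ~/.ssh/id_rsa.pub
      - ~/.ssh/colleague_key.pub
    sudo: true
    adm: true
  - name: john
    encrypted_password: $6$OtherSalt$replace-with-real-hash
    public_keys:
      - ~/.ssh/id_rsa.pub
    sudo: false
    adm: false
```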
Note: It’s important to remember that the provided key values in the previous code snippet are mainly placeholders and imagined examples. They need replacing with real values.
The first key in the user dictionary is name: and defines the Linux account username.
The second is encrypted_password: which must be set to a hashed value and should not be plain-text. Later on in this section we’ll explain how to go about generating a hash for your own passwords.
The third is public_keys: which, you’ll notice, is plural. This is in case you want to add multiple SSH keys (from your localhost) to the new remote user account, giving multiple people/keys access when needed. Importantly, a “nested” list is used to add these multiple entries when they’re required.
The next key sudo: is a Boolean and adds the user account to the sudo group when set to “true”.
The last key adm: is the same as the previous key. It’s also a Boolean and adds the user account to the adm (“admin”) group when set to “true”.
Moving over to the second configuration file in the “tasks” directory, you can see the play/tasks that make use of the definitions in the previous file.
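A hedged sketch of those tasks; the task names and shell choice are assumptions, but the loop structure follows the defaults file's list of dictionaries:

```yaml
---
# Sketch of roles/users/tasks/main.yml.
- name: Create the user accounts
  user:
    name: "{{ item.name }}"
    password: "{{ item.encrypted_password }}"
    shell: /bin/bash
    state: present
  with_items: "{{ users }}"

- name: Add users to the sudo group
  user:
    name: "{{ item.name }}"
    groups: sudo
    append: yes
  when: item.sudo
  with_items: "{{ users }}"

- name: Add users to the adm group
  user:
    name: "{{ item.name }}"
    groups: adm
    append: yes
  when: item.adm
  with_items: "{{ users }}"

- name: Install the users' public SSH keys
  authorized_key:
    user: "{{ item.0.name }}"
    key: "{{ lookup('file', item.1) }}"
  with_subelements:
    - "{{ users }}"
    - public_keys
```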
Generating Crypted Passwords
When it comes to generating the hashes (crypted values) for your username key passwords, there are several different methods on offer.
The first involves using mkpasswd, a utility that is available on most Linux systems. If it’s not on your system, look for it in your package manager’s index; on Debian and Ubuntu it comes bundled inside the whois package.
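With the whois-package version of mkpasswd installed, the invocation would be along these lines; it prompts for the password interactively:

```shell
# Prompt for a password and print a SHA-512 crypt hash of it.
mkpasswd --method=sha-512
```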
A separate way of doing this is to use Python’s crypt module/library:
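A minimal sketch of the crypt approach; note that the crypt module is Unix-only and was removed from the standard library in Python 3.13:

```python
import crypt

# "Enter Password Here" and "SomeSalt" are placeholders; the "$6$" prefix
# selects SHA-512 crypt, so the output begins with "$6$SomeSalt$".
print(crypt.crypt("Enter Password Here", "$6$SomeSalt"))
```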
Enter Password Here is of course replaced by the plain-text password you want to use, whilst SomeSalt needs replacing with a salt: a random string of characters.
You could use pwgen to generate salts, should you want to:
A final alternative is via the openssl package.
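For example (the -6 flag selects SHA-512 crypt and requires a reasonably recent OpenSSL; older builds may only offer -1 for MD5):

```shell
# Print a SHA-512 crypt hash of the placeholder password with a fixed salt.
openssl passwd -6 -salt SomeSalt 'Enter Password Here'
```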
Pick one of the values from the output to use for your encrypted_password: keys.
5 – Roles: UFW
The “UFW” role configures the firewall settings for servers. There are only two configuration files to be aware of.
The first config file in the “defaults” directory defines which custom firewall ports you wish to remain open and accessible to outside connections. They are defined in a list format that can be added to as necessary.
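The variable name below is an assumption; the list format is as described:

```yaml
---
# Sketch of roles/ufw/defaults/main.yml; variable name is assumed.
open_ports:
  - 80    # HTTP
  - 443   # HTTPS
```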
Only two ports are set to open currently in this file – HTTP port 80 and HTTPS port 443.
More tinkering with the firewall is carried out in the other config file, which resides in the “tasks” directory:
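A hedged sketch of those firewall tasks, using Ansible's ufw module throughout:

```yaml
---
# Sketch of roles/ufw/tasks/main.yml; the open_ports variable name is assumed.
- name: Install the UFW package
  apt:
    name: ufw
    state: present

- name: Reset the firewall to its default state
  ufw:
    state: reset

- name: Allow SSH access on port 22
  ufw:
    rule: allow
    port: '22'
    proto: tcp

- name: Open the custom ports defined in defaults
  ufw:
    rule: allow
    port: "{{ item }}"
  with_items: "{{ open_ports }}"

- name: Reload and enable the firewall
  ufw:
    state: enabled
```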
As you can see above in the last code snippet, Ansible is instructed to install the UFW package, reset the firewall, open port 22 for SSH access, then reload and enable the firewall to make it active. Our previous ports defined in the other file are also opened.
It’s worth noting that all of this is done via Ansible’s built-in ufw module, and not at any point manually through the shell.
6 – Roles: NTP
The NTP role handles timezone settings for target hosts and has four directories with configuration files inside of them:
The “defaults” config file is where you set the timezone you wish to use for your server(s). Alter the Europe/London text to your own choice here.
There is no requirement to change the ntp_server: definition and this can be left as it is written.
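A sketch of the defaults file; the ntp_server value shown is an assumed example of how the OS-specific pool prefix could be expressed:

```yaml
---
# Sketch of roles/ntp/defaults/main.yml.
timezone: Europe/London
ntp_server: debian    # assumed: matched against the pool entries in ntp.conf
```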
Two handlers are required (all set in the “handlers” file) to ensure the NTP config is running once setup/updated. They are triggered as normal in the other “tasks” section file.
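The handlers file could look like this sketch (handler names are assumptions, and must match the notify: values in the tasks):

```yaml
---
# Sketch of roles/ntp/handlers/main.yml.
- name: restart ntp
  service:
    name: ntp
    state: restarted

- name: update timezone
  command: dpkg-reconfigure -f noninteractive tzdata
```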
The NTP “tasks” carry out several actions. A template file containing the chosen timezone is copied to the remote host as /etc/timezone, and given root user based permissions.
The tzdata and ntp packages are then downloaded and installed. The ntp service gets enabled as well as started using the service: module. Two lines in ntp.conf are altered to match the OS type (Debian in this case), which has been set in the “defaults” ntp_server: directive.
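A hedged sketch of those tasks; the template file name, the exact ntp.conf line patterns, and the handler names are all assumptions:

```yaml
---
# Sketch of roles/ntp/tasks/main.yml; file names and regexps are assumed.
- name: Set the timezone from the template
  template:
    src: timezone.j2
    dest: /etc/timezone
    owner: root
    group: root
    mode: 0644
  notify: update timezone

- name: Install the tzdata and ntp packages
  apt:
    name: ['tzdata', 'ntp']
    state: present

- name: Enable and start the ntp service
  service:
    name: ntp
    state: started
    enabled: yes

- name: Point ntp.conf at the distribution's pool servers
  lineinfile:
    dest: /etc/ntp.conf
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
  with_items:
    - { regexp: '^pool 0', line: 'pool 0.{{ ntp_server }}.pool.ntp.org iburst' }
    - { regexp: '^pool 1', line: 'pool 1.{{ ntp_server }}.pool.ntp.org iburst' }
  notify: restart ntp
```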
Like previously, note the use of the handlers to trigger the desired actions in this above NTP snippet.
Lastly in this section/role, you can see the very short one-line template file, which takes your timezone choice (e.g. timezone: Europe/London) from the defaults config file.
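Assuming the template is a Jinja2 file in the role's "templates" directory, its entire contents could be just:

```
{{ timezone }}
```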
That covers all the roles in the playbook; here’s one method of testing it before deploying and utilising it on real hosts.
7 – Vagrant Local Testing
You can run through the playbook in a test environment to ensure it works as intended, before applying it to real servers. A containerisation/virtualisation tool like Vagrant or Docker is great for this purpose. In my example testing I’m going with Vagrant, but Docker is probably a more modern choice and certainly worth pursuing instead if preferred. Be aware, however, that containers are stripped-down, abstracted layers of a full OS image, so they may not replicate a full server as faithfully as a Vagrant VM.
If you haven’t already you’ll need to clone the GitHub repository to carry out the testing in this step:
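The clone command would be along these lines; replace the placeholder with the repository owner's actual GitHub username:

```shell
# <user> is a placeholder for the repository owner's username.
git clone https://github.com/<user>/ansible-debian-provisioning.git
cd ansible-debian-provisioning
```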
Once you have the repo cloned and Vagrant up and running on your local system, change into the vagrant directory to begin the testing.
In this directory you’ll see several Vagrant files. The main file that does most of the work here is the Vagrantfile.
The only portion of the Vagrantfile file we’ll take a look at here is the “provision” code block, so open up the file with an editor and find it, or just read it here:
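A hedged sketch of that provision block; the playbook and inventory paths are assumptions based on the repo layout:

```ruby
# Sketch of the Vagrantfile "provision" block; paths are assumed.
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "../playbook.yml"
  ansible.inventory_path = "hosts"
  ansible.host_key_checking = false
end
```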
This piece of configuration sets up Ansible as the provisioning agent Vagrant should use when provisioning a Vagrant VM. It also identifies where the target playbook to make use of is stored, alongside which Ansible inventory file to use. Host key checking for SSH with Ansible is also disabled. All of this is in the context of the Vagrant test VM.
So the provisioner is the entity that works through some pre-set tasks using the VM instance provided by Vagrant. The most common provisioners are: Puppet, Chef and Ansible. Shell Scripting is also still a very prevalent option.
The rest of the Vagrantfile is important but not crucial to be aware of, as it can be understood another time when learning how Vagrant itself works.
Exit this file and type in the next command to begin the testing process:
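```shell
vagrant up
```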
This downloads the debian/jessie64 box (seen here) specified in the Vagrantfile and creates an instance of that box as a Vagrant virtual machine (to test the Ansible playbook on).
Note: If your host is already a Linux VM (nested virtualisation), set your hypervisor network state to Bridged instead of NAT to avoid networking problems such as slow downloads, authentication errors, and proxy errors inside Vagrant boxes.
As the Ansible provision settings are already in place within the Vagrantfile, there’s no need to tell Vagrant to use Ansible with the new VM for us.
So just watch as the output messages show the playbook execution progress, until the final display reads:
To further verify the changes have been made, or explore individual parts of the Vagrant test VM, SSH into it by typing:
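```shell
vagrant ssh
```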
Exit from the Vagrant VM as you would any other remote host e.g. exit or CTRL + D
When you make changes to the playbook content and its actions in the future, to further test the changes you must re-run the playbook on the Vagrant VM.
To do this use this command:
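```shell
vagrant provision
```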
Manually running Ansible against Vagrant to achieve something like a dry test run (i.e. --check), or to incorporate any other options you might want, is possible too.
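For example, something along these lines; the inventory file name is an assumption, while the private key path and vagrant user are typical Vagrant defaults:

```shell
# Dry-run the playbook against the Vagrant VM; inventory path is assumed.
ansible-playbook playbook.yml -i vagrant/hosts --check \
  --private-key=.vagrant/machines/default/virtualbox/private_key -u vagrant
```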
To actually implement changes again, remove the --check flag or use the provision command as normal.
Finally here, let’s look at some housekeeping in terms of Vagrant usage.
Update your Vagrant Debian box to the latest version from the maintainer with:
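```shell
vagrant box update
```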
Destroy your test environment (Vagrant VM instance) using the next command, where default is the name of the environment.
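```shell
vagrant destroy default
```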
ID’s for Vagrant virtual machines work instead of the environment name too.
Finally to remove your Vagrant Debian box download completely, and all of the different updated versions, run:
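```shell
vagrant box remove debian/jessie64 --all
```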
Now onto the real thing!
8 – Digital Ocean Droplet(s)
Create several new Debian droplets using the Digital Ocean control panel – copying your Ansible SSH key across to the new droplets during creation.
Add the multiple new droplet IP addresses to your Ansible inventory file (default in /etc/ansible/hosts).
Here’s a bare-minimum example of the contents, assuming you created three droplets in total.
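The IP addresses below are documentation placeholders; substitute your droplets' real addresses, and note that the "testing" group name is an assumption reused in the next step:

```
# /etc/ansible/hosts
[testing]
203.0.113.10
203.0.113.11
203.0.113.12
```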
Create this directory, and begin writing to a new file:
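Assuming the default inventory location, the group_vars directory sits alongside it (substitute your preferred editor):

```shell
mkdir /etc/ansible/group_vars
nano /etc/ansible/group_vars/testing
```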
Setup a group variable for the testing group, so they use the root user with Ansible’s SSH operations.
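The file's contents would be along these lines (older Ansible versions use ansible_ssh_user instead):

```yaml
---
# /etc/ansible/group_vars/testing
ansible_user: root
```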
In the root level of the repository, run the provisioning Ansible playbook against your testing group’s live Digital Ocean droplets.
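Assuming the "testing" group name from the inventory example, the command would be along these lines:

```shell
ansible-playbook playbook.yml --limit testing
```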
Watch the output once again to confirm the playbook’s success.
That’s about it. You could check a droplet manually and take a look at what’s actually changed if you like, but the earlier output from running the Ansible playbook is of course your verification of what’s been carried out.