A highly condensed set of basic commands to install Fail2ban the traditional way.
These can be executed on any remote server/VPS running a recent version of Ubuntu; I carried the process out on 18.04. If you’re not familiar with Fail2ban, the start of this brief guide points to two good resources worth reading, one more up to date than the other.
The purpose of this post is to serve as background for a follow-up post that uses Ansible to install and configure Fail2ban more efficiently (linked at the end).
Several of the instructions for this process are taken and adapted from an older article on DigitalOcean. It targets Ubuntu 14.04, but the steps are still broadly applicable on Bionic:
It might be better, however, to first read through this more up-to-date Linode article to understand what Fail2ban is, how it works, and most importantly which values to place into the configuration files; otherwise, what follows may not make complete sense.
You may even prefer to follow the Linode guide in its entirety, but that’s up to you! See here: Linode – “Use Fail2ban to Secure Your Server”
On the remote Ubuntu server in question, update the system package index.
$ sudo apt-get update -y
Install the fail2ban and sendmail packages.
$ sudo apt-get install fail2ban sendmail
Sendmail (if not present by default) is required for Fail2ban to generate notification emails.
Copy the base Fail2ban config into a new jail.local file, which will hold the configuration options we want to override and apply:
$ sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
Note: “Fail2ban reads .conf configuration files first, then .local files overriding any settings. Because of this, all changes to the configuration are generally done in .local files, leaving the .conf files untouched.”
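As a quick illustration of that precedence (the values here are only examples, not recommendations): if the shipped jail.conf and our jail.local disagree on a setting, the .local value wins.

```ini
; /etc/fail2ban/jail.conf (shipped default, left untouched)
[DEFAULT]
bantime = 600

; /etc/fail2ban/jail.local (our override)
[DEFAULT]
bantime = 3600
```

Fail2ban reads both files, and the effective bantime ends up as 3600.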
Here’s where an understanding of the configuration is very much necessary.
I’m also assuming here that a working firewall such as UFW is in place on the Ubuntu server, as the two work together, and a firewall is pretty much mandatory anyway.
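If UFW isn’t set up yet, a minimal sketch looks like the below. Allowing OpenSSH before enabling the firewall matters, or you can lock yourself out of the server.

```shell
# allow SSH through first, so the remote session survives
sudo ufw allow OpenSSH

# turn the firewall on, then confirm its rules
sudo ufw enable
sudo ufw status verbose
```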
Open the newly copied jail.local file.
$ sudo vim /etc/fail2ban/jail.local
Add your sensible Fail2ban configuration blocks and values now; these are my example file contents, should you want to make use of them:
[DEFAULT]

# email address to receive notifications.
destemail = root@localhost

# the email address from which to send emails.
sender = root@<fq-hostname>

# name on the notification emails.
sendername = Fail2Ban

# email transfer agent to use.
mta = sendmail

# ban via UFW; see action.d/ufw.conf
banaction = ufw

[sshd]

enabled = true
port = ssh
filter = sshd

# the window of time in which maxretry attempts are counted.
findtime = 600

# failed attempts from a single ip before a ban is imposed.
maxretry = 5

# the number of seconds that a host is banned for.
bantime = 3600
Note: “A host is banned if it has generated “maxretry” during the last “findtime”.”
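Putting those three numbers together, the rule the sshd jail encodes can be sketched like this. This is only an illustration of the arithmetic, not anything Fail2ban itself runs:

```shell
# values from the jail.local example above
maxretry=5     # failures allowed...
findtime=600   # ...within this many seconds
bantime=3600   # length of the resulting ban, in seconds

echo "$maxretry failures within $((findtime / 60)) minutes -> banned for $((bantime / 60)) minutes"
# prints: 5 failures within 10 minutes -> banned for 60 minutes
```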
Lastly, enable the Fail2ban service on system startup.
$ sudo systemctl enable fail2ban
Then start the service so it’s running right away.
$ sudo systemctl start fail2ban
Fail2ban is now up and running – assuming you entered proper configuration options and have no syntax errors.
As an alternative to going through systemd, restarting Fail2ban with its own client reports any runtime errors, should there be an issue, so…

$ sudo fail2ban-client restart
Fix any reported problems in the output, and then restart again.
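When tracking those problems down, the Fail2ban log is the place to look (the path below is the Ubuntu default):

```shell
# show recent Fail2ban activity: startup errors, bans, and unbans
sudo tail -n 20 /var/log/fail2ban.log

# on fail2ban 0.10 or newer (as shipped with Ubuntu 18.04),
# this dry-runs the configuration without touching the service
sudo fail2ban-client -t
```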
There’s also a command to confirm the status of the server/jails.
Try it out:
$ sudo fail2ban-client status
More specific information about the sshd jail we created in the config file is retrievable with:
$ sudo fail2ban-client status sshd
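The client can also ban and unban addresses by hand, which is handy for testing that a jail actually fires (the IP below is just a documentation example, swap in a real one):

```shell
# manually ban, then unban, an address in the sshd jail
sudo fail2ban-client set sshd banip 203.0.113.10
sudo fail2ban-client set sshd unbanip 203.0.113.10
```

Running the status command again afterwards should show the banned count change.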
Many more useful commands for you to explore are available, indexed at the following wiki: Fail2ban Client CLI Commands
Fail2ban is now installed, running, and working!
Add more jails and actions for other services to expand upon it.
The post leading on from this one achieves the same end result, but uses Ansible configuration management to do the job.