How to Configure a New Ubuntu Server with Ansible

source link: https://www.vultr.com/docs/how-to-configure-a-new-ubuntu-server-with-ansible

Introduction

This guide will show you how to automate the initial Ubuntu server configuration (20.04 or later), using Ansible. Ansible is a software tool that automates the configuration of one or more remote nodes from a local control node. The local node can be another Linux system, a Mac, or a Windows PC. If you are using a Windows PC, you can install Linux using the Windows Subsystem for Linux. This guide will focus on using a Mac as the control node to set up a new Vultr Ubuntu server.

The Ubuntu Ansible setup playbook is listed at the end of this guide. Instructions are provided on how to install and use it. The playbook, and this guide, have been updated to work with Ubuntu 21.10. Additional changes include working with a Vultr instance configured with Cloud-init, replacing the systemd-resolve stub DNS resolver with the Unbound DNS resolver, and other minor changes and improvements.

It takes a little work to set up and start using Ansible, but once it is set up and you become familiar with it, using Ansible will save a lot of time and effort. For example, you may want to experiment with different applications. Using the Ansible setup playbook described in this guide, you can quickly reinstall your Ubuntu instance and then run the playbook to configure your base server. I hope this playbook will be a good example for creating future playbooks for installing web servers, database servers, or even an email server.

We will use a single Ansible playbook that will do the following:

  • Upgrade installed apt packages.
  • Install a base set of useful software packages.
  • Replace the systemd-resolve stub DNS resolver with the Unbound DNS resolver.
  • Set a fully qualified domain name (FQDN).
  • Set the timezone.
  • Set the SSH port number (allows for setting a non-standard port number).
  • Set sudo password timeout (can change the default 15-minute timeout).
  • Create a regular user with sudo privileges.
  • Install SSH Keys for the new regular user.
  • Ensure authorized key for the root user is installed (or updated to a new key).
  • Update/Change the root user password.
  • Disable password authentication for root.
  • Disable tunneled clear-text passwords.
  • Create a 2-line prompt for root and the new regular user.
  • Configure a firewall using ufw.
  • Configure brute force mitigation using fail2ban.
  • Optionally configure static IP networking.
  • Reboot and restart services as needed.

A local DNS resolver is created by installing the unbound and resolvconf packages (using the default installed configuration). It provides a local DNS recursive resolver, and the results are cached. This is really important if you want to run your own email server that includes DNS blacklist (DNSBL) lookups. Some DNSBL services will not work with a public DNS resolver because they limit the number of queries from a server IP.

If you have configured additional IPs in the Vultr control panel, you can use this playbook to install a new or updated netplan networking file (/etc/netplan/50-cloud-init.yaml). By default, the Configure static networking playbook task is disabled.

After you create your Ubuntu 21.10 (or later) instance, Cloud-init will install any available updates. If you try to execute the setup playbook right after creating your instance, there may be a conflict between Cloud-init and the apt upgrade task. You may see a task failure like:

TASK [Upgrade installed apt packages]   **********************************************************************************************
FAILED - RETRYING: Upgrade installed apt packages (15 retries left).
FAILED - RETRYING: Upgrade installed apt packages (14 retries left).
FAILED - RETRYING: Upgrade installed apt packages (13 retries left).
ok: [ap1.altoplace.org]

This is completely normal. The setup playbook will retry up to 15 times to run the apt upgrade task (while it is waiting for the lockfile to be released). If you wait a few minutes after creating the instance, you will probably not see any apt upgrade task failures when running the setup playbook.

Prerequisites

  • A Vultr server with a freshly installed Ubuntu (20.04 or later) instance.
  • A local Mac, Windows (with Linux installed via the WSL), or a Linux system (this guide will focus on using a Mac, but the procedures are similar for any Linux control node).
  • If using a Mac, Homebrew should be installed.
  • A previously generated SSH Key for the Vultr host; the SSH public key should already be installed for the root user.
  • Ansible 2.9.x, or later stable version (this guide has been thoroughly tested with Ansible version 2.9.27 on a Mac, installed via Homebrew).
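If you still need to generate the key pair, ssh-keygen can create one using the filename referenced later in this guide (host_ed25519; any name works as long as it matches your SSH config):

```shell
# Generate an ed25519 key pair for the Vultr host.
# ~/.ssh/host_ed25519 matches the filename used in this guide's examples.
ssh-keygen -t ed25519 -f ~/.ssh/host_ed25519 -C "host.example.com"
```

You will be prompted for an optional passphrase; on a Mac, the UseKeychain SSH option (shown in the SSH config section) can store the passphrase in the Keychain.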

1. Install Ansible on the Local System

For this guide, we are using the Ansible 2.9.x Red Hat released version.

If you are using a Mac with Homebrew installed:

$ brew install ansible@2.9
$ brew link --force --overwrite ansible@2.9

This will install Ansible along with all the required dependencies, including python version 3.9.x. You can quickly test your installation by doing:

$ ansible --version
ansible 2.9.27
config file = /Users/george/.ansible.cfg
configured module search path = ['/Users/george/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible@2.9/2.9.27_1/libexec/lib/python3.9/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.9.7 (default, Oct 13 2021, 06:45:31) [Clang 13.0.0 (clang-1300.0.29.3)]

Create a Simple Ansible Configuration

Create the .ansible.cfg configuration file in your home directory on the local system. This tells Ansible how to locate the hosts inventory file.

If you are using a Mac, add the following content:

[defaults]
inventory  = /Users/user/ansible/hosts.yml
interpreter_python = auto

Be sure to replace user with your actual user name.

Create the folder to store the hosts.yml hosts inventory file:

$ mkdir ~/ansible
$ cd ~/ansible

Of course, you can put it anywhere you want to and give it any name. Just make sure that your .ansible.cfg file points to the correct location. I like storing all my ansible files in a Dropbox folder where I can also run my setup playbook from other Mac machines.

Add the following content to ~/ansible/hosts.yml:

all:
  vars:
    ansible_python_interpreter: /usr/bin/python3
    ansible_become: yes
    ansible_become_method: sudo 
  children:
    vultr:
      hosts:
        host.example.com:
          user: user
          user_passwd: "{{ host_user_passwd }}"
          root_passwd: "{{ host_root_passwd }}"
          ssh_pub_key: "{{ lookup('file', '~/.ssh/host_ed25519.pub')  }}"
          ansible_become_pass: "{{ host_user_passwd }}"
          cfg_static_network: false
    vmware:
      hosts:
        ubuntu1.local:
          user: george
          user_passwd: "{{ ubuntu1_user_passwd }}"
          root_passwd: "{{ ubuntu1_root_passwd }}"
          ssh_pub_key: "{{ lookup('file', '~/.ssh/ubuntu1_ed25519.pub')  }}"
          ansible_become_pass: "{{ ubuntu1_user_passwd }}"
          cfg_static_network: true

The first block defines Ansible variables that are global to the hosts inventory file. Hosts are listed under the children groups.

Replace host.example.com with your actual hostname. The vmware group shows a working example for setting up a VMware host on my Mac.

The user is the regular user to be created. The host_user_passwd and host_root_passwd variables are the user and root passwords, stored in an Ansible vault (described below). ssh_pub_key points to the SSH public key for the Vultr host. The ansible_become lines give the newly created user the ability to execute sudo commands (in future Ansible playbooks).

The cfg_static_network is a boolean variable that is set to true if you are configuring static networking in /etc/netplan. Unless you have specifically created a static networking configuration, you should leave this set to false. Configuring a static network is beyond the scope of this guide.

Using the Ansible Vault

Create the directory for the Ansible password vault and setup playbook:

$ mkdir -p ~/ansible/ubuntu
$ cd ~/ansible/ubuntu

Create the Ansible password vault:

$ ansible-vault create passwd.yml
New Vault password: 
Confirm New Vault password:

This will start up your default system editor. Add the following content:

host_user_passwd: ELqZ9L70SSOTjnE0Jq
host_root_passwd: tgM2Q5h8WCeibIdJtd

Replace host with your actual hostname. Generate your own secure passwords. Save and exit your editor. This creates an encrypted file that only Ansible can read. You can add other host passwords to the same file.

pwgen is a very handy tool that you can use to generate secure passwords. Install it on a Mac via Homebrew: brew install pwgen. Use it as follows:

$ pwgen -s 18 2
ELqZ9L70SSOTjnE0Jq tgM2Q5h8WCeibIdJtd
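If you would rather not install anything extra, a password of similar strength can be read straight from /dev/urandom with standard tools (a sketch producing one 18-character alphanumeric password):

```shell
# Generate an 18-character alphanumeric password from /dev/urandom.
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 18; echo
```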

You can view the contents of the ansible-vault file with:

$ ansible-vault view passwd.yml
Vault password:

You can edit the file with:

$ ansible-vault edit passwd.yml                
Vault password: 

2. Create an SSH Config File for the Vultr Host

Next, we need to define the Vultr hostname and SSH port number that Ansible will use to connect to the remote host.

The SSH configuration for the server host is stored in ~/.ssh/config. An example configuration looks like this (on a Mac):

Host *
  AddKeysToAgent yes
  UseKeychain yes
  IdentitiesOnly yes
  AddressFamily inet

Host host.example.com host
  Hostname host.example.com
  Port 22
  User user
  IdentityFile ~/.ssh/host_ed25519

This SSH config file is also where you track a non-standard SSH port. The playbook is always executed the first time over SSH port 22. If the playbook changes the SSH port number, update the Port value in this file after the playbook runs (or during the server reboot initiated by the playbook).
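For example, with the playbook's ssh_port variable set to 509 (the value used in the playbook listing below), the updated Host block would look like:

```
Host host.example.com host
  Hostname host.example.com
  Port 509
  User user
  IdentityFile ~/.ssh/host_ed25519
```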

With this SSH configuration file, you can use a shorthand hostname to log into the server.

For the user login:

$ ssh host

For the root login:

$ ssh root@host

UseKeychain is specific to macOS. It stores the SSH key passphrase in the macOS Keychain.

host.example.com is your Vultr server FQDN (Fully Qualified Domain Name) that needs to be defined in your DNS or /etc/hosts file on your local system. Port 22 is optional but required if you define a non-standard SSH port.
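If the FQDN is not published in DNS yet, you can add a mapping to /etc/hosts on your local system. The 192.0.2.10 below is a placeholder; substitute your server's actual IP address:

```shell
# Map the server FQDN and shorthand name locally (requires sudo).
# 192.0.2.10 is a placeholder from the documentation address range.
echo "192.0.2.10 host.example.com host" | sudo tee -a /etc/hosts
```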

Important: Install your SSH Key for the root user if you have not done so already:

$ ssh-copy-id -i ~/.ssh/host_ed25519 root@host

Verify that you can log in without using a password.

Note: If you reinstall your Vultr instance, be sure to delete your Vultr hostname from ~/.ssh/known_hosts on your local control node. Otherwise, you will see an SSH error when you try to log into your reinstalled host. The hostname is added to this file during the first login attempt:

$ ssh root@ap1
The authenticity of host 'ap1.altoplace.org (216.128.149.25)' can't be established.
ECDSA key fingerprint is SHA256:oNczYD+xuXx0L6CM17Ciy+DWu3jOEbfVclIj9wUT7Y8.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes

Answer yes to the question. If you don't delete the hostname from this file after reinstalling your instance, you will see an error like:

$ ssh root@ap1
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
o o o

If this happens, delete the line entered for your hostname in the known_hosts file and rerun the ssh command.
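Instead of editing known_hosts by hand, you can let ssh-keygen remove the stale entry:

```shell
# Remove the old host key for the reinstalled server from ~/.ssh/known_hosts.
ssh-keygen -R host.example.com
```

The original file is saved as known_hosts.old in case you need it back.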

3. Test Your SSH/Ansible Configuration

Before running the setup Ansible playbook, we need to verify that Ansible is working correctly, that you can access your Ansible vault, and that you can connect to your Vultr host. First, verify that Ansible is installed correctly on a Mac:

$ ansible --version
ansible 2.9.27 
  config file = /Users/user/.ansible.cfg
  o o o

This was the latest version of Ansible available via Homebrew when this guide was written.

Run this command to test your Ansible configuration (also, your SSH configuration):

$ cd ~/ansible/ubuntu
$ ansible -m ping --ask-vault-pass --extra-vars '@passwd.yml' vultr -u root
Vault password: 
host.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

If you see the above output, then everything is working fine. If not, go back and double-check all your SSH and Ansible configuration settings. Start by verifying that you can execute:

$ ssh root@host

Log in without a password (you have installed your SSH key for root).

4. Running the Ansible Ubuntu Server Configuration Playbook

You are ready to run the playbook; when you execute the playbook, you will be prompted for your vault password. The playbook will execute a number of tasks with a PLAY RECAP at the end. You can rerun the playbook multiple times; for example, you may want to rerun the playbook to change something like the SSH port number. It will only execute tasks when needed. Be sure to update variables at the beginning of the playbook, such as your SSH port number and your local client IP address, before running the playbook. Setting your local client IP address prevents you from being accidentally locked out by fail2ban.

You can easily determine your client IP address by logging into your host and executing the who command:

root@host:~# who
root     pts/1        2021-10-11 20:24 (12.34.56.78)

Your client IP address, 12.34.56.78, will be listed in the output.
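If you want the address by itself (for pasting into the playbook's my_client_ip variable), the parentheses in the who output can be stripped with awk:

```shell
# Print just the client IP from the current login sessions.
who | awk -F'[()]' '/pts/ {print $2}'
```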

We are finally ready to run the Ansible playbook, which I listed below. Be sure that you are in the ~/ansible/ubuntu directory. This is the command to run:

$ ansible-playbook --ask-vault-pass --extra-vars '@passwd.yml' setup-pb.yml -l vultr -u root
Vault password:

Depending on the speed of your Mac, it might take a few seconds to start up. If it completes successfully, you will see a PLAY RECAP like:

PLAY RECAP *************************************************************************************************************************
ap1.altoplace.org          : ok=37   changed=26   unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

The most important thing to note is that there should be no failed tasks.

Next, I will describe some basic tests that you can run to verify your server setup.

5. Ubuntu Server Verification

After you have successfully executed the Ansible setup playbook, here are some basic tests that you can execute to verify your server setup. I will show some real-life examples with the server host that I used to test the setup playbook (my local hostname is ap1 and my user name is george). I executed these tests on Ubuntu 21.10.

Verify Your User Login

Verify that you can log into your new user account using your host's public SSH key:

╭─george@imac1 ~/Dropbox/ansible/ubuntu 
╰─$ ssh ap1
Welcome to Ubuntu 21.10 (GNU/Linux 5.13.0-20-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Fri Nov  5 11:35:01 AM CDT 2021

  System load:             0.08
  Usage of /:              8.7% of 127.41GB
  Memory usage:            8%
  Swap usage:              0%
  Processes:               147
  Users logged in:         0
  IPv4 address for enp1s0: 149.28.116.218
  IPv4 address for enp1s0: 216.128.149.25
  IPv6 address for enp1s0: 2001:19f0:5c01:59a:5400:3ff:fe8d:a5ae
  IPv6 address for enp1s0: 2001:19f0:5c01:59a:5400:3ff:fe8d:a5b0

 * Super-optimized for small spaces - read how we shrank the memory
   footprint of MicroK8s to make it the smallest full K8s around.

   https://ubuntu.com/blog/microk8s-memory-optimisation

0 updates can be applied immediately.


Last login: Fri Nov  5 11:20:55 2021 from 72.34.15.207
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details. 

Note the two-line prompt. The first line shows user@host and the current directory. Note: On my server, I have configured additional IPv4 & IPv6 addresses.

Now, note how the l, la, and ll ls aliases work:

george@ap1:~
$ touch tmpfile
george@ap1:~
$ l
tmpfile
george@ap1:~
$ la
.bash_history  .bash_logout  .bashrc  .cache  .profile  .ssh  tmpfile
george@ap1:~
$ ll
total 32
drwxr-x--- 4 george george 4096 Nov  5 11:40 ./
drwxr-xr-x 4 root   root   4096 Nov  5 11:12 ../
-rw------- 1 george george    9 Nov  5 11:14 .bash_history
-rw-r--r-- 1 george george  220 Oct  7 05:09 .bash_logout
-rw-r--r-- 1 george george 3879 Nov  5 11:12 .bashrc
drwx------ 2 george george 4096 Nov  5 11:14 .cache/
-rw-r--r-- 1 george george  807 Oct  7 05:09 .profile
drwx------ 2 george george 4096 Nov  5 11:12 .ssh/
-rw-rw-r-- 1 george george    0 Nov  5 11:40 tmpfile

Verify Your User Password

Even though you use an SSH public key to log in to your user account, you still need to use your user password with the sudo command. For example, use the sudo command to change to the root account (enter your user password when prompted):

george@ap1:~
$ sudo -i
[sudo] password for george: 
root@ap1:~
# exit
logout
george@ap1:~
$

Verify the Root Password

While in your user account, you can also use su - to change to the root account. One difference is that you will have to enter your root password:

george@ap1:~
$ su -
Password: 
root@ap1:~
# exit
logout
george@ap1:~
$ 

Verify Your Hostname

While we are in the root account, let's verify our hostname and some other features that the playbook set up for us:

root@ap1:~
# hostname
ap1

root@ap1:~
# hostname -f
ap1.altoplace.org

root@ap1:~
# date
Fri Nov  5 11:49:34 AM CDT 2021

Here we verified both the short and FQDN hostnames. With the date command, verify that the timezone is set correctly.

Verify the Unbound Local DNS Caching Resolver

An in-depth discussion of Unbound is beyond the scope of this guide. However, I can provide a few quick tests to verify that the default Unbound local DNS caching resolver configuration is working. We will use the dig command.

To verify that the resolver is working, do, for example:

root@ap1:~
# dig +noall +answer +stats altoplace.com
altoplace.com.      3600    IN  A   216.92.30.76
;; Query time: 16 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Fri Nov 05 11:50:51 CDT 2021
;; MSG SIZE  rcvd: 58

Note that the server address is 127.0.0.1. Also, note the TTL (Time To Live). For this example, the TTL is 3600 seconds. Also, note the Query time, 16 msec. Now execute the same command again:

root@ap1:~
# dig +noall +answer +stats altoplace.com
altoplace.com.      3532    IN  A   216.92.30.76
;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Fri Nov 05 11:51:59 CDT 2021
;; MSG SIZE  rcvd: 58

The Query time should be at or near 0 msec because the second query result came from our local cache. The cached result will remain active for the time-to-live interval, which, as you can see, is counting down.

Some (email) DNS blocklist servers rate-limit queries per resolver IP, which can cause failures when your lookups arrive through a busy public DNS resolver. With the local resolver, the following dig command should return "permanent testpoint":

root@ap1:~
# dig test.uribl.com.multi.uribl.com txt +short
"permanent testpoint"

If you were using a public DNS resolver, you might see a failure like (after you first create your Vultr instance, but have not executed the setup playbook):

root@ap1:~# dig test.uribl.com.multi.uribl.com txt +short
"127.0.0.1 -> Query Refused. See http://uribl.com/refused.shtml for more information [Your DNS IP: 149.28.122.136]"

You can have a look at that URL to read more about this topic.

Verify fail2ban and UFW SSH Port Protection

This set of tests will verify that fail2ban and ufw are integrated together to protect your SSH port. If you are using the default port 22, it will not take long for attackers to attempt to log in to your server. Their login attempt will fail, and fail2ban will take note of the failure. If there are multiple failed attempts in a short period of time (as noted in your fail2ban configuration), fail2ban will ban the IP for the time that you configured in your fail2ban configuration. Fail2ban will notify ufw to block the IP for the duration of the ban.

To see the current fail2ban status, you can execute fail2ban-client status sshd (or f2bst sshd to save some typing):

root@ap1:/var/log
# f2bst sshd
Status for the jail: sshd
|- Filter
|  |- Currently failed: 0
|  |- Total failed: 6
|  `- File list:    /var/log/auth.log
`- Actions
   |- Currently banned: 2
   |- Total banned: 2
   `- Banned IP list:   2600:380:681e:ceee:ed7c:c8b2:66ef:8f25 166.175.56.178

This output shows that there are currently 0 failed login attempts. There have been a total of 6 failures. Of those failed attempts, two IP addresses met the criteria to be banned. There are two IPs that are being actively banned. You can observe these IP failures and bans at /var/log/fail2ban.log.

You can also execute iptables -nL and see in the command output that the IPv4 address is banned:

 O O O
Chain f2b-sshd (1 references)
target     prot opt source               destination         
REJECT     all  --  166.175.56.178       0.0.0.0/0            reject-with icmp-port-unreachable
RETURN     all  --  0.0.0.0/0            0.0.0.0/0
 O O O  

Note: Execute ip6tables -nL to view the banned IPv6 address.

I produced the above output by running a test from my phone, logging into my server with invalid credentials, and verifying that I could no longer connect after my IP address was banned. The phone first tried an IPv6 address and then the IPv4 address. (I turned off my Wi-Fi to ensure that the IP address was different from my_client_ip; if my phone were using my_client_ip, the connection would never fail.)

6. Ubuntu Ansible Set-Up Playbook Listing

This is the setup-pb.yml playbook:

# Initial server setup
#
---
- hosts: all
  become: yes
  vars:
    ssh_port: "509"
    my_client_ip: 72.34.15.207
    tmzone: America/Chicago
    sudo_timeout: 20
    f2b_jail_local: |
      [DEFAULT]
      ignoreip = 127.0.0.1/8 ::1 {{ my_client_ip }}
      findtime = 1h
      bantime = 2h
      maxretry = 3

      [sshd]
      enabled = true
      port = {{ ssh_port }}

  tasks:
    - name: Get datestamp from the system
      shell: date +"%Y%m%d"
      register: dstamp

    - name: Set current date stamp variable
      set_fact:
        cur_date: "{{ dstamp.stdout }}"

    # Update and install the base software
    - name: Update apt package cache
      apt:
        update_cache: yes
        cache_valid_time: 3600

    - name: Upgrade installed apt packages 
      apt:
        upgrade: dist
      register: upgrade
      retries: 15
      delay: 5
      until: upgrade is success

    - name: Ensure that these software packages are installed
      apt:
        pkg:
          - build-essential
          - fail2ban
          - needrestart
          - pwgen
          - resolvconf
          - unbound
          - unzip
        state: latest

    - name: Disable the systemd-resolve stub DNS resolver (replaced by unbound)..
      lineinfile:
        dest: /etc/systemd/resolved.conf
        regexp: '^#DNSStubListener='
        line: 'DNSStubListener=no'
        state: present
      notify:
        - restart systemd-resolved

    - name: Check if a reboot is needed for Debian-based systems
      stat:
        path: /var/run/reboot-required
      register: reboot_required

    # Host Setup
    - name: Set static hostname
      hostname:
        name: "{{ inventory_hostname_short }}"

    - name: Add FQDN to /etc/hosts
      lineinfile:
        dest: /etc/hosts
        regexp: '^127\.0\.1\.1'
        line: '127.0.1.1 {{ inventory_hostname }} {{ inventory_hostname_short }}'
        state: present

    - name: Check if cloud init is installed.
      stat: path="/etc/cloud/templates/hosts.debian.tmpl"
      register: cloud_installed

    - name: Add FQDN to /etc/cloud/templates/hosts.debian.tmpl
      lineinfile:
        dest: /etc/cloud/templates/hosts.debian.tmpl
        regexp: '^127\.0\.1\.1'
        line: "127.0.1.1 {{ inventory_hostname }} {{ inventory_hostname_short }}"
        state: present
      when: cloud_installed.stat.exists

    - name: set timezone
      timezone:
        name: "{{ tmzone }}"

    - name: Set ssh '{{ ssh_port }}' port number
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: 'Port '
        line: 'Port {{ ssh_port }}'
        state: present
      notify:
        - restart sshd

    # Set sudo password timeout (default is 15 minutes)
    - name: Set sudo password timeout.
      lineinfile:
        path: /etc/sudoers
        state: present
        regexp: '^Defaults\tenv_reset'
        line: 'Defaults env_reset, timestamp_timeout={{ sudo_timeout }}'
        validate: '/usr/sbin/visudo -cf %s'

    - name: Create/update regular user with sudo privileges
      user:
        name: "{{ user }}"
        password: "{{ user_passwd | password_hash('sha512') }}"
        state: present
        groups: sudo
        append: true
        shell: /bin/bash

    - name: Ensure authorized keys for remote user is installed
      authorized_key:
        user: "{{ user }}"
        state: present
        key: "{{ ssh_pub_key }}"

    - name: Ensure authorized key for root user is installed
      authorized_key:
        user: root
        state: present
        key: "{{ ssh_pub_key }}"

    - name: Update root user password.
      user:
        name: root
        password: "{{ root_passwd | password_hash('sha512') }}"

    - name: Disable password authentication for root
      lineinfile:
        path: /etc/ssh/sshd_config
        state: present
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin prohibit-password'
      notify:
        - restart sshd

    - name: Disable tunneled clear-text passwords
      lineinfile:
        path: /etc/ssh/sshd_config
        state: present
        regexp: '^#?PasswordAuthentication'
        line: 'PasswordAuthentication no'
      notify:
        - restart sshd

    - name: Set user PS1 to a two-line prompt
      lineinfile:
        dest: "/home/{{ user }}/.bashrc"
        insertafter: EOF
        line: "PS1='${debian_chroot:+($debian_chroot)}\\[\\033[01;32m\\]\\u@\\h\\[\\033[00m\\]:\\[\\033[01;34m\\]\\w\\[\\033[00m\\]\\n\\$ '"
        state: present

    - name: Set root PS1 to a two-line prompt
      lineinfile:
        path: '/root/.bashrc'
        state: present
        insertafter: EOF
        line: PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\n\$ '

    # Configure a firewall
    - name: Disable and reset ufw firewall to installation defaults.
      ufw:
        state: reset

    - name: Find backup rules to delete
      find:
        paths: /etc/ufw
        patterns: "*.{{ cur_date }}_*"
        use_regex: no
      register: files_to_delete

    - name: Delete ufw backup rules
      file:
        path: "{{ item.path }}"
        state: absent
      with_items: "{{ files_to_delete.files }}"

    - name: Allow ssh port '{{ ssh_port }}'.
      ufw:
        rule: allow
        proto: tcp
        port: '{{ ssh_port }}'
        state: enabled

    - name: Turn UFW logging off
      ufw:
        logging: "off"

    - name: configure fail2ban for ssh
      copy:
        dest: /etc/fail2ban/jail.local
        content: "{{ f2b_jail_local }}"
        owner: root
        group: root
        mode: 0644
      notify:
        - restart fail2ban

    # simple shell script to display fail2ban-client status info; usage:
    #   f2bst
    #   f2bst sshd
    - name: Configure f2bst
      copy:
        dest: /usr/local/bin/f2bst
        content: |
          #!/usr/bin/sh
          fail2ban-client status $*
        owner: root
        group: root
        mode: 0750

    - name: run needrestart
      command: needrestart -r a
      when: not reboot_required.stat.exists and upgrade.changed

    - name: Configure static networking
      copy:
        src: etc/netplan/50-cloud-init.yaml
        dest: /etc/netplan/50-cloud-init.yaml
        owner: root
        group: root
        mode: 0644
      notify:
        - netplan apply
      when: cfg_static_network == true

    - name: Reboot the server if needed
      reboot:
        msg: "Reboot initiated by Ansible because of reboot required file."
        connect_timeout: 5
        reboot_timeout: 600
        pre_reboot_delay: 0
        post_reboot_delay: 30
        test_command: whoami
      when: reboot_required.stat.exists

    - name: Remove old packages from the cache
      apt:
        autoclean: yes

    - name: Remove dependencies that are no longer needed
      apt:
        autoremove: yes
        purge: yes

  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
      when: reboot_required.stat.exists == false

    - name: restart fail2ban
      service:
        name: fail2ban
        state: restarted
      when: reboot_required.stat.exists == false

    - name: restart systemd-resolved
      service:
        name: systemd-resolved
        state: restarted
      when: reboot_required.stat.exists == false

    - name: netplan apply
      command: netplan apply
      when: cfg_static_network == true 

You can read the Ansible Documentation to learn more about Ansible.

You should only have to update the vars: section to change the settings for your specific situation. Most likely, you will want to set the client IP and timezone. Setting the client IP prevents one from being accidentally locked out by fail2ban.

Conclusion

In this guide, we have introduced Ansible for automating the initial Ubuntu server setup. This is very useful for deploying or redeploying a server after testing an application. It also creates a solid foundation for creating a web, database, or email server.
