Ansible

Ansible is an open-source configuration management tool mainly designed for provisioning systems and deploying applications using Infrastructure as Code (IaC).

The control node runs the commands described in a playbook on the target nodes, typically over SSH. The available target nodes are described by an inventory and can be divided into groups.

Install Ansible

Ansible is agentless, but needs to be installed on the control node; see the official Ansible installation guide for instructions.
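
For example, one of the installation options is to install Ansible with pip:

pip3 install ansible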

Note that both the control node and the target nodes need to have Python installed. Ansible connects to the target nodes and copies small Python modules there to execute the different commands. The modules are removed once their job is done.

Ansible plugins for interacting with OpenNebula, both for inventory and for managing resources, are included in the community.general collection. This collection is installed if you choose the ansible package; if you choose ansible-core you will need to install the collection manually. The plugins require the PyONE Python package, typically installed using

pip3 install pyone
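
If you use ansible-core, the community.general collection can be installed manually with ansible-galaxy:

ansible-galaxy collection install community.general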

Ansible makes the connections using SSH (other connection methods are also available). Make sure you have configured SSH access to the target nodes, i.e. your public SSH key is in the authorized_keys file on the remote systems and they accept root SSH login with a key.
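
A quick way to verify the access is to run a command over SSH, e.g. using the public IP address from the examples below:

ssh root@194.28.122.123 hostname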

Create an Inventory

Static inventory

Ansible needs to know the list of target nodes; this list is called the inventory. It can be specified in a simple file with IP addresses, such as:

inventory.yaml
ungrouped:
  hosts:
    194.28.122.123:

The nodes can also be divided into groups to apply different commands to different sets. For example, assume we have deployed the second scenario, multiple internal VMs behind one public IP, using Terraform:

inventory.yaml
public:
  hosts:
    194.28.122.123:

internal:
  hosts:
    172.22.0.1:
    172.22.0.2:
    172.22.0.3:
    172.22.0.4:
  vars:
    ansible_ssh_common_args: '-J root@194.28.122.123'

Thus the public instance belongs to the public group and the four internal instances belong to the internal group. Note that for the internal group we have set a variable so that the public instance is used as a jump host when connecting to the internal instances. See scenario 2 SSH access.
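
You can verify the jump-host path manually with plain SSH before involving Ansible, for example:

ssh -J root@194.28.122.123 root@172.22.0.1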

OpenNebula inventory plugin

Using the OpenNebula inventory plugin, the nodes can instead be read directly from OpenNebula. This is called a dynamic inventory and gives you an up-to-date list of nodes, including their current IP addresses. For example:

inventory.yaml
plugin: community.general.opennebula
api_url: https://opennebula.ice.ri.se/RPC2
api_username: [OpenNebula username]
api_password: [OpenNebula login token]
groups:
  public: "'public' in DESCRIPTION and DEPLOYMENT_MODE == 'terraform'"
  internal: "'internal' in DESCRIPTION and DEPLOYMENT_MODE == 'terraform'"

Replace api_username with your username and enter the generated login_token from the Terraform example as api_password. These can also be specified in a config file or as environment variables.
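
For example, the OpenNebula plugins in community.general can read the credentials from the ONE_URL, ONE_USERNAME and ONE_PASSWORD environment variables if the corresponding options are not set in the file:

export ONE_URL=https://opennebula.ice.ri.se/RPC2
export ONE_USERNAME=[OpenNebula username]
export ONE_PASSWORD=[OpenNebula login token]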

In this example we used the DESCRIPTION attribute to divide the instances into groups, together with the DEPLOYMENT_MODE tag that was set in the Terraform example.

Using labels

The OpenNebula inventory plugin supports filtering by label by setting filter_by_label, or grouping by label by setting group_by_label. You can create new labels under Settings and assign them to your instances in the VMs tab.
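
As a minimal sketch, assuming you have created a label called ansible-demo and assigned it to your instances, the inventory can be limited to those instances with filter_by_label:

inventory.yaml
plugin: community.general.opennebula
api_url: https://opennebula.ice.ri.se/RPC2
api_username: [OpenNebula username]
api_password: [OpenNebula login token]
filter_by_label: ansible-demo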

To set group variables for a dynamic inventory, put them in a file named after the group in the group_vars folder. For example, to use the public instance as a jump server for the SSH connections to the instances in the internal group:

group_vars/internal
ansible_ssh_common_args: "-J root@{{ hostvars['public-instance']['ansible_host'] }}"

Show and test the inventory

Use the following command to show the resulting inventory for any of the files above:

ansible-inventory -i inventory.yaml --list

The built-in ping module can be used to test the connectivity and verify Python availability on all target nodes:

ansible -i inventory.yaml all -u root -m ping

Use the -u flag to specify the remote user for SSH.
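
Alternatively, the remote user can be set with the standard ansible_user variable in the inventory, for example for the public group above:

inventory.yaml
public:
  hosts:
    194.28.122.123:
  vars:
    ansible_user: root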

SSH fingerprints

If you have not configured your SSH client to automatically accept fingerprints, you will have to accept the new fingerprints the first time Ansible connects to the target nodes over SSH. This can be easy to miss if the command produces a lot of output. If you are re-using IP addresses, e.g. after re-installing your instances, there may be conflicting entries in your ~/.ssh/known_hosts file that you will have to remove.
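
A stale entry for a re-used IP address can be removed with ssh-keygen, for example:

ssh-keygen -R 194.28.122.123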

Run playbooks

The tasks to be executed by Ansible are collected in a playbook, a file written in YAML format. For example, to run the ping module from above:

playbook.yaml
- name: Ping all
  hosts: all
  tasks:
    - name: Ping the host
      ansible.builtin.ping:

Or to run a shell command on the nodes in the internal group and display the output:

playbook.yaml
- name: Run date on internal nodes
  hosts: internal
  tasks:
    - name: Check Date with shell command
      ansible.builtin.shell: date
      register: datecmd

    - name: Show the output
      ansible.builtin.debug:
        msg: "{{ datecmd.stdout }}"

You can then use ansible-playbook to execute a playbook:

ansible-playbook -i inventory_opennebula.yaml -u root playbook.yaml

OpenNebula Ansible plugins

You can also use the OpenNebula Ansible plugins to manage common OpenNebula resources, e.g. VMs, images, or hosts.

The following playbooks deploy the two example scenarios. Replace api_username and api_password with your credentials.

One VM with a public IP scenario

playbook.yaml
- name: Deploy one VM with a public IP
  hosts: localhost
  tasks:
  - name: Create a new instance
    community.general.one_vm:
      api_url: https://opennebula.ice.ri.se/RPC2
      api_username: [OpenNebula username]
      api_password: [OpenNebula login token]
      template_name: "Ubuntu 22.04 (ice-public)"
      updateconf:
        CONTEXT:
          PASSWORD: 'secretpassword'

Multiple internal VMs behind one public IP scenario

playbook.yaml
- name: Deploy multiple internal VMs behind one public IP
  hosts: localhost
  tasks:
  - name: Create the public instance
    community.general.one_vm:
      api_url: https://opennebula.ice.ri.se/RPC2
      api_username: [OpenNebula username]
      api_password: [OpenNebula login token]
      template_name: "Ubuntu 22.04 (ice-public)"
      attributes:
        name: 'public-instance'
      cpu: 1
      memory: 1024 MB
      updateconf:
        CONTEXT:
          PASSWORD: 'secretpassword'
          SET_HOSTNAME: "$NAME"

  - name: Create three internal instances
    community.general.one_vm:
      api_url: https://opennebula.ice.ri.se/RPC2
      api_username: [OpenNebula username]
      api_password: [OpenNebula login token]
      template_name: "Ubuntu 22.04 (ice-internal)"
      count: 3
      attributes:
        name: 'internal-instance-#'
      cpu: 1
      memory: 1024 MB
      updateconf:
        CONTEXT:
          PASSWORD: 'secretpassword'
          SET_HOSTNAME: "$NAME"
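
Both playbooks run against localhost, so no inventory is needed to execute them:

ansible-playbook playbook.yaml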