
Terraform

Terraform is an infrastructure as code (IaC) tool from HashiCorp that lets you create, manage, and manipulate infrastructure with configuration files. Among its many provider plugins is an OpenNebula provider that can be used to interact with OpenNebula resources. This guide shows some basic examples to get you started.


The Terraform core compares the desired configuration with the current state and communicates with the providers to take the steps needed to fulfill it: managing resources, reading from data sources, and so on.

Install Terraform

To use Terraform you will need to install it. HashiCorp distributes Terraform as a binary package, which can also be installed through most package managers.
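For example, on Debian or Ubuntu it can be installed from HashiCorp's apt repository; see the HashiCorp documentation for other platforms and package managers:

    # add HashiCorp's signing key and apt repository, then install Terraform
    wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
    echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
    sudo apt update && sudo apt install terraform

Verify the installation with terraform -version.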

Create a Terraform configuration

Terraform uses declarative syntax to describe your infrastructure. The configuration files typically use the HashiCorp Configuration Language (HCL) and have the .tf file extension, but may also use JSON and end with the .tf.json file extension.

Each Terraform configuration lives in its own working directory. All configuration files in the directory are read, so you can name your files however you choose. For smaller configurations, all blocks are typically written in a single file, usually called main.tf. Larger configurations are often split into multiple files, e.g. input variables in variables.tf and output variables in outputs.tf.
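For example, the second scenario below is split like this:

    scenario2/
    ├── data.tf       # data sources for template lookup
    ├── variables.tf  # input variables and locals
    ├── main.tf       # provider and resources
    └── outputs.tf    # output variables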

Example scenarios

In the following examples, we describe how to deploy with Terraform the example scenarios previously described using the OpenNebula GUI.

Templates

The public and the internal templates are used in the example scenarios. You need to have Show all selected in the group selection to see them.

One VM with a public IP scenario


Create a directory for this scenario and put the following contents in a file called main.tf.

scenario1/main.tf
variable "username" {
  description = "Login username in OpenNebula GUI"
}

variable "login_token" {
  description = "Generated login token from Settings in OpenNebula Sunstone GUI"
}

terraform {
  required_providers {
    opennebula = {
      source  = "OpenNebula/opennebula"
      version = "~> 1.3"
    }
  }
}

provider "opennebula" {
  endpoint = "https://opennebula.ice.ri.se/RPC2"
  username = var.username
  password = var.login_token
}

resource "opennebula_virtual_machine" "public-instance" {
  template_id = [template id of public template] # <-- REPLACE THIS

  context = {
    PASSWORD     = "s3cr3t"
    SET_HOSTNAME = "$NAME"
  }
}

output "public_ip" {
  value       = opennebula_virtual_machine.public-instance.ip
  description = "Public IP address."
}

Replace the value for template_id with the actual id of the public template (remove the brackets).

Note that:

  • username and login_token are required input variables, since no default value is given.
  • The opennebula provider version constraints are specified in the terraform block.
  • The provider block contains provider-specific configuration, which may also be given as environment variables.
  • The resource block requests a resource of type virtual_machine from the opennebula provider, with the local name public-instance.
  • Some template parameters are given in the context block: PASSWORD is required in this template, and it is recommended to also include SET_HOSTNAME. See Virtual Machine Template for details about these and other available template variables.
  • public_ip is an output variable where the value is taken from the resource, using the local property name public-instance.

Multiple internal VMs behind one public IP scenario


Create a directory for this configuration and add the following files. Here the configuration is split into several files: data.tf, variables.tf, main.tf, and outputs.tf. You could equally keep everything in a single main.tf.

scenario2/data.tf
data "opennebula_templates" "public" {
  name_regex = ".*Ubuntu 22\\.04 \\(ice-public\\).*"
  sort_on    = "register_date"
  order      = "ASC"
}

data "opennebula_templates" "internal" {
  name_regex = ".*Ubuntu 22\\.04 \\(ice-internal\\).*"
  sort_on    = "register_date"
  order      = "ASC"
}
scenario2/variables.tf
variable "username" {
  description = "Generated login token from Settings in Sunstone GUI"
}

variable "login_token" {
  description = "Generated login token from Settings in Sunstone GUI"
}

variable "group" {
  description = "Group (leave empty for your primary group)"
  default     = ""
}

variable "resource_count" {
  description = "Number of instances"
  type        = number
  default     = 4
}

variable "public_template_id" {
  description = "Public VM template id, see template listing in GUI"
  type        = number
  default     = null
}

variable "internal_template_id" {
  description = "Internal VM template id, see template listing in GUI"
  type        = number
  default     = null
}

variable "cpu" {
  description = "Amount of CPU shares assigned to the VM"
  type        = number
  default     = 1
}

variable "memory" {
  description = "Amount of RAM assigned to the VM in MB"
  type        = number
  default     = 1024
}

variable "cli_password" {
  description = "Console password for root login via OpenNebula GUI VPN"
  default     = null
}

locals {
  cli_password         = coalesce(var.cli_password, random_password.password.result)
  public_template_id   = coalesce(var.public_template_id, data.opennebula_templates.public.templates[0].id)
  internal_template_id = coalesce(var.internal_template_id, data.opennebula_templates.internal.templates[0].id)
}
scenario2/main.tf
terraform {
  required_providers {
    opennebula = {
      source  = "OpenNebula/opennebula"
      version = "~> 1.3"
    }
  }
}

provider "opennebula" {
  endpoint = "https://opennebula.ice.ri.se/RPC2"
  username = var.username
  password = var.login_token
  default_tags {
    tags = {
      deployment_mode = "terraform"
    }
  }
}

resource "random_password" "password" {
  length           = 8
  override_special = "!#%&?"
}

resource "opennebula_virtual_machine" "virtual_machine" {
  count = var.resource_count

  group       = var.group
  template_id = count.index == 0 ? local.public_template_id : local.internal_template_id

  name   = count.index == 0 ? "public-instance" : "internal-instance-${count.index}"
  cpu    = var.cpu
  memory = var.memory

  context = {
    PASSWORD     = local.cli_password
    SET_HOSTNAME = "$NAME"
  }
}
scenario2/outputs.tf
output "cli_password" {
  # use 'terraform output cli_password' to display
  value       = opennebula_virtual_machine.virtual_machine[0].context.PASSWORD
  description = "Password for root access via VPN in the GUI"
  sensitive   = true
}

output "group_name" {
  value       = opennebula_virtual_machine.virtual_machine[0].gname
  description = "OpenNebula Group Name"
}

output "public_ip" {
  value       = opennebula_virtual_machine.virtual_machine[0].ip
  description = "Public IP address."
}

output "vm_names_and_ips" {
  value = {
    for machine in opennebula_virtual_machine.virtual_machine :
    machine.name => flatten(machine.template_nic.*.computed_ip)
  }
  description = "Instances names and IP addresses."
}

Note that, in addition, this scenario includes:

  • data sources to look up the templates by name.
  • locals, for local variables, to simplify the configuration.
  • count, a meta-argument giving the number of instances, is set to 4 by default. The first instance is configured to use the public template and the others to use the internal template.
  • group is a variable, so it is possible to change it.
  • name is constructed so that the first instance is called public-instance while the others are called internal-instance-1, internal-instance-2, and so on.
  • The cpu and memory attributes are set with default values; see OpenNebula VM resource for all available attributes.
  • random_password resource is used to generate a default password.

Common configuration blocks

Providers

The OpenNebula provider plugin is declared in the terraform block, and the provider configuration can be given either in the provider block or via environment variables. Use tags to set extra attributes on your resources, making them easier to filter and group.
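For example, the provider can read its settings from OPENNEBULA_* environment variables instead of a provider block (the bracketed values are placeholders):

    export OPENNEBULA_ENDPOINT='https://opennebula.ice.ri.se/RPC2'
    export OPENNEBULA_USERNAME='[your OpenNebula username]'
    export OPENNEBULA_PASSWORD='[OpenNebula login token]'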

Resources

Each resource block describes one or more infrastructure objects, such as virtual networks or higher-level components such as DNS records etc. In the examples, we use the OpenNebula VM resource from the provider.

Variables

Besides providers and resources, input and output variables are common in configurations.

  • Input variables can be used to customize the deployment. Those that have no default value are required, and Terraform will prompt for input if no value is supplied. In the commands below we pass some variables using the -var flag; values can also be supplied in a file, as sketched after this list.
  • Output variables are used to get information back. In the example scenarios we return the IP addresses, generated passwords etc.
  • Local variables can be used to simplify a configuration by giving a name to an expression or value.
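As an alternative to repeating -var flags on every command, input variable values can be placed in a terraform.tfvars file in the working directory, which Terraform loads automatically (the bracketed values are placeholders):

    username    = "[your OpenNebula username]"
    login_token = "[OpenNebula login token]"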

Data sources

The OpenNebula provider also provides a set of data sources to get more information about hosts, networks, templates, images, etc. In the second scenario, we included an example that looks up the IDs of the public and internal templates using a name regex. Alternatively, you can match on tags.
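A sketch of tag-based matching, assuming the templates carry a hypothetical environment tag (check the opennebula_templates data source documentation for the exact arguments):

    data "opennebula_templates" "tagged" {
      tags = {
        environment = "dev" # hypothetical tag
      }
    }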

Generate a login token for OpenNebula API

The OpenNebula provider will need a login token to have access to the OpenNebula API.

  1. Open the OpenNebula Sunstone GUI (i.e. the advanced/legacy GUI).
  2. Go to Settings ➡ Auth ➡ Manage Login Token.
  3. Configure token expiration and optionally select a target group.
  4. Click Get a new token.

Target group

If no target group is selected the default group will be used. This can be overridden by the group attribute in the resource.
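If you have the OpenNebula CLI tools installed and can authenticate from the command line, a token can also be generated there. A sketch, assuming CLI access, with the expiration time given in seconds:

    oneuser token-create --time 259200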

Init, apply and destroy

Go to the directory of one of the scenarios above and use the following commands to deploy and later destroy the VM instances.

Replace the values for the username and login_token with your OpenNebula username and the login token you created above (remove the brackets).

  • Use init to initialize the project and prepare the working directory. Terraform will install the providers and modules needed to run the configuration.

    terraform init
    
  • Optionally, use validate to validate your configuration.

    terraform validate
    
  • Optionally, use plan before committing the configuration to preview the execution plan, describing what Terraform will create, update, or destroy based on the existing infrastructure and your configuration.

    terraform plan -var username='[your OpenNebula username]' -var login_token=[OpenNebula login token]
    
  • Use apply to commit the configuration. Terraform will first display the plan so you can review the changes. Answer yes to continue. Note the output variables.

    terraform apply -var username='[your OpenNebula username]' -var login_token=[OpenNebula login token]
    
  • Optionally, use show to show the resulting state including the output variables.

    terraform show
    
  • Use destroy to terminate the resources. Review the changes and answer yes to continue.

    terraform destroy -var username='[your OpenNebula username]' -var login_token=[OpenNebula login token]
    

Sensitive data

Note that the password is not shown by default, since it is flagged as sensitive data. Use for example terraform output cli_password to display it.

Import resources

The OpenNebula provider has support for importing existing infrastructure, either using the terraform import command or via an import block. Before importing, you need to add a resource block to the configuration for the resource to be imported into.
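For example, with an import block (Terraform 1.5 or later), a minimal sketch where the resource name imported and the VM ID 42 are hypothetical:

    resource "opennebula_virtual_machine" "imported" {
      # fill in attributes to match the existing VM, or let
      # 'terraform plan' show the differences after import
    }

    import {
      to = opennebula_virtual_machine.imported
      id = "42" # hypothetical OpenNebula VM ID
    }

The same import can be done imperatively with terraform import opennebula_virtual_machine.imported 42.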

Import problems

Not all attributes and context variables are imported, so management can be limited or error-prone and may require manual editing of the state. The current version of the VM import will fail on some common operations, such as destroy, because the operation times out immediately. Manually editing the Terraform state file and giving timeout a value of 20 (for example) will allow you to continue. Alternatively, you can destroy the resource manually and use the state rm command.
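For example, to drop the first VM of the second scenario from the Terraform state without touching the VM itself, and then terminate it manually in the GUI:

    terraform state rm 'opennebula_virtual_machine.virtual_machine[0]'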