Overview
This post provides a step-by-step guide on deploying a Kubernetes cluster in Civo, utilizing OpenTofu, an open-source infrastructure-as-code tool. It covers the prerequisites needed for the deployment, such as setting up the Civo API token and installing OpenTofu. The post walks through writing the necessary configuration files, defining the cluster specifications, and applying the configuration to create and manage the cluster.
Prerequisites
The following commands list the available regions, Kubernetes versions, applications, and node sizes, which helps us define the configuration of our Kubernetes cluster before deployment.
civo region ls                    # List available regions.
civo kubernetes versions          # List available Kubernetes versions.
civo kubernetes applications ls   # List available Kubernetes applications.
civo kubernetes size              # List available Kubernetes node sizes.
Configure Backend
Setting Up Terraform Cloud for Civo Deployment
In this example, we use Terraform Cloud as the backend, so it is necessary to create an account in Terraform Cloud, define an organization, create a workspace, and configure the CIVO_TOKEN variable. This variable is essential to grant Terraform Cloud the necessary permissions to access Civo and deploy the required resources.
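Terraform Cloud workspace variables are configured through its web UI. For purely local experimentation without the cloud backend, the same value could instead be supplied through an environment variable; this is a local-only alternative, relying on the fact that OpenTofu reads TF_VAR_-prefixed variables automatically:
export TF_VAR_CIVO_TOKEN="your-civo-api-token"   # placeholder value, not a real token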
Authenticating OpenTofu with Terraform Cloud
Note: To authenticate OpenTofu with Terraform Cloud, first run the command tofu login app.terraform.io. After executing the command, you will be prompted to type yes to proceed. Then, paste the token obtained from Terraform Cloud. Once this step is completed, you will be able to use OpenTofu with Terraform Cloud seamlessly.
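For reference, the authentication step boils down to a single command:
tofu login app.terraform.io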
Defining a Kubernetes resource
providers.tf
In this section, we define the provider and the backend, and configure the corresponding variables: the region where the cluster will be deployed and the token that OpenTofu uses to authenticate with Civo.
terraform {
  required_providers {
    civo = {
      source  = "civo/civo"
      version = "1.1.1"
    }
  }

  cloud {
    organization = "xllauca"
    ## Required for Terraform Enterprise; defaults to app.terraform.io for Terraform Cloud
    hostname = "app.terraform.io"
    workspaces {
      name = "prod"
    }
  }
}

provider "civo" {
  token  = var.CIVO_TOKEN
  region = var.region
}
network.tf
In this section, we define the network, in our chosen region, that our cluster will use.
resource "civo_network" "network_cluster_civo" {
label = var.network_name
region = var.region
}
firewall.tf
In the firewall resource, we define which ports need to be open on our cluster, customizing them according to our needs. In this case, we have configured ingress rules for port 6443 (the Kubernetes API) and port 22 (SSH), plus an egress rule that allows all outbound TCP traffic. To ensure that the cluster is only accessible from trusted IPs, we should specify only our public IP in var.authorized_networks (referenced by cidr), rather than exposing the cluster to the entire Internet; an example value is shown after the resource block.
resource "civo_firewall" "civo_firewall_cluster" {
name = var.firewall_name
region = var.region
network_id = civo_network.network_cluster_civo.id
create_default_rules = false
ingress_rule {
label = "k8s"
protocol = "tcp"
port_range = "6443"
cidr = var.authorized_networks
action = "allow"
}
ingress_rule {
label = "ssh"
protocol = "tcp"
port_range = "22"
cidr = var.authorized_networks
action = "allow"
}
egress_rule {
label = "all"
protocol = "tcp"
port_range = "1-65535"
cidr = ["0.0.0.0/0"]
action = "allow"
}
}
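As a concrete reference, restricting access to a single trusted address would look like this in terraform.tfvars (203.0.113.10 is a placeholder; substitute your own public IP):
authorized_networks = ["203.0.113.10/32"] # replace with your own public IP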
node_pools.tf
Here, we define an additional node group (optional) for our Kubernetes cluster, as it is possible to add multiple node groups to the same cluster. In this resource, we can specify the number of nodes and the instance type (size) for each group. Currently, Civo offers several options, such as Standard, Performance, CPU-Optimized, and RAM-Optimized, each with different characteristics. Additionally, we can choose whether the nodes in this group will have a public IP. An illustrative node_pools value is shown after the resource block.
resource "civo_kubernetes_node_pool" "back-end_pool" {
count = length(var.node_pools)
cluster_id = civo_kubernetes_cluster.civo_cluster.id
label = var.node_pools[count.index].label
node_count = var.node_pools[count.index].node_count
size = var.node_pools[count.index].size
}
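For illustration, since var.node_pools is a list, more than one extra pool can be declared at once. The second entry below is a hypothetical example; choose labels and sizes that fit your workload:
node_pools = [
  {
    label      = "pool-production-back-end"
    node_count = 2
    size       = "g4s.kube.small"
  },
  {
    label      = "pool-production-batch" # hypothetical second pool
    node_count = 1
    size       = "g4s.kube.large"
  }
]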
cluster.tf
In this section, we define the Kubernetes cluster itself. Its arguments reference the resources defined in the previous files, except for the Container Network Interface (CNI) and the node pool, which are set directly here. Currently, Civo offers cilium and flannel as CNI options. Unlike the civo_kubernetes_node_pool resource, the pools block defined here is mandatory.
resource "civo_kubernetes_cluster" "civo_cluster" {
name = var.cluster_name
kubernetes_version = var.kubernetes_version
network_id = civo_network.network_cluster_civo.id
firewall_id = civo_firewall.civo_firewall_cluster.id
region = var.region
applications = var.applications
tags = var.tags
cni = var.cni
pools {
label = var.node_label
node_count = var.node_count
size = var.node_instance_size
}
}
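outputs.tf
Although this file is not covered step by step, the project also includes an outputs.tf (visible in the file tree later in this post). A minimal sketch, assuming the standard attributes exported by the civo_kubernetes_cluster resource, could look like the following and would produce the values shown in the apply output below:
output "kubernetes_cluster_id" {
  value = civo_kubernetes_cluster.civo_cluster.id
}
output "kubernetes_cluster_name" {
  value = civo_kubernetes_cluster.civo_cluster.name
}
output "kubernetes_cluster_version" {
  value = civo_kubernetes_cluster.civo_cluster.kubernetes_version
}
output "kubernetes_cluster_endpoint" {
  value = civo_kubernetes_cluster.civo_cluster.api_endpoint
}
output "kubernetes_cluster_status" {
  value = civo_kubernetes_cluster.civo_cluster.status
}
output "kubernetes_cluster_ready" {
  value = civo_kubernetes_cluster.civo_cluster.ready
}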
variables.tf
In this file, we define all the variables referenced in other files. We can assign some default values, but we must be especially careful with variables containing sensitive information, such as CIVO_TOKEN and others we wish to protect. To ensure the security of these variables and avoid exposing them, we will define and assign their values in Terraform Cloud (see the optional hardening note after this file).
variable "CIVO_TOKEN" {
description = "Personal Access Token to access the CIVO API)"
}
variable "region" {
type = string
description = "The region in which the cluster should be created."
default = "NYC1"
}
variable "cluster_name" {
type = string
description = "name of the kubernetes cluster"
default = "core"
}
variable "firewall_name" {
type = string
description = "name of the kubernetes cluster firewall"
default = "core"
}
variable "node_label" {
type = string
description = "Label of the main node pool"
default = "core"
}
variable "node_count" {
type = number
description = "Number of nodes into the cluster"
default = 3
}
variable "node_instance_size" {
type = string
description = "Instance type of the target nodes."
default = "g4s.kube.medium"
}
variable "cni" {
type = string
description = "The cni for the k3s to install"
default = "cilium"
}
variable "kubernetes_version" {
type = string
description = "supported version of the k3s cluster"
}
variable "network_name" {
type = string
description = "name of the network to be created"
}
variable "authorized_networks" {
type = set(string)
description = "Authorized networks for Kubernetes API server"
default = ["0.0.0.0/0"]
}
variable "tags" {
type = string
description = "Tags"
default = "terraform"
}
variable "node_pools" {
description = "Addons node pools"
type = list(object({
label = string
node_count = number
size = string
}))
default = []
}
variable "applications" {
description = "Comma Separated list of Application to be installed"
type = string
default = "-traefik2-nodeport, metrics-server"
}
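Optionally, the token variable can be hardened a bit further. This is a sketch rather than part of the original files: marking the variable as sensitive keeps its value out of plan and apply output.
variable "CIVO_TOKEN" {
  type        = string
  description = "Personal Access Token to access the Civo API"
  sensitive   = true # hide the value in plan/apply output
}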
terraform.tfvars
In this file, we define values for variables that do not contain sensitive information or whose exposure does not pose a security risk.
region = "NYC1"
cluster_name = "k3s-civo-opentofu"
firewall_name = "k3s-production"
kubernetes_version = "1.29.2-k3s1"
node_label = "pool-production-front-end"
node_instance_size = "g4s.kube.medium"
node_count = 2
cni = "cilium"
network_name = "ks3-production"
applications = "metrics-server,argo-rollouts,argo-workflows,argocd"
node_pools = [
  {
    label      = "pool-production-back-end"
    node_count = 2
    size       = "g4s.kube.small"
  }
]
Plan and apply the configuration
Resource Creation
Once you have created the previously mentioned files or downloaded the project source code from GitLab (link at the beginning of the post), you should have a file structure similar to the following:
~/kubernetes-civo-opentofu ❯ tree
.
├── LICENSE
├── README.md
├── agent.tf
├── cluster.tf
├── firewall.tf
├── modules
│   └── vpc
│       ├── main.tf
│       ├── outputs.tf
│       ├── providers.tf
│       └── variables.tf
├── network.tf
├── node_pools.tf
├── outputs.tf
├── providers.tf
├── terraform.tfvars
└── variables.tf
3 directories, 15 files
Warning: The -auto-approve option should be handled with great care, as it will apply changes automatically without asking for confirmation.
Next, run the following commands.
tofu init
tofu plan
tofu apply -auto-approve
Once the resources have been created, you should see an output similar to the following:
------------------------------------------------------------------------
civo_network.network_cluster_civo: Creating...
civo_network.network_cluster_civo: Creation complete after 1s [id=e116d8d0-b22b-4177-8f17-18ee46a83b58]
civo_firewall.civo_firewall_cluster: Creating...
civo_firewall.civo_firewall_cluster: Creation complete after 4s [id=74ace676-e035-4d62-8cdd-3a4c74db6785]
civo_kubernetes_cluster.civo_cluster: Creating...
civo_kubernetes_cluster.civo_cluster: Still creating... [10s elapsed]
civo_kubernetes_cluster.civo_cluster: Still creating... [20s elapsed]
civo_kubernetes_cluster.civo_cluster: Still creating... [30s elapsed]
civo_kubernetes_cluster.civo_cluster: Still creating... [40s elapsed]
civo_kubernetes_cluster.civo_cluster: Still creating... [50s elapsed]
civo_kubernetes_cluster.civo_cluster: Still creating... [1m0s elapsed]
civo_kubernetes_cluster.civo_cluster: Creation complete after 1m6s [id=30cae189-1734-4e8a-9989-fe754a554bbd]
civo_kubernetes_node_pool.back-end_pool[0]: Creating...
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [10s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [20s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [30s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [40s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [50s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [1m0s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [1m10s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [1m20s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [1m30s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [1m40s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [1m50s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [2m0s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [2m10s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [2m20s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [2m30s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [2m40s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [2m50s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [3m0s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [3m10s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [3m20s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [3m30s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [3m40s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [3m50s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [4m0s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [4m10s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Still creating... [4m20s elapsed]
civo_kubernetes_node_pool.back-end_pool[0]: Creation complete after 4m22s [id=pool-production-back-end]
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
Outputs:
kubernetes_cluster_ready = true
kubernetes_cluster_status = "ACTIVE"
kubernetes_cluster_version = "1.29.2-k3s1"
kubernetes_cluster_endpoint = "https://212.2.246.232:6443"
kubernetes_cluster_id = "30cae189-1734-4e8a-9989-fe754a554bbd"
kubernetes_cluster_name = "k3s-civo-opentofu"
Save and Switch Cluster Configuration
This command switches kubectl to the specified Kubernetes cluster. Use the cluster name defined in the cluster_name variable (surfaced as the kubernetes_cluster_name output); in my case, it is k3s-civo-opentofu. The command also saves this configuration as the default and activates it for immediate use.
civo kubernetes config k3s-civo-opentofu --save --switch
Validating the Connection to the Cluster
~ ❯ k get nodes
NAME STATUS ROLES AGE VERSION
k3s-k3s-civo-opentofu-0f4a-dae818-node-pool-c109-ehwpv Ready <none> 23m v1.29.2+k3s1
k3s-k3s-civo-opentofu-0f4a-dae818-node-pool-c109-2nx3f Ready <none> 23m v1.29.2+k3s1
k3s-k3s-civo-opentofu-0f4a-dae818-node-pool-beba-gjlxk Ready <none> 21m v1.29.2+k3s1
k3s-k3s-civo-opentofu-0f4a-dae818-node-pool-beba-w3kvl Ready <none> 21m v1.29.2+k3s1
~ ❯
Clean up
To delete the cluster and its associated resources (node pool, firewall, and network):
- Create a destroy plan and preview the changes to your infrastructure with the following command:
tofu plan -destroy
- To delete the resources and all data, run:
tofu destroy
- Enter yes to confirm. The output is similar to the following:
------------------------------------------------------------------------
civo_kubernetes_node_pool.back-end_pool[0]: Destroying... [id=pool-production-back-end]
civo_kubernetes_node_pool.back-end_pool[0]: Destruction complete after 1s
civo_kubernetes_cluster.civo_cluster: Destroying... [id=13b3c6d0-743a-4cc3-9372-3e50514a2577]
civo_kubernetes_cluster.civo_cluster: Still destroying... [10s elapsed]
civo_kubernetes_cluster.civo_cluster: Destruction complete after 11s
civo_firewall.civo_firewall_cluster: Destroying... [id=e92d51b9-3030-4fcd-9d6c-6347634ef88e]
civo_firewall.civo_firewall_cluster: Destruction complete after 3s
civo_network.network_cluster_civo: Destroying... [id=7ae394dd-9dc5-4166-8927-ad3bf7d7954c]
civo_network.network_cluster_civo: Destruction complete after 9s
Apply complete! Resources: 0 added, 0 changed, 4 destroyed.