In this lab, we will automate the entire lifecycle of a cloud-native application. We will move beyond single-VM deployments to a more robust, orchestrated environment using Kubernetes.

We will use Ansible (with the OpenStack SDK) to provision virtual machines on Switch Engines, K3s as a lightweight Kubernetes distribution, Rancher as a web-based management UI, and cert-manager with Let's Encrypt and Traefik to expose our application over HTTPS.

The process will involve using Ansible to create the VMs, then using another Ansible playbook to install a multi-node K3s cluster. Finally, we will deploy a test application and expose it securely with HTTPS.

We need to set up our local control machine with the necessary tools to orchestrate the deployment. We will use pipx to install Ansible in an isolated environment, which is a modern best practice.

Inside your WSL terminal

Update your system and install pipx

$ sudo apt update
$ sudo apt install pipx
$ pipx ensurepath

Install Ansible and the OpenStack SDK using pipx:

$ pipx install --include-deps ansible
$ pipx inject ansible openstacksdk
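
To confirm the tools are available on your PATH (open a new terminal if you have just run pipx ensurepath), a quick check:

$ pipx list
$ ansible --version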

Go to the lab code folder in your WSL terminal:

$ cd guestbook-src/09_k3s_rancher

Now we will use an Ansible playbook to create the virtual machines that will form our cluster.

Run the provisioning playbook:

  1. Configure os_key_name in inventory/config and set it to the name of your Switch Engines key pair (a sketch of this file follows the command below).
  2. From within your Ansible project directory, execute the playbook.
$ ansible-playbook -i inventory provision.yml
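
The exact layout of inventory/config depends on the lab scaffold; as a rough sketch, assuming it is a YAML variables file, the entry looks something like this (the value shown is a placeholder):

# inventory/config (sketch -- only os_key_name is required by this step;
# replace the value with the name of your own Switch Engines key pair)
os_key_name: my-switchengines-keypair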

Verify the Ansible Inventory:

Once the provisioning is complete, your new servers should be ready. You can visualize the inventory structure with the ansible-inventory command to verify that the openstack dynamic inventory works as expected.

$ ansible-inventory -i inventory --graph

You should see an output similar to this, confirming you have one server and two agent nodes.

@all:
  |--@ungrouped:
  |--@k3s_cluster:
  |  |--@agent:
  |  |  |--k3s-agent-1
  |  |  |--k3s-agent-2
  |  |--@server:
  |  |  |--k3s-server-1

With our VMs ready, we'll now use a specialized Ansible collection to deploy K3s across them.

Install the K3s Ansible Collection:

Ansible Collections are a way to package and distribute playbooks, roles, modules, and plugins. We'll install the official collection for K3s directly from its Git repository.

$ ansible-galaxy collection install git+https://github.com/k3s-io/k3s-ansible.git
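
You can confirm the collection is installed (the playbook used in the next step is namespaced under k3s.orchestration):

$ ansible-galaxy collection list k3s.orchestration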

Run the K3s Orchestration Playbook:

The collection provides a master playbook to handle the entire cluster setup. It will install the K3s server on the k3s-server-1 node and join the other two nodes as agents.

$ ansible-playbook -i inventory k3s.orchestration.site.yml

Verify the Cluster:

Once the playbook finishes, SSH into your server node and run kubectl get nodes to verify that the cluster is up and that all nodes have joined successfully.

Alternatively, run the kubectl command through Ansible's shell module:

$ ansible -i inventory k3s-server-1 --become -m shell -a "kubectl get nodes"

The output should show all three nodes in a Ready state.

NAME           STATUS   ROLES                  AGE     VERSION
k3s-agent-1    Ready    <none>                 2m55s   v1.28.x+k3s1
k3s-agent-2    Ready    <none>                 2m55s   v1.28.x+k3s1
k3s-server-1   Ready    control-plane,master   3m25s   v1.28.x+k3s1
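
If a node stays NotReady, listing the system pods is a good first troubleshooting step:

$ ansible -i inventory k3s-server-1 --become -m shell -a "kubectl get pods -A"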

Rancher is an open-source management platform that provides a web UI for deploying and managing applications on a Kubernetes cluster. We are installing it so that we can deploy applications and manage the cluster through a user-friendly web interface instead of relying solely on the command line.

  1. Register Three DNS Names:
    Go to your DNS provider (e.g., DuckDNS) and create three A records. All records should point to the public IP address of your k3s-server-1 node.
  2. Configure Ansible Inventory/config file for Rancher:

We need to tell our installation playbook what hostname to use for Rancher.
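
The exact variable name depends on the install-rancher.yml playbook; as a sketch, assuming a YAML variables file and a variable called rancher_hostname (illustrative), the entry would look like:

# inventory/config (sketch -- rancher_hostname is an illustrative name;
# use the variable install-rancher.yml actually expects and your own DuckDNS hostname)
rancher_hostname: rancher-<username>.duckdns.org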

Run the Playbook:

This will take several minutes.

$ ansible-playbook -i inventory install-rancher.yml
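
Before opening the browser, you can check that the Rancher deployment has finished rolling out (assuming the playbook installs Rancher into the standard cattle-system namespace):

$ ansible -i inventory k3s-server-1 --become -m shell -a "kubectl -n cattle-system rollout status deploy/rancher"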

Access Rancher:

Open your browser and navigate to https://rancher-<username>.duckdns.org (the hostname you configured above). You should see the Rancher login page.

Navigate to your Cluster:
Ensure you are inside the local cluster dashboard.

Create a Namespace:

Create a namespace named podinfo; the manifests below assume this name.
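
If you prefer the command line, the equivalent run on the server node (or through the Ansible shell invocation shown earlier) is:

$ kubectl create namespace podinfo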

Create the Let's Encrypt Issuer

For resources with complex configurations, such as Issuers and Ingresses, use Rancher's YAML editor: the form-based UI does not expose all of the options these resources need.

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-podinfo-issuer
  namespace: podinfo
spec:
  acme:
    # IMPORTANT: Change this to your email address
    email: your-email@he-arc.ch
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-podinfo-issuer-account-key
    solvers:
    - http01:
        ingress:
          class: traefik
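
After saving the Issuer, you can check from the server node that it has registered with Let's Encrypt (the READY column should become True):

$ kubectl -n podinfo get issuer letsencrypt-podinfo-issuer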

Create the Deployment and Service

In the left menu, go to Workload -> Deployments.

Click Create, then fill in the form: name the deployment podinfo, select the podinfo namespace, enter the podinfo container image, and set the container port to 9898 (the Ingress in the next step relies on these values).

Click Create again to deploy.

Rancher will automatically create both the Deployment and the associated ClusterIP Service.
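
For reference, the objects Rancher creates correspond roughly to the manifests below. This is only a sketch: the image and labels are illustrative (Rancher generates its own labels when you use the form), but the service name and port must match what the Ingress in the next step expects.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: podinfo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          image: stefanprodan/podinfo   # assumed image; use the one given in the lab or UI form
          ports:
            - containerPort: 9898
---
apiVersion: v1
kind: Service
metadata:
  name: podinfo      # the Ingress expects this service name
  namespace: podinfo
spec:
  type: ClusterIP
  selector:
    app: podinfo
  ports:
    - port: 9898
      targetPort: 9898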

Create the Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo-ingress
  namespace: podinfo
  annotations:
    cert-manager.io/issuer: letsencrypt-podinfo-issuer
spec:
  ingressClassName: traefik
  rules:
    - host: podinfo-<username>.duckdns.org # <--- CHANGE THIS
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: podinfo
                port:
                  number: 9898 # The container port, Rancher names the service after the deployment
  tls:
    - hosts:
      - podinfo-<username>.duckdns.org # <--- CHANGE THIS
      secretName: podinfo-tls-cert

Access Your Application:

Open a web browser and navigate to https://podinfo-<username>.duckdns.org (the hostname from your Ingress). It may take a minute for the certificate to be issued. You should see the Podinfo UI with a valid HTTPS lock.
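
If the HTTPS lock does not appear after a few minutes, check the certificate from the server node (assuming cert-manager created a Certificate object named after the TLS secret in the Ingress):

$ kubectl -n podinfo get certificate
$ kubectl -n podinfo describe certificate podinfo-tls-cert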

Manually Scale with the UI

In the Rancher UI, scale the podinfo deployment to 2 replicas (the task check below expects two pods).
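
The command-line equivalent, useful as a cross-check after scaling in the UI (run on the server node):

$ kubectl -n podinfo scale deployment podinfo --replicas=2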

Change an Environment Variable with the UI

Edit the podinfo deployment in the Rancher UI and add an environment variable that changes the message shown on the podinfo page, then reload the page in your browser to confirm the custom message appears.
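
For reference, podinfo reads its display message from the PODINFO_UI_MESSAGE environment variable (check the podinfo documentation if your version differs); the command-line equivalent of the UI change is:

$ kubectl -n podinfo set env deployment/podinfo PODINFO_UI_MESSAGE="Hello from <username>"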

Task Progress Check

Take a screenshot showing:

  1. The Rancher UI, with the podinfo deployment scaled to 2 pods.
  2. Your browser successfully connected to the podinfo application over HTTPS, showing the custom message.

Upload the combined screenshot to complete the lab.

We will use this cluster in the next labs, so do not delete it; you may, however, scale the podinfo deployment down to 0.