
In this lab, we will automate the entire lifecycle of a cloud-native application. We will move beyond single-VM deployments to a more robust, orchestrated environment using Kubernetes.
We will use:
- Ansible (with the OpenStack SDK) to provision VMs on Switch Engines
- K3s, a lightweight Kubernetes distribution, installed via the k3s-ansible collection
- Rancher, a web UI for managing the cluster
- cert-manager, Traefik, and Let's Encrypt (with DuckDNS records) to expose applications over HTTPS
The process involves using Ansible to create the VMs, then a second Ansible playbook to install a multi-node K3s cluster. Finally, we will deploy a test application and expose it securely over HTTPS.
We need to set up our local control machine with the necessary tools to orchestrate the deployment. We will use pipx to install Ansible in an isolated environment, which is a modern best practice.
Inside your WSL terminal:
Update your system and install pipx:
$ sudo apt update
$ sudo apt install pipx
$ pipx ensurepath
Install Ansible and the OpenStack SDK using pipx:
$ pipx install --include-deps ansible
$ pipx inject ansible openstacksdk
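You may need to open a new terminal for the PATH changes made by pipx ensurepath to take effect; then confirm the installation:

$ ansible --version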
Navigate to the lab code folder inside your WSL terminal:
$ cd guestbook-src/09_k3s_rancher
Now we will use an Ansible playbook to create the virtual machines that will form our cluster.
Edit os_key_name in inventory/config and set it to the name of your Switch Engines key pair (a sketch of the line is shown below), then run the provisioning playbook:

$ ansible-playbook -i inventory provision.yml
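The key-pair line might look like this; this is an illustrative sketch, the exact syntax depends on how inventory/config is laid out in the lab repository, and my-keypair stands for your own key name:

os_key_name: my-keypair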
Once the provisioning is complete, your new servers should be ready. You can visualize the inventory structure with the ansible-inventory command to verify that the OpenStack dynamic inventory works as expected.
$ ansible-inventory -i inventory --graph
You should see an output similar to this, confirming you have one server and two agent nodes.
@all:
|--@ungrouped:
|--@k3s_cluster:
| |--@agent:
| | |--k3s-agent-1
| | |--k3s-agent-2
| |--@server:
| | |--k3s-server-1
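Before installing K3s, you can also confirm that Ansible can reach all three VMs over SSH using the built-in ping module:

$ ansible -i inventory all -m ping

Each host should reply with "ping": "pong".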
With our VMs ready, we'll now use a specialized Ansible collection to deploy K3s across them.
Ansible Collections are a way to package and distribute playbooks, roles, modules, and plugins. We'll install the official collection for K3s directly from its Git repository.
$ ansible-galaxy collection install git+https://github.com/k3s-io/k3s-ansible.git
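You can check that the collection is now available; it is published under the k3s.orchestration namespace used by the playbook below:

$ ansible-galaxy collection list | grep k3s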
The collection provides a master playbook to handle the entire cluster setup. It will install the K3s server on the k3s-server-1 node and join the other two nodes as agents.
$ ansible-playbook -i inventory k3s.orchestration.site.yml
Once the playbook finishes, SSH into your server node to verify that the cluster is up and all nodes have joined successfully with kubectl get nodes.
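For example, a minimal check, assuming your image's default login user is ubuntu (adjust for your image) and <server-ip> is the public IP of k3s-server-1:

$ ssh ubuntu@<server-ip>
$ sudo kubectl get nodes

sudo is needed because K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml, which is readable only by root by default.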
Or run the kubectl command via Ansible's shell module:
$ ansible -i inventory k3s-server-1 --become -m shell -a "kubectl get nodes"
The output should show all three nodes in a Ready state.
NAME           STATUS   ROLES                  AGE     VERSION
k3s-agent-1    Ready    <none>                 2m55s   v1.28.x+k3s1
k3s-agent-2    Ready    <none>                 2m55s   v1.28.x+k3s1
k3s-server-1   Ready    control-plane,master   3m25s   v1.28.x+k3s1
Rancher is an open-source management platform for Kubernetes. We are installing it to get a user-friendly web interface for deploying applications and managing the cluster, without relying solely on the command line.
Create three DNS A records on DuckDNS, all pointing to the public IP address of your k3s-server-1 node:
- rancher-<username>.duckdns.org
- podinfo-<username>.duckdns.org
- guestbook-<username>.duckdns.org
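If you prefer the command line, DuckDNS records can also be updated through its HTTP API; here <your-token> is the token shown on your DuckDNS account page, and the domains parameter takes the subdomain without the .duckdns.org suffix:

$ curl "https://www.duckdns.org/update?domains=rancher-<username>&token=<your-token>&ip=<server-ip>"

Repeat for the podinfo and guestbook records; the API replies OK on success.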
Update the inventory/config file for Rancher: we need to tell our installation playbook what hostname to use, so set rancher_hostname to the Rancher record you just created (see the sketch below).
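For reference, the setting might look like this sketch (check the actual file for its exact format):

rancher_hostname: rancher-<username>.duckdns.org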
Run the playbook. This will take several minutes.
$ ansible-playbook -i inventory install-rancher.yml
Open your browser and navigate to https://rancher-<username>.duckdns.org. You should see the Rancher login page.
Log in with the username admin and the password from your inventory/config. Navigate to your cluster:
Ensure you are inside the local cluster dashboard.
Create a namespace named podinfo. For resources with complex configurations like Issuers and Ingresses, you need to use the YAML editor, as the Rancher UI does not expose all of their options.

Select the podinfo namespace in the dropdown and paste the following Issuer manifest:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-podinfo-issuer
  namespace: podinfo
spec:
  acme:
    # IMPORTANT: Change this to your email address
    email: your-email@he-arc.ch
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-podinfo-issuer-account-key
    solvers:
      - http01:
          ingress:
            class: traefik
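Once the Issuer is created, you can check that it registered with Let's Encrypt. From the server node (or via the Ansible shell trick shown earlier), the READY column should turn True:

$ sudo kubectl -n podinfo get issuer letsencrypt-podinfo-issuer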
In the left menu, go to Workload -> Deployments.
Click Create.
- Name: podinfo
- Container Image: stefanprodan/podinfo
- Namespace: select podinfo from the dropdown
- Service: ClusterIP, with Name: http and Container Port: 9898
Click Create.
Rancher will automatically create both the Deployment and the associated ClusterIP Service.
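If you want to double-check what Rancher created, run this on the server node:

$ sudo kubectl -n podinfo get deployments,services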
Now create the Ingress using the YAML editor:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo-ingress
  namespace: podinfo
  annotations:
    cert-manager.io/issuer: letsencrypt-podinfo-issuer
spec:
  ingressClassName: traefik
  rules:
    - host: podinfo-<username>.duckdns.org # <--- CHANGE THIS
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: podinfo
                port:
                  number: 9898 # The container port; Rancher names the service after the deployment
  tls:
    - hosts:
        - podinfo-<username>.duckdns.org # <--- CHANGE THIS
      secretName: podinfo-tls-cert
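While you wait for the certificate, you can follow cert-manager's progress from the server node; the Certificate object that cert-manager creates from the Ingress annotation is normally named after the TLS secret:

$ sudo kubectl -n podinfo get certificate
$ sudo kubectl -n podinfo describe certificate podinfo-tls-cert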
Open a web browser and navigate to https://podinfo-<username>.duckdns.org. It may take a minute for the certificate to be issued. You should see the Podinfo UI with a valid HTTPS lock.
Scale the podinfo deployment to 2 pods. Then open the podinfo deployment, select Edit Config, and add an environment variable PODINFO_UI_MESSAGE with the value Welcome from my Rancher-managed Cluster!
Take a screenshot showing:
- the podinfo deployment scaled to 2 pods;
- the podinfo application over HTTPS, showing the custom message.
Upload the combined screenshot to complete the lab.
We will use this cluster in the next labs, so do not delete it; you may, however, scale the podinfo deployment down to 0.
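If you prefer the command line to the Rancher UI, scaling down can be done from the server node:

$ sudo kubectl -n podinfo scale deployment podinfo --replicas=0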