How to Install K3s: A Step-by-Step Guide
K3s is a lightweight, CNCF-certified Kubernetes distribution designed for small to medium-sized clusters and resource-constrained environments. It has gained popularity thanks to its ease of deployment and minimal resource requirements. In this article, we will walk through installing K3s on a Linux system. By the end of this guide, you will have a fully functional K3s cluster ready to deploy your applications.
Prerequisites
Before you begin, ensure that your system meets the following requirements:
1. A Linux distribution with a 64-bit kernel.
2. A static IP address for your server.
3. Root access to the server.
4. At least 1GB of RAM (2GB is recommended).
5. A swap space of at least 2GB (optional but recommended).
Step 1: Install K3s
1. Update your system packages (the `apt` commands below assume a Debian- or Ubuntu-based distribution):
```bash
sudo apt update
sudo apt upgrade -y
```
2. Install K3s on your server. You can choose between a server and a worker node installation. For this guide, we will install a server node:
```bash
curl -sfL https://get.k3s.io | sh -
```
3. After the installation completes, K3s stores the server's node token at `/var/lib/rancher/k3s/server/node-token`. Note this token, as you will need it later for joining worker nodes.
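The token lives at a documented path on the server, and it slots into the join command you will run on each worker in Step 3. A small sketch of composing that command (the IP address and token below are placeholder values, not real ones):

```shell
# On the server, the join token is stored here (documented K3s path):
#   sudo cat /var/lib/rancher/k3s/server/node-token

# Placeholder values for illustration only -- substitute your own:
SERVER_IP="192.0.2.10"
NODE_TOKEN="K10example::server:token"

# Print the command you will run on each worker in Step 3:
echo "curl -sfL https://get.k3s.io | K3S_URL=https://${SERVER_IP}:6443 K3S_TOKEN=${NODE_TOKEN} sh -"
```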
Step 2: Configure the Server
1. Configure your server’s firewall to allow traffic on port 6443 (Kubernetes API server) and port 22 (SSH). If your nodes are separated by a firewall, also open 8472/udp (flannel VXLAN) and 10250/tcp (kubelet metrics). With ufw:
```bash
sudo ufw allow 6443/tcp
sudo ufw allow 22/tcp
sudo ufw allow 8472/udp
sudo ufw allow 10250/tcp
```
2. Verify that the server is running:
```bash
sudo systemctl status k3s
```
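You can also interact with the cluster from the server right away. K3s bundles `kubectl` and writes its kubeconfig to `/etc/rancher/k3s/k3s.yaml`; the commands below assume a default installation:

```shell
# kubectl is bundled with the k3s binary; use sudo because the default
# kubeconfig (/etc/rancher/k3s/k3s.yaml) is readable only by root:
sudo k3s kubectl get nodes

# To use a standalone kubectl as a regular user, point it at the
# kubeconfig after adjusting its permissions appropriately:
#   export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
```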
Step 3: Install a Worker Node
1. On your worker node, install K3s as a worker node using the following command:
```bash
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -
```
Replace `<server-ip>` with your server's IP address and `<node-token>` with the token noted in Step 1.
2. Verify that the worker node is running (on workers, the service is named `k3s-agent` rather than `k3s`):
```bash
sudo systemctl status k3s-agent
```
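If the agent fails to start, its logs usually explain why (a mistyped token or an unreachable server address are the common causes). The commands below assume a systemd-based distribution:

```shell
# Show the most recent log lines from the worker agent:
sudo journalctl -u k3s-agent --no-pager | tail -n 20

# Or follow the log live while restarting the service:
#   sudo journalctl -u k3s-agent -f
```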
Step 4: Verify the Installation
1. On your server, run the following command to check the status of the nodes (use `sudo`, since the default kubeconfig is readable only by root):
```bash
sudo kubectl get nodes
```
You should see your worker node listed with a status of “Ready.”
Congratulations! You have successfully installed K3s on your Linux server and worker node. You can now deploy applications to your K3s cluster using the Kubernetes API.
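As a quick smoke test of the new cluster, you can deploy something small; the deployment name `hello` below is arbitrary:

```shell
# Create a single-replica nginx deployment and expose it inside the cluster:
sudo kubectl create deployment hello --image=nginx
sudo kubectl expose deployment hello --port=80

# Watch the pod come up (kubectl labels it app=hello automatically):
sudo kubectl get pods -l app=hello
```

Once the pod reports `Running`, your cluster is scheduling workloads end to end.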