K3s: Kubernetes Without the Weight of Kubernetes

K3s explained by someone who uses it in production. When it makes sense, how to install it, and what to expect from Kubernetes in 512MB of RAM.

Kubernetes is fantastic until you have to run it on a Raspberry Pi. Or on a 1GB RAM VPS. Or on 50 edge devices scattered around the world. "Vanilla" Kubernetes needs resources that in certain scenarios simply aren't there.

K3s was born to solve this problem. It's Kubernetes, but stripped to the bone. Same API, same concepts, one-tenth the resources. I've been using it for three years in various contexts and have learned where it shines and where it has limits.

What Is K3s

K3s is a certified Kubernetes distribution, created by Rancher (now part of SUSE). "Certified" means it passes all Kubernetes conformance tests. If it works on K8s, it works on K3s. Same kubectl, same manifests, same everything.

The difference is in how it's made:

  • Single binary — the entire control plane in a ~70MB executable
  • SQLite as the default datastore — no etcd cluster to manage (though you can use etcd or an external database if you want)
  • Optional components removed — no integrated cloud providers, in-tree storage drivers cut to a minimum
  • Traefik included — ingress controller ready out of the box
  • containerd instead of Docker — lighter, and the standard runtime now anyway

The result is a Kubernetes that runs in 512MB of RAM on a server node and around 200MB on agents. Unthinkable numbers for a standard cluster.

When It Makes Sense to Use It

Don't use K3s just because it's "simpler". If you have resources, use a standard distribution. K3s makes sense in specific scenarios.

Edge Computing

The main use case. You have distributed devices — stores, factories, retail locations — and you want to deploy containerized applications. K3s runs on ARM, on x86, on modest hardware. You can have the same Kubernetes workflow you use in the data center on a machine in a warehouse.

Local Development

A K3s cluster on your laptop to test your manifests before deploying to staging. Lighter than Minikube, more similar to a real cluster than Kind. If your laptop isn't a powerhouse, K3s is gentle on resources.

Home Lab

Want to experiment with Kubernetes at home but only have Raspberry Pis or old PCs? K3s is perfect. I have a 3-Pi cluster running Home Assistant, Grafana, and various personal services. Works great.

Small VPS

Those $5/month VPSes with 1-2GB of RAM. Standard Kubernetes doesn't fit; K3s does, and leaves you resources for your workloads too.

CI/CD

Ephemeral clusters for testing. Creating a K3s cluster takes seconds, not minutes. For pipelines that need to test Kubernetes deployments, K3s is ideal.
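As a sketch of what this looks like in a pipeline, here is a hypothetical GitHub Actions job (the manifests/ path and wait conditions are illustrative, not from the original article):

```yaml
# Hypothetical CI job: throwaway K3s cluster per pipeline run
name: e2e
on: push
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install K3s
        run: curl -sfL https://get.k3s.io | sh -
      - name: Smoke-test the manifests
        run: |
          sudo k3s kubectl apply -f manifests/
          sudo k3s kubectl wait --for=condition=available deployment --all --timeout=120s
```

The cluster lives and dies with the runner, so there's nothing to clean up afterward.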

Installation

K3s installation is almost embarrassingly simple.

Server (Control Plane)

curl -sfL https://get.k3s.io | sh -

Yes, that's it. One command. After a few seconds you have a working cluster. The kubeconfig is in /etc/rancher/k3s/k3s.yaml.

# Verify
sudo k3s kubectl get nodes

Agent (Worker Nodes)

On the server, get the token:

sudo cat /var/lib/rancher/k3s/server/node-token

On the agents:

curl -sfL https://get.k3s.io | K3S_URL=https://server-ip:6443 K3S_TOKEN=token-from-server sh -

Done. The agent connects to the server and appears as a node in the cluster.

Configuration

You can configure K3s with flags or with a YAML file in /etc/rancher/k3s/config.yaml:

# /etc/rancher/k3s/config.yaml
write-kubeconfig-mode: "0644"
tls-san:
  - "k3s.example.com"
  - "192.168.1.100"
disable:
  - traefik  # If you want to use another ingress

High Availability

K3s supports HA with two approaches:

Embedded etcd (recommended)

From K3s v1.19 onward you can run an embedded etcd cluster. You need at least 3 servers (an odd number, so etcd can keep a quorum).

# First server
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Other servers
curl -sfL https://get.k3s.io | sh -s - server \
  --server https://first-server:6443 \
  --token token-from-first-server
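The same flags can also live in the config file from earlier instead of on the command line. A sketch for the additional servers, reusing the placeholders from the commands above:

```yaml
# /etc/rancher/k3s/config.yaml on the second and third servers
server: https://first-server:6443
token: token-from-first-server
```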

External Database

You can use MySQL, PostgreSQL, or external etcd as datastore:

curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://user:pass@tcp(host:3306)/k3s"

For most uses, embedded etcd works well. External database makes sense if you already have a managed database you want to reuse.

Networking

K3s uses Flannel as the default CNI with VXLAN. It works, it's simple, requires no configuration. For most edge scenarios it's more than enough.

Flannel itself doesn't implement network policies, but K3s ships an embedded network policy controller that handles standard NetworkPolicy resources (that's what the --disable-network-policy flag below turns off). If you need more advanced policy features, you can replace Flannel with Calico:

curl -sfL https://get.k3s.io | sh -s - --flannel-backend=none \
  --disable-network-policy

# Then install Calico
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
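Network policies are ordinary Kubernetes resources. A minimal deny-all ingress policy (the namespace name is illustrative) looks like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: my-namespace
spec:
  podSelector: {}    # selects every pod in the namespace
  policyTypes:
    - Ingress        # no ingress rules listed, so all inbound traffic is denied
```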

For scenarios with particular network requirements (multi-cluster, service mesh), Cilium is another supported option.

Storage

K3s includes the Local Path Provisioner by default. It creates volumes as plain directories on the node's local disk. For development and testing it's fine; for production it depends on the case, since volumes aren't replicated and pods stay tied to the node that holds their data.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
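A pod mounts that claim like any other PVC (pod and image names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: nginx:alpine
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc   # the PVC defined above
```

Note that local-path uses WaitForFirstConsumer binding, so the PVC stays Pending until a pod actually uses it.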

For distributed storage in multi-node clusters, Longhorn (which I've discussed in another article) integrates perfectly with K3s. Same company, designed to work together.

Traefik: Included Ingress

K3s installs Traefik as the default ingress controller. It's already configured and working. You can create Ingress resources and they work immediately.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80

If you prefer nginx-ingress or something else, you can disable Traefik:

curl -sfL https://get.k3s.io | sh -s - --disable=traefik
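If you keep Traefik but want to customize it, K3s lets you override the bundled Helm chart's values with a HelmChartConfig resource dropped into the server's manifests directory. A sketch (the port override is just an illustrative value):

```yaml
# /var/lib/rancher/k3s/server/manifests/traefik-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      web:
        exposedPort: 8080
```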

K3s vs K8s: Practical Differences

On paper they're compatible. In practice, there are some differences.

What Works the Same

  • All standard workloads (Deployment, StatefulSet, DaemonSet, etc.)
  • Helm charts
  • Kubectl and all CLI tools
  • Most operators
  • Service mesh (Istio, Linkerd)

What's Different

  • No integrated cloud providers (if you need them, install separately)
  • Default storage is local-path, not cloud volumes
  • Some enterprise features (advanced audit logging, encryption at rest) require extra configuration
  • The control plane is a single process, not separate pods

What Might Not Work

  • Some operators that assume running on "standard" clusters may have issues
  • Tools that expect etcd to be directly accessible
  • Some dashboards that make assumptions about cluster layout

In three years of use, I've found maybe 2-3 Helm charts that had problems on K3s, and they were generally chart bugs, not K3s bugs.

Upgrades

K3s upgrades are simple. Re-run the installation script and it updates:

curl -sfL https://get.k3s.io | sh -

For HA clusters, upgrade one server at a time. For clusters with agents, upgrade servers first then agents.

K3s also supports an auto-upgrade system with a dedicated controller:

kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml

You create Plans that define the target version, and the controller upgrades nodes automatically with rolling updates.
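A minimal Plan for the server nodes might look like this (the version is a placeholder; agents get a similar Plan with a prepare step that waits for the servers to finish first):

```yaml
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1          # one node at a time
  cordon: true            # drain-friendly: cordon before upgrading
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: In
        values: ["true"]
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.30.2+k3s1   # placeholder target version
```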

Monitoring

K3s doesn't include monitoring out of the box, but integrating it is easy.

# kube-prometheus-stack via Helm
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack

Watch the resources: kube-prometheus-stack by default wants quite a bit of memory. On edge clusters it might be too much. In that case, a standalone Prometheus with minimal configuration works better.
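One way to slim it down is a values file that disables the heavier components and caps Prometheus itself. A sketch (limits are illustrative starting points, not recommendations):

```yaml
# values.yaml sketch for a resource-constrained cluster
alertmanager:
  enabled: false
grafana:
  enabled: false
prometheus:
  prometheusSpec:
    retention: 2d
    resources:
      limits:
        memory: 512Mi
```

Pass it with helm install prometheus prometheus-community/kube-prometheus-stack -f values.yaml.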

Backup

The datastore is the critical thing to back up.

SQLite (default)

# The database is a single file; stop K3s first (or use sqlite3 .backup) for a consistent copy
sudo cp /var/lib/rancher/k3s/server/db/state.db /backup/

Embedded etcd

k3s etcd-snapshot save --name backup-name
# Snapshot saved in /var/lib/rancher/k3s/server/db/snapshots/

For restore:

k3s server --cluster-reset --cluster-reset-restore-path=/path/to/snapshot
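K3s can also take etcd snapshots on a schedule, configured on the server nodes. A sketch using the config file from earlier (the cron expression and retention count are illustrative):

```yaml
# /etc/rancher/k3s/config.yaml on server nodes
etcd-snapshot-schedule-cron: "0 */6 * * *"   # every 6 hours
etcd-snapshot-retention: 10                  # keep the last 10 snapshots
```

Remember to copy the snapshot directory somewhere off the node; a snapshot on the disk that just died doesn't help.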

Real Scenarios

Some examples of how I use K3s in production.

Retail edge — K3s clusters on hardware in stores. Each store has its own mini-cluster doing local caching, data synchronization, and running specific applications. GitOps with Fleet for managing deployments across dozens of clusters.

Home lab — 3 Raspberry Pi 4s with K3s, Longhorn for storage, ArgoCD for deployment. Costs less than a coffee per month in electricity and lets me experiment like on a real cluster.

Development — K3s on laptop to test manifests and charts before pushing. Starts in 20 seconds, stops in 5. I don't have to wait for Minikube to boot a VM.

Conclusion

K3s is not a replacement for standard Kubernetes for large and complex clusters. It's Kubernetes for scenarios where "real" Kubernetes would be overkill or impossible.

If you have edge computing, limited resources, or want Kubernetes without the operational weight of an enterprise cluster, K3s is probably what you're looking for. If you have a data center with infinite resources and dedicated teams, use a standard distribution.

As always, the right tool depends on the problem. K3s solves specific problems very well. For those problems, it's the obvious choice.

Kubernetes doesn't have to be heavy. Sometimes less is better.