# Multi-Node Cluster Setup

A tutorial for creating multi-node Kubernetes clusters with LoKO.
## Overview

This tutorial covers:

- Creating multi-worker clusters
- Node labeling and taints
- Workload scheduling and affinity
- Testing distributed systems
- High availability setups

**Time:** 20 minutes

**Prerequisites:**

- LoKO installed and configured
- Basic Kubernetes knowledge
- Adequate system resources (8GB+ RAM recommended)
## Why Multi-Node?

Use cases:

- **Testing distributed systems**: simulate production topology
- **High availability**: run multiple replicas across nodes
- **Node affinity**: test workload placement rules
- **Resource isolation**: separate workload types onto dedicated nodes
- **Failure scenarios**: test how workloads survive node failures

Resource requirements:

| Workers | RAM   | CPU      | Disk  |
|---------|-------|----------|-------|
| 1       | 4GB   | 2 cores  | 20GB  |
| 2       | 6GB   | 4 cores  | 30GB  |
| 3       | 8GB   | 6 cores  | 40GB  |
| 5       | 12GB+ | 8+ cores | 50GB+ |
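The RAM column roughly follows "about 2 GB of base overhead plus 2 GB per worker". This is a planning approximation inferred from the table above, not an official formula; actual usage depends on your workloads:

```shell
# Rough RAM estimate implied by the table above:
# ~2 GB base overhead + ~2 GB per worker node (approximation only).
workers=3
ram_gb=$((2 + 2 * workers))
echo "${workers} workers -> plan for at least ${ram_gb}GB RAM"
```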
## Step 1: Create Multi-Worker Cluster

### Configure Workers

Edit `loko.yaml`:

```yaml
cluster:
  nodes:
    servers: 1  # Control plane nodes
    workers: 3  # Create 3 worker nodes
```

### Create Cluster

```shell
loko env create
```

Output:

```plaintext
Creating cluster with 3 worker nodes...
✓ Creating cluster "loko-dev-me" ...
  • Control plane: 1 node
  • Worker nodes: 3 nodes
✓ Writing kubeconfig
✓ Installing components
✓ Cluster ready
```

### Verify Nodes

```shell
kubectl get nodes
```

Output:

```plaintext
NAME                        STATUS   ROLES           AGE   VERSION
loko-dev-me-control-plane   Ready    control-plane   2m    v1.35.0
loko-dev-me-worker          Ready    <none>          1m    v1.35.0
loko-dev-me-worker2         Ready    <none>          1m    v1.35.0
loko-dev-me-worker3         Ready    <none>          1m    v1.35.0
```

## Step 2: Label Nodes
### Add Labels for Workload Types

```shell
# Label nodes for different workload types
kubectl label node loko-dev-me-worker workload-type=database
kubectl label node loko-dev-me-worker2 workload-type=application
kubectl label node loko-dev-me-worker3 workload-type=cache
```

### Add Tier Labels

```shell
# High-tier nodes (SSD, more resources)
kubectl label node loko-dev-me-worker tier=high

# Standard-tier nodes
kubectl label node loko-dev-me-worker2 tier=standard
kubectl label node loko-dev-me-worker3 tier=standard
```

### Verify Labels

```shell
kubectl get nodes --show-labels
```

## Step 3: Configure Node Labels in LoKO

Add labels via configuration:

```yaml
cluster:
  nodes:
    servers: 1
    workers: 3
    labels:
      worker:  # Labels applied to all worker nodes
        environment: "development"
        managed-by: "loko"
```

For per-node labels, use kubectl after cluster creation.
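For example, the per-node labels from Step 2 can be generated in one pass. This is an illustrative sketch; the node names assume the default `loko-dev-me` cluster name used throughout this tutorial, and it only prints the commands so you can review them first:

```shell
# Sketch: print per-node label commands for review before applying them.
# Node names and labels match Step 2 of this tutorial.
for spec in \
  "loko-dev-me-worker:workload-type=database" \
  "loko-dev-me-worker2:workload-type=application" \
  "loko-dev-me-worker3:workload-type=cache"
do
  node="${spec%%:*}"   # Everything before the first ":"
  label="${spec#*:}"   # Everything after the first ":"
  echo "kubectl label node ${node} ${label} --overwrite"
done
```

Pipe the output to `sh` to actually apply the labels.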
## Step 4: Deploy with Node Affinity

### Database on Specific Node

Deploy PostgreSQL to the database node labeled in Step 2.

`postgres-values.yaml`:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: workload-type
              operator: In
              values:
                - database
```

Add the same affinity to `loko.yaml`:

```yaml
workloads:
  postgres:
    enabled: true
    values:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: workload-type
                    operator: In
                    values:
                      - database
```
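If the chart behind the `postgres` workload exposes a plain `nodeSelector` value (an assumption; check the values supported by your chart), a single exact-match label can be expressed more compactly than the required node affinity above:

```yaml
# Sketch: equivalent to the required nodeAffinity above for a single
# exact-match label, IF the underlying chart supports nodeSelector.
workloads:
  postgres:
    enabled: true
    values:
      nodeSelector:
        workload-type: database
```

Node affinity remains the more expressive option (set-based operators, preferred rules), which is why this tutorial uses it.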
Deploy:

```shell
loko workloads deploy postgres
```

### Verify Pod Placement

```shell
kubectl get pods -n loko-workloads -o wide
```

Output:

```plaintext
NAME         READY   STATUS    NODE
postgres-0   1/1     Running   loko-dev-me-worker   # On the database node
```

## Step 5: Spread Replicas Across Nodes
### Deploy with Pod Anti-Affinity

`web-app.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - web-app
              topologyKey: kubernetes.io/hostname
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
```

Deploy:

```shell
kubectl apply -f web-app.yaml
```

Verify distribution:

```shell
kubectl get pods -l app=web-app -o wide
```

Output (pods spread across nodes):

```plaintext
NAME              NODE
web-app-xxx-aaa   loko-dev-me-worker
web-app-xxx-bbb   loko-dev-me-worker2
web-app-xxx-ccc   loko-dev-me-worker3
```

## Step 6: Node Taints and Tolerations
### Taint Node for Special Workloads

```shell
# Taint a node for GPU workloads
kubectl taint node loko-dev-me-worker3 gpu=true:NoSchedule

# Also label the node so a nodeSelector can target it
# (taints only repel pods; labels attract them)
kubectl label node loko-dev-me-worker3 gpu=true
```

### Deploy with Toleration

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  tolerations:
    - key: "gpu"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  nodeSelector:
    gpu: "true"
  containers:
    - name: app
      image: my-gpu-app
```

The toleration lets the pod schedule onto the tainted node; the `nodeSelector` additionally requires the `gpu=true` label, ensuring the pod lands only there.

### Remove Taint

```shell
kubectl taint node loko-dev-me-worker3 gpu=true:NoSchedule-
```

## Step 7: Test High Availability
### Deploy Replicated Database

`postgres-ha.yaml`:

```yaml
workloads:
  postgres:
    enabled: true
    values:
      replicaCount: 3  # 3 replicas
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - postgres
                topologyKey: kubernetes.io/hostname
```

### Simulate Node Failure

```shell
# Cordon the node (prevent new pods from scheduling)
kubectl cordon loko-dev-me-worker

# Drain the node (evict running pods)
kubectl drain loko-dev-me-worker --ignore-daemonsets --delete-emptydir-data

# Watch pods reschedule
kubectl get pods -w
```

### Restore Node

```shell
kubectl uncordon loko-dev-me-worker
```

## Step 8: Resource Management
### Set Resource Quotas

Resource quotas are namespace-scoped; this example caps the `default` namespace.

`namespace-quota.yaml`:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: default
spec:
  hard:
    requests.cpu: "4"
    requests.memory: "8Gi"
    limits.cpu: "8"
    limits.memory: "16Gi"
```

### Monitor Resource Usage

```shell
# Node resources
kubectl top nodes

# Pod resources
kubectl top pods -A
```
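To spot hot nodes quickly, the `kubectl top nodes` output can be filtered with awk. A small sketch using captured sample output; on a live cluster, pipe the real command instead of the `sample` variable, and note the 80% threshold is an arbitrary example:

```shell
# `sample` stands in for real `kubectl top nodes` output.
sample='NAME                  CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
loko-dev-me-worker    1800m        90%    2000Mi          50%
loko-dev-me-worker2   400m         20%    1000Mi          25%'

# $3 is the CPU% column; "90%"+0 evaluates to 90 in awk.
echo "$sample" | awk 'NR>1 && $3+0 > 80 {print $1 " CPU at " $3}'
# -> loko-dev-me-worker CPU at 90%
```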
```shell
# Describe a node
kubectl describe node loko-dev-me-worker
```

## Step 9: Distributed Application Example
### Deploy Distributed System

Example: a 3-tier application.

`application.yaml`:

```yaml
# Frontend (2 replicas, any node)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
        - name: nginx
          image: nginx
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
---
# Backend API (3 replicas, spread across nodes)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: backend
  template:
    metadata:
      labels:
        tier: backend
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: tier
                      operator: In
                      values:
                        - backend
                topologyKey: kubernetes.io/hostname
      containers:
        - name: api
          image: my-api
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
---
# Database (1 replica, on the database node)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database
spec:
  serviceName: database
  replicas: 1
  selector:
    matchLabels:
      tier: database
  template:
    metadata:
      labels:
        tier: database
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: workload-type
                    operator: In
                    values:
                      - database
      containers:
        - name: postgres
          image: postgres:15
          resources:
            requests:
              cpu: 500m
              memory: 1Gi
```

Deploy:

```shell
kubectl apply -f application.yaml
```

Verify distribution:

```shell
kubectl get pods -o wide
```

## Step 10: Scaling
### Scale Workers

To add or remove workers, update `loko.yaml` and recreate the cluster:

```yaml
cluster:
  nodes:
    servers: 1
    workers: 5  # Increase to 5
```

```shell
loko env recreate
```

Note: this recreates the cluster from scratch. For production, use cluster autoscaling instead.

### Scale Workloads

```shell
# Scale a deployment
kubectl scale deployment frontend --replicas=5

# Scale a statefulset
kubectl scale statefulset database --replicas=3
```

## Advanced Configurations
### Custom Node Configurations

For different node types (via a custom Kind config):

```yaml
# Advanced: custom Kind config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
    labels:
      tier: high
      ssd: "true"
  - role: worker
    labels:
      tier: standard
  - role: worker
    labels:
      tier: standard
```

### Pod Topology Spread

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: my-app
```

### DaemonSets

Run a pod on every node:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
spec:
  selector:
    matchLabels:
      name: monitoring
  template:
    metadata:
      labels:
        name: monitoring
    spec:
      containers:
        - name: agent
          image: monitoring-agent
```

## Troubleshooting
### Node not ready

```shell
# Check node status
kubectl describe node <node-name>

# Check kubelet logs inside the node container
docker exec -it <node-name> journalctl -xeu kubelet
```

### Pods not scheduling

```shell
# Check pod events
kubectl describe pod <pod-name>
```

Common causes:

- Insufficient resources
- Unsatisfied affinity rules
- Taints without matching tolerations

### Uneven distribution

```shell
# Count pods per node (NR>1 skips the header row; $8 is the NODE column)
kubectl get pods -A -o wide | awk 'NR>1 {print $8}' | sort | uniq -c
```
```shell
# Rebalance if needed
kubectl drain <node> --ignore-daemonsets
kubectl uncordon <node>
```

## Performance Considerations
### Resource Allocation

Per-worker allocation (example):

```yaml
resources:
  worker1:
    cpu: 2 cores
    memory: 4GB
  worker2:
    cpu: 2 cores
    memory: 4GB
  worker3:
    cpu: 2 cores
    memory: 4GB
```

### Docker Resources

Adjust Docker Desktop resources under Settings → Resources:

- CPUs: 8+ (for 3+ workers)
- Memory: 8GB+ (for 3+ workers)
- Swap: 2GB
- Disk: 50GB+
## See Also

- First Cluster Tutorial: getting started
- Deploy Database: database deployment
- Custom Workload: custom applications
- Configuration Guide: advanced configuration
- Kubernetes Documentation: scheduling concepts