Internal Components

LoKO includes five internal components that provide core infrastructure services.

Component      | Purpose                                    | Can Disable?
---------------|--------------------------------------------|------------------
dnsmasq        | In-cluster DNS for local domain resolution | ❌ No (required)
HAProxy        | TCP/UDP port forwarding and load balancing | ❌ No (required)
Traefik        | Ingress controller and reverse proxy       | ❌ No (required)
metrics-server | Kubernetes metrics collection              | ✅ Yes
Zot            | OCI registry with optional mirroring       | ✅ Yes

dnsmasq

Host-level DNS resolution using a dedicated dnsmasq container managed by LoKO.

  • Dynamic DNS records: Regenerated from current workloads and system endpoints
  • Host-level service: Runs as a container on your machine, not as an in-cluster operator
  • Split-domain resolver integration: LoKO configures /etc/resolver/<domain> (macOS) or Linux resolver backends
  • Predictable port model: Uses network.dns-port (auto-selected, preferring 5453)

How it works:

  1. LoKO generates dnsmasq.conf with host records for enabled workloads and internal endpoints
  2. A dnsmasq container is created on the LoKO Docker network and binds network.ip:network.dns-port
  3. Your OS resolver forwards <domain> queries to that dnsmasq endpoint
  4. dnsmasq returns your configured local IP for matching hostnames
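To illustrate the steps above, the generated files might look roughly like this (a hypothetical sketch — the IP, domain, and hostnames are example values; the real files are generated by LoKO from your configuration):

```
# Hypothetical excerpt of a generated dnsmasq.conf
port=5453
address=/myapp.dev.me/192.168.97.2
address=/postgres.dev.me/192.168.97.2

# Corresponding macOS split-DNS hook, written to /etc/resolver/dev.me:
#   nameserver 192.168.97.2
#   port 5453
```

With a resolver file like this in place, macOS forwards only `*.dev.me` queries to the dnsmasq endpoint; all other lookups use your normal DNS.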

Version managed via Renovate:

components:
  dnsmasq:
    # renovate: datasource=docker depName=dockurr/dnsmasq
    version: "2.91"
Troubleshooting commands:

# Check DNS container status
loko dns status
# Check DNS logs
loko logs dns
# Run DNS diagnostics
loko config dns-check

HAProxy

High-performance TCP/UDP load balancer and port forwarder.

  • TCP/UDP Forwarding: Routes external ports to cluster services
  • Dynamic Configuration: Automatically updated based on workload ports
  • Load Balancing: Distributes traffic across multiple backends
  • Health Checks: Monitors backend availability

HAProxy is the required port forwarder for LoKO and cannot be disabled. It provides:

  • Port forwarding from host to Kubernetes cluster
  • TCP routing for databases (PostgreSQL, MySQL, MongoDB, etc.)
  • Dynamic port mapping based on deployed workloads
  • Connection to Kind cluster’s control plane

Version managed via Renovate:

haproxy:
  # renovate: datasource=docker depName=haproxy
  version: "3.3.2"

How it works:

  1. Listens on host ports (5432, 3306, 6379, etc.)
  2. Forwards traffic to the Kind cluster's control plane
  3. Traefik routes to the appropriate service inside the cluster

Host:5432 → HAProxy:5432 → Kind:30001 → Traefik → postgres.dev.me:5432
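The PostgreSQL flow above could correspond to a frontend/backend pair along these lines (a hypothetical sketch, not the file LoKO generates; the names and the NodePort 30001 are illustrative):

```
# Hypothetical haproxy.cfg excerpt for the PostgreSQL flow
frontend fe_postgres
    bind *:5432
    mode tcp
    default_backend be_kind_postgres

backend be_kind_postgres
    mode tcp
    # Kind control-plane node, reached on the NodePort Traefik listens on
    server kind 127.0.0.1:30001 check
```

The `check` keyword enables the health checks mentioned above, so traffic is only sent to a backend that is responding.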

Check HAProxy status:

# List running containers
docker ps | grep haproxy
# Check logs
docker logs loko-haproxy
# View configuration
docker exec loko-haproxy cat /usr/local/etc/haproxy/haproxy.cfg

Traefik

Cloud-native ingress controller and reverse proxy.

  • HTTP/HTTPS Routing: Automatic TLS termination
  • TCP/UDP Support: Layer 4 routing for databases and services
  • Automatic Service Discovery: Watches Kubernetes resources
  • Let’s Encrypt Integration: Automatic certificate management
  • WebSocket Support: Full duplex communication
  • Middleware: Rate limiting, authentication, compression

Traefik is the required ingress controller for LoKO and cannot be disabled. It provides:

  • HTTP/HTTPS ingress for web UIs and APIs
  • TCP routing for databases and message queues
  • TLS certificate management
  • Load balancing across pods

Traefik exposes two main HTTP entrypoints, plus dynamically allocated TCP ports:

  • web (HTTP): Port 80 → Redirects to HTTPS
  • websecure (HTTPS): Port 443 → TLS-enabled traffic
  • TCP ports: Dynamic based on workload requirements
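In Traefik's static-configuration terms, these entrypoints (including the HTTP-to-HTTPS redirect) would be expressed roughly as follows — a sketch, not LoKO's actual configuration:

```yaml
# Hypothetical Traefik static configuration for the entrypoints above
entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"
```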

Traefik status is visible via the LoKO CLI:

loko status # Shows Traefik status

You can also enable the Traefik web dashboard, served at https://traefik.<domain>:

components:
  ingress-controller:
    dashboard: true

The dashboard is disabled by default.

Workloads use Traefik via standard Kubernetes Ingress resources:

ingress:
  enabled: true
  className: traefik
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
  hosts:
    - host: myapp.${LOKO_DOMAIN}
      paths: [/]

For TCP services, LoKO automatically creates IngressRouteTCP resources.
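Such a generated resource might look roughly like this — a hypothetical sketch for a PostgreSQL workload (the names, namespace, and entrypoint are assumptions, not LoKO's exact output):

```yaml
# Hypothetical IngressRouteTCP for a Postgres workload
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: postgres
  namespace: loko-workloads
spec:
  entryPoints:
    - postgres            # a dynamically created TCP entrypoint
  routes:
    - match: HostSNI(`*`) # match all TCP traffic on this entrypoint
      services:
        - name: postgres
          port: 5432
```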


metrics-server

Kubernetes metrics API provider for resource monitoring.

  • Resource Metrics: CPU and memory usage per pod/node
  • HPA Support: Enables Horizontal Pod Autoscaling
  • kubectl top: Powers kubectl top nodes and kubectl top pods
  • Lightweight: Minimal resource footprint

Provides metrics for monitoring and autoscaling workloads. Can be disabled if not needed.
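Because metrics-server implements the resource-metrics API, a standard HorizontalPodAutoscaler works against it out of the box. A minimal sketch (the Deployment name and namespace are hypothetical):

```yaml
# Minimal HPA relying on metrics-server for CPU utilization
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
  namespace: loko-workloads
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```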

View resource metrics:

# Node metrics
kubectl top nodes
# Pod metrics
kubectl top pods -A
# Specific namespace
kubectl top pods -n loko-workloads

To disable metrics-server during cluster creation:

loko create --disable-metrics-server

Or in configuration:

components:
  metrics-server:
    enabled: false

Zot

OCI-compliant container registry with optional pull-through caching.

  • OCI-Compliant: Fully compatible with Docker/containerd
  • Local Image Storage: Store and serve your custom container images
  • Optional Image Mirroring: Cache images from upstream registries (disabled by default)
  • Deduplication: Saves storage with content-addressable blobs
  • Vulnerability Scanning: Built-in security scanning (optional)
  • HTTPS-Only: Secure by default

Zot serves as the local container registry for storing your custom images; it can be disabled if you prefer an external registry.

Optionally, Zot can act as a pull-through cache, mirroring and caching images from external registries. Mirroring is disabled by default and can be enabled in your configuration file. When mirroring is enabled, Zot can cache images from:

  • Docker Hub: docker.io
  • GitHub Container Registry: ghcr.io
  • Kubernetes Registries: registry.k8s.io, k8s.gcr.io
  • Quay: quay.io
  • Microsoft Container Registry: mcr.microsoft.com

When you pull an image through Zot with mirroring enabled:

  1. Zot checks its local cache
  2. If the image is not found, Zot pulls it from the upstream registry
  3. The image is cached locally for future pulls
  4. Subsequent requests are served from the cache
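Under the hood, this pull-through behavior maps onto Zot's sync extension. A configuration along these lines would enable on-demand caching from Docker Hub — a sketch of Zot's own upstream config format, shown for context only (LoKO manages this for you via loko.yaml):

```json
{
  "extensions": {
    "sync": {
      "enable": true,
      "registries": [
        {
          "urls": ["https://registry-1.docker.io"],
          "onDemand": true,
          "tlsVerify": true
        }
      ]
    }
  }
}
```

With `onDemand` set, images are fetched from upstream only when first requested, rather than being pre-synced.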

To enable mirroring, update your loko.yaml configuration:

registry:
  enabled: true
  mirroring:
    enabled: true # Enable pull-through cache

You can also selectively enable specific sources:

registry:
  mirroring:
    enabled: true
    sources:
      - name: docker_hub
        enabled: true
      - name: ghcr
        enabled: true
      - name: quay
        enabled: false # Disable specific sources

Available sources: docker_hub, quay, ghcr, k8s_registry, mcr

The Zot registry is accessible via the external ingress hostname:

<registry-name>.${LOKO_DOMAIN}

You can always push and pull your own images to/from Zot:

# Tag and push local images
docker tag myapp:latest <registry-name>.${LOKO_DOMAIN}/myapp:latest
docker push <registry-name>.${LOKO_DOMAIN}/myapp:latest
# Pull your images
docker pull <registry-name>.${LOKO_DOMAIN}/myapp:latest

When mirroring is enabled, you can pull images through Zot from external registries:

# Pull and cache from Docker Hub
docker pull <registry-name>.${LOKO_DOMAIN}/library/nginx:latest
# Pull from GitHub Container Registry
docker pull <registry-name>.${LOKO_DOMAIN}/ghcr.io/user/image:tag
# Pull from Kubernetes registry
docker pull <registry-name>.${LOKO_DOMAIN}/registry.k8s.io/pause:latest
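Alternatively, rather than rewriting image names by hand, the Docker daemon itself can be pointed at a mirror. A hypothetical /etc/docker/daemon.json, assuming Zot is reachable over HTTPS at the registry hostname:

```json
{
  "registry-mirrors": ["https://<registry-name>.${LOKO_DOMAIN}"]
}
```

Note that `registry-mirrors` applies only to Docker Hub pulls; images from other registries still need the explicit-prefix form shown above.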

Benefits:

  • Local Image Storage: Store and serve your custom images
  • Private Registry: Keep your container images within your local cluster
  • Faster Pulls (with mirroring): External images cached locally after first pull
  • Offline Development (with mirroring): Work without internet once images are cached
  • Bandwidth Savings (with mirroring): Pull from external registries once, use many times

To disable Zot during cluster creation:

loko create --disable-registry

Or in configuration:

components:
  registry:
    enabled: false

View component status:

# Check all components
loko status
# List pods in loko-system namespace
kubectl get pods -n loko-system