3.2 Managing Application Logs
Kubernetes provides simple built‑in commands to view application logs from containers and pods.
🎯 Why Application Logs Matter
Application logs help you:
- Debug application failures
- Track user activity
- Inspect runtime errors
- Verify container behavior
- Troubleshoot pod crashes
Note
Kubernetes itself does not provide long‑term log storage. It only exposes container logs. Use external tools (ELK, Loki, etc.) for centralized logging.
🐳 Docker Logging Basics
Containers usually write logs to:
- stdout
- stderr
Docker captures these streams and makes them available via the Docker CLI.
Run Container in Background
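For example, starting the event simulator image (used again in the Kubernetes examples below) in detached mode:

```bash
docker run -d kodekloud/event-simulator
```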
Since it runs in detached mode, logs are not shown in the terminal.
View Docker Logs
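Using the container ID (or name) reported by docker ps:

```bash
docker logs <container-id>
```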
Stream Logs Live
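```bash
docker logs -f <container-id>
```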
Tip
-f means follow — stream logs in real time.
🧭 Docker Logging Flow (Diagram)
Application inside container
│
▼
stdout / stderr
│
▼
Docker runtime
│
▼
docker logs / docker logs -f
☸️ Kubernetes Logging Basics
In Kubernetes, logs are accessed using:
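```bash
kubectl logs <pod-name>
```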
Kubernetes reads container stdout/stderr through the kubelet running on each node.
Abstract
kubectl → API Server → kubelet → container logs
🔄 Kubernetes Logging Flow (Diagram)
Container Application
│
▼
stdout / stderr
│
▼
kubelet (node agent)
│
▼
Kubernetes API Server
│
▼
kubectl logs
🚀 Create Pod for Log Demo
Example pod using event simulator image:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: event-simulator-pod
spec:
  containers:
  - name: event-simulator
    image: kodekloud/event-simulator
```
Create pod:
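Assuming the manifest above is saved as event-simulator.yaml (the filename is illustrative):

```bash
kubectl create -f event-simulator.yaml
```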
📄 View Pod Logs (Single Container)
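For the pod created above:

```bash
kubectl logs event-simulator-pod
```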
Stream Logs Live
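```bash
kubectl logs -f event-simulator-pod
```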
Success
Behavior is similar to docker logs -f.
📦 Multi‑Container Pod Logging
Pods can contain multiple containers.
Example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: event-simulator-pod
spec:
  containers:
  - name: event-simulator
    image: kodekloud/event-simulator
  - name: image-processor
    image: some-image-processor
```
❗ Logs Command Without Container Name
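Running the logs command against the multi-container pod without naming a container:

```bash
kubectl logs -f event-simulator-pod
```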
Result:
- Command fails
- Kubernetes asks for container name
Warning
For multi‑container pods, container name is required.
✅ Correct Multi‑Container Command
Format:
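```bash
# Pass the container name after the pod name (or use -c <container-name>)
kubectl logs -f <pod-name> <container-name>

# For the example pod above
kubectl logs -f event-simulator-pod event-simulator
```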
🧭 Multi‑Container Log Selection (Diagram)
Pod
├── container: event-simulator
└── container: image-processor
kubectl logs pod → ❌ ambiguous
kubectl logs pod event-simulator → ✅ works
🔍 Useful Log Options
Previous Container Logs (After Crash)
Example
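Using the demo pod from earlier:

```bash
# Show logs from the previous (crashed) instance of the container
kubectl logs event-simulator-pod --previous
```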
Useful when a container restarted and you need logs from the previous run.
Logs with Namespace
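If the pod runs in a namespace other than default, add the -n flag:

```bash
kubectl logs event-simulator-pod -n <namespace>
```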
🧠 Exam Tips
Question
Where do Kubernetes container logs come from?
From container stdout/stderr, captured by kubelet.
Question
Do you need extra setup to view pod logs?
No — built‑in with kubectl logs.
Question
What if a pod has multiple containers?
You must specify the container name.
⚠️ Limitations
Warning
- No built‑in long‑term retention
- Logs lost if container is removed
- Use centralized logging for production
✅ Quick Summary
Summary
- Applications log to stdout/stderr
- Docker → docker logs
- Kubernetes → kubectl logs
- Use -f to stream logs
- Multi‑container pods require container name
- kubelet provides container logs
- No built‑in long‑term storage
Centralized Logging Tools for Kubernetes (Production Guide)
Centralized logging in production Kubernetes environments is used to:
- Collect logs from all nodes and pods
- Aggregate logs in one place
- Store logs long-term
- Search and filter logs quickly
- Visualize logs with dashboards
- Trigger alerts on errors and patterns
Instead of checking logs pod-by-pod, centralized logging gives you cluster-wide visibility.
🎯 Why Centralized Logging Is Needed
Default Kubernetes logging:
- Uses kubectl logs
- Reads container stdout/stderr
- No long-term retention
- No cross-pod search
- No built-in dashboards
Warning
Production clusters should always use a centralized logging stack instead of relying only on kubectl logs.
🏗️ Common Kubernetes Logging Architecture
Typical centralized logging pipeline:
Pods / Containers
↓
Node Log Collector (DaemonSet)
↓
Log Aggregator / Storage
↓
Search + Dashboard UI
Collectors usually run as a DaemonSet on every node and read:
- /var/log/containers
- container runtime logs
✅ Most Popular Open Source Logging Stacks
📦 ELK Stack
ELK = Elasticsearch + Logstash + Kibana
Components:
- Elasticsearch → stores and indexes logs
- Logstash → parses and transforms logs
- Kibana → dashboards and search UI
Typical flow:
Pods → Logstash → Elasticsearch → Kibana
Pros:
- Very powerful search
- Rich dashboards
- Mature ecosystem
Cons:
- Heavy resource usage
- More operational overhead
⚡ EFK Stack (Kubernetes Favorite)
EFK = Elasticsearch + Fluentd + Kibana
Logstash is replaced by Fluentd.
Flow:
Pods → Fluentd (DaemonSet) → Elasticsearch → Kibana
Pros:
- Kubernetes-friendly
- Easier than full ELK
- Widely used in clusters
Cons:
- Still resource heavy
- Elasticsearch needs tuning
Tip
EFK is the most common open-source Kubernetes logging stack.
🪶 Fluent Bit (Lightweight Collector)
Fluent Bit is a lightweight log shipper often used instead of Fluentd.
Flow:
Pods → Fluent Bit (DaemonSet) → backend storage (Elasticsearch, Loki, etc.)
Pros:
- Very low CPU and memory
- Fast
- Ideal for large clusters
Cons:
- Less processing capability than Fluentd
📊 Modern Lightweight Alternative
🟣 Grafana Loki Stack
Loki + Promtail + Grafana
Components:
- Loki → log storage
- Promtail → log collector
- Grafana → dashboards
Flow:
Pods → Promtail (DaemonSet) → Loki → Grafana
Pros:
- Much cheaper than Elasticsearch
- Label-based indexing
- Simple to operate
- Fast growing adoption
Cons:
- Different query model than ELK
☁️ Managed Cloud Logging Platforms
🟦 AWS CloudWatch Logs
Used with EKS.
Flow:
Pods → log agent (e.g. Fluent Bit) → CloudWatch Logs
Features:
- Fully managed
- Built-in alerts
- No storage management
- Native AWS integration
🟥 Azure Monitor / Log Analytics
Used with AKS.
Features:
- Container Insights
- Central dashboards
- Integrated metrics + logs
- Managed service
🟨 Google Cloud Logging
Used with GKE.
Features:
- Automatic log collection
- No setup required
- Integrated with GCP monitoring
- Built-in search and alerts
💼 Enterprise / SaaS Logging Tools
📈 Datadog
- Logs + metrics + traces
- Strong Kubernetes integration
- Excellent dashboards
- SaaS platform
🧠 Splunk
- Enterprise log analytics
- Advanced search and correlation
- Large-scale deployments
- Compliance-friendly
🔍 Dynatrace
- Full observability platform
- Logs + APM + infrastructure metrics
- AI-assisted analysis
🧰 Kubernetes Deployment Pattern
Most production setups use:
- DaemonSet log collectors:
  - Fluentd
  - Fluent Bit
  - Promtail
Collectors:
- Run on every node
- Read container log files
- Forward logs to backend storage
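A minimal sketch of this pattern, assuming Fluent Bit as the collector; the image, namespace, and mount paths are illustrative defaults, not a production-ready manifest:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
  namespace: logging          # illustrative namespace
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit      # illustrative collector image
        volumeMounts:
        - name: varlog
          mountPath: /var/log         # node log directory (includes /var/log/containers)
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```

Because it is a DaemonSet, one collector pod is scheduled on every node, which is what gives the pipeline cluster-wide coverage; each collector then forwards what it reads to the chosen backend (Elasticsearch, Loki, CloudWatch, etc.).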
🏆 Production Recommendations
🥇 Most Common Open Source Stack
EFK (Elasticsearch + Fluentd + Kibana)
🥈 Lightweight Modern Stack
Loki + Promtail + Grafana
🥉 Easiest for Cloud Clusters
Cloud provider logging:
- CloudWatch (AWS)
- Azure Monitor
- Google Cloud Logging
✅ Quick Summary
Summary
- kubectl logs is not enough for production
- Use centralized logging stacks
- Most common: EFK
- Lightweight option: Loki
- Best for cloud: managed logging services
- Use DaemonSets for node-level log collection