If you’re running a Kubernetes cluster and just starting your observability journey, logs are often the first (and most valuable) signal to collect. In this post, we’ll walk through how to collect Kubernetes logs using the OpenTelemetry (OTel) Collector — no vendor lock-in, no complex Fluent Bit setup, and no noisy dashboards.
You’ll learn how to:
- Use the OpenTelemetry filelog receiver to collect logs from pods
- Enrich logs with Kubernetes metadata using the k8sattributes processor
- Forward logs to any backend, like Grafana Loki, ClickHouse, Datadog, and others
Whether you’re a platform engineer, DevOps lead, or just trying to get logs out of your EKS/GKE/AKS cluster, this guide will help you build a lightweight Kubernetes logging pipeline with OTel.
Why Use OpenTelemetry for Kubernetes Logs?
You might associate OpenTelemetry (OTel) with metrics and traces, but it’s a great fit for Kubernetes logs — especially for small teams.
Key Benefits:
- Vendor-neutral: No lock-in; send logs to any backend, and stay future-proof for whatever comes next
- Kubernetes-aware: Adds pod name, namespace, and label context automatically
- Drop noisy logs: Filter or redact before logs leave the cluster
- Lightweight and open-source: One collector, zero black boxes
This gives small Kubernetes teams the flexibility and visibility they need without the complexity (or cost) of vendor-native logging agents.
A Lightweight K8s Logs Pipeline Using OTel Collector
We’ll build up the configuration YAML from four OpenTelemetry Collector components:
- filelog and k8sobjects receivers: Read container logs from node disk and events from the K8s API, respectively
- k8sattributes processor: Enriches each log with critical Kubernetes metadata
- otlp exporter: Sends logs to your preferred backend
Step 1: Collect Kubernetes Logs with the Filelog Receiver
receivers:
  filelog:
    include_file_path: true
    include:
      - /var/log/pods/*/*/*.log
    operators:
      - id: container-parser
        type: container
  k8sobjects:
    objects:
      - name: events
        mode: watch
This watches the standard Kubernetes pod log locations. If your cluster uses containerd or Docker, these paths should work out of the box. The container parser is a newer operator that takes the hassle out of parsing logs across different container runtimes: it just works. We’ve also added the k8sobjects receiver here to collect key events from the K8s API in our cluster.
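Depending on your cluster, you may want to tune the filelog receiver a little further. Here’s a minimal sketch of two common tweaks, both standard filelog settings; the excluded container name assumes the collector container is called otelcol, as in the DaemonSet later in this post:
receivers:
  filelog:
    include:
      - /var/log/pods/*/*/*.log
    exclude:
      # avoid a feedback loop by skipping the collector's own container logs
      # (assumes the container is named otelcol)
      - /var/log/pods/*/otelcol/*.log
    start_at: end # tail only new lines on first start; use "beginning" to backfill existing files
    include_file_path: true
    operators:
      - id: container-parser
        type: container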
Step 2: Add Kubernetes Metadata with the k8sattributes Processor
processors:
  k8sattributes:
    auth_type: serviceAccount
    extract:
      metadata:
        - k8s.pod.name
        - k8s.namespace.name
        - k8s.container.name
        - k8s.deployment.name
This automatically adds metadata to each log line so you can filter, search, and group by service, namespace, or pod in your backend.
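If you also want selected pod labels on each log record (handy for grouping by application), the k8sattributes processor can extract those too. A minimal sketch, assuming your pods carry the standard app.kubernetes.io/name label:
processors:
  k8sattributes:
    auth_type: serviceAccount
    extract:
      metadata:
        - k8s.pod.name
        - k8s.namespace.name
      labels:
        # copy the (assumed) app.kubernetes.io/name pod label onto each log as "app.name"
        - tag_name: app.name
          key: app.kubernetes.io/name
          from: pod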
Step 3: Export Logs to Your Backend of Choice
exporters:
  otlp:
    endpoint: YOUR_BACKEND:4317
    tls:
      insecure: true # for dev/test use only
You can swap in other exporters if you’re using Datadog, Loki, ClickHouse, or even just dumping logs to disk (e.g. the “file exporter”) for local testing.
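For local testing you don’t even need a backend yet. A quick sketch using the debug and file exporters that ship with the contrib image (the output path is just an example):
exporters:
  debug:
    verbosity: detailed # print each log record to the collector's own stdout
  file:
    path: /tmp/otel-logs.json # example path; writes records as JSON lines
Swap these into the exporters list of the pipeline below while you iterate, then switch back to otlp when you’re ready to ship logs for real.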
Optional: Filter or Redact Logs Before Sending
Want to remove sensitive info or cut down on log volume? Add a transform processor:
processors:
  transform/logs:
    log_statements:
      - context: log
        statements:
          - replace_pattern(body, "token=\\w+", "token=REDACTED")
You can also use the filter processor to drop health checks or debug-level logs; see the sketch below.
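A minimal filter processor sketch that drops health-check noise and anything below INFO severity; the /healthz pattern is an assumption, so adjust it to whatever your probes actually log:
processors:
  filter/noise:
    error_mode: ignore
    logs:
      log_record:
        # log records matching either condition are dropped
        - 'IsMatch(body, ".*/healthz.*")' # assumed health-check path
        - severity_number < SEVERITY_NUMBER_INFO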
Putting it Together
We can tie these components together in a logs pipeline to complete our collector configuration, and that’s it!
service:
  pipelines:
    logs:
      receivers: [filelog, k8sobjects]
      processors: [k8sattributes]
      exporters: [otlp]
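If you enabled the optional processors from earlier, they slot into the same pipeline. Order matters: enrichment first, then redaction and filtering. A sketch assuming you kept the transform/logs and filter/noise names used above:
service:
  pipelines:
    logs:
      receivers: [filelog, k8sobjects]
      processors: [k8sattributes, transform/logs, filter/noise]
      exporters: [otlp]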
Deploying the OpenTelemetry Collector as a DaemonSet
To access logs from every node, deploy the Collector as a DaemonSet:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-collector-logs
spec:
  selector:
    matchLabels:
      app: otel-collector-logs
  template:
    metadata:
      labels:
        app: otel-collector-logs
    spec:
      serviceAccountName: otel-collector
      containers:
        - name: otelcol
          image: otel/opentelemetry-collector-contrib:latest
          args: ["--config=/etc/otel/config.yaml"]
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
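One thing the manifest above glosses over: args points at /etc/otel/config.yaml, so the collector configuration itself also needs to be mounted into the pod. A sketch of the containers and volumes sections with that added, assuming the config from this post lives in a ConfigMap named otel-collector-config:
      containers:
        - name: otelcol
          image: otel/opentelemetry-collector-contrib:latest
          args: ["--config=/etc/otel/config.yaml"]
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: otel-config
              mountPath: /etc/otel # mount the collector config where args expects it
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: otel-config
          configMap:
            name: otel-collector-config # assumed ConfigMap holding config.yaml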
Tip: Make sure you bind a ClusterRole to the otel-collector ServiceAccount so the k8sattributes processor can access pod metadata using the K8s API.
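A minimal RBAC sketch for that ServiceAccount: the pods and namespaces rules are what the k8sattributes processor needs (replicasets lets it resolve k8s.deployment.name), events covers the k8sobjects receiver, and the default namespace is an assumption to adjust to wherever the DaemonSet runs:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces", "events"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["apps"]
    resources: ["replicasets"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otel-collector
subjects:
  - kind: ServiceAccount
    name: otel-collector
    namespace: default # assumption: adjust to your deployment namespace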
Why OTel Makes Sense for Small Kubernetes Teams
For teams without a dedicated observability platform or budget, OpenTelemetry lets you start small and scale later. Logs are the easiest entry point, and OTel Collector gives you a simple, pluggable way to get started.
- No third-party dependencies
- Just-enough visibility for production issues
- Clear upgrade path to metrics and traces later
You don’t need to rip and replace. Just layer on what you need, when you need it.
Next Steps: Build on Your Kubernetes Logging Setup
Once you’ve got logs flowing, you can:
- Visualize logs in your backend of choice (sending to Datadog? check out how to set your service tag properly)
- Run LLMs over logs for summarization and root-cause analysis
- Layer on metrics and traces — all with the same OTel Collector
TL;DR
If you’re a small team running Kubernetes and want:
- A lightweight way to collect and enrich logs
- Full control over filtering and routing
- No vendor lock-in or heavy agents
Start with the OpenTelemetry Collector + filelog + k8sattributes. It’s simple, flexible, and future-proof.
Need help configuring this in your cluster? Reach out!
press@controltheory.com