The OpenTelemetry Transform Processor (or OTel Transform Processor) allows you to modify telemetry data inside the OpenTelemetry Collector using simple but powerful OTTL (OpenTelemetry Transformation Language) expressions. You can match on specific attributes and apply transformations like:

- Setting a field (e.g., `severity_number`)
- Masking or redacting content
- Adding new attributes
- Renaming or dropping fields
- Conditional routing or filtering
It works on logs, metrics, and traces — but the syntax is slightly different depending on the signal type. Let’s examine some common log operations here; you can find more details in the transform processor’s documentation on GitHub.
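As a sketch of how the signal types differ, a single transform processor can carry separate statement blocks per signal, each with its own context (the statements here are illustrative placeholders):

```yaml
processors:
  transform:
    trace_statements:
      - context: span
        statements:
          - set(attributes["source"], "collector")
    metric_statements:
      - context: datapoint
        statements:
          - set(attributes["source"], "collector")
    log_statements:
      - context: log
        statements:
          - set(attributes["source"], "collector")
```

The context (`span`, `datapoint`, `log`, and so on) determines which fields your statements can reference.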
🔧 Basic Setup of the OpenTelemetry Transform Processor
Add the transform processor to your `otelcol-contrib` configuration like so:
```yaml
processors:
  transform/logs:
    log_statements:
      - context: log
        statements:
          - <your statements here>
```
Then wire it into your pipeline:
```yaml
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [transform/logs]
      exporters: [otlp]
```
✅ Example 1: Set Severity Based on Message Content
Let’s say you’re receiving logs that don’t include a severity level — but you want to assign one based on keywords in the log body.
```yaml
processors:
  transform/logs:
    log_statements:
      - context: log
        statements:
          - set(severity_number, SEVERITY_NUMBER_ERROR) where IsMatch(body, "panic")
          - set(severity_number, SEVERITY_NUMBER_WARN) where IsMatch(body, "retrying")
```
Now, any log line containing “panic” will be marked as an error, and anything with “retrying” will show up as a warning.
🛡️ Example 2: Mask Secrets in Log Messages
If your logs sometimes include sensitive values like API keys or tokens, you can redact them before export:
```yaml
processors:
  transform/logs:
    log_statements:
      - context: log
        statements:
          - replace_pattern(body, "token=\\w+", "token=REDACTED")
```
This uses a regex to match any token in the log line (e.g. “token=XYZ_789”) and replaces it with a redacted value. It’s a lightweight alternative to handling redaction in each application.
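The same approach extends to other sensitive patterns. As a sketch, here are two more `replace_pattern` statements; the key names and regexes are illustrative assumptions you would tune to your own log formats:

```yaml
processors:
  transform/logs:
    log_statements:
      - context: log
        statements:
          # Illustrative patterns - adjust the regexes to match your logs
          - replace_pattern(body, "api_key=\\w+", "api_key=REDACTED")
          - replace_pattern(body, "[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+", "EMAIL_REDACTED")
```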
➕ Example 3: Enrich Logs with Custom Attributes
You may want to enrich incoming logs with metadata, like the deployment environment or a static tag for downstream filtering:
```yaml
processors:
  transform/logs:
    log_statements:
      - context: log
        statements:
          - set(attributes["env"], "staging")
          - set(attributes["team"], "core-platform")
```
This will add the attributes to all of the incoming log messages for enrichment. You can also pull in values from resource attributes if you need dynamic enrichment based on origin.
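For example, a log-context statement can read from the resource the log belongs to. This sketch assumes the resource carries a `k8s.cluster.name` attribute (as populated by something like the k8sattributes processor):

```yaml
processors:
  transform/logs:
    log_statements:
      - context: log
        statements:
          # Copy a resource attribute onto each log record; assumes the
          # resource has k8s.cluster.name set upstream
          - set(attributes["cluster"], resource.attributes["k8s.cluster.name"])
```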
🔄 Example 4: Rename an Attribute
Sometimes the attribute names in your telemetry don’t match what your backend expects. You can easily rename them:
```yaml
processors:
  transform/logs:
    log_statements:
      - context: log
        statements:
          - set(attributes["k8s.namespace"], attributes["namespace"])
          - delete_key(attributes, "namespace")
```
This copies the value of a log’s `namespace` attribute to `k8s.namespace` and then removes the original `namespace` attribute, effectively renaming it. This is a common normalization step in Kubernetes environments.
🧪 Testing and Debugging the OpenTelemetry Transform Processor
For experimenting with OTTL expressions, use the OTTL Playground, or enable the debug exporter to see what your collector is doing after each processor stage. The remote tap processor can also be useful for examining telemetry contents before and after your processor. Need some telemetry to test with? Try telemetrygen or otelgen.
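As a minimal debugging sketch, you can run the debug exporter alongside your real exporter in the same pipeline (the verbosity setting shown here follows the debug exporter’s documented options):

```yaml
exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [transform/logs]
      exporters: [debug, otlp]
```

With `verbosity: detailed`, the collector prints the full contents of each log record to its own console output, so you can confirm your transform statements did what you expected before the data leaves the pipeline.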
🧠 Final Thoughts
The OpenTelemetry Transform Processor is a powerful way to gain control over your telemetry data. Whether you’re filtering noisy spans, sanitizing logs, or tagging metrics for better insights, OTTL gives you the flexibility to evolve your observability strategy without changing your apps.
At ControlTheory, we help teams build smarter pipelines that reduce cost and improve visibility — using tools like the OTel Transform Processor to do more with less.
Need help implementing OpenTelemetry transformations at scale?
We’d love to help. Get in touch →
press@controltheory.com