Sending data to Obics with OpenTelemetry
OpenTelemetry is an open-source standard for telemetry and a suite of tools for collecting and routing telemetry data. It provides a vendor-neutral format for instrumenting, gathering, parsing, and exporting observability data, including logs, metrics, and traces. It's the second most popular project in the Cloud Native Computing Foundation (CNCF), after Kubernetes.
If you choose to instrument your application with the OpenTelemetry SDKs, there are a few ways to send data to Obics:
- Send data directly from an application
- Send data to an OpenTelemetry collector and use an exporter to forward it to Obics. (Recommended)
- Alternatively, you can use an OpenTelemetry Collector receiver to pull telemetry data from a third-party tool (such as AWS CloudWatch, Prometheus, or S3) and have the collector forward the data to Obics.
When Obics receives fields and attributes from OTEL, it ingests them into your table schema as they are, creating new columns if needed.
To control your table schema, ingest only the necessary columns, rename columns, and enrich records with relevant content, you'll need to use the collector's processor capabilities, as sketched below.
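For example, here is a hedged sketch of what such a processors section might look like in a collector configuration (the attribute names uid, user_id, and environment are hypothetical, chosen only for illustration):

processors:
  attributes/schema:
    actions:
      # Rename the incoming attribute "uid" to the column name we want in Obics
      - key: user_id
        from_attribute: uid
        action: insert
      - key: uid
        action: delete
      # Enrich every record with a static attribute
      - key: environment
        value: production
        action: insert

Sections 2 and 3 below show processors used in complete collector configurations.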
1. Send data directly from an application
An OpenTelemetry SDK will instrument your application by capturing traces, spans, metrics, and logs.
For example, for a Node.js backend, it will automatically report spans and traces on API calls, generating span_id and trace_id fields.
To send that data to an endpoint, you'll need to create an Exporter during OpenTelemetry setup. In fact, you'll configure three different exporters, one for each type of data: Traces, Logs, and Metrics. The relevant Obics endpoints are:
- Logs: https://ingest.obics.io/api/otel/v1/logs
- Traces: https://ingest.obics.io/api/otel/v1/traces
- Metrics: https://ingest.obics.io/api/otel/v1/metrics
Here's an example of a simple Node.js OpenTelemetry setup:
const { Resource } = require('@opentelemetry/resources');
const { diag, DiagConsoleLogger, DiagLogLevel } = require('@opentelemetry/api');
// Logs
const { LoggerProvider, SimpleLogRecordProcessor } = require('@opentelemetry/sdk-logs');
const { OTLPLogExporter } = require('@opentelemetry/exporter-logs-otlp-http');
const { logs } = require('@opentelemetry/api-logs');
// Traces
const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
const { BatchSpanProcessor } = require('@opentelemetry/sdk-trace-base');
// Metrics
const { MeterProvider } = require('@opentelemetry/sdk-metrics');
const { OTLPMetricExporter } = require('@opentelemetry/exporter-metrics-otlp-http');
const { PeriodicExportingMetricReader } = require('@opentelemetry/sdk-metrics');
// Enable diagnostic logging
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.INFO);
// Common resource for all telemetry
const resource = new Resource({
  'service.name': 'my-nodejs-app', // Replace with your service name
});
// Configure Logs
const loggerProvider = new LoggerProvider({ resource });
const logExporter = new OTLPLogExporter({ url: 'https://ingest.obics.io/api/otel/v1/logs' });
loggerProvider.addLogRecordProcessor(new SimpleLogRecordProcessor(logExporter));
logs.setGlobalLoggerProvider(loggerProvider); // register as the global logger provider
// Configure Traces
const tracerProvider = new NodeTracerProvider({ resource });
const traceExporter = new OTLPTraceExporter({ url: 'https://ingest.obics.io/api/otel/v1/traces' });
tracerProvider.addSpanProcessor(new BatchSpanProcessor(traceExporter));
tracerProvider.register();
// Configure Metrics
const meterProvider = new MeterProvider({ resource });
const metricExporter = new OTLPMetricExporter({ url: 'https://ingest.obics.io/api/otel/v1/metrics' });
meterProvider.addMetricReader(new PeriodicExportingMetricReader({
  exporter: metricExporter,
  exportIntervalMillis: 60000,
}));
// Example usage
const logger = loggerProvider.getLogger('default');
const tracer = tracerProvider.getTracer('default');
const meter = meterProvider.getMeter('default');
// Logging example
logger.emit({ severityText: 'INFO', body: 'This is a log message' });
// Tracing example
const span = tracer.startSpan('example-span');
setTimeout(() => span.end(), 1000);
// Metrics example
const counter = meter.createCounter('example_metric');
counter.add(1, { attribute: 'value' });
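The OTLP/HTTP exporters also accept a headers option. If your Obics environment authenticates ingestion with an API key, as the collector examples below do with an x-api-key header, it can be passed there; a minimal sketch, assuming the same header applies to direct SDK export:

const traceExporter = new OTLPTraceExporter({
  url: 'https://ingest.obics.io/api/otel/v1/traces',
  // Assumption: same x-api-key header as the collector's otlphttp exporter shown below
  headers: { 'x-api-key': 'YOUR_OBICS_API_TOKEN' },
});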
2. Send data to an OpenTelemetry collector and use an exporter to forward it to Obics. (Recommended)
The OpenTelemetry Collector is a telemetry data pipeline that helps collect, process, and export telemetry data (logs, traces, and metrics) from applications to one or more backend systems. It's typically deployed as a sidecar or agent process alongside your application and services. Your apps send data to the collector, which forwards it to one or more observability backends. The collector offers several advantages over sending data directly to a backend:
- The collector allows effective batching or sampling of data before sending to a backend, which is crucial for performance.
- It can do parsing, filtering, and enriching of data.
- It helps with vendor neutrality, since moving to a different vendor only requires changing the collector configuration.
Here's a simple collector configuration that sends telemetry data to Obics:
receivers:
  otlp:
    protocols:
      http:
      grpc:
processors:
  batch:
    timeout: 5s
  memory_limiter:
    check_interval: 1s
    limit_mib: 400
    spike_limit_mib: 200
exporters:
  otlphttp:
    endpoint: https://ingest.obics.io/api/otel
    headers:
      x-api-key: "YOUR_OBICS_API_TOKEN"
    timeout: 30s
    encoding: json
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp]
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp]
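With the collector in place, the application's exporters point at the collector's local OTLP endpoint instead of at Obics directly. A minimal sketch, assuming the collector runs next to the app and listens on the default OTLP/HTTP port (4318):

// Send telemetry to the local collector; it batches and forwards to Obics
const logExporter = new OTLPLogExporter({ url: 'http://localhost:4318/v1/logs' });
const traceExporter = new OTLPTraceExporter({ url: 'http://localhost:4318/v1/traces' });
const metricExporter = new OTLPMetricExporter({ url: 'http://localhost:4318/v1/metrics' });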
3. Pull from a third-party solution and have the collector forward to Obics
There are several use cases where you might prefer to keep sending telemetry data to a third-party service like CloudWatch and ingest it into Obics from there. Perhaps instrumenting the application with an OTEL SDK is difficult and the CloudWatch integration works well. Or you might want to keep the CloudWatch integration because some team members like it.
Whatever the case may be, it's possible to deploy an OpenTelemetry collector that pulls data from a third-party source and exports it to Obics.
OTEL has the concept of "receivers", which can capture data on their own from various data sources. In the case of CloudWatch, we will use the awscloudwatchreceiver, but there is a long list of out-of-the-box receivers, including receivers for Jaeger, Zipkin, AWS X-Ray, Prometheus, Kubernetes, Azure Monitor, Kafka, Google Cloud Monitoring, Syslog, Fluent Forward, AWS S3, MySQL, Postgres, Redis, MongoDB, and others.
Staying with the CloudWatch example, here is an OTEL Collector configuration that auto-discovers CloudWatch log groups and forwards them to Obics:
receivers:
  awscloudwatch:
    region: eu-central-1
    logs:
      poll_interval: 1m
      groups:
        autodiscover:
          limit: 100
          prefix: /aws/lambda/
processors:
  batch: {}
  transform/logs: # We name this processor "transform/logs" for clarity, but you can name it anything.
    error_mode: ignore
    log_statements:
      - context: log
        statements:
          - set(attributes["timestamp"], Split(body, "\t")[0])
          - set(attributes["span_id"], Split(body, "\t")[1])
          - set(attributes["level"], Split(body, "\t")[2])
          - set(body, Split(body, "\t")[3])
exporters:
  otlphttp:
    endpoint: "https://ingest.obics.io/api/otel"
    headers:
      x-api-key: "YOUR_OBICS_API_TOKEN"
    timeout: 30s
    encoding: json
    compression: none
  debug: # optional console output for troubleshooting
service:
  pipelines:
    logs:
      receivers: [awscloudwatch]
      processors: [transform/logs, batch]
      exporters: [otlphttp, debug]
You can see in the example above the use of processors and how the original payload is transformed.
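As a rough illustration (the log line below is hypothetical), the transform/logs processor would split a tab-separated Lambda log line like this:

# Incoming CloudWatch log body (fields separated by tab characters):
#   "2024-05-01T12:00:00.000Z\tabc123\tINFO\tUser logged in"
#
# After the transform/logs statements:
#   attributes["timestamp"] = "2024-05-01T12:00:00.000Z"
#   attributes["span_id"]   = "abc123"
#   attributes["level"]     = "INFO"
#   body                    = "User logged in"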
To learn more about Logs sent with OpenTelemetry, see here.
Reserved Fields
When ingesting logs, traces, and metrics, Obics is mostly going to use the received fields as-is. If there's a non-empty field named TraceId, the value will go to the TraceId column. If that column doesn't exist, it will be created.
The attributes map is treated like fields: each attribute will be written to a column. If there's an attribute id, the value will be written to a column named id, and the column will be created if it doesn't exist.
You can use collector processors to modify fields and attributes to fit your Obics schema.
Some fields are processed differently:
- The fields observedTimeUnixNano, timeUnixNano, timestamp (field or attribute), and time are consolidated into the column time. The value is taken in order from time, timestamp, timeUnixNano, or observedTimeUnixNano, stopping at the first valid value. For example, if time exists, the other fields are ignored.
- The field level is saved as UInt8 in Obics, but it can be ingested as a string like "DEBUG", "INFO", "WARN", etc. Obics will know to parse it to a numeric value.
- The field body in Logs will be saved in the column message.
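As a hedged illustration of these rules (the attribute names and values are made up), consider an incoming OTLP/JSON log record like the following:

{
  "timeUnixNano": "1714560000000000000",
  "body": { "stringValue": "Disk usage above 80%" },
  "attributes": [
    { "key": "level", "value": { "stringValue": "WARN" } },
    { "key": "host", "value": { "stringValue": "web-1" } }
  ]
}

Obics would take time from timeUnixNano (since neither time nor timestamp is present), parse "WARN" into the numeric level value, write the body to the message column, and write "web-1" to a host column, creating it if it doesn't exist.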