10 Client-Side Logging Best Practices

By: Ron Miller | Published: December 23, 2024

As modern web clients get more complex, logging takes on a bigger role. After all, it’s your eyes and ears on the ground, so to speak. And like everything in software, there are many nuances that make all the difference. So let’s dive into some best practices you should absolutely consider when building your logging infrastructure.

1. You should be logging client web sessions

While application logs on the backend are industry standard, client-side logging is not quite there yet. But logging has huge benefits in many aspects. Here are some of those benefits and use cases where client-side logging can really help out:

  • Debugging - Logs capture the sequence of events leading up to an error in production, making bugs far easier to reproduce and fix.
  • Metadata - Logging browser details, screen resolution, language, etc., helps reproduce bugs and provides useful statistics.
  • Error tracking - Capturing client-side errors and exceptions is far better than ignoring them.
  • User behavior - Tracking how users interact with your app can inform important business decisions.
  • A/B testing - Logging adds client-side visibility into how each variant actually behaves.
  • Performance - Capturing metrics like page load times and API call latency surfaces performance problems you might not be aware of.
  • Security - Logs help capture anomalies and discover security risks.
  • Compliance - Logging is sometimes required for compliance and auditing.

2. Don't expose your API key—use a proxy server instead

Since writing to a file isn’t an option in a browser, web apps send logs with HTTP requests. Whatever your logging platform, the protocol usually involves an API key for authorization, added to each HTTP request as a header. But the API key is a secret: if you send it from the client, a malicious actor can get ahold of it.

Although the easiest way to send logs is to send them directly from the client, you’ll be exposing yourself to a security risk. A better way is to send them via a proxy server. A proxy server is an API endpoint for your app that re-sends the logs to the logging database. Since the API key is on the server side, it’s hidden from malicious actors. Your API endpoint should also be secured with something like OAuth.
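As a sketch, the proxy can be a thin endpoint on your own backend that re-sends each batch with the secret key attached. The endpoint URL, header names, and LOGGING_API_KEY environment variable below are illustrative assumptions, not any specific vendor’s API:

```javascript
// Server-side proxy sketch (Node 18+). The endpoint URL is hypothetical;
// substitute your logging vendor's ingest API.
const LOGGING_ENDPOINT = 'https://ingest.example-logs.com/v1/logs';
const API_KEY = process.env.LOGGING_API_KEY; // secret stays on the server

// Build the headers for the forwarded request; the browser never sees them.
function buildForwardHeaders(apiKey) {
  return {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${apiKey}`,
  };
}

// Called by your /api/logs route handler with the batch the client posted.
async function forwardLogs(batch) {
  return fetch(LOGGING_ENDPOINT, {
    method: 'POST',
    headers: buildForwardHeaders(API_KEY),
    body: JSON.stringify(batch),
  });
}
```

The client only ever talks to your own /api/logs route, which should itself be protected (for example with OAuth, as noted above).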

3. Send your logs in time-based batches

Logs tend to pile up very fast. One moment you’re logging a single interaction per minute, and the next, you’ve got thousands of logs per second. At a certain point, there’s no reasonable way to send an HTTP request for each log, especially on a web client, where the browser limits the number of concurrent HTTP requests.

The natural solution is to send multiple logs in each request as batches. But it’s important to batch logs by time intervals. You might also batch by size, but the time interval is a must. If you don’t limit by time, it’s easy to lose the last logs before the browser session ends. This is especially true in cases where your JavaScript gets stuck in an endless loop—when you really would’ve liked to see those last logs, but they never come (though, to be fair, in a tight loop, JavaScript won’t send your logs anyway).
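A minimal sketch of time- plus size-based batching, assuming a send function you supply (for example, one that POSTs the batch to your proxy endpoint); all names here are illustrative:

```javascript
// Buffer log entries and flush them either every intervalMs or as soon
// as the buffer reaches maxSize, whichever comes first.
class BatchLogger {
  constructor(send, { maxSize = 50, intervalMs = 5000 } = {}) {
    this.send = send; // e.g. batch => fetch('/api/logs', { method: 'POST', ... })
    this.maxSize = maxSize;
    this.buffer = [];
    this.timer = setInterval(() => this.flush(), intervalMs);
  }

  log(entry) {
    this.buffer.push(entry);
    if (this.buffer.length >= this.maxSize) this.flush(); // size-based flush
  }

  flush() {
    if (this.buffer.length === 0) return;
    const batch = this.buffer.splice(0); // empty the buffer atomically
    this.send(batch); // one HTTP request carries many logs
  }
}
```

In a browser you’d also flush on the pagehide event, ideally with navigator.sendBeacon, so the final batch has a chance to leave before the tab closes.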

4. Capture time on the client side

Most log management solutions will accept your logs without a timestamp. If a timestamp is missing, they’ll add the time when the log was received on the server rather than when the log was actually written. Even if the time difference is less than a second, it’s still important to timestamp your logs on the client side to preserve the order of events. Your client-side logs are going to be mixed with server-side logs, and you’ll want to look at them in the order they occurred.
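For example, stamping each entry at the moment it is written (an ISO 8601 UTC string keeps both ordering and parsing simple):

```javascript
// Stamp each log entry when it's written, not when it's sent.
function makeLogEntry(level, message) {
  return {
    level,
    message,
    timestamp: new Date().toISOString(), // ISO 8601, UTC, millisecond precision
  };
}
```

Even if entries then sit in a batch buffer for a few seconds before being sent, each one keeps the time it was actually created.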

5. Log web session IDs

A web session ID identifies the lifespan of a browser session. A session starts when a user enters your site and ends when the browser tab is closed. Logging that ID with every log makes it easier to figure out problems and perform aggregations. If there's an issue, you can filter the logs of that specific session. There are also many types of aggregations you can do based on web sessions, like “the average amount of time spent on site” or “the count of unique sessions this week.” It’s very useful to add the web session ID in a structured way with every log as a separate field. This allows easy filtering, grouping, and joining on that field later.

6. Propagate trace IDs and span IDs

In a distributed system like microservices, it’s useful to have a single identifier that tracks a request or transaction as it travels between multiple services. That would be a trace_id, as defined in OpenTelemetry. The best place to create the trace ID is on the client side since it’s the origin of the request.

Creating a trace ID and logging it should be done as early as possible. For example, let’s say a user clicked the “Buy” button on your ecommerce site, which eventually triggered a request. A bunch of client-side things happened before that request, and you’ll want to know about those things when looking into that trace. You can generate a trace ID at the very start of the “Buy” button event handler and keep logging it all the way up to the triggered request—and even afterward for the request response.

To create a trace ID, just generate a GUID in your favorite way, like const traceId = crypto.randomUUID();. Next, attach it in the header of your requests. You can use a custom header like X-Trace-ID, but you might as well use the W3C and OpenTelemetry standard, which means using the traceparent header. Note that traceparent expects the trace ID as 32 lowercase hex characters (so strip the dashes from the GUID) and a non-zero 16-hex-character parent span ID:

const traceId = crypto.randomUUID().replaceAll('-', '');
// Any non-zero 16-hex-character ID works as the parent span ID
const spanId = traceId.slice(0, 16);

fetch('/api/data', {
  headers: {
    'traceparent': `00-${traceId}-${spanId}-01`
  }
});

7. Look out for OpenTelemetry’s browser SDK

OpenTelemetry has an experimental browser SDK. When installed, it can instrument your code to automatically log events like page load times, user interactions, and HTTP requests. Until now, such instrumentation was mostly available through commercial solutions like LogRocket or Sentry. OpenTelemetry’s solution is both free and keeps you vendor-neutral.

8. Capture console output

Weird errors can happen in production environments, and they’ll often show up in the console output. These might originate from browser-specific quirks, plugin conflicts, network issues, deprecation warnings, and all kinds of other problems you can’t anticipate. With just a few lines of code, you can gain visibility into all of it.

// somewhere in app startup
const originalWarn = console.warn;
const originalError = console.error;

// Override and capture
console.warn = function(...args) {
  myLogger.warn(args);
  originalWarn.apply(console, args);
};

console.error = function(...args) {
  myLogger.error(args);
  originalError.apply(console, args);
};

When an unhandled exception happens, console error logs include a stack trace and exception details, making them an easy way to capture crucial information. However, if you’ve minified your code (as you should), the stack trace will point to the minified code, not the original code. To fix that, you can deploy source maps alongside the minified code, but if those maps are publicly served, your original source effectively becomes public too. Some setups avoid this by restricting access to the map files or uploading them only to the logging backend.

Note that args is an array because console.error accepts any number of arguments, which the rest parameter collects into an array. You might want to serialize it into a single string or JSON with something like myLogger.warn(JSON.stringify(args)).
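A hedged sketch of such serialization, handling two common snags: Error objects (whose useful fields don’t survive JSON.stringify) and circular references (which make JSON.stringify throw):

```javascript
// Serialize console arguments into one string. Error objects keep their
// stack, and circular references are replaced rather than throwing.
function serializeArgs(args) {
  return args
    .map(arg => {
      if (arg instanceof Error) return arg.stack || `${arg.name}: ${arg.message}`;
      if (typeof arg === 'object' && arg !== null) {
        const seen = new WeakSet();
        return JSON.stringify(arg, (key, value) => {
          if (typeof value === 'object' && value !== null) {
            if (seen.has(value)) return '[Circular]';
            seen.add(value);
          }
          return value;
        });
      }
      return String(arg);
    })
    .join(' ');
}
```

Inside the overridden console methods above, you’d then call something like myLogger.error(serializeArgs(args)).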

9. Use structured logging

Structured logging means recording log entries as structured data rather than plain text messages. The common practice is to use JSON format. A typical structured log looks like this:

{
  "message": "User logged in",
  "userId": "user123",
  "timestamp": "2024-12-19T12:34:56Z",
  "level": "info"
}

Structured logs are so popular because they are machine-readable, which makes analytics on the backend easy and fast. Once your backend log management tool can parse the fields, you can quickly search, filter, and aggregate them.

Another approach is using parameterized logs. This is a bit different from structured logs because the data isn’t in JSON format. Instead, it’s in the same format every time, making it easy for a computer to read in bulk. For example:

log("User %s logged in from %s", username, ipAddress);

One advantage parameterized logs have over structured JSON logs is that the rendered output stays human-readable.

You can also take a hybrid approach, where there’s structured metadata in JSON, but the main log details are in a parameterized log. For example, the log above would look like this:

{
  "timestamp": "2024-12-19T12:34:56Z",
  "level": "Information",
  "sessionId": "5228f2eb-fa88-41e8-a8e2-658a8360af03",
  "traceId": "bbc63a5d-7430-4baa-a53d-6f280a0b469d",
  "messageTemplate": "User {Username} logged in from {IpAddress}",
  "renderedMessage": "User jdoe logged in from 192.168.0.10",
  "properties": {
    "Username": "jdoe",
    "IpAddress": "192.168.0.10"
  }
}
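Rendering such a message template is straightforward; here is a minimal sketch, where the {Placeholder} syntax mirrors the example above:

```javascript
// Render "User {Username} logged in from {IpAddress}" style templates
// against a properties object; unknown placeholders are left as-is.
function renderTemplate(template, properties) {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    key in properties ? String(properties[key]) : match
  );
}
```

Storing both the template and the rendered message, as in the example above, lets the backend group logs by template while humans still read the rendered form.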

10. Use log IDs

A log ID is a static identifier for each log statement in the source code. It doesn’t change even if the log moves to a different line of code, file, or function. If you’re logging from multiple services to a centralized logging system (as you should be), this identifier will remain unique, even across many services. In your logging backend, the log ID will have a dedicated field or column, allowing you to filter by it. For example:

log.info(message, "e7yT3");

Setting up automatic log ID generation requires some initial effort, but it’s worth it because filtering, aggregation, and querying will be much easier and faster afterward. Log IDs replace the need for the “log patterns” feature in a more reliable way. And if your log solution supports OLAP queries, log IDs can help find correlations between events or answer complex questions.
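As a sketch, the ID can simply travel as its own structured field; the function shape below is an assumption for illustration, not a specific library’s API:

```javascript
// Build a structured entry whose logId is a static, per-statement identifier
// that survives refactoring even when the statement moves between files.
function makeIdentifiedLog(logId, level, message) {
  return { logId, level, message, timestamp: new Date().toISOString() };
}

// Each call site hard-codes its own ID:
const entry = makeIdentifiedLog('e7yT3', 'info', 'Checkout completed');
```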

Finishing up

I hope this post helped you in some way. While writing this, I realized there are so many more best practices to cover that one post isn’t enough—like choosing the right vendor, compliance, and sampling strategies. But we’ll leave those for later.
