fix: indentation
Indentation needs to align with the beginning of the code block or it formats incorrectly. Also removed code identifiers for non-code code blocks.
brnhensley authored Dec 3, 2025
commit 781968a002da97a7cb503435d4b4ada8af17d315
The processor supports any Redis-compatible cache implementation.
For production deployments, we recommend using cluster mode (sharded) to ensure high availability and scalability. To enable distributed caching, add the `distributed_cache` configuration to your `tail_sampling` processor section:

```yaml
tail_sampling:
  distributed_cache:
    connection:
      address: redis://localhost:6379/0
      password: 'local'
    trace_window_expiration: 30s # Default: how long to wait after last span before evaluating
    in_flight_timeout: 120s # Optional: defaults to trace_window_expiration if not set
    traces_ttl: 3600s # Optional: default 1 hour
    cache_ttl: 7200s # Optional: default 2 hours
    suffix: "itc" # Redis key prefix
    max_traces_per_batch: 500 # Default: traces processed per evaluation cycle
    evaluation_interval: 1s # Default: evaluation frequency
    evaluation_workers: 4 # Default: number of parallel workers (defaults to CPU count)
    data_compression:
      format: lz4 # Optional: compression format (none, snappy, zstd, lz4); lz4 recommended
```

<Callout variant="important">
The `address` parameter must specify a valid Redis-compatible server address using the standard format:

```shell
[output] redis[s]://[[username][:password]@][host][:port][/db-number]
```

Alternatively, you can embed credentials directly in the `address` parameter:

```yaml
tail_sampling:
  distributed_cache:
    connection:
      address: redis://:yourpassword@localhost:6379/0
```

The processor is implemented in Go and uses the [go-redis](https://github.com/redis/go-redis/tree/v9) client library.
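
Because the processor builds on go-redis, a connection string in this format can be sanity-checked outside the collector. A minimal sketch using go-redis v9 (the address and password are placeholders, and this check is separate from the processor itself):

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	// Placeholder address in the same format the processor's `address` setting uses.
	opt, err := redis.ParseURL("redis://:yourpassword@localhost:6379/0")
	if err != nil {
		panic(err)
	}
	fmt.Println(opt.Addr, opt.DB) // localhost:6379 0

	// Optional reachability check before pointing the processor at the cache.
	client := redis.NewClient(opt)
	defer client.Close()
	if err := client.Ping(context.Background()).Err(); err != nil {
		panic(err)
	}
	fmt.Println("connection OK")
}
```

If the parse or ping fails here, the same `address` is unlikely to work in the `connection` block either.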
Proper Redis instance sizing is critical for optimal performance.

### Memory estimation formula

```
Total Memory = (Trace Data) + (Decision Caches) + (Overhead)
```

#### 1. Trace data storage
Trace data is stored in Redis for the full `traces_ttl` period to support late-arriving spans.

**Example calculation**: At 10,000 spans/second with 1-hour `traces_ttl`:

```
10,000 spans/sec × 3600 sec × 900 bytes = 32.4 GB
```

**With lz4 compression** (we have observed a 25% reduction):

```
32.4 GB × 0.75 = 24.3 GB
```
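
To plug in your own workload numbers, the same arithmetic can be expressed as a short Go sketch (the span rate, span size, and compression factor below are just the example values from this section):

```go
package main

import "fmt"

func main() {
	// Example values from this section; substitute your own workload figures.
	const (
		spansPerSecond   = 10_000
		tracesTTLSeconds = 3_600 // traces_ttl of 1 hour
		bytesPerSpan     = 900   // average span size assumed in this guide
		lz4Factor        = 0.75  // ~25% reduction observed with lz4
	)

	uncompressed := float64(spansPerSecond * tracesTTLSeconds * bytesPerSpan)
	compressed := uncompressed * lz4Factor

	fmt.Printf("trace data (uncompressed): %.1f GB\n", uncompressed/1e9) // 32.4 GB
	fmt.Printf("trace data (lz4):          %.1f GB\n", compressed/1e9)   // ~24.3 GB
}
```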

When using `distributed_cache`, the decision caches are stored in Redis.

**Example calculation**: 500 traces per batch (default) with 20 spans per trace on average:

```
500 × 20 × 900 bytes = 9 MB per batch
```
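
As with the trace-data estimate, a small Go sketch reproduces this per-batch figure (the spans-per-trace and span-size values are the example assumptions above):

```go
package main

import "fmt"

func main() {
	// Example values from above; spans per trace and span size are assumptions.
	const (
		tracesPerBatch = 500 // max_traces_per_batch default
		spansPerTrace  = 20  // assumed average for this example
		bytesPerSpan   = 900
	)

	perBatch := tracesPerBatch * spansPerTrace * bytesPerSpan
	fmt.Printf("working memory per evaluation batch: %.1f MB\n", float64(perBatch)/1e6) // 9.0 MB
}
```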

The processor uses a cascading TTL structure, with each level providing protection (see the sketch after this list):
4. **`cache_ttl`** (default: 2 hours)
- Redis key expiration for decision cache entries (sampled/non-sampled)
- Prevents duplicate evaluation for late-arriving spans
- Defined via `distributed_cache.cache_ttl`
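
Only the fourth level is listed here; based on the defaults in the configuration reference above, the full cascade presumably runs `trace_window_expiration` → `in_flight_timeout` → `traces_ttl` → `cache_ttl`. The Go sketch below just illustrates that ordering with the default and example values; it is an interpretation of the defaults, not a constraint the processor enforces:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Defaults and example values from the configuration reference above.
	traceWindowExpiration := 30 * time.Second
	inFlightTimeout := 120 * time.Second // optional; falls back to trace_window_expiration
	tracesTTL := 1 * time.Hour           // traces_ttl default
	cacheTTL := 2 * time.Hour            // cache_ttl default

	// Each level lasts at least as long as the one before it, so decisions stay
	// available while the spans they cover can still arrive.
	ordered := traceWindowExpiration <= inFlightTimeout &&
		inFlightTimeout <= tracesTTL &&
		tracesTTL <= cacheTTL
	fmt.Println(ordered) // true
}
```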