Chris Lohfink
Cassandra Metrics
About me
• Software developer at DataStax
• OpsCenter, Metrics & Cassandra interactions
What this talk is
• What the things the metrics report actually mean (da dum tis)
• How metrics evolved in C*
Collecting
Not how, but what and why
Cassandra Metrics
• For the most part metrics do not break backwards compatibility
• Until they do (from deprecation or bugs)
• Deprecated metrics are hard to identify without looking at the source
code, so their disappearance may have surprising impacts even if they
have been deprecated for years.
• e.g. the Cassandra 2.2 removal of the “Recent Latency” metrics
C* Metrics Pre-1.1
• Classes implemented MBeans and metrics were added in place
• ColumnFamilyStore -> ColumnFamilyStoreMBean
• Semi-adhoc, tightly coupled to code but had a “theme” or common
abstractions
Latency Tracker
• LatencyTracker stores:
• recent histogram
• total histogram
• number of ops
• total latency
• Uses latency/#ops since the last time it was called to compute the “recent”
average latency
• Every time it is queried, it resets the recent latency and histogram.
Describing Latencies
• Listing the raw values:
13ms, 14ms, 2ms, 13ms, 90ms, 734ms, 8ms, 23ms, 30ms
• Doesn’t scale well
• Not easy to parse; with larger amounts it can be difficult to find high values
Describing Latencies
• Average:
• 103ms
• Missing outliers
• Max: 734ms
• Min: 2ms
Recent Average Latencies
• Reported latency is computed from:
• Sum of latencies since last read
• Number of requests since last read
• Average:
• 103ms
• Outliers lost
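A minimal sketch of that read-and-reset behavior (field and method names here are illustrative, not LatencyTracker's actual API):

```java
// Hedged sketch of the "recent" average: totals accumulate between reads,
// and each read reports the window average, then resets the state.
public class RecentAverageSketch {
    private long totalLatencyMicros = 0;
    private long opCount = 0;

    void record(long latencyMicros) {
        totalLatencyMicros += latencyMicros;
        opCount++;
    }

    // Average since the last call; the reset means a second concurrent
    // reader silently sees wrong numbers (a pitfall of these metrics).
    double readRecentAverage() {
        double avg = opCount == 0 ? 0 : (double) totalLatencyMicros / opCount;
        totalLatencyMicros = 0;
        opCount = 0;
        return avg;
    }
}
```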
Histograms
• Describes frequency of data

1, 2, 1, 1, 3, 4, 3, 1

Value:  1  2  3  4
Count:  4  1  2  1
Histograms
• "bin" the range of values
• divide the entire range of values into a series of intervals
• Count how many values fall into each interval
Histograms
• "bin" the range of values—that is, divide the entire range of values
into a series of intervals—and then count how many values fall into
each interval

13, 14, 2, 20, 13, 90, 734, 8, 53, 23, 30
Sorted: 2, 8, 13, 13, 14, 20, 23, 30, 53, 90, 734

Bin:    1-10  11-100  101-1000
Count:     2       8         1
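As a sketch, binning the sample above into those intervals (the bounds array is just the slide's intervals, not Cassandra code):

```java
// Bin the sample latencies into the intervals 1-10, 11-100, 101-1000.
public class BinExample {
    public static void main(String[] args) {
        long[] upperBounds = {10, 100, 1000};
        long[] counts = new long[upperBounds.length];
        long[] latencies = {13, 14, 2, 20, 13, 90, 734, 8, 53, 23, 30};
        for (long v : latencies) {
            for (int i = 0; i < upperBounds.length; i++) {
                // count the value in the first bin whose upper bound covers it
                if (v <= upperBounds[i]) { counts[i]++; break; }
            }
        }
        System.out.println(java.util.Arrays.toString(counts)); // prints [2, 8, 1]
    }
}
```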
Histograms
Approximations
Max: 1000 (actual 734)
Min: 10 (actual 2)
Average: sum / count, (10*2 + 100*8 + 1000*1) / (2+8+1) = 165 (actual 103)
Percentiles: 11 requests, so we know 90 percent of the latencies occurred in the 11-100 bucket or
lower.
90th Percentile: 100

Bin:    1-10  11-100  101-1000
Count:     2       8         1
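A sketch of that percentile estimate: walk the bins until the target rank is covered and report that bin's upper bound (illustrative, not Cassandra's exact implementation):

```java
public class PercentileSketch {
    // Returns the upper bound of the bin containing the target rank.
    static long percentile(long[] upperBounds, long[] counts, double p) {
        long total = 0;
        for (long c : counts) total += c;
        long rank = (long) Math.ceil(p * total); // 1-based rank of the target value
        long seen = 0;
        for (int i = 0; i < counts.length; i++) {
            seen += counts[i];
            if (seen >= rank) return upperBounds[i];
        }
        return upperBounds[upperBounds.length - 1];
    }

    public static void main(String[] args) {
        long[] bounds = {10, 100, 1000};
        long[] counts = {2, 8, 1};
        System.out.println(percentile(bounds, counts, 0.90)); // prints 100
    }
}
```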
EstimatedHistogram
The series of bucket boundaries starts at 1 and grows by ~1.2x each step
1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 17, 20, 24, 29,
…
12108970, 14530764, 17436917, 20924300, 25109160
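A sketch of generating those bucket boundaries from the stated growth rule (grow ~1.2x, always by at least 1); Cassandra's EstimatedHistogram follows this shape, though its exact rounding may differ:

```java
public class EstimatedHistogramOffsets {
    public static void main(String[] args) {
        long[] offsets = new long[160];
        offsets[0] = 1;
        for (int i = 1; i < offsets.length; i++) {
            long last = offsets[i - 1];
            // grow ~1.2x, but always advance by at least 1
            offsets[i] = Math.max(Math.round(last * 1.2), last + 1);
        }
        // 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 17, 20, 24, 29, ...
        System.out.println(java.util.Arrays.toString(offsets));
    }
}
```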
LatencyTracker
Has two histograms
• Recent
• Count of times a latency occurred since last time read for each bin
• Total
• Count of times a latency occurred since Cassandra started for each bin
Total Histogram Deltas
If you keep track of the histogram from the last time you read it, you can compute the delta to
determine how many occurrences fell in that interval

        1-10  11-100  101-1000
Last       2       8         1
Now        4       8         2
Delta      2       0         1
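As a sketch, the delta is element-wise subtraction of the previous read from the current one:

```java
public class HistogramDelta {
    // Occurrences per bin since the previous read of the total histogram.
    static long[] delta(long[] last, long[] now) {
        long[] d = new long[now.length];
        for (int i = 0; i < now.length; i++)
            d[i] = now[i] - last[i];
        return d;
    }
    // delta({2, 8, 1}, {4, 8, 2}) -> {2, 0, 1}
}
```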
Cassandra 1.1
• Yammer/Codahale/Dropwizard Metrics introduced
• Awesome!
• Not so awesome…
Reservoirs
• Maintain a sample of the data that is representative of the entire set.
• Can perform operations on the limited, fixed-memory set as if on the entire dataset
• Vitter's Algorithm R
• Offers a 99.9% confidence level & 5% margin of error*
• Simple
• Randomly include each value in the reservoir, less and less likely as more values are seen
* When the stream has a normal distribution
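A minimal sketch of Vitter's Algorithm R; the 1028 size matches the Metrics library's default uniform reservoir, the rest is illustrative:

```java
import java.util.concurrent.ThreadLocalRandom;

public class UniformReservoirSketch {
    private final long[] values = new long[1028];
    private long count = 0;

    void update(long value) {
        count++;
        if (count <= values.length) {
            values[(int) (count - 1)] = value;   // fill phase
        } else {
            // keep the new value with probability size/count, so every
            // value seen so far had an equal chance of being sampled
            long r = ThreadLocalRandom.current().nextLong(count);
            if (r < values.length)
                values[(int) r] = value;
        }
    }
}
```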
Metrics Reservoirs
• Random sampling, what can it miss?
– Min
– Max
– Everything in the 99th percentile?
– The rarer the value, the less likely it is to be included
Metrics Reservoirs
• “Good enough” for basic ad hoc viewing but too non-deterministic for many uses
• Commonly resolved using replacement reservoirs (e.g. HdrHistogram)
– org.apache.cassandra.metrics.EstimatedHistogramReservoir
Cassandra 2.2
• CASSANDRA-5657 – upgrade metrics library (and extend it)
– Replaced reservoir with EH
• Also exposed raw bin counts in values operation
– Deleted deprecated metrics
• Non EH latencies from LatencyTracker
Cassandra 2.2
• No recency in histograms
• Currently requires computing deltas of the total bin counts, which is
beyond some simple tooling
• CASSANDRA-11752 (fixed in 2.2.8, 3.0.9, 3.8)
Storage
Storing the data
• We have data, now to store it. Approaches tend to follow:
– Store all data points
• Provide aggregations either pre-computed on ingest, via MapReduce, or at query time
– Round Robin Database
• Only store pre-computed aggregations
• Choice depends heavily on requirements
Round Robin Database
• Store state required to generate the aggregations, and only store the
aggregations
– Sum & Count for Average
– Current min, max
– “One pass” or “online” algorithms
• Constant footprint

        60  300  3600
Sum      0    0     0
Count    0    0     0
Min      0    0     0
Max      0    0     0
Round Robin Database
> 10ms @ 00:00

        60  300  3600
Sum     10   10    10
Count    1    1     1
Min     10   10    10
Max     10   10    10
Round Robin Database
> 10ms @ 00:00
> 12ms @ 00:30

        60  300  3600
Sum     22   22    22
Count    2    2     2
Min     10   10    10
Max     12   12    12
Round Robin Database
> 10ms @ 00:00
> 12ms @ 00:30
> 14ms @ 00:59

        60  300  3600
Sum     36   36    36
Count    3    3     3
Min     10   10    10
Max     14   14    14
Round Robin Database
> 10ms @ 00:00
> 12ms @ 00:30
> 14ms @ 00:59
> 13ms @ 01:10

        60  300  3600
Sum     36   36    36
Count    3    3     3
Min     10   10    10
Max     14   14    14
Round Robin Database
> 10ms @ 00:00
> 12ms @ 00:30
> 14ms @ 00:59
> 13ms @ 01:10

        60  300  3600
Sum     36   36    36
Count    3    3     3
Min     10   10    10
Max     14   14    14

60s interval closes, aggregations emitted:
Average  12
Min      10
Max      14
Round Robin Database
> 10ms @ 00:00
> 12ms @ 00:30
> 14ms @ 00:59
> 13ms @ 01:10

        60  300  3600
Sum      0   36    36
Count    0    3     3
Min      0   10    10
Max      0   14    14
Round Robin Database
> 10ms @ 00:00
> 12ms @ 00:30
> 14ms @ 00:59
> 13ms @ 01:10

        60  300  3600
Sum     13   49    49
Count    1    4     4
Min     13   10    10
Max     13   14    14
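A sketch of the online accumulator behind each column above (assumed structure, not OpsCenter's actual code):

```java
// Constant-footprint "one pass" rollup state: enough to emit
// average/min/max at the end of the interval, then reset.
public class Rollup {
    long sum = 0, count = 0;
    long min = Long.MAX_VALUE, max = Long.MIN_VALUE;

    void update(long v) {
        sum += v;
        count++;
        min = Math.min(min, v);
        max = Math.max(max, v);
    }

    double average() { return count == 0 ? 0 : (double) sum / count; }

    // called when the interval rolls over, after emitting the aggregations
    void reset() {
        sum = 0; count = 0;
        min = Long.MAX_VALUE; max = Long.MIN_VALUE;
    }
}
```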
Max is a lie
• The issue with the deprecated LatencyTracker metrics is that the 1 minute interval
does not have a min/max, so we cannot compute a true min/max:
the rollup's min/max will be the minimum and maximum of the averages
Histograms to the rescue (again)
• The histograms of the data do not have this issue, but storage is
more complex. Some options include:
– Store each bin of the histogram as a metric
– Store the percentiles/min/max each as their own metric
– Store the raw long[90] (possibly compressed)
Histogram Storage Size
• Some things to note:
– “Normal” clusters have over 100 tables.
– Each table has several histograms we want to record:
• Read latency
• Write latency
• Tombstones scanned
• Cells scanned
• Partition cell size
• Partition cell count
Histogram Storage
Because we store the extra histograms we have at least 600 histograms per minute, with
upper bounds seen to be over 24,000.
• Storing 1 metric per bin means 54,000 metrics (expensive to store, expensive to
read)
• Storing raw histograms is 600 metrics
• Storing min, max, 50th, 90th, 99th is 3,000 metrics
– Additional problems with this
• Can't compute 10th, 95th, 99.99th, etc.
• Aggregations
Aggregating Histograms
Averaging the percentiles
[ INSERT DISAPPOINTED GIL TENE PHOTO ]
Aggregating Histograms
• Consider averaging the maximum
If one node has a 10 second GC but the maximum latency on your other 9 nodes
is 60ms, reporting a “max 1 second” latency would be misleading.
• Poor at representing hotspots' effects on your application
One node in a 10 node raspberry pi cluster gets 1000 write reqs/sec while the others get 10
reqs/sec. The node under heavy stress has a 90th percentile of 10 seconds, while the
other nodes are basically sub-ms, with writes taking 1ms at the 90th percentile. This would
report a 1 second 90th percentile, even though ~10% of our application's writes are taking
>10 seconds
Aggregating Histograms
Merging histograms from different nodes more accurately can be straightforward:

          1-10  11-100  101-1000
Node1        2       8         1
Node2        2       1         5
Cluster      4       9         6
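With identical bucket boundaries, the merge is element-wise addition of bin counts, e.g.:

```java
public class HistogramMerge {
    // Counts add bin-by-bin; percentiles are then derived from the merged bins.
    static long[] merge(long[] node1, long[] node2) {
        long[] cluster = new long[node1.length];
        for (int i = 0; i < node1.length; i++)
            cluster[i] = node1[i] + node2[i];
        return cluster;
    }
    // merge({2, 8, 1}, {2, 1, 5}) -> {4, 9, 6}
}
```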
Raw Histogram storage
• Storing raw histograms (160 longs by default) is a minimum of 1.2KB
per rollup and a hard sell
– 760KB per minute (600 tables)
– 7.7GB for the 7 day TTL we want to keep our 1 min rollups at
– ~77GB with 10 nodes
– ~2.3TB on 10 node clusters with 3k tables
– Expired data isn't immediately purged so disk space can be much worse
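Spelling out the sizing arithmetic (a sketch assuming 8-byte longs; it reproduces the slide's figures within rounding):

```latex
\begin{aligned}
160 \text{ longs} \times 8\,\text{B} &= 1280\,\text{B} \approx 1.2\,\text{KB per histogram rollup} \\
600 \times 1280\,\text{B} &\approx 768\,\text{KB per minute} \\
768\,\text{KB/min} \times 60 \times 24 \times 7 &\approx 7.7\,\text{GB per week, per node}
\end{aligned}
```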
Raw Histogram storage
• Goal: We wanted this to be comparable to other min/max/avg metric
storage (12 bytes each)
– 700MB on the expected 10 node cluster
– 2GB on the extreme 10 node cluster
• Enter compression
Compressing Histograms
• Overhead of typical compression makes it a non-starter.
– Headers (e.g. 10 bytes for gzip) alone nearly exceed the length used by
existing rollup storage (~12 bytes per metric)
• Instead we opt to leverage known context to reduce the size of the
data, along with some universal encoding.
Compressing Histograms
• Instead of storing every bin, only store the value of each bin with a value > 0,
since most bins will have no data (e.g. it is very unlikely for a read latency to
fall between 1-10 microseconds, the first 10 bins)
• Write the count of offset/count pairs
• Use varints for the bin counts
– To reduce the value of each varint as much as possible we sort the offset/count
pairs by count and represent the counts as a delta sequence (walked through
below, with a code sketch after it)
Compressing Histograms
Raw bins (90 values, mostly zero):
0 0 0 0 1 0 0 0 100 0 0 9999999 0 0 1 127 128 129 0 0 0 … (all remaining bins 0)

Non-zero bins as offset:count pairs:
{4:1, 8:100, 11:9999999, 14:1, 15:127, 16:128, 17:129}

Sorted by count:
{4:1, 14:1, 8:100, 15:127, 16:128, 17:129, 11:9999999}

Encoded as the pair count, then offset/count pairs with the counts as deltas:
7, 4 1, 14 0, 8 99, 15 27, 16 1, 17 1, 11 9999870
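The sketch promised above: a hedged rendering of this encoding (class/method names and the exact varint format are illustrative; OpsCenter's actual wire format may differ):

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

public class HistogramCodecSketch {
    // Unsigned LEB128-style varint: 7 bits per byte, high bit = "more".
    static void writeVarint(ByteArrayOutputStream out, long v) {
        while ((v & ~0x7FL) != 0) {
            out.write((int) ((v & 0x7F) | 0x80));
            v >>>= 7;
        }
        out.write((int) v);
    }

    static byte[] encode(long[] bins) {
        // Only keep non-zero bins, as offset/count pairs.
        List<long[]> pairs = new ArrayList<>();
        for (int i = 0; i < bins.length; i++)
            if (bins[i] > 0) pairs.add(new long[] { i, bins[i] });
        // Sort by count so the counts can be written as small deltas.
        pairs.sort((a, b) -> Long.compare(a[1], b[1]));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeVarint(out, pairs.size());          // number of pairs
        long prevCount = 0;
        for (long[] p : pairs) {
            writeVarint(out, p[0]);              // bin offset
            writeVarint(out, p[1] - prevCount);  // count delta
            prevCount = p[1];
        }
        return out.toByteArray();
    }
}
```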
Compressing Histograms
Real Life** results of compression:

         Size in bytes
Median        1
75th          3
95th         15
99th         45
Max**       124
Note on HdrHistogram
• Comes up every couple months
• Very awesome histogram, popular replacement for the Metrics reservoir.
– More powerful and general purpose than EH
– Only slightly slower for all it offers
An issue comes up a bit with storage:
• Logged HdrHistograms are ~31KB each (~30,000x more than our average use)
• Compressed version: ~1KB each
• Perfect for many people when tracking one or two metrics. Gets painful when
tracking hundreds or thousands
Questions?

  • 82. Note on HdrHistogram • Comes up every couple months • Very awesome histogram, popular replacement for Metrics reservoir. – More powerful and general purpose than EH – Only slightly slower for all it offers A issue comes up a bit with storage: • Logged HdrHistograms are ~31kb each (30,000x more than our average use) • Compressed version: 1kb each • Perfect for many many people when tracking 1 or two metrics. Gets painful when tracking hundreds or thousands 82