
CockroachDB Performance Characteristics With YCSB(A) Benchmark



Today, I'll talk about CockroachDB and the ever-so-popular YCSB benchmark by stepping through workload A in the YCSB suite of workloads against CockroachDB.


Through 20.1, CockroachDB has used RocksDB as its storage engine. RocksDB is a key-value store and a foundational piece of the CockroachDB architecture. YCSB is an industry-adopted benchmark for testing the performance of various databases and key-value stores, and its workloada mixes updates and reads at roughly a 50/50 split. We're going to test the performance of CockroachDB with workloada on a nine-node AWS cluster in a single region.
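For reference, the read/update split and the default key distribution come from the stock workloads/workloada properties file that ships with YCSB. The values below are roughly what the 0.17.0 release contains; check your copy, as the exact contents may differ.

grep -E 'proportion|requestdistribution' workloads/workloada
# readproportion=0.5
# updateproportion=0.5
# scanproportion=0
# insertproportion=0
# requestdistribution=zipfian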

We're going to look at the performance of RocksDB in 20.1 and then at what Pebble, our newly announced RocksDB replacement, can do for performance in the upcoming 20.2 release.

The side story here is a POC I recently conducted where a prospect had put in place a set of success criteria based on their internal benchmarks of a different database engine. The NoSQL database in question boasted excellent performance characteristics, on the order of 60-100k QPS across 9 nodes in a single AWS region, with a YCSB client running on every node in the cluster. Their machine type was i3.2xlarge; CockroachDB prefers CPU over RAM, so we're opting for double the CPU with c5d.4xlarge. I am using CockroachDB 20.1.5 for my test.

Deploy CockroachDB in AWS

I will spare you the details of deploying CockroachDB in AWS and defer to our awesome documentation. All you need to know is that the machine type is c5d.4xlarge, there are 9 nodes in a single region, and three external client nodes run the YCSB workload so that all available resources are left to the CockroachDB nodes.
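For context, starting each node looked roughly like the following. This is a minimal, insecure sketch with placeholder addresses and store paths, not the full production setup from the docs.

cockroach start \
  --insecure \
  --store=/mnt/data1/cockroach \
  --listen-addr=<node-private-ip>:26257 \
  --join=<node1-ip>:26257,<node2-ip>:26257,<node3-ip>:26257 \
  --background
# Run once, from any one host, after all nodes are up:
cockroach init --insecure --host=<node1-ip>:26257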

When the cluster starts, connect to the cluster and create the YCSB table.

CREATE TABLE usertable (
    ycsb_key VARCHAR(255) PRIMARY KEY NOT NULL,
    FIELD0 TEXT NOT NULL,
    FIELD1 TEXT NOT NULL,
    FIELD2 TEXT NOT NULL,
    FIELD3 TEXT NOT NULL,
    FIELD4 TEXT NOT NULL,
    FIELD5 TEXT NOT NULL,
    FIELD6 TEXT NOT NULL,
    FIELD7 TEXT NOT NULL,
    FIELD8 TEXT NOT NULL,
    FIELD9 TEXT NOT NULL,
    FAMILY (ycsb_key),
    FAMILY (FIELD0),
    FAMILY (FIELD1),
    FAMILY (FIELD2),
    FAMILY (FIELD3),
    FAMILY (FIELD4),
    FAMILY (FIELD5),
    FAMILY (FIELD6),
    FAMILY (FIELD7),
    FAMILY (FIELD8),
    FAMILY (FIELD9)
);
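Note that each field is placed in its own column family. CockroachDB stores each column family as a separate key-value pair, so a workload A update that touches a single field only has to rewrite that one family rather than the whole row, which keeps write amplification down.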

Change the Configuration of Lock Delay To Detect Coordinator Failures of Conflicting Transactions

Only use this setting if the Zipfian distribution is used and clients run with more than 128 threads. Otherwise, skip this step.

SET CLUSTER SETTING kv.lock_table.coordinator_liveness_push_delay = '50ms';

This setting is not marked as a public API and is not documented; for now, the CockroachDB source code is the best reference for it.
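If you do change it, you can confirm the new value took effect from any SQL session. A minimal sketch, assuming an insecure cluster and a placeholder host:

cockroach sql --insecure --host=<node1-ip>:26257 \
  -e "SHOW CLUSTER SETTING kv.lock_table.coordinator_liveness_push_delay;"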

Configure YCSB

Deploy the YCSB suite on the client machines that are not running CockroachDB.

SSH to the hosts, in my case nodes 10, 11, and 12. We're going to download the YCSB suite, export its path, download the PostgreSQL JDBC driver, and install the prerequisites.

curl --location https://github.com/brianfrankcooper/YCSB/releases/download/0.17.0/ycsb-jdbc-binding-0.17.0.tar.gz | gzip -dc - | tar -xvf -
export YCSB=~/ycsb-jdbc-binding-0.17.0
cd $YCSB/lib
curl -O --location https://jdbc.postgresql.org/download/postgresql-42.2.12.jar
sudo apt-get update && sudo apt-get install -y default-jre python

Repeat for nodes 11 and 12.
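Instead of repeating the steps by hand, a small loop like the hypothetical one below can drive the same setup on the remaining client hosts. The ubuntu user and the host placeholders are assumptions about your environment.

for H in <client2-ip> <client3-ip>; do
  ssh "ubuntu@$H" '
    curl --location https://github.com/brianfrankcooper/YCSB/releases/download/0.17.0/ycsb-jdbc-binding-0.17.0.tar.gz | tar -xzf - &&
    cd ycsb-jdbc-binding-0.17.0/lib &&
    curl -O --location https://jdbc.postgresql.org/download/postgresql-42.2.12.jar &&
    sudo apt-get update && sudo apt-get install -y default-jre python'
done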

Warm Up the Leaseholders

In CockroachDB nomenclature, leaseholders are the replicas in charge of coordinating writes and reads for their ranges.

You will need the list of IPs for each of the nodes in the cluster to construct a JDBC URL. The following is my list of IPs; ignore the last three, which are the client nodes.

10.12.20.65
10.12.29.101
10.12.17.181
10.12.21.30
10.12.23.9
10.12.16.241
10.12.21.44
10.12.17.10
10.12.21.101
10.12.31.155
10.12.31.110
10.12.16.177
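These host:port pairs are easy to mistype (I did exactly that later, in the 20.2 run), so a small hypothetical helper like this can assemble the JDBC URL from the list instead:

NODES="10.12.20.65 10.12.29.101 10.12.17.181 10.12.21.30 10.12.23.9 10.12.16.241 10.12.21.44 10.12.17.10 10.12.21.101"
HOSTS=$(for N in $NODES; do printf '%s:26257,' "$N"; done)
DB_URL="jdbc:postgresql://${HOSTS%,}/defaultdb?autoReconnect=true&sslmode=disable&ssl=false&reWriteBatchedInserts=true&loadBalanceHosts=true"
echo "$DB_URL"

The resulting $DB_URL can then be passed to ycsb as -p db.url="$DB_URL".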

Given the node IPs, we can construct the load phase of the workload with the following command:

bin/ycsb load jdbc -s -P workloads/workloada -p db.driver=org.postgresql.Driver -p db.url="jdbc:postgresql://10.12.20.65:26257,10.12.29.101:26257,10.12.17.181:26257,10.12.21.30:26257,10.12.23.9:26257,10.12.16.241:26257,10.12.21.44:26257,10.12.17.10:26257,10.12.21.101:26257/defaultdb?autoReconnect=true&sslmode=disable&ssl=false&reWriteBatchedInserts=true&loadBalanceHosts=true" -p db.user=root -p db.passwd="" -p jdbc.fetchsize=10 -p jdbc.autocommit=true -p jdbc.batchupdateapi=true -p db.batchsize=128 -p recordcount=1000000 -p threadcount=32 -p operationcount=10000000

Execute it from one of the client nodes.

2020-08-24 14:55:41:431 10 sec: 447968 operations; 44796.8 current ops/sec; est completion in 13 seconds [INSERT: Count=447968, Max=223871, Min=3, Avg=698.46, 90=3, 99=99, 99.9=110975, 99.99=177535]
2020-08-24 14:55:51:431 20 sec: 910944 operations; 46297.6 current ops/sec; est completion in 2 second [INSERT: Count=462976, Max=166143, Min=3, Avg=692.56, 90=3, 99=12, 99.9=107071, 99.99=133503]
2020-08-24 14:55:53:594 22 sec: 1000000 operations; 41172.45 current ops/sec; [CLEANUP: Count=32, Max=95871, Min=4336, Avg=33047.88, 90=76479, 99=95871, 99.9=95871, 99.99=95871] [INSERT: Count=89056, Max=139135, Min=3, Avg=658.79, 90=3, 99=12, 99.9=98623, 99.99=120127]
[OVERALL], RunTime(ms), 22163
[OVERALL], Throughput(ops/sec), 45120.24545413527
[TOTAL_GCS_PS_Scavenge], Count, 15
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 106
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.4782746018138339
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 15
[TOTAL_GC_TIME], Time(ms), 106
[TOTAL_GC_TIME_%], Time(%), 0.4782746018138339
[CLEANUP], Operations, 32
[CLEANUP], AverageLatency(us), 33047.875
[CLEANUP], MinLatency(us), 4336
[CLEANUP], MaxLatency(us), 95871
[CLEANUP], 95thPercentileLatency(us), 86335
[CLEANUP], 99thPercentileLatency(us), 95871
[INSERT], Operations, 1000000
[INSERT], AverageLatency(us), 692.195737
[INSERT], MinLatency(us), 3
[INSERT], MaxLatency(us), 223871
[INSERT], 95thPercentileLatency(us), 5
[INSERT], 99thPercentileLatency(us), 66
[INSERT], Return=OK, 7808
[INSERT], Return=BATCHED_OK, 992192
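Before starting the timed runs, you can check that the ranges and leaseholders for the table are spread across the nodes. A minimal sketch, again assuming an insecure cluster and a placeholder host:

cockroach sql --insecure --host=<node1-ip>:26257 \
  -e "SHOW RANGES FROM TABLE usertable;"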

Benchmark

My earlier tests had shown that adding additional clients to execute the workload produced better results than running a single client. Additionally, testing with various threadcount values showed the best performance at 128 threads, with diminishing returns beyond that.
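That threadcount came from trial and error. A hedged sketch of such a sweep, reusing the db.url built earlier and shortening maxexecutiontime just for the comparison, might look like this:

for T in 32 64 128 256; do
  bin/ycsb run jdbc -s -P workloads/workloada \
    -p db.driver=org.postgresql.Driver -p db.url="$DB_URL" \
    -p db.user=root -p db.passwd="" \
    -p recordcount=1000000 -p operationcount=10000000 \
    -p threadcount=$T -p maxexecutiontime=60 \
    -p requestdistribution=uniform > run_threads_${T}.log 2>&1
  grep 'Throughput' run_threads_${T}.log
done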

Another point to note is that YCSB uses a Zipfian key distribution by default, which yields worse performance than a uniform key distribution. For the sake of brevity, I am going to focus only on the uniform distribution. Given all of these conditions, the following command should be executed on all of the available clients as the transaction phase of the workload.

bin/ycsb run jdbc -s -P workloads/workloada -p db.driver=org.postgresql.Driver -p db.url="jdbc:postgresql://10.12.20.65:26257,10.12.29.101:26257,10.12.17.181:26257,10.12.21.30:26257,10.12.23.9:26257,10.12.16.241:26257,10.12.21.44:26257,10.12.17.10:26257,10.12.21.101:26257/defaultdb?autoReconnect=true&sslmode=disable&ssl=false&reWriteBatchedInserts=true&loadBalanceHosts=true" -p db.user=root -p db.passwd="" -p db.batchsize=128  -p jdbc.fetchsize=10 -p jdbc.autocommit=true -p jdbc.batchupdateapi=true -p recordcount=1000000 -p operationcount=10000000  -p threadcount=128 -p maxexecutiontime=180 -p requestdistribution=uniform

Client 1

[OVERALL], RunTime(ms), 180056
[OVERALL], Throughput(ops/sec), 38026.78611098769
[TOTAL_GCS_PS_Scavenge], Count, 80
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 185
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.10274581241391566
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 80
[TOTAL_GC_TIME], Time(ms), 185
[TOTAL_GC_TIME_%], Time(%), 0.10274581241391566
[READ], Operations, 3425678
[READ], AverageLatency(us), 1402.806852541307
[READ], MinLatency(us), 307
[READ], MaxLatency(us), 347903
[READ], 95thPercentileLatency(us), 4503
[READ], 99thPercentileLatency(us), 8935
[READ], Return=OK, 3425678
[CLEANUP], Operations, 128
[CLEANUP], AverageLatency(us), 217.5234375
[CLEANUP], MinLatency(us), 27
[CLEANUP], MaxLatency(us), 1815
[CLEANUP], 95thPercentileLatency(us), 1392
[CLEANUP], 99thPercentileLatency(us), 1654
[UPDATE], Operations, 3421273
[UPDATE], AverageLatency(us), 5323.34846093837
[UPDATE], MinLatency(us), 756
[UPDATE], MaxLatency(us), 396543
[UPDATE], 95thPercentileLatency(us), 13983
[UPDATE], 99thPercentileLatency(us), 22655
[UPDATE], Return=OK, 3421273

Client 2

[OVERALL], RunTime(ms), 180062
[OVERALL], Throughput(ops/sec), 37611.283891104176
[TOTAL_GCS_PS_Scavenge], Count, 80
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 199
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.1105174884206551
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 80
[TOTAL_GC_TIME], Time(ms), 199
[TOTAL_GC_TIME_%], Time(%), 0.1105174884206551
[READ], Operations, 3387837
[READ], AverageLatency(us), 1425.872327977999
[READ], MinLatency(us), 304
[READ], MaxLatency(us), 352255
[READ], 95thPercentileLatency(us), 4567
[READ], 99thPercentileLatency(us), 9095
[READ], Return=OK, 3387837
[CLEANUP], Operations, 128
[CLEANUP], AverageLatency(us), 486.4453125
[CLEANUP], MinLatency(us), 22
[CLEANUP], MaxLatency(us), 3223
[CLEANUP], 95thPercentileLatency(us), 2945
[CLEANUP], 99thPercentileLatency(us), 3153
[UPDATE], Operations, 3384526
[UPDATE], AverageLatency(us), 5372.31603893721
[UPDATE], MinLatency(us), 721
[UPDATE], MaxLatency(us), 394751
[UPDATE], 95thPercentileLatency(us), 14039
[UPDATE], 99thPercentileLatency(us), 22703
[UPDATE], Return=OK, 3384526

Client 3

[OVERALL], RunTime(ms), 180044
[OVERALL], Throughput(ops/sec), 38504.49334607096
[TOTAL_GCS_PS_Scavenge], Count, 92
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 164
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.09108884494901245
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 92
[TOTAL_GC_TIME], Time(ms), 164
[TOTAL_GC_TIME_%], Time(%), 0.09108884494901245
[READ], Operations, 3465232
[READ], AverageLatency(us), 1392.0378548968727
[READ], MinLatency(us), 303
[READ], MaxLatency(us), 335359
[READ], 95thPercentileLatency(us), 4475
[READ], 99thPercentileLatency(us), 8911
[READ], Return=OK, 3465232
[CLEANUP], Operations, 128
[CLEANUP], AverageLatency(us), 963.7265625
[CLEANUP], MinLatency(us), 38
[CLEANUP], MaxLatency(us), 3945
[CLEANUP], 95thPercentileLatency(us), 3769
[CLEANUP], 99thPercentileLatency(us), 3921
[UPDATE], Operations, 3467271
[UPDATE], AverageLatency(us), 5247.070255541029
[UPDATE], MinLatency(us), 720
[UPDATE], MaxLatency(us), 401663
[UPDATE], 95thPercentileLatency(us), 13879
[UPDATE], 99thPercentileLatency(us), 22575
[UPDATE], Return=OK, 3467271
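Summing the per-client throughput for this 20.1.5/RocksDB run gives roughly 38,027 + 37,611 + 38,504 ≈ 114,000 ops/sec across the cluster, comfortably above the 60-100k QPS range mentioned earlier.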

Test the Same Workload With 20.2.0-beta.1 and the Pebble Storage Engine

We're going to rebuild the test now using the latest 20.2 beta 1 and see whether Pebble, the storage engine replacing RocksDB, has any positive effect on this workload.

The new IP list is as follows:

10.12.21.64
10.12.16.87
10.12.23.78
10.12.31.249
10.12.23.120
10.12.24.143
10.12.24.2
10.12.17.108
10.12.18.94
10.12.24.224
10.12.26.64
10.12.28.166

Once the cluster starts on 20.2, a quick way to check whether we're using Pebble is to grep the logs.

Navigate to one of the nodes and check cockroach.log for the storage engine string.

grep 'storage engine' cockroach.log
I200917 15:56:22.443338 89 server/config.go:624 ⋮ [n?] 1 storage engine‹› initialized
storage engine:      pebble

Perform all of the prerequisite steps as above. Pay attention to the JDBC URLs: at one point I fat-fingered the IPs and caused a severe performance degradation because the benchmark was targeting only a few of the nodes.
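A quick, hypothetical sanity check before kicking off a long run is to confirm that every host in the URL actually answers on port 26257 (this assumes nc is installed on the client):

for IP in 10.12.21.64 10.12.16.87 10.12.23.78 10.12.31.249 10.12.23.120 \
          10.12.24.143 10.12.24.2 10.12.17.108 10.12.18.94; do
  nc -z -w 2 "$IP" 26257 && echo "$IP ok" || echo "$IP UNREACHABLE"
done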

First, load the data:

bin/ycsb load jdbc -s -P workloads/workloada -p db.driver=org.postgresql.Driver -p db.url="jdbc:postgresql://10.12.21.64:26257,10.12.16.87:26257,10.12.23.78:26257,10.12.31.249:26257,10.12.23.120:26257,10.12.24.143:26257,10.12.24.2:26257,10.12.17.108:26257,10.12.18.94:26257/defaultdb?autoReconnect=true&sslmode=disable&ssl=false&reWriteBatchedInserts=true&loadBalanceHosts=true" -p db.user=root -p db.passwd="" -p jdbc.fetchsize=10 -p jdbc.autocommit=true -p jdbc.batchupdateapi=true -p db.batchsize=128 -p recordcount=1000000 -p threadcount=32 -p operationcount=10000000
[OVERALL], RunTime(ms), 75381
[OVERALL], Throughput(ops/sec), 13265.942346214564
[TOTAL_GCS_PS_Scavenge], Count, 15
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 92
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.12204666958517399
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 15
[TOTAL_GC_TIME], Time(ms), 92
[TOTAL_GC_TIME_%], Time(%), 0.12204666958517399
[CLEANUP], Operations, 32
[CLEANUP], AverageLatency(us), 141524.4375
[CLEANUP], MinLatency(us), 7964
[CLEANUP], MaxLatency(us), 390655
[CLEANUP], 95thPercentileLatency(us), 307199
[CLEANUP], 99thPercentileLatency(us), 390655
[INSERT], Operations, 1000000
[INSERT], AverageLatency(us), 2346.44805
[INSERT], MinLatency(us), 3
[INSERT], MaxLatency(us), 18563071
[INSERT], 95thPercentileLatency(us), 7
[INSERT], 99thPercentileLatency(us), 71
[INSERT], Return=OK, 7808
[INSERT], Return=BATCHED_OK, 992192

After loading the data and downloading the YCSB suite as well as the PostgreSQL driver on all of the clients, we can begin the benchmark.

bin/ycsb run jdbc -s -P workloads/workloada -p db.driver=org.postgresql.Driver -p db.url="jdbc:postgresql://10.12.21.64:26257,10.12.16.87:26257,10.12.23.78:26257,10.12.31.249:26257,10.12.23.120:26257,10.12.24.143:26257,10.12.24.2:26257,10.12.17.108:26257,10.12.18.94:26257/defaultdb?autoReconnect=true&sslmode=disable&ssl=false&reWriteBatchedInserts=true&loadBalanceHosts=true" -p db.user=root -p db.passwd="" -p db.batchsize=128  -p jdbc.fetchsize=10 -p jdbc.autocommit=true -p jdbc.batchupdateapi=true -p recordcount=1000000 -p operationcount=10000000  -p threadcount=128 -p maxexecutiontime=180 -p requestdistribution=uniform

Client 1

[OVERALL], RunTime(ms), 180068
[OVERALL], Throughput(ops/sec), 36933.636181886846
[TOTAL_GCS_PS_Scavenge], Count, 24
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 84
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.046649043694604264
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 24
[TOTAL_GC_TIME], Time(ms), 84
[TOTAL_GC_TIME_%], Time(%), 0.046649043694604264
[READ], Operations, 3325714
[READ], AverageLatency(us), 1568.609565945839
[READ], MinLatency(us), 308
[READ], MaxLatency(us), 353279
[READ], 95thPercentileLatency(us), 5283
[READ], 99thPercentileLatency(us), 10319
[READ], Return=OK, 3325714
[CLEANUP], Operations, 128
[CLEANUP], AverageLatency(us), 89.3515625
[CLEANUP], MinLatency(us), 20
[CLEANUP], MaxLatency(us), 1160
[CLEANUP], 95thPercentileLatency(us), 341
[CLEANUP], 99thPercentileLatency(us), 913
[UPDATE], Operations, 3324852
[UPDATE], AverageLatency(us), 5354.304089324878
[UPDATE], MinLatency(us), 747
[UPDATE], MaxLatency(us), 373759
[UPDATE], 95thPercentileLatency(us), 14463
[UPDATE], 99thPercentileLatency(us), 22655
[UPDATE], Return=OK, 3324852

Client 2

[OVERALL], RunTime(ms), 180048
[OVERALL], Throughput(ops/sec), 36497.55065315916
[TOTAL_GCS_PS_Scavenge], Count, 77
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 142
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.07886785746023282
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 77
[TOTAL_GC_TIME], Time(ms), 142
[TOTAL_GC_TIME_%], Time(%), 0.07886785746023282
[READ], Operations, 3286309
[READ], AverageLatency(us), 1590.261334524538
[READ], MinLatency(us), 313
[READ], MaxLatency(us), 371199
[READ], 95thPercentileLatency(us), 5331
[READ], 99thPercentileLatency(us), 10367
[READ], Return=OK, 3286309
[CLEANUP], Operations, 128
[CLEANUP], AverageLatency(us), 481.5234375
[CLEANUP], MinLatency(us), 37
[CLEANUP], MaxLatency(us), 3357
[CLEANUP], 95thPercentileLatency(us), 3099
[CLEANUP], 99thPercentileLatency(us), 3321
[UPDATE], Operations, 3285002
[UPDATE], AverageLatency(us), 5416.266051588401
[UPDATE], MinLatency(us), 761
[UPDATE], MaxLatency(us), 411647
[UPDATE], 95thPercentileLatency(us), 14519
[UPDATE], 99thPercentileLatency(us), 22703
[UPDATE], Return=OK, 3285002

Client 3

[OVERALL], RunTime(ms), 180051
[OVERALL], Throughput(ops/sec), 36760.86775413633
[TOTAL_GCS_PS_Scavenge], Count, 81
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 171
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.09497309095756203
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 81
[TOTAL_GC_TIME], Time(ms), 171
[TOTAL_GC_TIME_%], Time(%), 0.09497309095756203
[READ], Operations, 3310513
[READ], AverageLatency(us), 1582.5869117565767
[READ], MinLatency(us), 294
[READ], MaxLatency(us), 355839
[READ], 95thPercentileLatency(us), 5311
[READ], 99thPercentileLatency(us), 10335
[READ], Return=OK, 3310513
[CLEANUP], Operations, 128
[CLEANUP], AverageLatency(us), 501.859375
[CLEANUP], MinLatency(us), 35
[CLEANUP], MaxLatency(us), 2249
[CLEANUP], 95thPercentileLatency(us), 1922
[CLEANUP], 99thPercentileLatency(us), 2245
[UPDATE], Operations, 3308318
[UPDATE], AverageLatency(us), 5373.080574479237
[UPDATE], MinLatency(us), 708
[UPDATE], MaxLatency(us), 375807
[UPDATE], 95thPercentileLatency(us), 14487
[UPDATE], 99thPercentileLatency(us), 22687
[UPDATE], Return=OK, 3308318

We can see that performance dipped a bit. We're going to conduct one more test to check whether the dip is Pebble-related by restarting the cluster with the RocksDB engine.

I stopped the cluster and restarted it with the COCKROACH_STORAGE_ENGINE=rocksdb environment variable set. To confirm the cluster started with RocksDB, let's grep the logs.

I200917 16:31:25.474204 47 server/config.go:624 ⋮ [n?] 1 storage engine‹› initialized
storage engine:      rocksdb
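For reference, forcing the engine on restart looked roughly like the following on each node; as in the deployment sketch earlier, the addresses and store path are placeholders.

COCKROACH_STORAGE_ENGINE=rocksdb cockroach start \
  --insecure \
  --store=/mnt/data1/cockroach \
  --listen-addr=<node-private-ip>:26257 \
  --join=<node1-ip>:26257,<node2-ip>:26257,<node3-ip>:26257 \
  --background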

Let's rerun the previous tests:

Client 1

[OVERALL], RunTime(ms), 180060
[OVERALL], Throughput(ops/sec), 36145.29045873597
[TOTAL_GCS_PS_Scavenge], Count, 75
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 152
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.08441630567588582
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 75
[TOTAL_GC_TIME], Time(ms), 152
[TOTAL_GC_TIME_%], Time(%), 0.08441630567588582
[READ], Operations, 3253510
[READ], AverageLatency(us), 1561.9589268205723
[READ], MinLatency(us), 342
[READ], MaxLatency(us), 418815
[READ], 95thPercentileLatency(us), 5103
[READ], 99thPercentileLatency(us), 9999
[READ], Return=OK, 3253510
[CLEANUP], Operations, 128
[CLEANUP], AverageLatency(us), 301.65625
[CLEANUP], MinLatency(us), 18
[CLEANUP], MaxLatency(us), 2069
[CLEANUP], 95thPercentileLatency(us), 1883
[CLEANUP], 99thPercentileLatency(us), 2031
[UPDATE], Operations, 3254811
[UPDATE], AverageLatency(us), 5510.587541334965
[UPDATE], MinLatency(us), 808
[UPDATE], MaxLatency(us), 441855
[UPDATE], 95thPercentileLatency(us), 14839
[UPDATE], 99thPercentileLatency(us), 22815
[UPDATE], Return=OK, 3254811

Client 2

[OVERALL], RunTime(ms), 180052
[OVERALL], Throughput(ops/sec), 35503.34347855064
[TOTAL_GCS_PS_Scavenge], Count, 24
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 73
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.04054384288983183
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 24
[TOTAL_GC_TIME], Time(ms), 73
[TOTAL_GC_TIME_%], Time(%), 0.04054384288983183
[READ], Operations, 3196538
[READ], AverageLatency(us), 1604.055252276056
[READ], MinLatency(us), 344
[READ], MaxLatency(us), 526335
[READ], 95thPercentileLatency(us), 5223
[READ], 99thPercentileLatency(us), 10143
[READ], Return=OK, 3196538
[CLEANUP], Operations, 128
[CLEANUP], AverageLatency(us), 247.4765625
[CLEANUP], MinLatency(us), 29
[CLEANUP], MaxLatency(us), 1959
[CLEANUP], 95thPercentileLatency(us), 1653
[CLEANUP], 99thPercentileLatency(us), 1799
[UPDATE], Operations, 3195910
[UPDATE], AverageLatency(us), 5597.925762302443
[UPDATE], MinLatency(us), 800
[UPDATE], MaxLatency(us), 578559
[UPDATE], 95thPercentileLatency(us), 14911
[UPDATE], 99thPercentileLatency(us), 22735
[UPDATE], Return=OK, 3195910

Client 3

[OVERALL], RunTime(ms), 180058
[OVERALL], Throughput(ops/sec), 36301.11963922736
[TOTAL_GCS_PS_Scavenge], Count, 78
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 154
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.08552799653445001
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 78
[TOTAL_GC_TIME], Time(ms), 154
[TOTAL_GC_TIME_%], Time(%), 0.08552799653445001
[READ], Operations, 3270285
[READ], AverageLatency(us), 1568.0345073900287
[READ], MinLatency(us), 350
[READ], MaxLatency(us), 432383
[READ], 95thPercentileLatency(us), 5103
[READ], 99thPercentileLatency(us), 9975
[READ], Return=OK, 3270285
[CLEANUP], Operations, 128
[CLEANUP], AverageLatency(us), 493.8984375
[CLEANUP], MinLatency(us), 37
[CLEANUP], MaxLatency(us), 4647
[CLEANUP], 95thPercentileLatency(us), 2353
[CLEANUP], 99thPercentileLatency(us), 4459
[UPDATE], Operations, 3266022
[UPDATE], AverageLatency(us), 5477.101235080474
[UPDATE], MinLatency(us), 819
[UPDATE], MaxLatency(us), 599551
[UPDATE], 95thPercentileLatency(us), 14807
[UPDATE], 99thPercentileLatency(us), 22655
[UPDATE], Return=OK, 3266022
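Adding up the per-client throughput gives a rough cluster-wide picture: about 114k ops/sec on 20.1.5 with RocksDB, about 110k ops/sec on 20.2.0-beta.1 with Pebble, and about 108k ops/sec on 20.2.0-beta.1 with RocksDB.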

So it seems we can rule out Pebble as the cause of the performance dip; in fact, the Pebble run appears slightly more performant than RocksDB on the beta build. I will chalk the dip up to an overall beta-software issue.

I filed an issue with engineering to investigate further. After the investigation, I will conduct another test to determine the performance characteristics.

