It can make the difference between an 11 ms max and a 1.4 second max. You make an...

source link: https://twitter.com/i/web/status/1581145880392265728

Conversation

Happy that it's using the fix I sent for coordinated omission:

https://github.com/openmessaging/benchmark/pull/248

Low latency won't matter if not measured right 👍

Quote Tweet
🕺💃🤟 Alexander Gallego
@emaxerrno
Oct 13
For those of us wanting to run the bench ourselves, it's all open source here: https://github.com/redpanda-data/openmessaging-benchmark
And thanks for making the performance community aware of Coordinated Omission. Hopefully, more load-testing tools will pick up this solution for addressing it.
Something I have found about coordinated omission: a) there are cases at low throughput where it makes remarkably little difference; b) if the producer doesn't back off, not only do the percentiles increase but the worst case is also higher, up to 10x
Can be a lot more than 10x 😉. E.g. if you take in requests at 1K per second for 100 seconds into a system that serves requests one at a time and takes 10 msec to serve each, the last request arrives at t = 100 s but doesn't complete until t = 1,000 s, so you'll see a max response time of 900,000 msec. That's 90,000x the service time, and it's real.
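To make that arithmetic concrete, here is a minimal simulation sketch (plain Python, not taken from the thread or the benchmark; the constant names are my own) of the same example: a serial server with a 10 msec service time offered 1,000 requests per second for 100 seconds. It measures latency two ways: from each request's intended send time (the open-loop, CO-corrected view) and what a closed-loop client that waits for each response would report (the CO-affected view).

```python
# Sketch of the queueing example above (assumed parameters, not the benchmark's code).
SERVICE_TIME = 0.010      # seconds per request, served one at a time (capacity = 100 req/s)
ARRIVAL_RATE = 1000       # intended requests per second
DURATION = 100            # seconds of offered load

n_requests = ARRIVAL_RATE * DURATION

# Open-loop view: each request is timestamped at its *intended* send time,
# no matter how far behind the FIFO server has fallen.
server_free_at = 0.0
max_open_loop = 0.0
for i in range(n_requests):
    intended_send = i / ARRIVAL_RATE
    start = max(intended_send, server_free_at)   # time spent waiting in the queue
    finish = start + SERVICE_TIME
    server_free_at = finish
    max_open_loop = max(max_open_loop, finish - intended_send)

# Closed-loop (coordinated-omission-affected) view: one client sends, waits for
# the response, then sends again, so it only ever observes about one service time.
max_closed_loop = SERVICE_TIME

print(f"max latency measured from intended send time: {max_open_loop:.1f} s")
print(f"max latency a closed-loop client would report: {max_closed_loop * 1000:.0f} ms")
```

With these parameters the open-loop maximum comes out at roughly 900 s (900,000 msec), while the closed-loop client never sees much more than 10 msec.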
A basic sanity check to demonstrate CO in a load generator: tell it to generate load at a rate greater than the system can deliver. If the reported response-time stats do not grow linearly with time (the length of the test), the load generator is exhibiting CO.
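A quick way to see why that check works, again as a sketch under the same assumed parameters rather than anything from the thread: for an offered rate above capacity, the true max response time of a FIFO server is roughly T * (arrival_rate * service_time - 1), so it grows linearly with the test length T, while a CO-affected generator keeps reporting a flat max near the service time.

```python
# Sanity-check sketch (assumed parameters): above capacity, the real max latency
# should grow linearly with test length; a CO-affected generator's max stays flat.
SERVICE_TIME = 0.010   # seconds per request (capacity = 100 req/s)
ARRIVAL_RATE = 1000    # offered requests per second (10x over capacity)

for duration in (25, 50, 100, 200):                            # test lengths in seconds
    true_max = duration * (ARRIVAL_RATE * SERVICE_TIME - 1)    # ~ T * (utilization - 1)
    co_max = SERVICE_TIME                                      # flat, ~10 ms
    print(f"T={duration:>3}s  true max ~ {true_max:5.0f} s   CO-affected max ~ {co_max * 1000:.0f} ms")
```

If the reported maxima look like the flat column rather than the linearly growing one, the generator is coordinating with the system under test.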
