
On coordinated omission #120

Open
fee-mendes opened this issue Feb 5, 2025 · 0 comments
Benchmarking Apache Cassandra with Rust states:

Use async programming model: don’t block immediately on the returned future after spawning a single request, but spawn many requests and collect results asynchronously when they are ready. This way we’re making sending requests independent from each other, thus avoiding coordinated omission.

This makes sense, as long as we account for the wait time when a request misses (or does not respond within) its window. It looks like latte accounts for requests' service time rather than response time, and has no notion of a schedule when a target rate is specified (though I am far from familiar with the code; correct me if I'm wrong).

Consider this snapshot where I asked for a fixed rate of 150K:

[screenshot: latency snapshot at the requested fixed rate]

Naturally, the stressed system is unable to keep up with the load. Yet latencies are too good to be true. Since latencies do not grow, the load generator is evidently not accounting for the latency of requests that missed their window. In turn, the current latte implementation resembles cassandra-stress's "throughput" or -throttle modes, where only service time gets measured instead of response time.

http://psy-lob-saw.blogspot.com/2016/07/fixing-co-in-cstress.html is a very good (wow, almost 10 years!) article addressing most of the points mentioned here. The expectation is that when --rate is specified, the load generator reports growing latencies as the server falls behind its implied schedule.
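To make the distinction concrete, here is a minimal sketch (not latte's actual code, just an assumed single-server queue model with abstract time units): each request has a scheduled start implied by the target rate, and response time is measured from that scheduled start rather than from when the request actually began executing.

```rust
// Illustrative model: how service time and response time diverge when a
// fixed-rate schedule outpaces the server. `interval` is the gap implied
// by the target rate; `service` is how long the saturated server takes.
fn measure(interval: u64, service: u64, n: u64) -> Vec<(u64, u64)> {
    let mut server_free_at = 0u64;
    let mut out = Vec::new();
    for i in 0..n {
        let scheduled_start = i * interval; // when the request *should* fire
        let actual_start = scheduled_start.max(server_free_at);
        let finish = actual_start + service;
        server_free_at = finish;
        let service_time = finish - actual_start; // what a CO-affected tool reports
        let response_time = finish - scheduled_start; // includes the wait for the missed slot
        out.push((service_time, response_time));
    }
    out
}

fn main() {
    // Target one request every 10 units, but each takes 15: the server falls behind,
    // so service time stays flat at 15 while response time grows without bound.
    for (i, (s, r)) in measure(10, 15, 5).into_iter().enumerate() {
        println!("req {i}: service={s} response={r}");
    }
}
```

Under this model the reported (service-time-only) latency stays constant even though every request is delivered later and later than scheduled, which matches the too-good-to-be-true snapshot above.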

Btw, latte performance is quite impressive - though unfortunately this effectively makes it a blocker for my use case :(
