Limit the compute job thread pool size #6624

Open · wants to merge 4 commits into base: dev
Conversation

@garypen (Contributor) commented Jan 22, 2025

The router has always observed the APOLLO_ROUTER_NUM_CORES environment variable to restrict the size of the main tokio async job scheduler.

We are now enhancing the compute job thread pool so that it also respects this environment variable, restricting the number of threads in the pool accordingly.

If the environment variable is not set, then the size of the pool is computed as a fraction of the total number of cores that the router has determined are available.

If it is set, then the environment variable is taken as the number of available cores.
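The resolution order described above (environment variable first, detected cores as the fallback) can be sketched as follows. This is a hedged illustration, not the router's actual code; `cores_from` and `available_cores` are hypothetical helper names introduced here:

```rust
use std::thread;

/// Hypothetical helper: resolve the "available" core count from an
/// optional environment-variable value. If the value parses as a
/// number, use it; otherwise fall back to the detected parallelism
/// (with a minimum of 1).
fn cores_from(env_value: Option<&str>) -> usize {
    env_value
        .and_then(|v| v.parse::<usize>().ok())
        .unwrap_or_else(|| {
            thread::available_parallelism().map(|n| n.get()).unwrap_or(1)
        })
}

/// Hypothetical entry point: read APOLLO_ROUTER_NUM_CORES and resolve.
fn available_cores() -> usize {
    cores_from(std::env::var("APOLLO_ROUTER_NUM_CORES").ok().as_deref())
}

fn main() {
    println!("available cores: {}", available_cores());
}
```

Keeping the parsing in a pure helper like `cores_from` makes the fallback behavior easy to test without mutating process-wide environment state.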

From this number, call it available, the router then uses the following table to size the compute job thread pool:

/// available: 1     pool size: 1
/// available: 2     pool size: 1
/// available: 3     pool size: 2
/// available: 4     pool size: 3
/// available: 5     pool size: 4
/// ...
/// available: 8     pool size: 7
/// available: 9     pool size: 7
/// ...
/// available: 16    pool size: 14
/// available: 17    pool size: 14
/// ...
/// available: 32    pool size: 28
/// etc...

This table should not be relied upon as an explicit interface, since it may change in the future, but is provided here for informational purposes.
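The table above is consistent with reserving one eighth of the available cores (rounded up) for the rest of the router, with a floor of one pool thread. A minimal sketch of one formula that reproduces the table, assuming that interpretation and not claiming to be the router's actual implementation:

```rust
/// Hypothetical sizing function that reproduces the table above:
/// reserve ceil(available / 8) cores for the rest of the router,
/// never letting the pool drop below one thread.
fn pool_size(available: usize) -> usize {
    let reserved = available.div_ceil(8); // ceil(available / 8)
    available.saturating_sub(reserved).max(1)
}

fn main() {
    // Print the same rows shown in the table above.
    for available in [1, 2, 3, 4, 5, 8, 9, 16, 17, 32] {
        println!("available: {:<3} pool size: {}", available, pool_size(available));
    }
}
```

Since the table is explicitly not a stable interface, any such formula should be treated as an implementation detail that may change.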

@garypen garypen self-assigned this Jan 22, 2025
@garypen garypen requested review from a team as code owners January 22, 2025 17:52
@svc-apollo-docs (Collaborator) commented Jan 22, 2025

✅ Docs preview has no changes

The preview was not built because there were no changes.

Build ID: a4d6d55042cffd8b0dab3381


@router-perf bot commented Jan 22, 2025

CI performance tests

  • connectors-const - Connectors stress test that runs with a constant number of users
  • const - Basic stress test that runs with a constant number of users
  • demand-control-instrumented - A copy of the step test, but with demand control monitoring and metrics enabled
  • demand-control-uninstrumented - A copy of the step test, but with demand control monitoring enabled
  • enhanced-signature - Enhanced signature enabled
  • events - Stress test for events with a lot of users and deduplication ENABLED
  • events_big_cap_high_rate - Stress test for events with a lot of users, deduplication enabled and high rate event with a big queue capacity
  • events_big_cap_high_rate_callback - Stress test for events with a lot of users, deduplication enabled and high rate event with a big queue capacity using callback mode
  • events_callback - Stress test for events with a lot of users and deduplication ENABLED in callback mode
  • events_without_dedup - Stress test for events with a lot of users and deduplication DISABLED
  • events_without_dedup_callback - Stress test for events with a lot of users and deduplication DISABLED using callback mode
  • extended-reference-mode - Extended reference mode enabled
  • large-request - Stress test with a 1 MB request payload
  • no-tracing - Basic stress test, no tracing
  • reload - Reload test over a long period of time at a constant rate of users
  • step-jemalloc-tuning - Clone of the basic stress test for jemalloc tuning
  • step-local-metrics - Field stats that are generated from the router rather than FTV1
  • step-with-prometheus - A copy of the step test with the Prometheus metrics exporter enabled
  • step - Basic stress test that steps up the number of users over time
  • xlarge-request - Stress test with 10 MB request payload
  • xxlarge-request - Stress test with 100 MB request payload

@garypen garypen requested a review from a team as a code owner January 22, 2025 17:53
@lrlna lrlna added the backport-1.x Backport this PR to 1.x label Jan 23, 2025
@lrlna (Member) commented Jan 23, 2025

I am going to try backporting it to 1.x so I can run some more perf tests.

@lrlna lrlna removed the backport-1.x Backport this PR to 1.x label Jan 23, 2025
@lrlna (Member) commented Jan 23, 2025

@Mergifyio backport 1.x

mergify bot commented Jan 23, 2025

backport 1.x

🟠 Waiting for conditions to match

  • merged [📌 backport requirement]

3 participants