[FEA] Look at ways to reduce memory usage in operations that use hash tables #12139
Labels: `? - Needs Triage` · `feature request` · `performance`
Is your feature request related to a problem? Please describe.
Hash joins and aggregations are among the most common operations we run. Joins especially, but many aggregations also end up using hash tables.
Currently in cuDF, all hash tables are allocated with a maximum occupancy of 50%:
https://github.com/rapidsai/cudf/blob/6c281fded5590e9896a901b5e12ce3a05d510be7/cpp/include/cudf/hashing/detail/helper_functions.cuh#L23
But when talking to the cuco team, they mentioned that their hash tables, especially `static_multimap`, maintain really good performance up to 90% occupancy:

- For a key multiplicity of 1 there is effectively no performance drop between 10% and 90% occupancy.
- For a key multiplicity of 8 there is about a 30% performance drop between 10% and 90% occupancy.
In some cases we know a lot about the join cardinality and are already telling cuDF whether the build side has distinct keys. We could potentially give more hints about the average key multiplicity.
For aggregations, we can use previous batches to give us hints about future aggregation batches. For merge aggregations, we also know how many upstream tasks there are, or how many batches we are combining, which gives us a worst-case key multiplicity.
The idea would be to see if we can reduce memory usage without impacting the performance of average-case queries. If we can reduce the memory footprint, then ideally we can reduce spilling and overall memory pressure.
So this is not really a performance improvement we would see everywhere, but rather an experiment to see if we can improve really common operations that use a lot of memory.