**Is your feature request related to a problem? Please describe.**
We are trying to use `cuml-cpu` in an environment that cannot access a GPU and can only reach a private PyPI repository. Right now the `cuml-cpu` package is only published in the rapidsai conda repository, not on PyPI. There are contributions that build a wheel for the `cuml` package, but not for `cuml-cpu`.
**Describe the solution you'd like**
We have seen a contribution by @jameslamb that creates the wheel for the `cuml` library. I have tried to write a script that passes the `rapidsai.disable-cuda=true` setting to the `pip wheel` command, but this package has a high level of complexity and I do not think I am capable of doing it correctly. We would love to have a `/ci/build_wheel_cuml-cpu.sh` script that builds the wheel for the `cuml-cpu` variant.
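For reference, this is roughly the command I have been experimenting with. The `python/cuml` path and the exact way of passing the setting are my assumptions, so they may well be wrong:

```sh
# Attempted from a checkout of the cuml repository (python/cuml is my guess
# at where the Python package lives). rapidsai.disable-cuda=true is passed
# as a PEP 517 config setting in the hope that the build backend skips the
# CUDA-dependent pieces; I am not certain this is sufficient on its own.
pip wheel python/cuml \
  --config-settings rapidsai.disable-cuda=true \
  --no-deps \
  --wheel-dir dist/
```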
**Describe alternatives you've considered**
We could perhaps install the dependencies listed for the conda package and then build and install from source using the `cuml-cpu` section of `build.sh`; a rough sketch of what we mean follows this paragraph. This could work, but our environment does not allow us much more than installing a wheel from a PyPI repository before the environment is created.
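For completeness, the sequence we had in mind looks something like this. It is untested on our side, and the dependency installation step is elided because we would have to transcribe it from the conda recipe:

```sh
# Untested sketch of the from-source alternative. The build dependencies
# listed in the cuml-cpu conda recipe would need to be installed first
# (not shown here), since we only know them from the recipe.
git clone https://github.com/rapidsai/cuml.git
cd cuml
# Invoke the cuml-cpu target that build.sh exposes, as referenced above.
./build.sh cuml-cpu
```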
**Additional context**
We are trying to use the very promising NVIDIA/spark-rapids-ml package in our Spark clusters so that we can use DBSCAN, UMAP, and other algorithms. Unfortunately, we do not have access to clusters with GPUs. We were hoping to use the `cuml-cpu` package as a dependency for `spark-rapids-ml` and thus bypass the dependency that `spark-rapids-ml` has on GPUs and CUDA. Do you think this could be possible?
Thank you very much for the amazing work that you are doing.
Thanks for the issue @marcosgalleterobbva. The use case of spark-rapids-ml on CPU is not one I had thought about before; let me check with the more Spark-savvy folks on the team and circle back here about enabling spark-rapids-ml on CPUs.