
Upgrade to onnxruntime 0.19 #753

Open · wants to merge 6 commits into main
Conversation

yeldarby (Contributor)

Description

This bumps our dependencies to the latest onnxruntime and updates our Docker containers to match.

Benefits:

  • Gets MPS acceleration on Mac since onnxruntime-silicon is now mainlined into onnxruntime
  • Unlocks upgrade to CUDA 12 and newer NVIDIA compute capabilities (H100 Support #181)
  • Preps for adding Jetpack 6 support to our Jetson containers

Type of change

  • New feature (non-breaking change which adds functionality)

How has this change been tested? Please provide a test case or an example of how you tested the change.

I've run the tests locally and am about to build all the containers via our GitHub Actions.

Any specific deployment considerations

No.

Docs

  • Not needed

probicheaux
probicheaux previously approved these changes Oct 18, 2024
-RUN python3 -m pip install --extra-index-url https://download.pytorch.org/whl/cu118 \
+RUN python3 -m pip install \
+    --extra-index-url https://download.pytorch.org/whl/cu118 \
+    --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-11/pypi/simple/ \
Collaborator

what is this guy doing? changing the default cuda version for onnxruntime? so onnxruntime installs cuda?

Contributor (Author)

The default is now to install the onnxruntime that's compatible with CUDA 12; this forces it to install the one built for CUDA 11 instead. (It used to be the reverse; you had to choose the channel for CUDA 12 and the default was 11)

@@ -264,7 +264,7 @@
 ONNXRUNTIME_EXECUTION_PROVIDERS = os.getenv(
     "ONNXRUNTIME_EXECUTION_PROVIDERS",
-    "[CUDAExecutionProvider,OpenVINOExecutionProvider,CPUExecutionProvider]",
+    "[CUDAExecutionProvider,OpenVINOExecutionProvider,CoreMLExecutionProvider,CPUExecutionProvider]",
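The default above is a bracketed, comma-separated string rather than a Python list. A minimal sketch of how a value in that format can be turned into a provider list (the `parse_providers` helper here is hypothetical, not the repo's actual parsing code):

```python
import os

# Hypothetical helper: turn a "[A,B,C]"-style string into a list of names.
# This mirrors the format of the default shown in the diff above.
def parse_providers(value: str) -> list[str]:
    return [p.strip() for p in value.strip("[]").split(",") if p.strip()]

providers = parse_providers(
    os.getenv(
        "ONNXRUNTIME_EXECUTION_PROVIDERS",
        "[CUDAExecutionProvider,OpenVINOExecutionProvider,CoreMLExecutionProvider,CPUExecutionProvider]",
    )
)
print(providers)
```

With the env var unset, this yields the four default provider names in priority order.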
Collaborator

Now everyone not on a Mac is gonna get stupid warnings about not having CoreMLExecutionProvider :)

Contributor (Author)

Yeah, I was thinking we should probably update this default to match what onnxruntime.get_available_providers() reports, but was going to do that in a different PR.
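A minimal sketch of that suggestion: ask onnxruntime which providers are actually available in the installed build, then keep only the requested providers that intersect with that set. The fallback list and the `requested` priority order here are illustrative assumptions, not the repo's code:

```python
# Ask onnxruntime which execution providers this build actually supports,
# falling back to CPU only if the package is not installed.
try:
    import onnxruntime as ort
    available = ort.get_available_providers()
except ImportError:
    available = ["CPUExecutionProvider"]

# Illustrative priority order, matching the default in this PR.
requested = [
    "CUDAExecutionProvider",
    "OpenVINOExecutionProvider",
    "CoreMLExecutionProvider",
    "CPUExecutionProvider",
]

# Keep only supported providers, preserving the requested priority order,
# so non-Mac installs never ask for CoreMLExecutionProvider.
providers = [p for p in requested if p in available]
print(providers)
```

Since CPUExecutionProvider is present in every onnxruntime build, the filtered list is never empty, and the "provider not available" warnings go away on platforms that lack CUDA, OpenVINO, or CoreML.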

requirements/requirements.cpu.txt (comment resolved)
yeldarby (Author)
Update: tests have all passed & containers build.


yeldarby marked this pull request as ready for review on October 19, 2024 at 00:36.
2 participants