
Error While using onnxruntime GPU Inference in cpp #13481

Open
Abish7 opened this issue Jan 3, 2025 · 3 comments
Labels
bug Something isn't working detect Object Detection issues, PR's exports Model exports (ONNX, TensorRT, TFLite, etc.)

Comments


Abish7 commented Jan 3, 2025

Search before asking

  • I have searched the YOLOv5 issues and found no similar bug report.

YOLOv5 Component

Detection

Bug

[E:onnxruntime:, sequential_executor.cc:368 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running Sigmoid node. Name:'/model.0/act/Sigmoid' Status Message: CUDA error cudaErrorNoKernelImageForDevice: no kernel image is available for execution on the device

I am getting this error during session run.

Environment

No response

Minimal Reproducible Example

No response

Additional

No response

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
@Abish7 Abish7 added the bug Something isn't working label Jan 3, 2025
@UltralyticsAssistant UltralyticsAssistant added detect Object Detection issues, PR's exports Model exports (ONNX, TensorRT, TFLite, etc.) labels Jan 3, 2025
@UltralyticsAssistant
Member

👋 Hello @Abish7, thank you for your interest in YOLOv5 🚀! To help us assist you better, please provide a minimal reproducible example (MRE) that we can use to debug the issue you are facing with ONNX GPU inference. An example could include details such as:

  1. A small snippet of the code you are using.
  2. The specific GPU and CUDA version you have installed.
  3. Any modifications you may have made to the YOLOv5 repository or exported ONNX model.
  4. A description of the steps to reproduce the error.

For your environment, ensure you are using up-to-date dependencies, including Python, PyTorch, and CUDA. Installing the dependencies specified in the requirements.txt file and matching system CUDA with your PyTorch version are critical for compatibility.

You can also try running YOLOv5 in verified environments such as Google Colab, Paperspace, or Docker. If possible, test your workflow there to rule out any environment-specific issues.

This is an automated response to guide you, and an Ultralytics engineer will review your issue and assist you further soon! Let us know if you have more details to share 😊🚀


Abish7 commented Jan 3, 2025

[Screenshot attached: Screenshot 2025-01-03 094202]

GPU: NVIDIA GeForce GT 710
CUDA Version: 11.4
ONNX Runtime GPU: 1.12.1
While exporting, I changed the model input size to 2016.
When running with ONNX Runtime GPU, the model loads onto the GPU, but it crashes in session.Run.

@Abish7 Abish7 changed the title Error While using onnx for GPU Inference Error While using onnxruntime GPU Inference in cpp Jan 3, 2025
@pderrenger
Member

@Abish7 the error indicates that your GPU, the NVIDIA GeForce GT 710, does not meet the CUDA compute capability required by ONNX Runtime's GPU kernels. The GT 710 has a compute capability of 3.5, while ONNX Runtime GPU typically requires a minimum of 5.0. Unfortunately, you'll need a more capable GPU or will need to switch to CPU inference. For more details, refer to the ONNX Runtime GPU requirements and verify compatibility.
