I have searched the YOLOv5 issues and found no similar bug report.
YOLOv5 Component
Detection
Bug
[E:onnxruntime:, sequential_executor.cc:368 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running Sigmoid node. Name:'/model.0/act/Sigmoid' Status Message: CUDA error cudaErrorNoKernelImageForDevice:no kernel image is available for execution on the device
I am getting this error during session run.
Environment
No response
Minimal Reproducible Example
No response
Additional
No response
Are you willing to submit a PR?
Yes I'd like to help by submitting a PR!
👋 Hello @Abish7, thank you for your interest in YOLOv5 🚀! To help us assist you better, please ensure you provide a minimum reproducible example (MRE) that we can use to debug the issue you are facing with ONNX GPU inference. An example could include details such as:
A small snippet of the code you are using.
The specific GPU and CUDA version you have installed.
Any modifications you may have made to the YOLOv5 repository or exported ONNX model.
A description of the steps to reproduce the error.
For your environment, ensure you are using up-to-date dependencies, including Python, PyTorch, and CUDA. Installing the dependencies specified in the requirements.txt file and matching system CUDA with your PyTorch version are critical for compatibility.
You can also try running YOLOv5 in verified environments such as Google Colab, Paperspace, or the official Docker image. If possible, test your workflow there to rule out any environment-specific issues.
This is an automated response to guide you, and an Ultralytics engineer will review your issue and assist you further soon! Let us know if you have more details to share 😊🚀
GPU : NVIDIA Geforce GT 710
CUDA Version : 11.4
OnnxRuntime GPU : 1.12.1
While exporting, I changed the model input size to 2016.
When running with ONNX Runtime GPU, the model loads onto the GPU, but it crashes in session.Run.
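For context, the failing setup can be sketched roughly as below with the ONNX Runtime ~1.12 C++ API. The model path (yolov5s.onnx) is a placeholder, and the sketch omits input-tensor construction; the point is that on an unsupported GPU the session can still be created, and the CUDA error only surfaces when Run() is called:

```cpp
// Hedged sketch (ONNX Runtime ~1.12 C++ API): session with the CUDA execution
// provider. Model path is hypothetical; input construction is elided.
#include <onnxruntime_cxx_api.h>
#include <iostream>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "yolov5");
    Ort::SessionOptions opts;
    OrtCUDAProviderOptions cuda_opts{};  // zero-init: device_id 0, defaults elsewhere
    opts.AppendExecutionProvider_CUDA(cuda_opts);

    try {
        // Session creation (and weight upload to the GPU) can succeed here...
        Ort::Session session(env, "yolov5s.onnx", opts);
        // ...build input tensors and call session.Run(...) here; on a GPU below
        // the minimum compute capability of the prebuilt CUDA kernels, Run()
        // fails with cudaErrorNoKernelImageForDevice, as in this report.
    } catch (const Ort::Exception& e) {
        std::cerr << "ORT error: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}
```

Note the session constructor takes a `const char*` path on Linux and a wide string on Windows.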
Abish7 changed the title from "Error While using onnx for GPU Inference" to "Error While using onnxruntime GPU Inference in cpp" on Jan 3, 2025.
@Abish7 the error indicates that your GPU, the NVIDIA GeForce GT 710, does not meet the CUDA compute capability required by ONNX Runtime's GPU kernels. The GT 710 has a compute capability of 3.5, while ONNX Runtime GPU typically requires a minimum of 5.0. Unfortunately, you'll need a more capable GPU or will have to switch to CPU inference. For more details, refer to the ONNX Runtime GPU requirements and verify compatibility.
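The capability gate described above can be sketched as a pre-flight check before choosing execution providers. The helper name `PickProviders` and the small capability table are illustrative (not a complete list), and the 5.0 cutoff reflects ONNX Runtime's prebuilt GPU binaries:

```cpp
// Hedged sketch: choose ORT execution providers from the GPU's CUDA compute
// capability. Table values are illustrative examples, not an exhaustive list.
#include <map>
#include <string>
#include <utility>
#include <vector>

using Capability = std::pair<int, int>;  // (major, minor)

// Minimum compute capability for ONNX Runtime's prebuilt CUDA kernels.
const Capability kMinCudaCapability{5, 0};

// Illustrative compute capabilities for a few GeForce cards.
const std::map<std::string, Capability> kComputeCapability{
    {"GeForce GT 710", {3, 5}},
    {"GeForce GTX 1050", {6, 1}},
    {"GeForce RTX 3060", {8, 6}},
};

// Return the provider list: CUDA first if the GPU is capable, else CPU only.
// std::pair compares lexicographically, so (3,5) < (5,0) as intended.
std::vector<std::string> PickProviders(const std::string& gpu_name) {
    auto it = kComputeCapability.find(gpu_name);
    if (it != kComputeCapability.end() && it->second >= kMinCudaCapability) {
        return {"CUDAExecutionProvider", "CPUExecutionProvider"};
    }
    return {"CPUExecutionProvider"};
}
```

With this gate, a GT 710 (3.5) falls back to CPU-only inference instead of crashing in Run().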