Torch2trt broken for latest L4T 32.4.3 release? Seeing "add_constant incompatible" errors. #375
Comments
You can convert your PyTorch YOLO model up to (but not including) the three YOLO layers, because this series of problems comes from them.
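The suggestion above — converting only the layers before the YOLO detection heads — can be sketched as below. The `BackboneOnly` wrapper and the toy model are hypothetical illustrations (the real cut point depends on the actual network definition); only the final `torch2trt(...)` call is the library's documented API.

```python
import torch
import torch.nn as nn

class BackboneOnly(nn.Module):
    """Wrap a YOLO-style model so that only the layers before the
    detection heads are executed (and hence converted)."""
    def __init__(self, full_model, cut_index):
        super().__init__()
        # keep everything up to (not including) the first YOLO layer
        self.backbone = nn.Sequential(*list(full_model.children())[:cut_index])

    def forward(self, x):
        return self.backbone(x)

# toy stand-in for a YOLO model: conv backbone followed by a "head"
toy = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 255, 1),  # pretend this is a YOLO head
)

backbone = BackboneOnly(toy, cut_index=4).eval()
x = torch.randn(1, 3, 64, 64)
with torch.no_grad():
    y = backbone(x)
print(tuple(y.shape))  # backbone feature map, not detections

# On a Jetson with TensorRT installed, the wrapped backbone can then
# be converted, while the YOLO heads stay in plain PyTorch:
# from torch2trt import torch2trt
# backbone_trt = torch2trt(backbone.cuda(), [x.cuda()])
```

The YOLO heads then run in PyTorch on the TensorRT backbone's output, which avoids the unsupported layers entirely at the cost of a single extra hand-off per frame.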
Same issue.
Same issue.
Any solution found, guys?
Same here, any solution?
Do you mean to accelerate only the backbone of YOLO with TensorRT, keeping a single output?
try
Hi,
I upgraded L4T from 32.2.1 to the latest release, 32.4.3, which required upgrading from PyTorch 1.3 to PyTorch 1.6.0 and from TensorRT 6 to TensorRT 7.1.3. I believe I also upgraded from CUDA 10.0 to CUDA 10.2.
It looks like torch2trt is now broken. After upgrading, I receive this warning:
Followed by this error:
As suggested in issue #313, I tried the fix in this comment. The warning disappears, but the same problem then arises in other parts of the code.
This same conversion using the same PyTorch model works in L4T 32.2.1 (PyTorch 1.3 and TensorRT 6).
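For context, the failing conversion follows the standard torch2trt pattern shown below. The tiny `Sequential` model is a stand-in for the real network (which is not shown in the issue); the `torch2trt(model, [example_input])` call itself is the library's documented entry point, and the `try`/`except` guard is only there so the sketch also runs on a machine without TensorRT.

```python
import torch
import torch.nn as nn

# tiny stand-in for the real network that fails to convert on TensorRT 7.1.3
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
).eval()
x = torch.randn(1, 3, 224, 224)

try:
    # torch2trt needs CUDA + TensorRT (e.g. on a Jetson)
    from torch2trt import torch2trt
    model, x = model.cuda(), x.cuda()
    model_trt = torch2trt(model, [x])  # the "add_constant incompatible" error surfaces here
    with torch.no_grad():
        y = model_trt(x)
        # when conversion works, the engine should match PyTorch closely
        print(float(torch.max(torch.abs(model(x) - y))))
except ImportError:
    # plain-PyTorch fallback so the sketch also runs off-device
    with torch.no_grad():
        y = model(x)
    print(tuple(y.shape))
```

On L4T 32.2.1 (PyTorch 1.3, TensorRT 6) this pattern converts the model without warnings, which points at a converter/TensorRT-version mismatch rather than a problem with the model itself.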