
Torch2trt broken for latest L4T 32.4.3 release? Seeing "add_constant incompatible" errors. #375

Open
Jaftem opened this issue Jul 28, 2020 · 7 comments


@Jaftem

Jaftem commented Jul 28, 2020

Hi,

I upgraded L4T from 32.2.1 to the latest release, 32.4.3, which meant going from PyTorch 1.3 to PyTorch 1.6.0 and from TensorRT 6 to TensorRT 7.1.3. I believe I also went from CUDA 10.0 to CUDA 10.2.

It looks like torch2trt is now broken. After upgrading, I receive this warning:

WARNING: Unsupported numpy data type. Cannot implicitly convert to tensorrt.Weights.

Followed by this error:

TypeError: add_constant(): incompatible function arguments. The following argument types are supported:
    1. (self: tensorrt.tensorrt.INetworkDefinition, shape: tensorrt.tensorrt.Dims, weights: tensorrt.tensorrt.Weights) -> tensorrt.tensorrt.IConstantLayer

Invoked with: <tensorrt.tensorrt.INetworkDefinition object at 0x7f1c56e1b8>, (), array(17287)

As per issue #313, I tried the fix in this comment. The issue disappears but the problem then arises in other areas of the code.

This same conversion using the same PyTorch model works in L4T 32.2.1 (PyTorch 1.3 and TensorRT 6).
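For context on the error above: TensorRT 7's Python bindings refuse to implicitly convert numpy arrays with certain dtypes (notably int64, which PyTorch 1.6 tends to emit) into `tensorrt.Weights`, and `add_constant()` also rejects 0-d scalars like the `array(17287)` in the traceback. A minimal sketch of the kind of cast the fixes in #313 apply, using a hypothetical helper name (`to_trt_compatible` is not part of torch2trt):

```python
import numpy as np

def to_trt_compatible(array):
    """Cast a numpy array to a dtype and shape TensorRT 7 accepts.

    int64/float64 inputs trigger the "Unsupported numpy data type"
    warning, and 0-d arrays trigger the add_constant() TypeError,
    so we promote to at least 1-d and narrow the dtype.
    """
    array = np.atleast_1d(np.asarray(array))
    if array.dtype == np.int64:
        array = array.astype(np.int32)
    elif array.dtype == np.float64:
        array = array.astype(np.float32)
    return np.ascontiguousarray(array)

# The scalar from the traceback becomes a 1-element int32 array:
fixed = to_trt_compatible(np.array(17287))
```

The resulting array can then be passed to `network.add_constant(fixed.shape, fixed)` without tripping the implicit `trt.Weights` conversion.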

@yuzhiyiliu

You can convert your PyTorch YOLO model up to (but not including) the three YOLO layers, because this series of problems comes from them.
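One way to follow this suggestion is to wrap the detector so only its supported layers are handed to torch2trt. A sketch under the assumption that the model exposes its layers as ordered children (the `BackboneOnly` class and `cutoff` index are hypothetical, not torch2trt API):

```python
import torch
import torch.nn as nn

class BackboneOnly(nn.Module):
    """Run a detector up to (but not including) its YOLO heads,
    so the converter only sees TensorRT-supported layers."""

    def __init__(self, full_model, cutoff):
        super().__init__()
        # Assumes the layers are plain sequential children; pick
        # `cutoff` so it stops just before the three YOLO layers.
        self.features = nn.Sequential(*list(full_model.children())[:cutoff])

    def forward(self, x):
        return self.features(x)

# Toy stand-in for a detector: two conv stages plus a "head".
toy = nn.Sequential(nn.Conv2d(3, 4, 3), nn.ReLU(), nn.Conv2d(4, 1, 1))
backbone = BackboneOnly(toy, cutoff=2)
out = backbone(torch.randn(1, 3, 8, 8))  # heads are skipped
```

The skipped YOLO heads would then run in plain PyTorch (or as custom post-processing) on the TensorRT engine's output.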

@austinmw

Same issue

@iAlexKai

iAlexKai commented Apr 4, 2021

same issue

@zubairbaqai

Any solution found, guys?

@RoyCopter

same here, any solution?

@ResonWang

> You can convert your PyTorch YOLO model up to (but not including) the three YOLO layers, because this series of problems comes from them.

Do you mean accelerating only the YOLO backbone with TensorRT, keeping a single output?

@yqchau

yqchau commented Aug 12, 2022

Try `model.eval().cuda()` instead of `model.cuda()`.
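For anyone unsure why this matters: `eval()` switches modules like dropout and batch norm into inference mode, which is what torch2trt expects to trace. A minimal sketch (the `.cuda()` call is omitted here since it needs a GPU; on a Jetson you would chain `model.eval().cuda()` as suggested):

```python
import torch.nn as nn

# A small model with modules whose behavior differs between
# training and inference mode.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3),
    nn.BatchNorm2d(8),   # uses running stats only in eval mode
    nn.Dropout(0.5),     # becomes a no-op in eval mode
)

model.eval()  # on a Jetson: model = model.eval().cuda()
# model.training is now False for the model and all submodules
```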
