
Cudnn Error in enqueue: 8 (CUDNN_STATUS_EXECUTION_FAILED) #878

Open
Merealtea opened this issue Aug 16, 2023 · 0 comments

System: Ubuntu 20.04

CUDA version: 12.1

cuDNN version: 8500 (as reported by PyTorch; running `cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2` only shows `cat: /usr/local/cuda/include/cudnn.h: No such file or directory`)

GPU: RTX 4090 24 GB

TensorRT version: 8.6.1.6

PyTorch version: 1.13.1+cu117

torch2trt version: 0.4.0

Bug description:

I am using torch2trt to convert a fast-reid model to a TensorRT model:

```python
fast_reid = FastReid().to(device).eval()
x = torch.ones((64, 3, 256, 256)).to(device)
fast_reid.model.net = torch2trt(fast_reid.model.net, [x], int8_mode=False, fp16_mode=False, use_onnx=True, max_batch_size=128)
```

When inference begins, this error appears:

```
[TRT] [E] plugin/instanceNormalizationPlugin/instanceNormalizationPlugin.cu (335) - Cudnn Error in enqueue: 8 (CUDNN_STATUS_EXECUTION_FAILED)
terminate called after throwing an instance of 'nvinfer1::plugin::CudnnError'
  what(): std::exception
```

I would also like to know whether torch2trt supports dynamic shape inference. Thanks in advance!
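For context, recent torch2trt releases document dynamic shape support through `min_shapes`/`opt_shapes`/`max_shapes` keyword arguments. The following is only an illustrative sketch of that usage (untested here; the shape values and the `build_trt_engine` helper are assumptions for this model, not part of the report):

```python
# Illustrative sketch of torch2trt's documented dynamic-shape kwargs
# (min_shapes / opt_shapes / max_shapes). Shape values are assumptions
# chosen to match the 3x256x256 input used in the repro above.
MIN_SHAPE = (1, 3, 256, 256)    # smallest batch the engine should accept
OPT_SHAPE = (64, 3, 256, 256)   # batch size TensorRT optimizes for
MAX_SHAPE = (128, 3, 256, 256)  # largest batch the engine should accept

def build_trt_engine(model, device="cuda"):
    # Imports are kept inside the function so the shape constants above
    # can be inspected without a GPU or a torch2trt install.
    import torch
    from torch2trt import torch2trt

    x = torch.ones(MIN_SHAPE).to(device)
    return torch2trt(
        model, [x],
        use_onnx=True,
        min_shapes=[MIN_SHAPE],
        opt_shapes=[OPT_SHAPE],
        max_shapes=[MAX_SHAPE],
    )
```

The resulting engine would then accept any batch size between the min and max shapes at inference time, rather than being fixed to the trace-time batch size.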
