System: Ubuntu 20.04
CUDA version: 12.1
cuDNN version: 8500 (as reported by PyTorch; running

```
cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2
```

shows

```
cat: /usr/local/cuda/include/cudnn.h: No such file or directory
```

)
GPU: RTX 4090 24G
TensorRT version: 8.6.1.6
PyTorch version: 1.13.1+cu117
torch2trt version: 0.4.0

Bug description:
I am trying to use torch2trt to convert a fast-reid model to a TensorRT model. The code is:

```python
fast_reid = FastReid().to(device).eval()
x = torch.ones((64, 3, 256, 256)).to(device)
fast_reid.model.net = torch2trt(
    fast_reid.model.net, [x],
    int8_mode=False, fp16_mode=False,
    use_onnx=True, max_batch_size=128,
)
```

When inference begins, the following error appears:

```
[TRT] [E] plugin/instanceNormalizationPlugin/instanceNormalizationPlugin.cu (335) - Cudnn Error in enqueue: 8 (CUDNN_STATUS_EXECUTION_FAILED)
terminate called after throwing an instance of 'nvinfer1::plugin::CudnnError'
  what():  std::exception
```

I would also like to know whether torch2trt supports dynamic shape inference. Thanks in advance!
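Regarding the dynamic-shape question: recent torch2trt versions appear to expose per-input shape ranges that map to a TensorRT optimization profile. Below is a minimal sketch, assuming torch2trt 0.4.0 accepts the `min_shapes`/`opt_shapes`/`max_shapes` keyword arguments described in its README, and reusing the `FastReid` model from this report (which comes from the user's own project, not from torch2trt):

```python
import torch
from torch2trt import torch2trt

device = "cuda"

# FastReid is the user's own wrapper from this issue, assumed importable here.
fast_reid = FastReid().to(device).eval()
x = torch.ones((64, 3, 256, 256)).to(device)

# Assumption: torch2trt 0.4.0 forwards these shape ranges to a TensorRT
# optimization profile, so the built engine accepts any batch in [1, 128].
model_trt = torch2trt(
    fast_reid.model.net, [x],
    use_onnx=True,
    min_shapes=[(1, 3, 256, 256)],    # smallest input the engine must accept
    opt_shapes=[(64, 3, 256, 256)],   # shape TensorRT optimizes for
    max_shapes=[(128, 3, 256, 256)],  # largest input the engine must accept
)
```

If those keyword arguments are not available in the installed version, an alternative is to export to ONNX manually and build the engine with `trtexec` and explicit `--minShapes`/`--optShapes`/`--maxShapes` flags.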