I'm experiencing a discrepancy between the inference results of my PyTorch model and the TensorRT model obtained by converting it using the torch2trt tool.
Reproduce
This can be reproduced by the following script:
```python
from torch2trt import torch2trt
import torch

model = torch.nn.ELU(inplace=True).cuda()

input_data = torch.randn([1, 3, 10, 10], dtype=torch.float32).cuda()

model_trt = torch2trt(model, [input_data])

y = model(input_data)
y_trt = model_trt(input_data)

# check the output against PyTorch
print(torch.max(torch.abs(y - y_trt)))
```
The output is:
```
tensor(0.0909, device='cuda:0')
```
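One possible explanation (an assumption on my part, not confirmed above): with `inplace=True`, the PyTorch call `model(input_data)` overwrites `input_data` with its own output, so the subsequent `model_trt(input_data)` runs on already-activated values. The two results then differ even if both models compute ELU correctly. The effect of this double application can be illustrated with plain Python, using a hand-written `elu` helper:

```python
import math

def elu(x, alpha=1.0):
    # ELU: x for x > 0, alpha * (exp(x) - 1) otherwise
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

x = -1.0
once = elu(x)        # ELU applied to the original input
twice = elu(elu(x))  # what a second model sees if the input was mutated in place
print(abs(once - twice))  # non-zero, even though both calls compute ELU correctly
```

If this is the cause, comparing against a fresh copy of the input (e.g. `model_trt(input_data.clone())` before the in-place call, or simply `inplace=False`) should shrink the reported error.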
Environment
torch: 1.11.0
torch2trt: 0.4.0
tensorrt: 8.6.1.6