Issues: NVIDIA-AI-IOT/torch2trt
#493: Encountered known unsupported method torch.nn.functional.pixel_shuffle (opened Jan 24, 2021 by zlheos)
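Unsupported methods such as pixel_shuffle can be worked around by registering a custom converter with the `@tensorrt_converter` decorator. The sketch below reproduces the registration pattern from the torch2trt README (its ReLU example); an actual pixel_shuffle converter would follow the same skeleton but build the reshape/permute with a TensorRT IShuffleLayer, which is not shown here.

```python
import tensorrt as trt
from torch2trt import tensorrt_converter

# Converter-registration pattern from the torch2trt README (ReLU example).
# A pixel_shuffle converter would use the same skeleton with shuffle layers.
@tensorrt_converter('torch.nn.ReLU.forward')
def convert_ReLU(ctx):
    input = ctx.method_args[1]   # the Tensor passed to forward()
    output = ctx.method_return   # the Tensor returned by forward()
    layer = ctx.network.add_activation(input=input._trt, type=trt.ActivationType.RELU)
    output._trt = layer.get_output(0)
```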
#473: Problem about quantizing model with external module like DCNv2 and multi-head (opened Dec 22, 2020 by KiedaTamashi)
#457: Does torch2trt support conversion on torch.nn.functional.avg_pool2d? (opened Nov 27, 2020 by oliviawindsir)
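One way to answer questions like #457 empirically is to convert a tiny module that calls the op and compare against PyTorch: if no converter is registered, torch2trt logs its usual "unsupported method" warning and the outputs diverge. A minimal sketch, assuming a CUDA device and a working torch2trt install:

```python
import torch
import torch.nn.functional as F
from torch2trt import torch2trt

class AvgPool(torch.nn.Module):
    def forward(self, x):
        return F.avg_pool2d(x, kernel_size=2)

model = AvgPool().eval().cuda()
x = torch.randn(1, 3, 224, 224).cuda()

model_trt = torch2trt(model, [x])  # warns if the op has no registered converter
print(torch.max(torch.abs(model(x) - model_trt(x))))
```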
#438: RuntimeError: CUDA error: an illegal memory access was encountered (opened Nov 3, 2020 by BarryKCL)
#429: How to install tensorrt for pytorch in conda environment (opened Oct 19, 2020 by govindamagrawal)
#405: converting .pth to .trt engine, do inference in C++, input and output names not matched (opened Sep 7, 2020 by cam401)
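A likely source of the name mismatch in #405 is that torch2trt assigns its own default binding names (input_0, output_0, ...) unless explicit names are supplied, and a C++ runtime must look bindings up by exactly those names or by index. The sketch below is a hedged illustration: the alexnet usage mirrors the README, while the `model_trt.engine` attribute and the `get_binding_name`/`binding_is_input` calls are the standard TensorRT Python API of that era and are assumed to be available here.

```python
import torch
from torchvision.models.alexnet import alexnet
from torch2trt import torch2trt

model = alexnet(pretrained=True).eval().cuda()
x = torch.ones((1, 3, 224, 224)).cuda()
model_trt = torch2trt(model, [x])

# List the binding names the engine actually exposes.
engine = model_trt.engine
for i in range(engine.num_bindings):
    print(i, engine.get_binding_name(i), engine.binding_is_input(i))

# Serialize the engine for use with the TensorRT C++ runtime.
with open('model.engine', 'wb') as f:
    f.write(engine.serialize())
```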
#395: I got wrong output when fp16_mode is True [label: bug] (opened Aug 31, 2020 by marigoold)
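For fp16 reports like #395, a quick triage is to convert the same model with and without `fp16_mode` and compare both against PyTorch, following the README's accuracy check. A minimal sketch; some absolute error is expected in fp16, but a large gap points at an overflowing layer or a converter bug worth attaching to a report:

```python
import torch
from torchvision.models.alexnet import alexnet
from torch2trt import torch2trt

model = alexnet(pretrained=True).eval().cuda()
x = torch.ones((1, 3, 224, 224)).cuda()

model_trt_fp32 = torch2trt(model, [x])
model_trt_fp16 = torch2trt(model, [x], fp16_mode=True)

y = model(x)
print('fp32 max abs err:', torch.max(torch.abs(y - model_trt_fp32(x))).item())
print('fp16 max abs err:', torch.max(torch.abs(y - model_trt_fp16(x))).item())
```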
#375: Torch2trt broken for latest L4T 32.4.3 release? Seeing "add_constant incompatible" errors. (opened Jul 28, 2020 by Jaftem)
#363: 'TypeError: add_constant(): incompatible function arguments' while executing torch.Tensor.reshape() (opened Jul 15, 2020 by przybyszewskiw)
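#375 and #363 describe the same add_constant() failure around torch.Tensor.reshape. A minimal repro sketch along these lines (a hypothetical Flatten module, not taken from either issue) is usually enough to confirm whether a given torch2trt/TensorRT combination is affected:

```python
import torch
from torch2trt import torch2trt

class Flatten(torch.nn.Module):
    def forward(self, x):
        # torch.Tensor.reshape inside forward() is what the reports above trigger on
        return x.reshape(x.size(0), -1)

model = Flatten().eval().cuda()
x = torch.randn(1, 3, 4, 4).cuda()
model_trt = torch2trt(model, [x])  # raises the add_constant() TypeError on affected versions
print(torch.max(torch.abs(model(x) - model_trt(x))))
```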