Failed to run the demo on Jetson Nano #428
Comments
Any luck with this?
This may be caused by insufficient main memory on the device. I have run it successfully on a Jetson TX2, which has 8 GB of main memory.
I have the same issue on the 4 GB Jetson Nano.
After enlarging the Linux swap space from the default 2 GB to 4 GB, it ran successfully.
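For reference, a common way to enlarge swap on L4T / Ubuntu is to add a swap file. This is a sketch, not an official Jetson procedure; the 4 GB size follows the comment above, and the `/swapfile` path is an assumption you can change:

```shell
# Create and enable a 4 GB swap file (requires root).
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make it persist across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# Verify the new swap size:
free -h
```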
On a Jetson Nano 4GB with swap space enlarged to 12GB, I get:

[TensorRT] ERROR: 4: Tensor: output_0 trying to set to TensorLocation::kHOST but only kDEVICE is supported (only network inputs may be on host)

Code:

# create some regular pytorch model...
model = PPLCNet_x2_5()
# create example data
x = torch.ones((1, 3, 224, 224))
# convert to TensorRT feeding sample data as input
model_trt = torch2trt(model, [x], fp16_mode=True, log_level=trt.Logger.INFO, strict_type_constraints=True)
for i in range(50):
I am using a Jetson Xavier NX and added 20 GB of swap. The process was still killed when converting the model, and only about 4 GB of swap was used. I don't know why.
On a Jetson TX2, enlarging the swap to 4 GB fixed the error. Thanks!
Dear Authors,
We have the following hardware and software configuration:
Hardware: Nvidia Jetson Nano 4GB
Software: 1. JetPack 4.4 (L4T 32.4.3); 2. PyTorch 1.6.0; 3. TensorRT 7.1.3.0
However, we failed to run even the demo example as follows:
import torch
from torch2trt import torch2trt
from torchvision.models.alexnet import alexnet
# create some regular pytorch model...
model = alexnet(pretrained=True).eval().cuda()
# create example data
x = torch.ones((1, 3, 224, 224)).cuda()
# convert to TensorRT feeding sample data as input
model_trt = torch2trt(model, [x])
y = model(x)
y_trt = model_trt(x)
# check the output against PyTorch
print(torch.max(torch.abs(y - y_trt)))
The process terminates with a "Killed" message. We notice that you have reported experimental results on the Jetson Nano. Could you suggest which software versions (e.g., PyTorch and TensorRT) you used?
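Since "Killed" here almost always means the kernel OOM killer stopped the conversion, a quick pre-flight check of free memory plus swap can tell you whether to add swap before retrying. This is an illustrative sketch, not part of torch2trt: the `meminfo_kb` and `enough_memory` helpers and the 6 GB threshold are my assumptions, based on the boards reported working in this thread (Linux only, since it reads /proc/meminfo):

```python
# Rough pre-flight memory check before running torch2trt on a
# memory-constrained Jetson board. Hypothetical helper code; the 6 GB
# threshold is an assumption drawn from this thread, not a measured
# requirement of torch2trt.

def meminfo_kb():
    """Parse /proc/meminfo into a dict of field -> size in kB."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # values are reported in kB
    return info

def enough_memory(min_total_gb=6.0):
    """Return True if available RAM plus free swap exceeds min_total_gb."""
    info = meminfo_kb()
    available_gb = (info["MemAvailable"] + info["SwapFree"]) / (1024 ** 2)
    return available_gb >= min_total_gb

if __name__ == "__main__":
    if enough_memory():
        print("Likely enough memory; OK to try the conversion.")
    else:
        print("Low memory; consider enlarging swap before converting.")
```

Run this before the torch2trt call; if it reports low memory, enlarge swap as the comments above describe and try again.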
Looking forward to hearing from you!
Best Regards,
Qiang Wang