Good day,

I'm trying to run the whole-body segmentation model (Model_lowers.pt) on a Colab GPU and I'm getting this error:

NotImplementedError: Could not run 'aten::slow_conv3d_forward' with arguments from the 'CUDA' backend.
Detected GPU: Tesla T4
CUDA version: 12.1
PyTorch installed: 2.1.0+cu121
Device line in inference.json: torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
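For reference, a minimal environment check along these lines (standard PyTorch calls only) reports the runtime details listed above:

import torch

# Report the runtime details listed above.
print(torch.__version__)           # PyTorch build, e.g. 2.1.0+cu121
print(torch.version.cuda)          # CUDA version the wheel was built against, e.g. 12.1
print(torch.cuda.is_available())   # True when the Colab GPU runtime is active
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # GPU model

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)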
Any advice to resolve this?
This looks like a CPU/GPU tensor device mismatch. Are you using only the GPU for inference? Are there any issues or inconsistencies in how inputs and outputs are converted between CPU and GPU?
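One way to narrow this down: the error says the slow_conv3d kernel has no CUDA implementation, which means the 3D convolution was not dispatched to the usual cuDNN path. That commonly happens when the model and the input end up on different devices or with an unexpected dtype, or when cuDNN is not visible to the runtime. A minimal sketch of the kind of check meant above (it assumes Model_lowers.pt is a TorchScript file; the path and input shape are placeholders):

import torch

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# conv3d on CUDA normally dispatches through cuDNN, so make sure it is visible.
print(torch.backends.cudnn.is_available(), torch.backends.cudnn.enabled)

# Load the TorchScript model directly onto the target device (path is a placeholder).
model = torch.jit.load('Model_lowers.pt', map_location=device).eval()

# Dummy 5D input (batch, channel, D, H, W); replace with the real pre-processed volume.
x = torch.rand(1, 1, 96, 96, 96, dtype=torch.float32, device=device)

# Model parameters and input should report the same CUDA device and float32.
p = next(model.parameters())
print(p.device, p.dtype)
print(x.device, x.dtype)

with torch.no_grad():
    out = model(x)
print(out.shape)

If the printouts disagree (a parameter still on cpu, or an unexpected dtype), explicitly moving and casting the model and input before inference is the first thing to try.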
I assume this question is about segmenting CT scans, right?
Have you tried starting the server from a terminal following these instructions? Once started, you could use the available REST APIs to interact with the server. Is there a reason for not using MONAI Label and 3DSlicer?
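In case it helps, here is a rough sketch of that REST workflow (the app directory, model name, image id, and port below are placeholders; the exact names come from your own server's /info/ and datastore listings). After starting the server from a terminal, e.g. with monailabel start_server --app <app_dir> --studies <image_dir> --conf models segmentation, it can be queried over HTTP:

import requests

BASE = "http://127.0.0.1:8000"  # adjust if the server is bound to a different host/port

# Ask the server which app and models it exposes.
info = requests.get(f"{BASE}/info/", timeout=30).json()
print(list(info.get("models", {}).keys()))

# Request inference for one study; the model name and image id are placeholders.
resp = requests.post(f"{BASE}/infer/segmentation",
                     params={"image": "<image_id>"},
                     timeout=600)
resp.raise_for_status()
print(resp.status_code, resp.headers.get("content-type"))

How the label volume in the response is saved depends on how the server is configured, so treat this as a starting point rather than a complete client.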