How much GPU memory is required during conversion? #105

Open
yezifeiafei opened this issue Oct 9, 2019 · 4 comments

Comments

yezifeiafei commented Oct 9, 2019

How much GPU memory is required during conversion? My original model uses about 3.5 GB of GPU memory, but the conversion fails on an RTX 2080 with 8 GB.

Here’s the error message:

Traceback (most recent call last):
  File "inference.py", line 166, in <module>
    model = init_model(FLAGS.model_path, config=config)
  File "inference.py", line 74, in init_model
    is_encrypted_model=is_encrypted_model)
  File "inference.py", line 42, in __init__
    model_trt = torch2trt(self.model, [x], fp16_mode=False)
  File "/usr/local/lib/python3.6/dist-packages/torch2trt/torch2trt.py", line 347, in torch2trt
    outputs = module(*inputs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/public/algorithms/tct/../../algorithms/tct/model.py", line 51, in forward
    x = self.backbone(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/public/algorithms/tct/../../algorithms/tct/resnet.py", line 211, in forward
    x = self.layer1(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/public/algorithms/tct/../../algorithms/tct/resnet.py", line 116, in forward
    out = self.conv3(out)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch2trt/torch2trt.py", line 190, in wrapper
    outputs = method(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 343, in forward
    return self.conv2d_forward(input, self.weight)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 340, in conv2d_forward
    self.padding, self.dilation, self.groups)
RuntimeError: CUDA out of memory. Tried to allocate 554.00 MiB (GPU 0; 7.76 GiB total capacity; 6.10 GiB already allocated; 381.31 MiB free; 53.57 MiB cached)

Is there any way to make it work?
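
For reference, a few settings that can lower peak GPU memory during conversion. This is only a minimal sketch, not something confirmed in this thread: the torchvision ResNet-50 stand-in and the 1×3×1024×1024 input shape are assumptions, while `fp16_mode` and `max_workspace_size` are existing `torch2trt` arguments.

```python
import torch
from torchvision.models import resnet50
from torch2trt import torch2trt

# Stand-in network and input shape (assumptions, not from the issue):
# a torchvision ResNet-50 and a single 3x1024x1024 image.
model = resnet50().cuda().eval()
x = torch.ones((1, 3, 1024, 1024)).cuda()

# Free cached allocations left over from loading/building the model.
torch.cuda.empty_cache()

# torch2trt traces the model by running a normal PyTorch forward pass,
# so disabling autograd keeps intermediate activations from being retained.
with torch.no_grad():
    model_trt = torch2trt(
        model,
        [x],
        fp16_mode=True,              # build the TensorRT engine in FP16
        max_workspace_size=1 << 25,  # cap TensorRT's scratch workspace (~32 MiB)
    )
```

If the out-of-memory error occurs in the PyTorch forward pass during tracing (as in the traceback above), shrinking the example input or wrapping the conversion in `torch.no_grad()` tends to matter more than the TensorRT-side settings.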

@dujiangsu

How did you solve the problem? Many thanks.


huanmx commented Nov 17, 2021

> How did you solve the problem? Many thanks.

Did you solve the problem? Thanks


Biaocsu commented Jan 25, 2022

> How did you solve the problem? Many thanks.
>
> Did you solve the problem? Thanks

I have met the same problem. Did you solve it? Thanks

@tldrafael

Same here. Did anyone find a workaround?
