'TypeError: add_constant(): incompatible function arguments' while executing torch.Tensor.reshape() #363

Open
przybyszewskiw opened this issue Jul 15, 2020 · 4 comments


For the following code:

import torch
from torch import nn
from torch2trt import torch2trt


class Example(nn.Module):

    def __init__(self, stride=8):
        super(Example, self).__init__()
        self.stride = stride

    def grid_anchors(self, grid_size):
        grid_height, grid_width = grid_size
        shifts_x = torch.arange(
            0, grid_width * self.stride, step=self.stride, dtype=torch.float32
        )
        shifts_y = torch.arange(
            0, grid_height * self.stride, step=self.stride, dtype=torch.float32
        )
        shift_y, shift_x = torch.meshgrid(shifts_y, shifts_x)
        shift_x = shift_x.reshape(-1)
        shift_y = shift_y.reshape(-1)
        shifts = torch.stack((shift_x, shift_y, shift_x, shift_y), dim=1)
        return shifts.view(-1, 4)

    def forward(self, feature_map):
        grid_size = feature_map.shape[-2:]
        return self.grid_anchors(grid_size)


if __name__ == '__main__':
    model = Example()
    x = torch.ones(3, 400, 500)
    print(model(x))
    model_trt = torch2trt(model, [x])

I get an error:

 File "/opt/conda/lib/python3.6/site-packages/torch2trt-0.1.0-py3.6.egg/torch2trt/torch2trt.py", line 151, in trt_
TypeError: add_constant(): incompatible function arguments. The following argument types are supported:
    1. (self: tensorrt.tensorrt.INetworkDefinition, shape: tensorrt.tensorrt.Dims, weights: tensorrt.tensorrt.Weights) -> tensorrt.tensorrt.IConstantLayer

Invoked with: <tensorrt.tensorrt.INetworkDefinition object at 0x7f845d574110>, (400, 500), array([[   0.,    8.,   16., ..., 3976., 3984., 3992.],
       [   0.,    8.,   16., ..., 3976., 3984., 3992.],
       [   0.,    8.,   16., ..., 3976., 3984., 3992.],
       ...,
       [   0.,    8.,   16., ..., 3976., 3984., 3992.],
       [   0.,    8.,   16., ..., 3976., 3984., 3992.],
       [   0.,    8.,   16., ..., 3976., 3984., 3992.]], dtype=float32)

I'm using PyTorch version 1.6.0a0+9907a3e and TensorRT 7.1.2-1+cuda11.0. I have tried torch2trt both with and without plugins.
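If I read the error right, add_constant expects a Dims-compatible shape plus a Weights-compatible array (e.g. a contiguous numpy array of a supported dtype). For comparison, a standalone call that matches that signature looks roughly like this (an illustration of the TensorRT Python API only, not a fix):

import numpy as np
import tensorrt as trt

# Illustration only: the argument types the binding in the error above accepts,
# i.e. a Dims-compatible tuple and a contiguous numpy array of a supported dtype.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network()
values = np.ascontiguousarray(np.arange(0, 4000, 8, dtype=np.float32))
layer = network.add_constant((len(values),), values)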


jaybdub commented Jul 17, 2020

Hi @przybyszewskiw,

Thanks for sharing this issue.

There is an experimental PR to support adding custom converters for user-defined methods.

#175

Since the grid appears to be constant for a given image shape, I'm curious whether this approach would help here.

For example,

def convert_example(ctx):
    module = ctx.method_args[0]
    feature_map = ctx.method_args[1]
    grid_size = feature_map.shape[-2:]

    # ... (insert TensorRT code to add grid as constant)

    output = ctx.method_return

    output._trt = ...  # set output._trt to the constant layer's output
    
class Example(nn.Module):
 
    @tensorrt_method(convert_example)
    def forward(self, feature_map):
        # ...
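
To be concrete, the converter body could bake the eagerly-computed grid into the network as a constant, roughly like below. This is an untested sketch; it assumes the PR exposes the same ctx.network / ctx.method_return context that regular torch2trt converters use.

import numpy as np

def convert_example(ctx):
    # Eager-mode output of forward(); for a fixed input shape this grid is constant.
    output = ctx.method_return

    # Bake the grid into the engine as a constant layer.
    grid = np.ascontiguousarray(output.detach().cpu().numpy().astype(np.float32))
    layer = ctx.network.add_constant(tuple(grid.shape), grid)

    # Attach the TensorRT tensor so downstream converters can consume it.
    output._trt = layer.get_output(0)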

Please let me know if this helps or you run into any issues.

Best,
John


czs1886 commented Jul 28, 2020

I got a similar error here:

TypeError: add_constant(): incompatible function arguments. The following argument types are supported:
    1. (self: tensorrt.tensorrt.INetworkDefinition, shape: tensorrt.tensorrt.Dims, weights: tensorrt.tensorrt.Weights) -> tensorrt.tensorrt.IConstantLayer

Invoked with: <tensorrt.tensorrt.INetworkDefinition object at 0x7fa66bc4aca8>, (336,), array([ 0 ..... 335])

I think mine is caused by the unsupported function torch.arange, since the conversion prints:
Warning: Encountered known unsupported method torch.arange
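
As a possible workaround I've been looking at registering a custom converter that treats the torch.arange result as a constant, something like this (untested sketch, assuming the standard torch2trt converter API):

import numpy as np
from torch2trt import tensorrt_converter

@tensorrt_converter('torch.arange')
def convert_arange(ctx):
    # The eager result is already materialized at conversion time, so bake it in.
    output = ctx.method_return
    # Cast to float32 for simplicity; int64 weights are not supported by TensorRT.
    values = np.ascontiguousarray(output.detach().cpu().numpy().astype(np.float32))
    layer = ctx.network.add_constant(tuple(values.shape), values)
    output._trt = layer.get_output(0)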


DuyguSerbes commented Oct 21, 2021

Hello, I also faced the same problem. Were you able to solve it?

  File "/media/duygu/3a331d5c-a3b3-468b-b8f7-06df52fefee6/ocakirog/tensorrt_local/torch2trt/torch2trt/torch2trt.py", line 364, in wrapper
    ret = attr(*args, **kwargs)
TypeError: add_constant(): incompatible function arguments. The following argument types are supported:
    1. (self: tensorrt.tensorrt.INetworkDefinition, shape: tensorrt.tensorrt.Dims, weights: tensorrt.tensorrt.Weights) -> tensorrt.tensorrt.IConstantLayer

@ultmaster

For me, this was the line that triggered the error.

  File "/home/xxx/python3.8/site-packages/torch/nn/modules/batchnorm.py", line 147, in forward
    self.num_batches_tracked = self.num_batches_tracked + 1  # type: ignore[has-type]

It didn't make sense that TensorRT wouldn't support batch norm.

Then I realized that I had forgotten to call model.eval().
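
In case anyone else lands here: the fix on my side was simply making sure eval() is called before conversion, along these lines (MyModel and the input shape are placeholders for your own setup):

import torch
from torch2trt import torch2trt

model = MyModel().cuda().eval()  # eval() stops BatchNorm from updating num_batches_tracked
x = torch.ones((1, 3, 224, 224)).cuda()  # placeholder input shape
model_trt = torch2trt(model, [x])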
