accuracy drops a lot in fp16 mode #879
Hi, the result on my machine with your code is 0.6877 when fp16_mode=False is set (and 5.3 when fp16_mode=True). Is this error normal?
0.6877 is likely to give you the wrong outputs.
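For context, the figures above are presumably an aggregate difference between the PyTorch output and the TensorRT output for the same input. Below is a minimal sketch of that kind of check, assuming `model` is the original PyTorch model, `model_trt` its TensorRT conversion, and `data` a sample GPU input batch; the max-of-absolute-differences reduction is an illustrative choice, not necessarily the exact metric that produced the numbers above:

```python
import torch

# Run the same input through both models.
output_pt = model(data)
output_trt = model_trt(data)

# Aggregate the element-wise error into a single number, here the
# largest absolute difference between corresponding elements.
max_abs_diff = torch.max(torch.abs(output_pt - output_trt))
print(max_abs_diff.item())
```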
Instead of testing the absolute sum of differences between the two models, I believe an element-wise check, verifying that no element-wise difference exceeds a certain threshold, might be a more accurate measure. For example (I did not test this code), we can check whether all elements of the source and target tensors are within a certain absolute tolerance:

```python
import numpy as np

output_pt = model(data)
output_trt = model_trt(data)

# Set this value to something that seems appropriate to you;
# 1e-5 is generally reasonable.
absolute_tolerance = 1e-5

# np.allclose cannot consume CUDA tensors directly, so move the
# outputs to the CPU and convert them to NumPy arrays first.
np.allclose(output_pt.detach().cpu().numpy(),
            output_trt.detach().cpu().numpy(),
            atol=absolute_tolerance)
```
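One caveat with the snippet above when fp16_mode=True: FP16 carries only about three decimal digits of precision, so an absolute tolerance of 1e-5 will fail even for a perfectly healthy FP16 engine. A looser check that combines relative and absolute tolerances may be more informative for FP16 comparisons; the specific values below are illustrative assumptions, not tolerances suggested anywhere in this thread:

```python
import torch

# Cast both outputs to float32 so the dtypes match, then compare with
# tolerances scaled to FP16 precision rather than FP32 precision.
close = torch.allclose(output_pt.float(), output_trt.float(),
                       rtol=1e-2, atol=1e-3)
print(close)
```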
`np.allclose(output_pt, output_trt, atol=absolute_tolerance)` returns `False`.
My model's accuracy drops a lot when I convert it to FP16 mode; even a pretrained resnet34 suffers an accuracy drop in FP16 mode. If I set fp16_mode=False, then the output is
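For reference, here is a self-contained sketch of the experiment being described, assuming the torch2trt converter API (`torch2trt(model, [data], fp16_mode=...)`); the pretrained resnet34 and the dummy input are illustrative stand-ins for the actual model and data:

```python
import torch
import torchvision
from torch2trt import torch2trt

# Illustrative model and input; substitute your own.
model = torchvision.models.resnet34(pretrained=True).cuda().eval()
data = torch.randn(1, 3, 224, 224).cuda()

# Build one engine per precision mode from the same model.
model_trt_fp32 = torch2trt(model, [data], fp16_mode=False)
model_trt_fp16 = torch2trt(model, [data], fp16_mode=True)

with torch.no_grad():
    output_pt = model(data)

# Compare each engine against the PyTorch reference output.
print(torch.max(torch.abs(output_pt - model_trt_fp32(data))))
print(torch.max(torch.abs(output_pt - model_trt_fp16(data))))
```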