int8 ERROR when I run convert_tflite.py #224
Comments
Any answers to this issue? I'm having the same one.
Can someone give me some suggestions? Thank you.
A similar issue occurred to me; I use yolov3. Any suggestions?
Fix in
I ran this command:

```
python convert_tflite.py --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416-int8.tflite --quantize_mode int8 --dataset ./coco_dataset/coco/val207.txt
```
And this error occurred:
```
calibration image /home/soc507/darknet/data/coco/images/val2014//COCO_val2014_000000183693.jpg
calibration image /home/soc507/darknet/data/coco/images/val2014//COCO_val2014_000000150834.jpg
calibration image /home/soc507/darknet/data/coco/images/val2014//COCO_val2014_000000140270.jpg
I0907 13:29:51.712005 139893978974016 convert_tflite.py:48] model saved to: ./checkpoints/yolov3-tiny-int8.tflite
Traceback (most recent call last):
  File "convert_tflite.py", line 76, in <module>
    app.run(main)
  File "/home/aluds/anaconda3/envs/tf230/lib/python3.5/site-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/home/aluds/anaconda3/envs/tf230/lib/python3.5/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "convert_tflite.py", line 72, in main
    demo()
  File "convert_tflite.py", line 52, in demo
    interpreter.allocate_tensors()
  File "/home/aluds/anaconda3/envs/tf230/lib/python3.5/site-packages/tensorflow/lite/python/interpreter.py", line 243, in allocate_tensors
    return self._interpreter.AllocateTensors()
RuntimeError: tensorflow/lite/kernels/dequantize.cc:61 op_context.input->type == kTfLiteUInt8 || op_context.input->type == kTfLiteInt8 || op_context.input->type == kTfLiteInt16 || op_context.input->type == kTfLiteFloat16 was not true. Node number 64 (DEQUANTIZE) failed to prepare.
```
Question: Can someone give me some suggestions? Thank you.
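Not a confirmed fix for this repo, but for reference: this `DEQUANTIZE` failure at `allocate_tensors()` typically means the converter left a float dequantize node in the graph instead of producing pure int8 kernels. Restricting the converter to `TFLITE_BUILTINS_INT8` with a representative dataset usually avoids it. Below is a minimal, self-contained sketch using a stand-in Keras model; the model, dataset generator, and all names here are illustrative assumptions, not taken from `convert_tflite.py`:

```python
import numpy as np
import tensorflow as tf

# Hypothetical tiny model standing in for the YOLO network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
])

def representative_dataset():
    # Calibration samples, analogous to the COCO images in the log above.
    for _ in range(10):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict conversion to pure int8 kernels so no float DEQUANTIZE node
# is emitted into the graph.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()

# The traceback above fired here; with the settings above the model
# allocates cleanly and exposes int8 input/output tensors.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["dtype"])
```

If any op in the real network has no int8 kernel, `convert()` will raise an error naming it, which at least pinpoints which layer forces the float fallback.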