
int8 ERROR when I run convert_tflite.py #224

Open
aluds123 opened this issue Sep 7, 2020 · 5 comments

Comments

@aluds123

aluds123 commented Sep 7, 2020

I ran this command:

```shell
python convert_tflite.py --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416-int8.tflite --quantize_mode int8 --dataset ./coco_dataset/coco/val207.txt
```

And got this error:

```
calibration image /home/soc507/darknet/data/coco/images/val2014//COCO_val2014_000000183693.jpg
calibration image /home/soc507/darknet/data/coco/images/val2014//COCO_val2014_000000150834.jpg
calibration image /home/soc507/darknet/data/coco/images/val2014//COCO_val2014_000000140270.jpg
I0907 13:29:51.712005 139893978974016 convert_tflite.py:48] model saved to: ./checkpoints/yolov3-tiny-int8.tflite
Traceback (most recent call last):
  File "convert_tflite.py", line 76, in <module>
    app.run(main)
  File "/home/aluds/anaconda3/envs/tf230/lib/python3.5/site-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/home/aluds/anaconda3/envs/tf230/lib/python3.5/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "convert_tflite.py", line 72, in main
    demo()
  File "convert_tflite.py", line 52, in demo
    interpreter.allocate_tensors()
  File "/home/aluds/anaconda3/envs/tf230/lib/python3.5/site-packages/tensorflow/lite/python/interpreter.py", line 243, in allocate_tensors
    return self._interpreter.AllocateTensors()
RuntimeError: tensorflow/lite/kernels/dequantize.cc:61 op_context.input->type == kTfLiteUInt8 || op_context.input->type == kTfLiteInt8 || op_context.input->type == kTfLiteInt16 || op_context.input->type == kTfLiteFloat16 was not true. Node number 64 (DEQUANTIZE) failed to prepare.
```

Question:
Can someone give me some suggestions? Thank you.
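For reference, a full-integer TFLite conversion that avoids this kind of DEQUANTIZE failure generally needs both a representative dataset for calibration and a restriction to int8-only ops. Here is a minimal self-contained sketch, using a toy Keras model in place of the YOLOv4 checkpoint and random arrays in place of the COCO calibration images:

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for the real network; in the issue's script this would be
# the saved yolov4-416 model loaded from ./checkpoints.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8, 8, 3)),
    tf.keras.layers.Conv2D(4, 3, activation="relu"),
])

def representative_dataset():
    # Calibration samples; convert_tflite.py reads real images from the
    # file given via --dataset instead of random data.
    for _ in range(10):
        yield [np.random.rand(1, 8, 8, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict the converter to int8 kernels so the graph does not end up
# with float DEQUANTIZE nodes that the int8 runtime cannot prepare.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

# allocate_tensors() is exactly the call that raised the RuntimeError
# in the traceback above; with a fully int8 model it succeeds.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
```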

@BernardinD

Answers to this issue? I'm having the same one

1 similar comment
@drahmad89

Answers to this issue? I'm having the same one

@4yougames

Can someone give me some suggestions? Thank you.

@YLTsai0609

A similar issue occurred for me; I'm using yolov3. Any suggestions?

@istomoya

The fix in
#214
worked for converting yolov3-tiny to TF-Lite int8.
I had to comment out lines in save_model.py and add those lines to detect.py.
