Is the inference time of MobileNetV2 smaller than V1? #4
Comments
I have tested it.
@yiran-THU Thank you for your response. I read the paper, and it says that the inference time of MobileNetV2 should be less than MobileNetV1. Are there any other versions of MobileNetV2?
The original implementation came from here: In this project, the deploy model of MobileNet-v1 was generated by merge.py, which folds the batch norm and scale layers into the convolution layers, so it runs faster.
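The folding described above works because a batch norm plus scale layer applies a per-channel affine transform, which can be absorbed into the preceding convolution's weights and bias. A minimal numpy sketch of that idea (this is an illustration of the general technique, not the actual code in merge.py, and the function name `fold_bn_into_conv` is hypothetical):

```python
import numpy as np

def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm + Scale parameters into the preceding convolution.

    W:      conv weights, shape (out_channels, in_channels, kH, kW)
    b:      conv bias, shape (out_channels,)
    gamma:  per-channel scale (Scale layer weight)
    beta:   per-channel shift (Scale layer bias)
    mean:   BatchNorm running mean, per output channel
    var:    BatchNorm running variance, per output channel
    """
    scale = gamma / np.sqrt(var + eps)          # per-channel multiplier
    W_folded = W * scale[:, None, None, None]   # rescale each output filter
    b_folded = (b - mean) * scale + beta        # fold the shift into the bias
    return W_folded, b_folded
```

After folding, the network computes identical outputs with one fewer layer per conv block, which is why the merged deploy model is faster at inference time.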
@eric612 Hi, so if I use merge.py to deploy the MobileNetV2 model, would it be faster as well?
Hello, in this project, is MobileNetV2 faster? If so, what fps does V2 achieve?