LAMMPS inference is slow, but running the model in Python is fast #57
Comments
As we all know, running a TorchScript model from C++ should give good performance, but with this model it seems bad.
The TorchScript compiler can take a few iterations to warm up, which often manifests in very slow time steps initially.
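To see the warm-up effect in isolation, here is a minimal sketch that times each of the first few calls; the scripted toy model is a hypothetical stand-in for the deployed Allegro model (substitute `torch.jit.load(...)` and a real input to check against it):

```python
import time

import torch

# Hypothetical stand-in for the deployed model; replace with
# torch.jit.load("deployed.pth") and a matching input to test for real.
model = torch.jit.script(
    torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.SiLU(), torch.nn.Linear(64, 1))
).to("cuda").eval()
x = torch.randn(1000, 64, device="cuda")

with torch.no_grad():
    for i in range(10):
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        model(x)
        torch.cuda.synchronize()
        # The first few calls are much slower: the TorchScript profiling
        # executor records tensor shapes and recompiles an optimized graph.
        print(f"call {i}: {(time.perf_counter() - t0) * 1e3:.2f} ms")
```

After the first handful of calls the per-step time should settle, which is why steady-state timings are the ones worth comparing.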
I have the logs; can you help me understand why running the model from C++ is not faster than running it from Python? This is the Python log; this is the pair_allegro log.
In pair_allegro_kokkos.cpp, line 296, the model takes an input and, after the compute, hands an output back to LAMMPS. I timed the call `auto output = this->model.forward(input_vector).toGenericDict();`: on an RTX 3060, one step takes 0.4 s. I also wrote a Python script that loads the model and feeds it an input, and there one step takes only 0.05 s!
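One caveat when timing the Python path: CUDA kernels launch asynchronously, so without a synchronize the clock can stop while the device is still working. A minimal sketch of a steady-state measurement, again with a toy stand-in model rather than the real one:

```python
import time

import torch

# Hypothetical stand-in; replace with torch.jit.load(...) on the deployed
# model and the same input that pair_allegro constructs.
model = torch.jit.script(torch.nn.Linear(128, 128)).to("cuda").eval()
x = torch.randn(4096, 128, device="cuda")

with torch.no_grad():
    for _ in range(10):  # let the TorchScript optimizer settle first
        model(x)
    torch.cuda.synchronize()

    t0 = time.perf_counter()
    for _ in range(100):
        model(x)
    # Without this synchronize the timer stops while the GPU may still be
    # working, which makes the Python path look faster than it really is.
    torch.cuda.synchronize()
    print(f"steady state: {(time.perf_counter() - t0) / 100 * 1e3:.2f} ms/step")
```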
So what is the problem in LAMMPS, and how can I make the model inference fast?

Thanks!