Replies: 4 comments 3 replies
-
Hi @ipcamit,
When you call [...]
However, I think your inability to reproduce below this threshold of numerics (note the final scalings, and how they relate numerical noise to real units) is also probably largely due to GPU nondeterminism, which can have a meaningful effect [...]
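As a rough illustration of what I mean by the scalings (the numbers below are purely illustrative, not from your run), the spacing between representable float32 values near a typical coordinate already corresponds to about 1e-6 in real units:

```python
import numpy as np

# Illustrative only: how float32 round-off maps onto "real" units.
# Near a coordinate of ~10 (Angstrom), the gap between adjacent
# representable float32 values is already ~1e-6, so any agreement
# below that scale is dominated by numerical noise.
coord32 = np.float32(10.0)
print(f"float32 spacing near 10.0: {np.spacing(coord32):.2e}")           # ~9.5e-07
print(f"float64 spacing near 10.0: {np.spacing(np.float64(10.0)):.2e}")  # ~1.8e-15
```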
-
Thank you for the clarification. Regarding GPU nondeterminism, I am using a single-threaded CPU for this test.
Actually, I also wanted to provide an FYI and ask for comments. To demonstrate it, I have trained a NequIP model using the [...]
Could you suggest any pitfalls I should be aware of? Any comments to make this process easier? I can provide more details if needed. The biggest pro of this approach is that NequIP can work in a distributed environment like Allegro, and for a trained model, the end user would just need to add one line to the LAMMPS script:
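Something roughly along these lines, assuming the trained model has been installed through the KIM API (the model name below is only a placeholder; depending on the LAMMPS version the commands are spelled kim_init / kim_interactions):

```
# Sketch of a LAMMPS input using a KIM portable model.
# The model name is a placeholder -- substitute the real KIM model ID.
kim init         NequIP_Si_Example__MO_000000000000_000 metal
# ... box / atom setup (read_data, create_atoms, ...) ...
kim interactions Si
```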
As the OpenKIM API is part of the LAMMPS core, users need not install any extra plugins. Users will also be able to run the same model in all supported MD packages like ASE, DL_POLY, GULP, etc., with support for OpenMM coming in the future.
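For instance, in ASE the same model should be usable through its KIM calculator (again a placeholder model name; this assumes the kim-api and kimpy packages are installed):

```python
from ase.build import bulk
from ase.calculators.kim import KIM  # ASE's interface to KIM portable models

# Placeholder model name -- replace with the actual KIM model ID once published
atoms = bulk("Si", "diamond", a=5.43)
atoms.calc = KIM("NequIP_Si_Example__MO_000000000000_000")
print(atoms.get_potential_energy())
```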
-
I see @ipcamit, very nice! Glad to hear that you are working to integrate NequIP here! I'd certainly be curious to read about your graph unrolling approach; would you post a link to the preprint when it is public? I'm also happy to discuss by email (listed in my GitHub profile here). I'm assuming you're also writing an integration for Allegro since that comes "free" and without all the graph-unrolling necessary? You would still have to build LAMMPS against libtorch, right?
-
@Linux-cpp-lisp Sorry for reviving such an old thread, but I thought you might be interested in a follow-up: we were able to parallelize and deploy NequIP models. Currently a very modest model for Si is already up (trained on the small GAP Si dataset), but others will follow soon! It can run on multiple GPUs and nodes without any issue, and performance-wise it matches the original model. Its benchmark results (predicted lattice constants, etc.) can be accessed here:
We also provide an easy-to-use guide for porting existing trained models to the KIM framework:
The paper will be out "soon"! Thank you for your support.
-
I am trying to construct a NequIP model manually on my own and compare it with the model generated by NequIP. While my results do match NequIP's within 10 eV, I can't get an exact numerical match (let's say, up to single precision) because of what I believe is small random noise in the input data.
For example, given below are the coordinates from an XYZ file:
Here is the output when I read these coordinates:
Below is the output of the ASEDataset class:
I generated the ASEDataset by passing the following config directly to the dataset_from_config function. It seems like it is reading the data in 32-bit precision and then using it as 64-bit. Is there a way to overcome this behavior?
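To make the effect concrete, here is a minimal sketch (the coordinate below is illustrative, not taken from my actual file):

```python
import numpy as np

# Illustrative coordinate, written with full double precision in the file
x_file = np.float64(2.7153819203758301)

# If the reader first parses the text into float32 and only afterwards
# promotes it to float64, the low-order digits are already gone: the
# promoted value is exact in float64 but differs from the file value.
x_via_float32 = np.float64(np.float32(x_file))

print(f"parsed as float64 directly : {x_file:.17g}")
print(f"via float32 then promoted  : {x_via_float32:.17g}")
print(f"difference                 : {abs(x_file - x_via_float32):.2e}")  # ~1e-7
```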