Hi @jangwon00. Indeed, sharding and chunking are not implemented. If I recall correctly, this won't work with sharding and will give several incomprehensible jax errors.

Let me point out a very, very important point, however. See for example Figure 2 from that paper, where we show that the gradient estimator of netket_fidelity (…). Essentially, this is telling you that you should compute the gradient without CV (control variates), but compute the infidelity itself with CV. This is even more striking if you look at the stability of natural gradient descent (Figure 3).

In short: be careful if you use netket_fidelity, as it is very hard to use in a stable way.
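For readers unfamiliar with control variates (CV): a CV estimator subtracts a correlated quantity whose mean is known exactly, leaving the estimate unbiased while shrinking its variance. The snippet below is a generic, plain-Python illustration of that idea (estimating E[exp(U)] for uniform U, with U itself as the control variate); it is not the netket_fidelity estimator, whose details are in the paper.

```python
import math
import random
import statistics

random.seed(0)

# Monte Carlo estimate of E[exp(U)] for U ~ Uniform(0, 1); true value is e - 1.
# Control variate: U itself, whose mean (1/2) is known exactly.
# The CV estimator exp(U) - c * (U - 1/2) has the same expectation as exp(U)
# but a much smaller variance, because exp(U) and U are strongly correlated.
samples = [random.random() for _ in range(10_000)]

plain = [math.exp(u) for u in samples]

c = 1.69  # near-optimal coefficient cov(exp(U), U) / var(U)
cv = [math.exp(u) - c * (u - 0.5) for u in samples]

var_plain = statistics.variance(plain)
var_cv = statistics.variance(cv)
print(var_cv < var_plain)  # the CV estimator has lower variance
```

The point of the comment above is that this trick helps the *value* of the infidelity but, per Figure 2, should not be applied to its gradient.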
import netket as nk
import netket_fidelity as nkf

# Hilbert space of 4 spin-1/2 sites with a local Metropolis sampler
hi = nk.hilbert.Spin(0.5, 4)
sampler = nk.sampler.MetropolisLocal(hilbert=hi, n_chains_per_rank=10)

# Complex-parameter RBM ansatz
model = nk.models.RBM(alpha=1, param_dtype=complex, use_visible_bias=False)

# Variational and target states; chunk_size limits memory per evaluation
vs = nk.vqs.MCState(sampler=sampler, model=model, n_samples=10, chunk_size=5)
vs_target = nk.vqs.MCState(sampler=sampler, model=model, n_samples=10, chunk_size=5)

optimizer = nk.optimizer.Adam()
driver = nkf.driver.InfidelityOptimizer(vs_target, optimizer, variational_state=vs)

log = nk.logging.RuntimeLog()
driver.run(300, out=log)
##############################################################################
Thank you for this remarkable contribution to NQS.
I'm trying to use fidelity optimization or pTVMC for larger models, and I've noticed that I need either sharding or chunking. As far as I know, nkf doesn't support them. Would it be tricky to customize, or do you by any chance know a simple way to apply chunking or sharding to nkf?
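For context, the chunking pattern in question just means evaluating a function over a large batch in fixed-size pieces so that peak memory scales with the chunk rather than the whole batch. A generic plain-Python sketch of that pattern (not nkf's API; in a JAX codebase one would do the equivalent with a scan/map over reshaped chunks):

```python
import math

def chunked_apply(fn, xs, chunk_size):
    """Evaluate fn element-wise on xs, processing at most chunk_size items
    at a time and concatenating the results. This is the memory/compute
    trade-off that NetKet's chunk_size option makes for log-amplitude
    evaluations: same result, bounded peak memory."""
    out = []
    for start in range(0, len(xs), chunk_size):
        out.extend(fn(x) for x in xs[start:start + chunk_size])
    return out

batch = list(range(10))
result = chunked_apply(math.sqrt, batch, chunk_size=3)
```

The result is identical to mapping `fn` over the full batch; only the evaluation schedule changes.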