Intro
Hi!
I am using MJX for RL in motor control scenarios.
My setup
I use MJX (prerelease/HEAD version) on a WSL setup.
What's happening? What did you expect?
I expected initializing an MJX model to be faster when calling `make_data()` on an MJX model, compared to creating the data struct with CPU MuJoCo and then calling `put_data()`. I expected this to be especially true with a GPU backend. However, I observe the contrary when quantifying execution times with `timeit` (~2x the execution time using `make_data()`, more with complex models).

When I use a CPU backend, `make_data()` is indeed faster, as expected.

Steps for reproduction

Example code with `timeit` and the MuJoCo humanoid:
Minimal model for reproduction
No response
Code required for reproduction
No response
Confirmations