I have already started experimenting with even smaller float data types (as used in GPUs), because this would accelerate training. So far my experiments have not been successful, but who knows, this might change in the future with better compiler support and the right libraries. Therefore, knowing the code locations and having a dedicated data type is still very helpful.
But I also don't think anyone still needs the old double implementation. We should unconditionally use 32-bit float.