support adam bf16 state #1465

Open · XiaobingSuper wants to merge 3 commits into main from xiaobing/adam_bf16_state
Conversation

@XiaobingSuper commented Feb 8, 2025

Description

This PR adds support for keeping the Adam optimizer state in BF16 to reduce memory usage. We have tested both smaller models and large models (LLMs) and observe similar convergence even with the lower-precision BF16 state. The DeepSeek-V3 report also uses BF16 optimizer state to reduce training memory usage.
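For context, here is a minimal PyTorch sketch of the idea: keep the Adam moments in BF16 (halving optimizer-state memory versus FP32) and upcast to FP32 only inside the step. The function and argument names are illustrative, not this PR's fused implementation.

```python
import torch

# Minimal sketch (not this PR's fused kernel): store exp_avg / exp_avg_sq as
# torch.bfloat16 and upcast to FP32 only for the per-step update arithmetic.
def adam_step_with_bf16_state(param, grad, exp_avg, exp_avg_sq, step,
                              lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
    # Upcast the BF16 moments and the gradient to FP32 for the math.
    m = exp_avg.float()
    v = exp_avg_sq.float()
    g = grad.float()

    m.mul_(betas[0]).add_(g, alpha=1 - betas[0])
    v.mul_(betas[1]).addcmul_(g, g, value=1 - betas[1])

    bias1 = 1 - betas[0] ** step
    bias2 = 1 - betas[1] ** step
    param.data.add_((m / bias1) / ((v / bias2).sqrt() + eps), alpha=-lr)

    # Round the updated moments back to BF16 for storage; this is where the
    # memory saving comes from.
    exp_avg.copy_(m)
    exp_avg_sq.copy_(v)
```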

Type of change

  • Documentation change (change only to the documentation, either a fix or a new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

Please list the changes introduced in this PR:

  • Change A
  • Change B

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

@XiaobingSuper force-pushed the xiaobing/adam_bf16_state branch from 035e623 to 89477f9 on February 8, 2025 01:50
Signed-off-by: XiaobingSuper <[email protected]>
@XiaobingSuper force-pushed the xiaobing/adam_bf16_state branch from 298ee6f to 990507e on February 8, 2025 01:55
@XiaobingSuper (Author) commented Feb 10, 2025

@timmoon10 please help review it, thanks.

Comment on lines +630 to +633
DISPATCH_DOUBLE_FLOAT_HALF_AND_BFLOAT(
m_in_type, 2, "adam",
DISPATCH_DOUBLE_FLOAT_HALF_AND_BFLOAT(
v_in_type, 3, "adam",
Collaborator commented:
This will involve increasing the number of template instantiations by 16x. As an alternative approach, how about we cast the Adam state to FP32 and reuse the existing template instantiations? This is the approach we use for FP16 state. We can consider alternative approaches like JIT compilation if the memory and compute overhead become burdensome.
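A rough sketch of this alternative on the Python side is below; `fp32_adam_kernel` is a placeholder for the existing FP32-state entry point, and none of these names come from the actual code.

```python
# Sketch of the suggested approach: persist the moments in BF16, but hand the
# already-instantiated FP32-state kernel FP32 temporaries and cast back afterwards.
# All inputs are lists of torch.Tensor; fp32_adam_kernel is a placeholder.
def adam_bf16_state_via_fp32_kernel(params, grads, exp_avgs_bf16, exp_avg_sqs_bf16,
                                    fp32_adam_kernel, **adam_kwargs):
    # Upcast the BF16 moments into scratch FP32 buffers (transient memory/compute cost).
    exp_avgs_fp32 = [m.float() for m in exp_avgs_bf16]
    exp_avg_sqs_fp32 = [v.float() for v in exp_avg_sqs_bf16]

    # Reuse the existing FP32-state template instantiation unchanged.
    fp32_adam_kernel(params, grads, exp_avgs_fp32, exp_avg_sqs_fp32, **adam_kwargs)

    # Write the updated moments back to the persistent BF16 storage.
    for dst, src in zip(exp_avgs_bf16, exp_avgs_fp32):
        dst.copy_(src)
    for dst, src in zip(exp_avg_sqs_bf16, exp_avg_sqs_fp32):
        dst.copy_(src)
```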

Comment on lines +287 to +289
elif dtype == torch.bfloat16:
assert state[state_name].dtype == torch.bfloat16
unscaled = state[state_name]
Collaborator commented:
It's odd that the FP16 case involves per-tensor scaling while the BF16 case does not. This is not necessarily a problem with this PR, but it is a sign that the per-tensor scaling logic does not generalize and that its design should be improved.
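To illustrate the asymmetry, here is a simplified sketch with hypothetical names (not the module's actual scaling code): FP16's narrow dynamic range is why the existing path carries a per-tensor scale, while BF16 shares FP32's exponent range and is stored directly.

```python
import torch

# Hypothetical illustration of the two state paths. The FP16 branch mirrors the idea
# of per-tensor scaling (store scaled values, divide the scale back out on read);
# the BF16 branch, as in this PR, stores the tensor directly with no scale.
def read_state(state, state_name, dtype):
    if dtype == torch.float16:
        # Stored as (value * scale) rounded to FP16; undo the scale on read.
        unscaled = state[state_name].float() / state["scales"][state_name]
    elif dtype == torch.bfloat16:
        assert state[state_name].dtype == torch.bfloat16
        unscaled = state[state_name]
    else:
        unscaled = state[state_name]
    return unscaled
```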
