supporting float32 on Scalar values #3816

Open
jjsjann123 opened this issue Feb 3, 2025 · 0 comments
Tracking the user-experience issue originally raised in an issue comment here.

Consider translating a user program like this:

# t18 = prims.where(t17, -0.0, 0.0) # t18: "cuda:0 f32[4, 2, 3]"
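For reference, eager PyTorch (whose semantics Thunder follows) promotes the two Python float scalars to the default dtype, so the expected output dtype is float32. A minimal reference sketch, with shapes just mirroring the trace above:

```python
import torch

# Reference semantics: with a boolean predicate and two Python float scalars,
# torch.where promotes the scalars to the default dtype (float32 by default).
t17 = torch.rand(4, 2, 3, device="cuda") > 0.5
t18 = torch.where(t17, -0.0, 0.0)
print(t18.dtype)  # torch.float32, matching the f32 annotation in the trace
```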

We translate this by calling nvFuser's `where` on (boolean_tv, scalar_1, scalar_2).
The type promotion/inference logic determines the output dtype of `where` from input[1] and input[2].

This operation produces an output TensorView, but because nvFuser represents all floating-point scalars as double, it generates a double tensor as output instead of the float32 that the Thunder script expects (Thunder treats the input scalars as float).
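A minimal sketch of what the translated fusion looks like with the nvFuser Python frontend (exact frontend signatures may differ across versions; the `shape=[-1, -1, -1]` placeholder just stands in for the symbolic trace shape):

```python
import torch
from nvfuser import FusionDefinition, DataType

with FusionDefinition() as fd:
    t17 = fd.define_tensor(shape=[-1, -1, -1], dtype=DataType.Bool)
    # Python float scalars are represented as DataType.Double by default.
    s0 = fd.define_scalar(-0.0)
    s1 = fd.define_scalar(0.0)
    # The output dtype is inferred from the two scalar inputs, so the result
    # ends up as double instead of the float32 the Thunder trace expects.
    t18 = fd.ops.where(t17, s0, s1)
    fd.add_output(t18)

pred = torch.rand(4, 2, 3, device="cuda") > 0.5
out = fd.execute([pred])[0]
print(out.dtype)  # torch.float64 today; this issue asks for float32 scalar support
```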

This isn't blocking, since we can always patch the executor in Thunder to do explicit type inference: Lightning-AI/lightning-thunder#1734
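As a hypothetical illustration of that kind of workaround (not necessarily what the PR above does), the translation could insert an explicit cast after the `where`, continuing the sketch from the previous snippet:

```python
# Hypothetical workaround inside the translation: cast the double result
# back to the dtype that the Thunder trace recorded for t18.
t18 = fd.ops.where(t17, s0, s1)
t18 = fd.ops.cast(t18, DataType.Float)  # force f32 to match the trace
fd.add_output(t18)
```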

Per the Thunder developers' request, we still want an issue to track this so that nvFuser users no longer need such workarounds.
