Add @weiji14 to the gpu list #33

Merged 1 commit into Quansight:main from weiji14:patch-1 on Jul 26, 2024
Conversation

weiji14 (Contributor) commented on Jul 26, 2024:

Xref conda-forge/admin-requests#1040 and conda-forge/flash-attn-feedstock#4

To obtain access to the CI server, you must complete the form below:

  • I have read the Terms of Service and Privacy Policy and accept them.
  • I have added my GitHub username and unique identifier to the relevant access/*.json file (a hypothetical sketch of such an entry follows this list).
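To illustrate the second checklist item, here is a hypothetical sketch of what an access/*.json entry might look like. The field names and identifier format are assumptions for illustration only; the real schema is defined by the repository receiving the request, not by this PR:

```json
{
  "github_username": "weiji14",
  "unique_identifier": "hypothetical-id-0000"
}
```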

carterbox (Contributor) commented:
We're trying to build the flash-attention package, a PyTorch extension module that implements optimized attention layers for machine learning, but some of the builds time out at the 6-hour limit even though we are only building for a single CUDA architecture.
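For context on the single-architecture constraint mentioned above: PyTorch CUDA extensions such as flash-attn honor the TORCH_CUDA_ARCH_LIST environment variable, so a build can be pinned to one SM target. A minimal sketch follows, assuming an Ampere (SM 8.0) build; the actual value used by the feedstock is not stated in this PR:

```python
import os

# Sketch only: torch.utils.cpp_extension reads TORCH_CUDA_ARCH_LIST to
# choose which SM architectures to compile CUDA kernels for. Pinning it
# to a single value ("8.0" here is an assumed example, i.e. Ampere)
# produces one compilation target instead of one per supported GPU.
os.environ["TORCH_CUDA_ARCH_LIST"] = "8.0"
```

Even with one target, kernel-heavy projects like flash-attn can approach CI time limits, which is the motivation given in this comment.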

jaimergp merged commit 8195903 into Quansight:main on Jul 26, 2024 (1 check passed).
weiji14 deleted the patch-1 branch on Jul 26, 2024 at 18:53.