Remove dependency on HuggingFace token #14
Comments
tengyifei added a commit that referenced this issue · Jan 8, 2025

Similar to #13, we also add a CPU GitHub Action. This action runs `pytest` on the repo. Currently there is only one test, the Llama test in torch_xla_models. To run that test today we need an HF_TOKEN, so I created a personal read-only token; #14 tracks removing the need for HF_TOKEN, after which I'll need to remember to invalidate the token.
Merged
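The merged commit described above wires a read-only HF_TOKEN secret into a CPU test workflow. A minimal sketch of such a workflow is below; the workflow name, triggers, Python version, and install step are assumptions for illustration, not the repo's actual workflow file:

```yaml
# Hypothetical sketch of a CPU pytest workflow with an HF_TOKEN secret.
name: CPU tests
on: [push, pull_request]
jobs:
  pytest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - run: pip install -e . pytest
      - run: pytest
        env:
          # Read-only token stored as a repo secret; this issue (#14)
          # tracks removing the need for it entirely.
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
```

Once the token dependency is dropped, the `env:` block can simply be deleted from the workflow.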
tengyifei added a commit that referenced this issue · Jan 8, 2025
tengyifei added five commits that referenced this issue · Jan 9, 2025
qihqi pushed a commit that referenced this issue · Jan 10, 2025
qihqi pushed a commit that referenced this issue · Jan 13, 2025
Right now we need a HuggingFace token to run the tests or run the model locally. The token is used to fetch the model config and the tokenizer.

It should be possible to drop in a tokenizer model as a file. It should also be possible to specify a model config as a `dict`, as opposed to a `str` like `Meta-Llama/Llama-3.1-8B`, which causes `transformers` to look the model up on the Hub using a token. Once we can do both, we'll no longer need a token for dev work and testing.