Qwen-2.5-VL-7B finetuning issue #40
Comments
I've written it in the README; you should install the correct version.
Sorry, I didn't notice the README instructions earlier. However, even after installing transformers as instructed in the README, I am still encountering the error. @2U1
The version should be …
Installing transformers from the commit that you mentioned also didn't work for me. Same error. And I double-checked the version of transformers; it is …
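For anyone hitting the same version mismatch, a quick sanity check is to compare the installed version string against the minimum the README requires. This is a minimal sketch: the `version_tuple` and `supports_qwen2_5_vl` helpers are hypothetical, and the `(4, 49, 0)` threshold is an assumption for illustration (the thread only establishes that 4.48.0 is too old and the pinned dev build works).

```python
def version_tuple(v: str) -> tuple:
    """Parse a version string like '4.48.0' or '4.49.0.dev0'
    into a tuple of ints suitable for comparison."""
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        if digits:
            parts.append(int(digits))
    return tuple(parts)

def supports_qwen2_5_vl(installed: str) -> bool:
    # Per this thread, Qwen2_5_VLForConditionalGeneration is not importable
    # from transformers 4.48.0; a newer dev build from the pinned commit is
    # needed. The exact cutoff below is assumed, not confirmed by the thread.
    return version_tuple(installed) >= (4, 49, 0)

print(supports_qwen2_5_vl("4.48.0"))       # release pinned by the reporter
print(supports_qwen2_5_vl("4.49.0.dev0"))  # a source build from main
```

Comparing integer tuples avoids the classic string-comparison pitfall where `"4.9" > "4.48"` lexicographically.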
Okay, I got it. It was my typo.
Using the updated code gives me the following error: …
@ragesh-beo I don't know exactly what changed, but you need to use zero2 for mixed-modality now. Sorry for the inconvenience. I'll make an update to support zero3 soon.
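For context, switching to ZeRO stage 2 means pointing the training launch at a stage-2 DeepSpeed config. The fragment below is a generic sketch, not the config file shipped with this repo; the exact keys and `"auto"` placeholders may differ from what the training script expects:

```json
{
  "zero_optimization": {
    "stage": 2,
    "overlap_comm": true,
    "contiguous_gradients": true
  },
  "bf16": { "enabled": "auto" },
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto"
}
```

The key difference from stage 3 is that stage 2 shards only optimizer states and gradients, not the model parameters themselves, which avoids the parameter-gathering path that was breaking with mixed-modality batches here.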
Thanks @2U1 |
Let me know if the code still doesn't work.
Seems like everything is working fine with the new setup @2U1
@ragesh-beo I've updated the code to support zero3 with mixed-modality data. |
Thanks |
Hi, I got the following issue while finetuning Qwen-2.5-VL-Instruct. I'm on transformers==4.48.0, and as far as I know, Qwen2_5_VLForConditionalGeneration cannot be imported from this version. Installing from git+https://github.com/huggingface/transformers gives me an error as well.
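To confirm whether an installed transformers build actually exposes the class, a guarded import is a simple diagnostic. This is a minimal sketch; it only reports availability and works whether or not transformers is installed:

```python
try:
    # Qwen2_5_VLForConditionalGeneration only exists in transformers builds
    # newer than 4.48.0 (e.g. a source install from the pinned commit).
    from transformers import Qwen2_5_VLForConditionalGeneration  # noqa: F401
    qwen2_5_vl_available = True
except ImportError:
    # Raised both when transformers is missing entirely and when the
    # installed version predates the Qwen2.5-VL model classes.
    qwen2_5_vl_available = False

print("Qwen2_5_VLForConditionalGeneration importable:", qwen2_5_vl_available)
```

If this prints `False` after a fresh install, re-run `pip show transformers` to verify which build actually ended up in the active environment.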