Is it possible to finetune the InternVL2_5_AWQ model? I've had no issues finetuning the base models, but finetuning the AWQ version stops the model from giving any responses to questions asked. I have tried loading the model as normal, and also with the --load-in-4bit and --load-in-8bit parameters, but no matter what, all the responses are blank.
The AWQ model is a quantized model whose weights have been converted to a low-precision representation. Generally speaking, an AWQ model does not support gradient updates directly, so it cannot be fully fine-tuned.
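The constraint above can be illustrated with a minimal PyTorch sketch (this is a generic demonstration of why integer-quantized weights block training, not code from InternVL or AWQ itself): autograd only tracks gradients for floating-point tensors, so integer-packed quantized weights cannot participate in a backward pass.

```python
import torch

# Float weights (as in the base model) can accumulate gradients:
w_fp = torch.randn(4, requires_grad=True)
(w_fp.sum()).backward()
print(w_fp.grad)  # gradients flow normally

# Quantized checkpoints store weights as integers; PyTorch refuses
# to track gradients for integer tensors, which is one reason an
# AWQ model cannot be fine-tuned directly:
try:
    w_q = torch.zeros(4, dtype=torch.int8, requires_grad=True)
except RuntimeError as e:
    print("cannot fine-tune int8 weights:", e)
```

A common workaround, if full-precision checkpoints are available, is to fine-tune the base (non-AWQ) model and quantize the result afterwards; some frameworks also support attaching trainable adapters (e.g. LoRA) on top of frozen quantized weights, though support varies by model and toolchain.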