Feature request
Hi, I am trying to fine-tune the LLaVA 1.5 7B model with QLoRA, and for that I need the `bitsandbytes` module. However, I am working on macOS, where PyTorch does not support CUDA (Apple Silicon uses an integrated unified-memory GPU rather than NVIDIA hardware). Hence, even after installing `bitsandbytes` and `accelerate`, I keep hitting the same error:
Using `bitsandbytes` 4-bit quantization requires the latest version of bitsandbytes
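For context, this is roughly the kind of loading code that triggers the error; it is a minimal sketch, and the checkpoint name and LoRA hyperparameters are illustrative placeholders rather than my exact settings:

```python
import torch
from transformers import AutoProcessor, BitsAndBytesConfig, LlavaForConditionalGeneration
from peft import LoraConfig, get_peft_model

model_id = "llava-hf/llava-1.5-7b-hf"  # illustrative checkpoint name

# 4-bit NF4 quantization config -- this is the part that needs bitsandbytes,
# and on macOS it fails with the error quoted above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Standard QLoRA setup: attach low-rank adapters on top of the frozen 4-bit base model.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```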
Motivation
Fine-tuning large-scale models such as LLaVA 1.5 7B requires computational efficiency and optimization due to their size and complexity. Leveraging QLoRA (Quantized Low-Rank Adaptation) is an ideal approach for this task, as it significantly reduces memory consumption and compute requirements while maintaining high performance. QLoRA relies on low-bit quantization techniques, for which the bitsandbytes library is crucial. However, bitsandbytes depends on CUDA, which is unavailable on macOS because Apple Silicon uses an integrated unified-memory GPU rather than NVIDIA hardware, and this creates a roadblock to fully utilizing these optimizations.
Your contribution
I would definitely wait for it!