Offline Model Loading Issue and Feature Request #494
m4a File Issue Resolved, Offline Model Loading Still Problematic

The m4a file issue appears to be resolved by the fix in PR #493. Thank you! However, I'm still encountering problems loading models for offline use.

Initially, I copied the model files from a working installation on another machine. That installation had a different directory structure (blobs, refs, snapshots). I then cloned the individual models and placed them in the directory specified in your response. I don't have the

After manually selecting a downloaded model from the dropdown menu, the application displays "initializing model" and then hangs indefinitely, freezing my WSL instance. I have to kill and restart WSL to regain control. Any ideas what might be causing this?
This directory structure is Hugging Face's model caching system. It indicates that you may have previously tried to download the model files using Hugging Face's API, and the model files are probably not fully downloaded. Remove the directory (the model directory name is probably something like

If you want to use Systran/faster-distil-whisper-small.en,
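For context on that cache layout: the Hugging Face hub cache stores each model under a `models--<org>--<name>` directory, where `refs/main` holds a commit hash and `snapshots/<hash>/` holds the actual files. A minimal sketch of how that resolution works (the hash and file names below are made up for illustration):

```python
import tempfile
from pathlib import Path

def resolve_snapshot(cache_dir: Path, repo_id: str) -> Path:
    """Resolve a model's snapshot directory the way the HF cache layout works:
    refs/main contains the commit hash, snapshots/<hash>/ contains the files."""
    model_dir = cache_dir / ("models--" + repo_id.replace("/", "--"))
    commit = (model_dir / "refs" / "main").read_text().strip()
    return model_dir / "snapshots" / commit

# Build a miniature fake cache to demonstrate (hash is illustrative).
cache = Path(tempfile.mkdtemp())
fake = cache / "models--Systran--faster-distil-whisper-small.en"
(fake / "refs").mkdir(parents=True)
(fake / "refs" / "main").write_text("abc123")
snap = fake / "snapshots" / "abc123"
snap.mkdir(parents=True)
(snap / "model.bin").write_text("")  # placeholder for the real weights

resolved = resolve_snapshot(cache, "Systran/faster-distil-whisper-small.en")
print(resolved.name)  # the commit hash directory
```

If a download was interrupted, the `snapshots/<hash>/` directory can exist but contain incomplete files, which matches the "remove the directory and re-download" advice above.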
I followed all of the structure advice, and I still cannot see the model I placed in the faster-whisper folder (I placed ivrit-ai/faster-whisper-v2-d4 there). I have reloaded the page and restarted the container. The log says:

And the screen shows this list, without the model I placed:
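When a manually placed model folder is not picked up, one quick sanity check is whether it contains the files a CTranslate2-converted faster-whisper model needs: `model.bin`, `config.json`, and a tokenizer/vocabulary file. A hedged sketch of such a check (the required-file list is an assumption based on typical converted models, not this WebUI's actual scan logic):

```python
import tempfile
from pathlib import Path

REQUIRED = ["model.bin", "config.json"]
VOCAB_CANDIDATES = ["tokenizer.json", "vocabulary.txt", "vocabulary.json"]

def check_model_dir(model_dir: str) -> list:
    """Return a list of problems found in a faster-whisper model folder."""
    d = Path(model_dir)
    problems = [f"missing {name}" for name in REQUIRED if not (d / name).exists()]
    if not any((d / v).exists() for v in VOCAB_CANDIDATES):
        problems.append("missing tokenizer/vocabulary file")
    return problems

# Demo on a throwaway folder that only contains config.json.
demo = Path(tempfile.mkdtemp())
(demo / "config.json").write_text("{}")
problems = check_model_dir(str(demo))
print(problems)
```

If the folder you copied only contains the original PyTorch checkpoint (e.g. `pytorch_model.bin`) rather than a CTranslate2 conversion, faster-whisper cannot load it directly, which could also explain the model not appearing in the list.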
I'm encountering a problem loading a Hugging Face model offline, even though I've manually transferred the model files. My setup is:
OS: Windows 11
Docker Desktop
Due to network restrictions in my environment, I can't directly access Hugging Face. I've copied the model files (Systran/faster-distil-whisper-small.en) from another installation, maintaining the same directory structure.
Despite this, the application log (below) shows it's still attempting to connect to Hugging Face before checking the local cache. This seems counterintuitive. If the model files are present locally, shouldn't it prioritize loading from the cache and only attempt a connection if the model isn't found?
Furthermore, even though the log indicates it's trying to load from the local cache, it doesn't appear to be successful. The UI displays "Initializing Model," runs for a while, and eventually shows "error" in the Output/Downloaded file sections. The log, however, just shows the connection error and the subsequent attempt to load from cache, without any further output indicating success or failure in loading the local files. I've double-checked the directory structure, and it's identical to the working installation. Am I missing something in the model transfer process? How can I ensure the application correctly identifies and loads the locally stored model?
Ideally, I'd like to be able to run the application completely offline, managing model downloads manually.
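For the fully offline scenario described above, two mechanisms usually help: the `huggingface_hub` client honors the `HF_HUB_OFFLINE=1` environment variable (it then raises immediately instead of attempting a connection), and faster-whisper's `WhisperModel` accepts either a `local_files_only=True` flag or a direct path to a local model directory, which bypasses the Hub lookup entirely. A minimal sketch; the model path below is an illustrative example, not this WebUI's actual config:

```python
import os

# Tell huggingface_hub never to attempt a network request.
os.environ["HF_HUB_OFFLINE"] = "1"

# A direct filesystem path to an already-converted model directory
# (illustrative; substitute the folder you copied the files into).
model_path = "models/Whisper/faster-whisper/faster-distil-whisper-small.en"

# Guarded so this sketch runs without faster-whisper installed;
# uncomment in a real environment:
# from faster_whisper import WhisperModel
# model = WhisperModel(model_path, device="cpu", compute_type="int8",
#                      local_files_only=True)
print(os.environ["HF_HUB_OFFLINE"], model_path)
```

Passing a path (rather than a model name) is the more robust option here, since it avoids the cache-resolution step that seems to be hanging.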
Finally, I have a feature request: Would it be possible to add a "Stop" button to interrupt processing if needed?
A separate issue I noticed: .m4a files don't work, so I have to use MP3 for audio files.