Is your feature request related to a problem? Please describe.
When using Ollama, the model remains loaded and takes up a lot of memory.
Describe the solution you'd like
Add an “Unload after Run” feature to the ☁️API LLM general link node, like other similar LLM nodes.
Describe alternatives you've considered
The only way to unload the model and free its memory at the moment is to quit Ollama immediately after the node returns its result.
Additional context
For example:
There is a node called "clear model". Enable the "is ollama" option on this node and insert it at any point in the workflow where you want to unload the model. When the workflow reaches this node, it will unload the Ollama model.
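For reference, what such a node does under the hood is a call to Ollama's REST API with `keep_alive` set to `0`, which tells Ollama to evict the model from memory immediately. Below is a minimal sketch of that call using only the Python standard library; the endpoint is Ollama's default local address, and the model name passed in is a placeholder, not tied to this repository's code.

```python
import json
import urllib.request

# Default local Ollama endpoint (adjust host/port if Ollama runs elsewhere).
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_unload_request(model: str) -> urllib.request.Request:
    """Build a request asking Ollama to unload `model` right away.

    A generate request with no prompt and keep_alive=0 causes Ollama
    to evict the model from memory instead of keeping it resident.
    """
    payload = json.dumps({"model": model, "keep_alive": 0}).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )


def unload_model(model: str) -> None:
    """Send the unload request to the local Ollama server."""
    with urllib.request.urlopen(build_unload_request(model)) as resp:
        resp.read()  # body is not needed; the call itself triggers the unload
```

Running `unload_model("llama3")` (with any model name you have loaded) after a generation step frees the memory, which is effectively what an "Unload after Run" option would automate.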