
using in scripting #484

Open
simit22 opened this issue Jan 21, 2025 · 4 comments

Comments

@simit22

simit22 commented Jan 21, 2025

Hello, sorry to ask, but can I use my installed models in a Python script as well?
For example, the script does x and y, then asks the AI model a certain question and continues according to the answer.
If that is possible, how can I use it inside Python scripts?
It doesn't have to be Python; any other way to use it from code or scripting is also fine!

@Jeffser
Owner

Jeffser commented Jan 21, 2025

Hi, Alpaca is developed in Python, so of course you can also do some cool stuff with AI yourself using that language.

I suggest you look into this documentation page.

There's also this Python library that might make it easier for you to start playing with Ollama.

Alpaca doesn't use that library because of some technical limitations, but it will probably be enough to get you started.

Also, please note that Alpaca exposes the Ollama instance on port 11435 instead of the default 11434.
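To illustrate the port difference, here is a minimal sketch of talking to Alpaca's Ollama instance over the plain HTTP API with only the standard library. The model name `llama3.2` and the localhost URL are assumptions; substitute whatever model you have installed.

```python
import json
import urllib.request

# Alpaca exposes Ollama on port 11435 (a standalone Ollama defaults to 11434).
ALPACA_URL = "http://localhost:11435"

def build_chat_payload(model, question):
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "stream": False,  # ask for one complete response instead of chunks
    }

def ask(model, question):
    """Send one question and return the assistant's reply text.

    Requires Alpaca (or Ollama) to be running locally.
    """
    payload = build_chat_payload(model, question)
    req = urllib.request.Request(
        f"{ALPACA_URL}/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# Example usage (needs a running instance):
#   print(ask("llama3.2", "Summarize what a context window is."))
```

If you point `ALPACA_URL` at port 11434 instead, the same code works against a standalone Ollama install.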

@simit22
Author

simit22 commented Jan 23, 2025

> Hi, Alpaca is developed in Python so of course you can also do some cool stuff with AI yourself using that language.
>
> I suggest you look into this documentation page.
>
> There's also this python library that might make it easier for you to start playing with Ollama.
>
> Alpaca doesn't use that library because of some technical limitations but it will probably be enough for you to get started.
>
> Also please note that Alpaca exposes the Ollama instance using the port 11435 instead of 11434.

After using Alpaca for a while, I think it only uses the CPU. Can I enable the GPU for it as well?

@Jeffser
Owner

Jeffser commented Jan 23, 2025

Nvidia should work by itself. Ollama only supports these AMD GPUs; to activate AMD support, please read this wiki article.

@CodingKoalaGeneral

CodingKoalaGeneral commented Jan 26, 2025

If your GPU does not have enough VRAM, Ollama falls back to the CPU (provided your PC has enough RAM).

Use medium-sized models so they can fit in VRAM.

What GPU do you have? Does your PC contain more than one GPU?
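As a rough rule of thumb (my own back-of-the-envelope estimate, not an official Ollama formula): the weights take roughly parameters × bytes-per-parameter, plus some overhead for the KV cache and runtime, which grows with context length.

```python
def estimate_vram_gb(params_billion, bytes_per_param, overhead_gb=1.0):
    """Very rough VRAM estimate: weight size plus a flat overhead guess.

    bytes_per_param depends on quantization: ~0.5 for Q4, 1.0 for Q8,
    2.0 for FP16. overhead_gb stands in for KV cache and runtime buffers;
    real usage varies with context size, so treat the result as a ballpark.
    """
    weights_gb = params_billion * 1e9 * bytes_per_param / 1024**3
    return weights_gb + overhead_gb

# A 7B model at Q4 quantization lands very roughly in the 4-5 GB range:
print(round(estimate_vram_gb(7, 0.5), 1))
```

So on a card with 6 GB of VRAM, a Q4-quantized 7B model plausibly fits, while an FP16 one clearly does not.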

```python
from ollama import Client

def main():
    # A standalone Ollama listens on port 11434; note that Alpaca's
    # bundled instance uses 11435 instead, so adjust the host if needed.
    client = Client(host='http://localhost:11434')

    print("Welcome to the Qwen2.5-Coder:0.5b CLI Chatbot!")
    print("Type 'exit' to end the chat.\n")

    # Full message history, so the model keeps context across turns.
    conversation = []

    while True:
        user_input = input("You: ")
        if user_input.lower() == 'exit':
            print("Ending the chat. Goodbye!")
            break

        conversation.append({'role': 'user', 'content': user_input})

        # Send the entire conversation so far to the model.
        response = client.chat(model='qwen2.5-coder:0.5b', messages=conversation)

        assistant_message = response['message']['content']
        print(f"Assistant: {assistant_message}\n")

        conversation.append({'role': 'assistant', 'content': assistant_message})

if __name__ == "__main__":
    main()
```

Dependency: `pip install ollama`
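One practical tweak to the script above: `conversation` grows without bound, and a long chat will eventually exceed the model's context window. A minimal sketch of the simplest fix, trimming to the most recent messages (the helper name and the cutoff of 20 are arbitrary choices of mine):

```python
def trim_history(conversation, max_messages=20):
    """Keep at most the last max_messages entries of the chat history.

    Dropping the oldest turns keeps requests within the model's context
    window, at the cost of the model forgetting the start of the chat.
    """
    if len(conversation) <= max_messages:
        return conversation
    return conversation[-max_messages:]

# In the chat loop, call this before each request:
#   conversation = trim_history(conversation)
```

More elaborate schemes (summarizing old turns, keeping a pinned system message) are possible, but trimming is enough to keep a long-running script from erroring out.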
