
GPU not detected and Instance errors #563

Open
Sirius3331 opened this issue Feb 23, 2025 · 8 comments
Labels
bug Something isn't working

Comments

@Sirius3331

Describe the bug
Yesterday the Alpaca update was released, but I was still running the old version, which worked fine. Today I started Alpaca with the new version (5.0.1) and it either shows two errors or crashes at startup. I also have to launch the Ollama instance manually after every start. After doing that and continuing my chat, I found that the model was running on my CPU instead of my GPU (GTX 1060 6GB). The Alpaca app is installed from Flathub.

Expected behavior
Alpaca takes a while to start because of one huge chat I have, but no errors appear and my GPU is detected and used normally.

System information
OS: Arch Linux
Mainboard: MSI MPG B550 Gaming Plus
GPU: Asus GTX 1060 6GB DUAL
CPU: AMD Ryzen 9 5950X
RAM: 128GB DDR4 3200MHz

Screenshots

[Two screenshots attached]

Debugging information
Ollama Log:


time=2025-02-23T14:25:35.473+01:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-02-23T14:25:35.473+01:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="125.7 GiB" available="120.5 GiB"
[GIN] 2025/02/23 - 14:25:35 | 200 |      524.39µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/02/23 - 14:25:35 | 200 |     5.17857ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:35 | 200 |    9.942254ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:35 | 200 |   16.881746ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:35 | 200 |   17.335899ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:35 | 200 |   19.825733ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:35 | 200 |   25.595439ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:35 | 200 |   25.161915ms |       127.0.0.1 | POST     "/api/show"
2025/02/23 14:25:35 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/sirius/.var/app/com.jeffser.Alpaca/data/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-23T14:25:35.653+01:00 level=INFO source=images.go:432 msg="total blobs: 34"
time=2025-02-23T14:25:35.653+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-23T14:25:35.653+01:00 level=INFO source=routes.go:1237 msg="Listening on [::]:11435 (version 0.5.11)"
time=2025-02-23T14:25:35.653+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-23T14:25:35.661+01:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-02-23T14:25:35.661+01:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="125.7 GiB" available="120.5 GiB"
[GIN] 2025/02/23 - 14:25:35 | 200 |       524.2µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/02/23 - 14:25:35 | 200 |    5.564095ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:35 | 200 |    9.593868ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:35 | 200 |   16.019399ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:35 | 200 |   16.682404ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:35 | 200 |   19.482925ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:35 | 200 |    26.59628ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:35 | 200 |   28.331473ms |       127.0.0.1 | POST     "/api/show"
2025/02/23 14:25:37 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/sirius/.var/app/com.jeffser.Alpaca/data/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-23T14:25:37.292+01:00 level=INFO source=images.go:432 msg="total blobs: 34"
time=2025-02-23T14:25:37.293+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-23T14:25:37.293+01:00 level=INFO source=routes.go:1237 msg="Listening on [::]:11435 (version 0.5.11)"
time=2025-02-23T14:25:37.293+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-23T14:25:37.300+01:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-02-23T14:25:37.300+01:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="125.7 GiB" available="120.4 GiB"
[GIN] 2025/02/23 - 14:25:37 | 200 |      513.16µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/02/23 - 14:25:37 | 200 |    5.041136ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:37 | 200 |    8.695117ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:37 | 200 |   15.157309ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:37 | 200 |   23.985531ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:37 | 200 |    21.11382ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:37 | 200 |   34.102083ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 14:25:37 | 200 |   35.442641ms |       127.0.0.1 | POST     "/api/show"
2025/02/23 14:25:43 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/sirius/.var/app/com.jeffser.Alpaca/data/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-23T14:25:43.539+01:00 level=INFO source=images.go:432 msg="total blobs: 34"
time=2025-02-23T14:25:43.540+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-23T14:25:43.540+01:00 level=INFO source=routes.go:1237 msg="Listening on [::]:11435 (version 0.5.11)"
time=2025-02-23T14:25:43.540+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-23T14:25:43.547+01:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-02-23T14:25:43.547+01:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="125.7 GiB" available="120.4 GiB"
[GIN] 2025/02/23 - 14:25:43 | 200 |     644.666µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/02/23 - 14:28:10 | 200 |      647.95µs |       127.0.0.1 | GET      "/api/tags"

I couldn't find the Alpaca debugging information button.
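
A quick way to confirm from outside the app whether a loaded model ended up on the GPU is Ollama's running-models endpoint. A minimal sketch in Python, assuming the Alpaca-managed instance is listening on port 11435 as configured in the log above:

import json
import urllib.request

# Port 11435 matches OLLAMA_HOST in the log above; adjust if yours differs.
BASE = "http://127.0.0.1:11435"

# /api/ps lists currently loaded models; size_vram == 0 means the model
# was loaded entirely into system RAM, i.e. it is running on the CPU.
with urllib.request.urlopen(f"{BASE}/api/ps") as resp:
    data = json.load(resp)

for model in data.get("models", []):
    vram = model.get("size_vram", 0)
    print(f"{model['name']}: {'GPU' if vram > 0 else 'CPU'} ({vram} bytes in VRAM)")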

@Sirius3331 added the bug label Feb 23, 2025
@Sirius3331 changed the title from "GPU not detected and errors" to "GPU not detected and Instance errors" Feb 23, 2025
@Jeffser
Owner

Jeffser commented Feb 23, 2025

Hi, the debugging information button is located in the Instance Manager (Ctrl+I) > edit instance > (top right corner).

Do you have any other instances added?

@Sirius3331
Author

No, there are no other instances, only the one for Alpaca.

[Screenshot attached]

And the logs look different now.

Couldn't find '/home/sirius/.ollama/id_ed25519'. Generating new private key.
Your new public key is: 

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEw0Nrz7pMLrnIxnFDO4A9eOgLERKcymIaaETwjGNoRB

Error: listen tcp 0.0.0.0:11435: bind: address already in use
2025/02/23 19:31:00 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/sirius/.var/app/com.jeffser.Alpaca/data/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-23T19:31:00.953+01:00 level=INFO source=images.go:432 msg="total blobs: 34"
time=2025-02-23T19:31:00.954+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-23T19:31:00.954+01:00 level=INFO source=routes.go:1237 msg="Listening on [::]:11435 (version 0.5.11)"
time=2025-02-23T19:31:00.954+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-23T19:31:00.961+01:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-02-23T19:31:00.961+01:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="125.7 GiB" available="121.6 GiB"
[GIN] 2025/02/23 - 19:31:00 | 200 |     695.146µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/02/23 - 19:31:00 | 200 |    6.371864ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 19:31:00 | 200 |   12.604608ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 19:31:00 | 200 |   18.569079ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 19:31:00 | 200 |     16.4141ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 19:31:00 | 200 |   20.599016ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 19:31:00 | 200 |   25.313347ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/23 - 19:31:00 | 200 |   26.095443ms |       127.0.0.1 | POST     "/api/show"
2025/02/23 19:31:04 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/sirius/.var/app/com.jeffser.Alpaca/data/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-23T19:31:04.664+01:00 level=INFO source=images.go:432 msg="total blobs: 34"
time=2025-02-23T19:31:04.665+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-23T19:31:04.665+01:00 level=INFO source=routes.go:1237 msg="Listening on [::]:11435 (version 0.5.11)"
time=2025-02-23T19:31:04.665+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-23T19:31:04.671+01:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-02-23T19:31:04.671+01:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="125.7 GiB" available="121.6 GiB"
[GIN] 2025/02/23 - 19:31:04 | 200 |     518.719µs |       127.0.0.1 | GET      "/api/tags"
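
The "bind: address already in use" line suggests a stale Ollama process was still holding port 11435 when the new instance started, which would also explain the instance errors at startup. A minimal Python sketch for checking whether something is already listening there (an illustration, not Alpaca's actual startup logic):

import socket

def port_in_use(host: str = "127.0.0.1", port: int = 11435) -> bool:
    # connect_ex returns 0 when something accepts the connection,
    # i.e. another process is already bound to the port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

if port_in_use():
    print("Port 11435 is taken; a leftover Ollama process may still be running.")
else:
    print("Port 11435 is free.")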

@yrdn

yrdn commented Feb 24, 2025

Had the same problems today too. I uninstalled and did a clean reinstall from Flatpak. The error messages disappeared and Alpaca seems to load without stutters, but the GPU is still not detected.

[GIN] 2025/02/24 - 11:14:57 | 200 |     444.333µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/02/24 - 11:14:57 | 200 |    22.25113ms |       127.0.0.1 | POST     "/api/show"
2025/02/24 11:14:58 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/thonkpad/.var/app/com.jeffser.Alpaca/data/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-24T11:14:58.324+02:00 level=INFO source=images.go:432 msg="total blobs: 6"
time=2025-02-24T11:14:58.324+02:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-24T11:14:58.324+02:00 level=INFO source=routes.go:1237 msg="Listening on [::]:11435 (version 0.5.11)"
time=2025-02-24T11:14:58.324+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-24T11:14:58.340+02:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-02-24T11:14:58.340+02:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="31.0 GiB" available="25.8 GiB"
[GIN] 2025/02/24 - 11:14:58 | 200 |     521.352µs |       127.0.0.1 | GET      "/api/tags"

Lenovo Thinkpad P50
OS: Fedora Workstation 41 - GNOME
GPU: NVIDIA Quadro M2000M
CPU: Intel® Core™ i7-6820HQ Processor
RAM: 32GB DDR4 2133MHz

@Sirius3331
Author

It looks like this is a problem on Ollama's side; someone has opened the same issue there.

@Stroemie

Stroemie commented Mar 5, 2025

Similar problem.
I am running Alpaca 5.0.5, CPU only, no GPU.
Fedora and GNOME, latest.

I had no problems before, but after the last update there are many crashes and instance problems, with the same messages as reported above:

HTTPConnectionPool(host='0.0.0.0', port=11435): Max retries exceeded with url: /api/tags (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f6c3ab963c0>: Failed to establish a new connection: [Errno 111] Connection refused'))

and "instance error" "could not retrieve added models":
('Connection broken: IncompleteRead(3959 bytes read, 1148 more expected)', IncompleteRead(3959 bytes read, 1148 more expected))

Couldn't find '/home/...../.ollama/id_ed25519'. Generating new private key.
Your new public key is:

ssh-ed25519 ...... /rh7FtJ9RISweyDo3c/bS4qw9cNWj+a1T

Error: listen tcp 0.0.0.0:11435: bind: address already in use
2025/03/05 15:06:01 routes.go:1205: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/profs/.var/app/com.jeffser.Alpaca/data/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-03-05T15:06:01.728+01:00 level=INFO source=images.go:432 msg="total blobs: 61"
time=2025-03-05T15:06:01.729+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-05T15:06:01.730+01:00 level=INFO source=routes.go:1256 msg="Listening on [::]:11435 (version 0.5.12)"
time=2025-03-05T15:06:01.730+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-05T15:06:01.733+01:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-03-05T15:06:01.733+01:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="94.1 GiB" available="81.9 GiB"
[GIN] 2025/03/05 - 15:06:01 | 200 |    1.929748ms |       127.0.0.1 | GET      "/api/tags"
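
The "Connection refused" error is what urllib3 raises when the client polls /api/tags before the instance has finished binding its port, which fits the bind conflict shown above. A client can tolerate a slow startup by retrying with backoff; a minimal Python sketch of that pattern (a hypothetical helper, not Alpaca's actual code):

import time
import urllib.error
import urllib.request

def wait_for_instance(url: str = "http://127.0.0.1:11435/api/tags",
                      attempts: int = 10, delay: float = 0.5) -> bool:
    # Poll the tags endpoint until the server answers, doubling the
    # wait between attempts so a slow startup has time to finish.
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            time.sleep(delay)
            delay *= 2
    return False

print("instance up" if wait_for_instance() else "instance never came up")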

@Jeffser
Owner

Jeffser commented Mar 6, 2025

I'm moving all conversation about AMD to this discussion; please take a look!

@Jeffser closed this as completed Mar 6, 2025
@Sirius3331
Author

Sirius3331 commented Mar 6, 2025

This conversation is more about NVIDIA, CUDA, and the instance error. There is already an issue open on the Ollama side about the CUDA problem. The instance error was confusing: it occurs on my PC, but on my laptop I never had it, only the same long loading time at Alpaca's startup.

@Jeffser
Owner

Jeffser commented Mar 6, 2025

my bad

@Jeffser reopened this Mar 6, 2025