Failed to establish a new connection #583

Open
menuRivera opened this issue Mar 7, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@menuRivera

menuRivera commented Mar 7, 2025

Describe the bug
I just installed Alpaca from GNOME Software (Flatpak), tried to download a model, and the following error arose:

HTTPConnectionPool(host='0.0.0.0', port=11435): Max retries exceeded with url: /api/pull (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe9dc33fa40>: Failed to establish a new connection: [Errno 111] Connection refused'))
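
This is the error that requests/urllib3 raise when nothing is accepting connections on the target port. A minimal sketch that reproduces the same NewConnectionError, assuming the requests library is installed and nothing is listening on port 11435 (hypothetical, not Alpaca's code):

# Hypothetical reproduction: a POST to a port with no listener fails with
# requests.exceptions.ConnectionError wrapping urllib3's NewConnectionError
# ("Connection refused"), exactly as in the traceback above.
import requests

try:
    requests.post("http://0.0.0.0:11435/api/pull",
                  json={"model": "llama3.2:latest"},  # example model name only
                  timeout=5)
except requests.exceptions.ConnectionError as err:
    print("Ollama is not reachable:", err)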

Expected behavior
The model downloads properly

Screenshots

(screenshot attached)

Debugging information
Please include the output of Alpaca. To get it, run Alpaca from the terminal, then try to reproduce the error you want to report.

Yikes, apparently it worked after reopening the app; it seems like it was a first-launch kind of issue. It happened with every model I tried to install, though.

➜  manuel ~ flatpak run com.jeffser.Alpaca
INFO	[main.py | main] Alpaca version: 5.0.5
MESA-INTEL: warning: ../src/intel/vulkan/anv_formats.c:782: FINISHME: support YUV colorspace with DRM format modifiers
MESA-INTEL: warning: ../src/intel/vulkan/anv_formats.c:814: FINISHME: support more multi-planar formats with DRM modifiers
INFO	[instance_manager.py | start] Starting Alpaca's Ollama instance...
INFO	[instance_manager.py | start] Started Alpaca's Ollama instance
Couldn't find '/home/manuel/.ollama/id_ed25519'. Generating new private key.
Your new public key is: 

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINl947pzSVftB46LRVr/nUb+5Ci91EgRqImCpbfVO/1W

2025/03/07 13:07:18 routes.go:1205: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/manuel/.var/app/com.jeffser.Alpaca/data/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-03-07T13:07:18.521-06:00 level=INFO source=images.go:432 msg="total blobs: 0"
time=2025-03-07T13:07:18.522-06:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-07T13:07:18.522-06:00 level=INFO source=routes.go:1256 msg="Listening on [::]:11435 (version 0.5.12)"
time=2025-03-07T13:07:18.522-06:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
INFO	[instance_manager.py | start] client version is 0.5.12
time=2025-03-07T13:07:18.528-06:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-03-07T13:07:18.528-06:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="15.4 GiB" available="9.4 GiB"
[GIN] 2025/03/07 - 13:07:18 | 200 |      881.66µs |       127.0.0.1 | GET      "/api/tags"
time=2025-03-07T13:07:40.114-06:00 level=INFO source=download.go:176 msg="downloading 74701a8c35f6 in 14 100 MB part(s)"
time=2025-03-07T13:10:01.507-06:00 level=INFO source=download.go:176 msg="downloading 966de95ca8a6 in 1 1.4 KB part(s)"
time=2025-03-07T13:10:02.811-06:00 level=INFO source=download.go:176 msg="downloading fcc5a6bec9da in 1 7.7 KB part(s)"
time=2025-03-07T13:10:04.077-06:00 level=INFO source=download.go:176 msg="downloading a70ff7e570d9 in 1 6.0 KB part(s)"
time=2025-03-07T13:10:05.369-06:00 level=INFO source=download.go:176 msg="downloading 4f659a1e86d7 in 1 485 B part(s)"
[GIN] 2025/03/07 - 13:10:10 | 200 |         2m31s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2025/03/07 - 13:10:10 | 200 |   30.474339ms |       127.0.0.1 | POST     "/api/show"
/usr/lib/python3.12/site-packages/gi/overrides/Gio.py:42: Warning: g_value_get_int: assertion 'G_VALUE_HOLDS_INT (value)' failed
  return Gio.Application.run(self, *args, **kwargs)
time=2025-03-07T13:14:42.618-06:00 level=INFO source=download.go:176 msg="downloading 59bb50d8116b in 16 239 MB part(s)"
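
The log shows Alpaca starting its bundled Ollama instance and reaching it on port 11435 shortly afterwards, which fits the first-launch theory: the failing /api/pull most likely raced ahead of the instance's startup. A minimal sketch of waiting until the instance accepts connections before pulling (hypothetical, assuming the requests library; not Alpaca's actual instance_manager code):

import time
import requests

OLLAMA_URL = "http://0.0.0.0:11435"  # port taken from the log above

def wait_until_ready(timeout_s: float = 15.0, interval_s: float = 0.5) -> bool:
    """Poll the Ollama endpoint until it answers or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            # A running Ollama server answers GET / with "Ollama is running".
            if requests.get(OLLAMA_URL, timeout=1).ok:
                return True
        except requests.exceptions.ConnectionError:
            time.sleep(interval_s)  # not listening yet, retry shortly
    return False

if wait_until_ready():
    # "llama3.2:latest" is only an example model name.
    requests.post(f"{OLLAMA_URL}/api/pull", json={"model": "llama3.2:latest"})
else:
    print("Ollama instance never became reachable on", OLLAMA_URL)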

System specs
OS: Fedora 41
Model: ThinkPad T470p
CPU: Intel i7-7700HQ
GPU: Intel HD Graphics 630
GPU: NVIDIA GeForce 940MX
RAM: 16 GB

menuRivera added the bug label on Mar 7, 2025
@mags0ft
Contributor

mags0ft commented Mar 7, 2025

Thanks for the error report. To make this as easy as possible, I have a few questions:

  • Did the error appear immediately after starting the download?
  • If yes, have you somehow changed the Flatpak's permissions using an app like Flatseal beforehand?
  • Have you been able to reproduce it since it stopped happening?
