Initial release
mann1x authored Jun 11, 2024
1 parent 1d55810 commit 8fcaed7
Showing 6 changed files with 364 additions and 21 deletions.
42 changes: 21 additions & 21 deletions LICENSE
@@ -1,21 +1,21 @@
MIT License

Copyright (c) 2024 ManniX-ITA

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
MIT License
Copyright (c) 2024, ManniX
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
82 changes: 82 additions & 0 deletions README.md
@@ -0,0 +1,82 @@

<div align="center">
ollamarsync
<br />
<br />
<a href="https://github.com/mann1x/ollamarsync/issues/new?assignees=&labels=bug&template=01_BUG_REPORT.md&title=bug%3A+">Report a Bug</a>
·
<a href="https://github.com/mann1x/ollamarsync/issues/new?assignees=&labels=enhancement&template=02_FEATURE_REQUEST.md&title=feat%3A+">Request a Feature</a>
·
<a href="https://github.com/mann1x/ollamarsync/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+">Ask a Question</a>
</div>

<div align="center">
<br />

[![Project license](https://img.shields.io/github/license/mann1x/ollamarsync.svg?style=flat-square)](LICENSE)

[![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/mann1x/ollamarsync/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22)
[![code with love by mann1x](https://img.shields.io/badge/%3C%2F%3E%20with%20%E2%99%A5%20by-mann1x-ff1414.svg?style=flat-square)](https://github.com/mann1x)

</div>



---

## About

> **Copy a local ollama model to a remote server**
> Skips model layers that have already been transferred
> Uploads at high speed with a progress bar
> No more downloading the same model multiple times on different ollama hosts
> Ideal for servers isolated from the internet
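The layer-skipping behaviour can be sketched in a few lines of Python. The `/api/blobs/<digest>` HEAD endpoint and the "existing blob answers OK" behaviour are taken from the `ollamrsync.py` script added in this commit; the helper names below are illustrative, not part of the tool:

```python
# Sketch of the skip-if-present check performed before uploading a layer.
# Endpoint path taken from the script in this commit; helper names are
# illustrative only. Uses the stdlib instead of requests to stay self-contained.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def blob_url(server: str, digest: str) -> str:
    # digest is the "sha256:<hex>" string found in the model manifest
    return f"{server}/api/blobs/{digest}"

def layer_already_uploaded(server: str, digest: str) -> bool:
    # A successful HEAD response means the remote ollama already stores
    # this blob, so the (potentially multi-GB) upload can be skipped.
    try:
        with urlopen(Request(blob_url(server, digest), method="HEAD"), timeout=5):
            return True
    except (HTTPError, URLError):
        return False
```

Only layers missing on the remote side are then streamed with a progress bar.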


### Built With

> **Python**

## Getting Started

### Prerequisites

> **Requests and tqdm modules**
> Python 3.10+

### Installation

> **Clone the repo**
> Run `pip install -r requirements.txt` to install the dependencies

## Usage

> Simple: `python ollamarsync.py modelname http://192.168.100.100:11434`
> Run with `-h` for help

## Roadmap

See the [open issues](https://github.com/mann1x/ollamarsync/issues) for a list of proposed features (and known issues).

- [Top Feature Requests](https://github.com/mann1x/ollamarsync/issues?q=label%3Aenhancement+is%3Aopen+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction)
- [Top Bugs](https://github.com/mann1x/ollamarsync/issues?q=is%3Aissue+is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc) (Add your votes using the 👍 reaction)
- [Newest Bugs](https://github.com/mann1x/ollamarsync/issues?q=is%3Aopen+is%3Aissue+label%3Abug)

## Support

- [GitHub issues](https://github.com/mann1x/ollamarsync/issues/new?assignees=&labels=question&template=04_SUPPORT_QUESTION.md&title=support%3A+)


## License

This project is licensed under the **MIT license**.

See [LICENSE](LICENSE) for more information.
43 changes: 43 additions & 0 deletions docs/CONTRIBUTING.md
@@ -0,0 +1,43 @@
# Contributing

When contributing to this repository, please first discuss the change you wish to make via issue, email, or any other method with the owners of this repository before making a change.

## Development environment setup

To set up a development environment, please follow these steps:

1. Clone the repo

```sh
git clone https://github.com/mann1x/ollamarsync
```

2. TODO

## Issues and feature requests

You've found a bug in the source code, a mistake in the documentation or maybe you'd like a new feature? You can help us by [submitting an issue on GitHub](https://github.com/mann1x/ollamarsync/issues). Before you create an issue, make sure to search the issue archive -- your issue may have already been addressed!

Please try to create bug reports that are:

- _Reproducible._ Include steps to reproduce the problem.
- _Specific._ Include as much detail as possible: which version, what environment, etc.
- _Unique._ Do not duplicate existing opened issues.
- _Scoped to a Single Bug._ One bug per report.

**Even better: Submit a pull request with a fix or new feature!**

### How to submit a Pull Request

1. Search our repository for open or closed
[Pull Requests](https://github.com/mann1x/ollamarsync/pulls)
that relate to your submission. You don't want to duplicate effort.
2. Fork the project
3. Create your feature branch (`git checkout -b feat/amazing_feature`)
4. Commit your changes (`git commit -m 'feat: add amazing_feature'`)
5. Push to the branch (`git push origin feat/amazing_feature`)
6. [Open a Pull Request](https://github.com/mann1x/ollamarsync/compare?expand=1)
Binary file added docs/images/ollamarsync.png
216 changes: 216 additions & 0 deletions ollamrsync.py
@@ -0,0 +1,216 @@
import json
import os
import argparse
import platform
from pathlib import Path
from contextlib import contextmanager
import sys
from urllib.parse import urlparse
import requests
import subprocess
import signal
from tqdm import tqdm
from tqdm.utils import CallbackIOWrapper

@contextmanager
def optional_dependencies(error: str = "ignore"):
    assert error in {"raise", "warn", "ignore"}
    try:
        yield None
    except ImportError as e:
        if error == "raise":
            msg = f'Missing required dependency "{e.name}". Use pip or conda to install.'
            print(f'Error: {msg}')
            raise e
        if error == "warn":
            msg = f'Missing optional dependency "{e.name}". Use pip or conda to install.'
            print(f'Warning: {msg}')
        if error == "ignore":
            pass

parser = argparse.ArgumentParser(prog='ollamarsync', description="Copy local Ollama models to a remote instance")
parser.add_argument('local_model', type=str,
                    help='Source local model to copy')
parser.add_argument('remote_server', type=str,
                    help='Remote ollama server, e.g. http://192.168.0.100:11434')

args = parser.parse_args()

thisos = platform.system()

def get_env_var(var_name, default_value):
    return os.environ.get(var_name, default_value)

def get_platform_path(input_path):
    if input_path != "*":
        return input_path
    else:
        if thisos == "Windows":
            return f'{os.environ["USERPROFILE"]}{separator}.ollama{separator}models'
        elif thisos == "Darwin":
            return "~/.ollama/models"
        else:
            return "/usr/share/ollama/.ollama/models"

def get_platform_separator():
    if thisos == "Windows":
        return "\\"
    return "/"

def get_digest_separator():
    if thisos == "Windows":
        return "-"
    return ":"

def model_base(model_name):
    if "/" in model_name:
        return model_name.split('/', 1)[0]
    return ""

def validate_url(url):
    try:
        result = urlparse(url)
        return all([result.scheme in ['http', 'https'], result.port, not result.query, not result.path])
    except ValueError:
        return False

def parse_modelfile(multiline_input):
    lines = multiline_input.split('\n')
    filtered_lines = [line for line in lines if not line.startswith('#') and not line.startswith('FROM ') and not line.startswith('failed to get console mode')]
    return '\n'.join(filtered_lines)

def pretty(d, indent=0):
    for key, value in d.items():
        print('\t' * indent + str(key))
        if isinstance(value, dict):
            pretty(value, indent + 1)
        else:
            print('\t' * (indent + 1) + str(value))

def print_status(json_objects):
    lines = json_objects.split('\n')
    for line in lines:
        try:
            data = json.loads(line)
            print(data["status"])
        except json.JSONDecodeError:
            continue

def interrupt_handler(signum, frame):
    print("\n\nModel upload aborted, exiting")
    sys.exit(0)

signal.signal(signal.SIGINT, interrupt_handler)

separator = get_platform_separator()

ollama_models = get_env_var("OLLAMA_MODELS", "*")
base_dir = Path(get_platform_path(ollama_models))

if not base_dir.is_dir():
    print(f"Error: ollama models directory ({base_dir}) does not exist.")
    sys.exit(1)

if not validate_url(args.remote_server):
    print(f"Error: remote server URL is not valid: {args.remote_server}")
    sys.exit(1)

blob_dir = Path(f'{base_dir}{separator}blobs')
model_dir = Path(f'{base_dir}{separator}manifests{separator}{args.local_model}')
manifest_file = args.local_model.replace(':', f"{separator}")

if model_base(args.local_model) == "hub":
    model_dir = Path(f'{base_dir}{separator}manifests{separator}{manifest_file}')
elif model_base(args.local_model) == "":
    model_dir = Path(f'{base_dir}{separator}manifests{separator}registry.ollama.ai{separator}library{separator}{manifest_file}')
else:
    model_dir = Path(f'{base_dir}{separator}manifests{separator}registry.ollama.ai{separator}{manifest_file}')

if not model_dir.is_file():
    print(f"Error: model not found in {model_dir}.")
    sys.exit(1)

with open(model_dir, 'r') as mfile:
    data = json.load(mfile)

print(f"Copying model {args.local_model} to {args.remote_server}...")

model_from = ''

for layer in data.get('layers', []):
    if layer.get('mediaType').startswith(('application/vnd.ollama.image.model',
                                          'application/vnd.ollama.image.projector',
                                          'application/vnd.ollama.image.adapter')):
        digest = layer.get('digest')
        hash = digest[7:]
        try:
            r = requests.head(f"{args.remote_server}/api/blobs/sha256:{hash}")
        except requests.exceptions.RequestException as e:
            print(f"Error: {e}")
            sys.exit(1)
        remote_path = "@"
        if r.ok:
            print(f"skipping upload for already created layer sha256:{hash}")
        else:
            print(f"uploading layer sha256:{hash}")
            blob_file = f"{blob_dir}{separator}sha256{get_digest_separator()}{hash}"
            with open(blob_file, "r+b") as f:
                total_size = int(os.fstat(f.fileno()).st_size)
                block_size = 1024
                with tqdm(desc="uploading", total=total_size, unit="B", unit_scale=True, unit_divisor=block_size) as progress_bar:
                    wrapped_file = CallbackIOWrapper(progress_bar.update, f, "read")
                    try:
                        r = requests.post(f"{args.remote_server}/api/blobs/sha256:{hash}", data=wrapped_file)
                    except requests.exceptions.RequestException as e:
                        print(f"Error: {e}")
                        sys.exit(1)
            if r.status_code == 201:
                print("success uploading layer.")
            elif r.status_code == 400:
                print("Error: invalid digest, check both ollama are running the same version.")
                sys.exit(1)
            else:
                print(f"Error: upload failed: {r.reason}")
                sys.exit(1)
        model_from += f'FROM {remote_path}sha256:{hash}\n'

try:
    result = subprocess.run(["ollama", "show", f"{args.local_model}", "--modelfile"], stdout=subprocess.PIPE, stderr=subprocess.DEVNULL, encoding='UTF-8', shell=False, check=True)
    if result.stdout.startswith("Error:"):
        print("Error: could not get ollama Modelfile")
        sys.exit(1)
    modelfile = parse_modelfile(result.stdout)
    modelfile = model_from + modelfile
except Exception as e:
    print(f"Error: could not run ollama to export Modelfile: {e}")
    sys.exit(1)

try:
    headers = {
        'Content-Type': 'application/x-www-form-urlencoded',
    }

    model_create = {
        "name": args.local_model,
        "modelfile": modelfile
    }
    data = json.dumps(model_create)

    try:
        r = requests.post(f"{args.remote_server}/api/create", headers=headers, data=data)
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
        sys.exit(1)
    if r.status_code == 200:
        print_status(r.text)
        sys.exit(0)
    else:
        print(f"Error: could not create {args.local_model} on the remote server ({r.status_code}): {r.reason}")
        sys.exit(1)
except Exception as e:
    print(f"Exception: could not create {args.local_model} on the remote server: {e}")
    sys.exit(1)
2 changes: 2 additions & 0 deletions requirements.txt
@@ -0,0 +1,2 @@
Requests==2.32.3
tqdm==4.66.4
