Fix up docs and add manpage entries
rhjostone committed Jan 31, 2025
1 parent 7c81174 commit 1e71e79
Showing 5 changed files with 20 additions and 4 deletions.
docs/ramalama-bench.1.md (3 additions, 0 deletions)
@@ -28,6 +28,9 @@ URL support means if a model is on a web site or even on your local system, you
#### **--help**, **-h**
show this help message and exit

+#### **--network-mode**=*none*
+set the network mode for the container
+
## DESCRIPTION
Benchmark specified AI Model.

docs/ramalama-convert.1.md (3 additions, 0 deletions)
@@ -25,6 +25,9 @@ type of OCI Model Image to convert.
| car | Includes base image with the model stored in a /models subdir |
| raw | Only the model and a link file model.file to it stored at / |

+#### **--network-mode**=*none*
+sets the configuration for network namespaces when handling RUN instructions
+
## EXAMPLE

Generate an oci model out of an Ollama model.
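The manpage's worked example is elided above; as a hedged sketch (the source and target names are hypothetical, only the flag and the SOURCE/TARGET arguments come from this commit), the new option composes with a conversion like:

    ramalama convert --network-mode none ollama://tinyllama oci://quay.io/example/tinyllama:latest

Since convert builds an OCI image, the flag controls the network namespace used while handling the build's RUN instructions; the default of none keeps the build offline.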
docs/ramalama-run.1.md (3 additions, 0 deletions)
@@ -53,6 +53,9 @@ llama.cpp explains this as:
#### **--tls-verify**=*true*
require HTTPS and verify certificates when contacting OCI registries

+#### **--network-mode**=*none*
+set the network mode for the container
+
## DESCRIPTION
Run specified AI Model as a chat bot. RamaLama pulls specified AI Model from
registry if it does not exist in local storage. By default a prompt for a chat
docs/ramalama-serve.1.md (3 additions, 0 deletions)
@@ -64,6 +64,9 @@ IP address for llama.cpp to listen on.
#### **--name**, **-n**
Name of the container to run the Model in.

+#### **--network-mode**=*bridge*
+set the network mode for the container
+
#### **--port**, **-p**
port for AI Model server to listen on

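A hedged usage sketch for the serve default (the model name is hypothetical; --network-mode and --port are the options documented above). Per the code comment in this commit, bridge networking is what lets the container listen on localhost, so

    ramalama serve --network-mode bridge --port 8080 tinyllama

serves the model on http://localhost:8080, whereas --network-mode=none would generally leave the server unreachable from the host.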
ramalama/cli.py (8 additions, 4 deletions)
@@ -383,7 +383,7 @@ def bench_parser(subparsers):
"--network-mode",
type=str,
default="none",
help="Set the network mode for the container.",
help="set the network mode for the container",
)
parser.add_argument("MODEL") # positional argument
parser.set_defaults(func=bench_cli)
@@ -611,7 +611,7 @@ def convert_parser(subparsers):
"--network-mode",
type=str,
default="none",
help="Sets the configuration for network namespaces when handling RUN instructions.",
help="sets the configuration for network namespaces when handling RUN instructions",
)
parser.add_argument("SOURCE") # positional argument
parser.add_argument("TARGET") # positional argument
@@ -737,7 +737,7 @@ def run_parser(subparsers):
"--network-mode",
type=str,
default="none",
help="Set the network mode for the container.",
help="set the network mode for the container",
)
parser.add_argument("MODEL") # positional argument
parser.add_argument(
@@ -764,11 +764,15 @@ def serve_parser(subparsers):
    parser.add_argument(
        "-p", "--port", default=config.get('port', "8080"), help="port for AI Model server to listen on"
    )
+   # --network-mode=bridge lets the container listen on localhost, and is an option that's compatible
+   # with podman and docker:
+   # https://docs.podman.io/en/latest/markdown/podman-run.1.html#network-mode-net
+   # https://docs.docker.com/engine/network/#drivers
    parser.add_argument(
        "--network-mode",
        type=str,
        default="bridge",
-       help="Set the network mode for the container.",
+       help="set the network mode for the container",
    )
    parser.add_argument("MODEL")  # positional argument
    parser.set_defaults(func=serve_cli)
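For context, a minimal sketch of where such a parsed value typically ends up; this is not ramalama's actual plumbing (the commit doesn't touch it), and the helper and image name are hypothetical. Both podman and docker accept the mode through their run command's --network option:

    import subprocess

    def build_run_command(engine: str, network_mode: str, image: str) -> list[str]:
        # hypothetical helper: hand the parsed CLI value to the engine as --network
        return [engine, "run", "--rm", f"--network={network_mode}", image]

    # e.g. the serve default: bridge networking, so the published port is reachable
    cmd = build_run_command("podman", "bridge", "quay.io/example/model-runner")
    subprocess.run(cmd, check=True)

This is also why serve defaults to bridge while bench, convert, and run default to none: only serve needs a port reachable from the host.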
