nobu007/llm_docker_setting_pub


LLM_DOCKER_SETTING_PUB

Deploy and manage LLM applications with ease

Built with the tools and technologies:

GNU Bash · Docker · GitHub Actions


Table of Contents

  • Overview
  • Features
  • Project Structure
  • Project Index
  • Getting Started
  • Usage
  • Testing
  • Project Roadmap
  • Contributing
  • License
  • Acknowledgments

Overview

llm_docker_setting_pub simplifies deploying and managing large language model (LLM) applications. It provides pre-configured Docker environments with GPU support, remote access, and automated setup, ensuring consistent development and deployment across machines. Ideal for developers and researchers working with LLMs, it streamlines the setup process so they can focus on model development and application building.


Features

  • Pre-configured Docker environments with GPU support
  • Remote access capabilities
  • Automated setup for consistent development and deployment

Project Structure

└── llm_docker_setting_pub/
    ├── .github
    │   └── workflows
    ├── HowToUseDocker.md
    ├── LICENSE
    ├── README.md
    ├── config
    │   └── requirements.txt
    ├── docker
    │   ├── .dockerignore.sample
    │   ├── Dockerfile
    │   ├── Dockerfile.desktop
    │   ├── Dockerfile.gpu
    │   ├── docker-compose.gpu.yml
    │   ├── docker-compose.vnc.yml
    │   └── docker-compose.yml
    ├── entrypoint.sh.sample
    ├── envsetup.sh.sample
    ├── healthcheck.sh.sample
    ├── install.sh.sample
    ├── script
    │   ├── docker_compose.sh
    │   └── docker_replace.sh
    └── server.py.sample

Project Index

LLM_DOCKER_SETTING_PUB/
__root__
envsetup.sh.sample - Automates development environment setup: installs Rye (a Python dependency management tool), configures the PATH environment variable, activates a virtual environment, and installs pre-commit hooks for code quality. It keeps development environments consistent across machines by persistently adding the necessary commands to shell profiles.
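The "persistently adding commands to shell profiles" step can be sketched as a small idempotent helper. This is a minimal sketch, assuming a Rye install pipeline and a `.bashrc` line for illustration; it is not the contents of the real script:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of envsetup.sh.sample's persistence step.
# append_once adds a line to a shell profile only if it is not
# already present, so repeated runs stay idempotent.
append_once() {
    local line="$1" profile="$2"
    grep -qxF "$line" "$profile" 2>/dev/null || echo "$line" >> "$profile"
}

# In the real script, roughly (commands and paths assumed):
# curl -sSf https://rye.astral.sh/get | RYE_INSTALL_OPTION="--yes" bash
# append_once 'source "$HOME/.rye/env"' "$HOME/.bashrc"
# pre-commit install
```

The `grep -qxF` guard matches the whole line literally, which is what makes re-running the setup safe.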
healthcheck.sh.sample - Monitors the `/app/server.py` process and logs its actions to `/app/work/all.log`. If the Python server is not running, it attempts to start it using `pyenv` and logs the success or failure, exiting with a zero status when the server is running (or was started successfully) and a non-zero status otherwise. This keeps the application's core server process active.
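The check-then-restart logic can be sketched as follows. Process matching via `pgrep` and the restart command are assumptions based on the description above, not the real script:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of healthcheck.sh.sample's logic (paths assumed).
TARGET="${TARGET:-/app/server.py}"
LOG_FILE="${LOG_FILE:-/app/work/all.log}"

log() { echo "$(date '+%F %T') $*" >> "$LOG_FILE"; }

# Return 0 if a process whose command line matches $1 is running.
is_running() { pgrep -f "$1" > /dev/null 2>&1; }

healthcheck() {
    if is_running "$TARGET"; then
        log "healthcheck: $TARGET is running"
        return 0
    fi
    log "healthcheck: $TARGET not running, attempting restart"
    nohup python "$TARGET" >> "$LOG_FILE" 2>&1 &
    sleep 2
    if is_running "$TARGET"; then
        log "healthcheck: restart succeeded"
        return 0
    fi
    log "healthcheck: restart failed"
    return 1
}

# A Dockerfile would wire this up via something like:
# HEALTHCHECK CMD /app/healthcheck.sh || exit 1
```

Docker interprets the exit status: zero marks the container healthy, non-zero unhealthy, matching the behavior described above.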
entrypoint.sh.sample - Configures the application's working directory, setting ownership and ensuring a log file exists, then continuously tails the log file, providing real-time output for runtime observation and debugging.
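Its flow might look like the following sketch; the directory layout and the chown step are assumptions drawn from the description, not the real entrypoint:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of entrypoint.sh.sample (paths assumed).
APP_DIR="${APP_DIR:-/app/work}"
LOG_FILE="$APP_DIR/all.log"

# Create the working directory and make sure the log file exists.
prepare_workdir() {
    mkdir -p "$APP_DIR"
    # Ownership would be set here when running as root, e.g.:
    # chown -R appuser:appuser "$APP_DIR"
    touch "$LOG_FILE"
}

# Stream the log so `docker logs` shows the application's output;
# `exec tail -f` also keeps the container's PID 1 alive.
stream_log() {
    exec tail -f "$LOG_FILE"
}

# In the real entrypoint:
# prepare_workdir
# stream_log
```

Using `exec` for the final `tail -f` is the usual idiom: the entrypoint process is replaced, so signals from `docker stop` reach it directly.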
server.py.sample - Initializes and configures an OpenInterpreter instance, specifying parameters such as the LLM model, token limit, verbosity, and a system message defining the working directory, then starts a server around the configured interpreter so that external clients can interact with it through a user interface.
install.sh.sample - A sample installation script: it modifies the PATH, changes to the application directory, activates a virtual environment, and installs the project in editable mode with pip, ensuring that dependencies are managed correctly and the application is ready for use.
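The PATH modification can be made idempotent with a small helper; this sketch and the commented commands are illustrative assumptions, not the real script's contents:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of install.sh.sample's environment setup.
# prepend_path puts a directory at the front of PATH unless it is
# already present, so repeated sourcing does not grow PATH.
prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;
        *) PATH="$1:$PATH" ;;
    esac
}

# In the real script, roughly (paths assumed):
# prepend_path "$HOME/.rye/shims"
# cd /app
# . .venv/bin/activate
# pip install -e .
```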
config
requirements.txt - Specifies the project's Python dependencies, including data science libraries (Pandas, NumPy, Scikit-learn), web frameworks (aiohttp), machine learning tools (LangChain, OpenAI), and other utilities required by the application and its development environment.
docker
docker-compose.gpu.yml - Configures a multi-container environment with services for a graphical desktop, an XRDP server for remote access, and a GPU-enabled Open Interpreter instance. Environment variables manage user credentials and API keys, enabling remote access to a resource-intensive, GPU-accelerated application.
Dockerfile - Builds the application image from an xRDP base image. It sequentially runs a series of installation scripts, copies the remaining project files, runs a final installation script, sets environment variables, and defines the container's entrypoint and health check, yielding a ready-to-run environment for the application.
docker-compose.yml - Orchestrates the multi-container application, defining services for environment checks plus two primary services: a desktop environment (`desktop-xrdp`) and the main application (`app-xrdp`). The `app-xrdp` service builds from the project's root directory, exposes several ports, and includes health checks for monitoring, providing a reproducible, isolated deployment environment.
.dockerignore.sample - Specifies files and directories to exclude from Docker builds, keeping Docker configuration files, version control data, and other unnecessary files out of the final image for smaller images and faster builds.
Dockerfile.gpu - Builds a GPU-optimized image from a base image containing the NVIDIA CUDA libraries and adds a desktop environment. Crucially, it copies the necessary CUDA components from the base image so that GPU acceleration works in the final image, enabling GPU-dependent tasks within the Open Interpreter project; an entrypoint script manages execution.
docker-compose.vnc.yml - Configures a two-service environment: a VNC desktop and an application service. The application service, built from a separate Dockerfile, exposes ports for VNC access and a web application and reads authentication credentials and API keys from environment variables; the desktop service acts as its base image, ensuring a consistent runtime environment.
Dockerfile.desktop - Configures a desktop development environment: it sets up a base image, installs essential development tools including VS Code, Google Chrome, and Python 3.11, configures a user account with sudo privileges, and adds Japanese language support, enabling consistent, reproducible development environments across systems.
.github
workflows
cla.yml - Automates the Contributor License Agreement (CLA) process. It monitors pull requests and issue comments for CLA signatures and uses a third-party action to manage CLA status, storing signatures in a specified file (and optionally in a remote repository) so that contributors acknowledge the CLA before their code is merged.
script
docker_compose.sh - Prepares the project for Docker Compose execution: it sets essential environment variables, creates placeholder shell scripts for the installation and operational tasks expected by the build, and then runs the Docker Compose build and startup from the docker subdirectory, ensuring a consistent, reproducible build environment.
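The placeholder-creation step could be sketched like this; the placeholder names and compose flags are assumptions, not the real script:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of script/docker_compose.sh's preparation step.
# ensure_placeholders creates empty, executable stand-in scripts so the
# Docker build does not fail when the optional real ones are absent.
ensure_placeholders() {
    local dir="$1"; shift
    local name
    for name in "$@"; do
        if [ ! -f "$dir/$name" ]; then
            : > "$dir/$name"
            chmod +x "$dir/$name"
        fi
    done
}

# In the real script, roughly (names and flags assumed):
# ROOT_DIR="$(cd "$(dirname "$0")/.." && pwd)"
# ensure_placeholders "$ROOT_DIR" install.sh envsetup.sh entrypoint.sh healthcheck.sh
# cd "$ROOT_DIR/docker" && docker compose build && docker compose up -d
```

The existence check matters: if a real `install.sh` has already been created from its `.sample`, the helper leaves it untouched.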
docker_replace.sh - Copies project files into a running Docker container named `app-xrdp`: it determines the project's root directory, then uses `docker cp` to transfer the files (plus environment variables and the healthcheck script, where present) into the container's `/app` directory, streamlining redeployment without a rebuild.
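The root-directory resolution and copy can be sketched as follows. The container name comes from the description above; the copy details are assumptions, so the `docker cp` calls are shown as comments:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of script/docker_replace.sh.

# Resolve the project root as the parent of the directory holding $1.
# The subshell keeps the cd from leaking into the caller.
resolve_root() ( cd "$(dirname "$1")/.." && pwd )

# In the real script, roughly:
# ROOT_DIR="$(resolve_root "$0")"
# docker cp "$ROOT_DIR/." app-xrdp:/app/
# [ -f "$ROOT_DIR/healthcheck.sh" ] && \
#     docker cp "$ROOT_DIR/healthcheck.sh" app-xrdp:/app/
```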

Getting Started

Prerequisites

Before getting started with llm_docker_setting_pub, ensure your runtime environment meets the following requirements:

  • Programming Language: Shell (Bash)
  • Package Manager: Pip
  • Container Runtime: Docker

Installation

Install llm_docker_setting_pub using one of the following methods:

Build from source:

  1. Clone the llm_docker_setting_pub repository:
❯ git clone https://github.com/nobu007/llm_docker_setting_pub.git
  2. Navigate to the project directory:
❯ cd llm_docker_setting_pub
  3. Install the project dependencies:

Using pip:

❯ pip install -r config/requirements.txt

Using docker:

❯ docker build -t llm_docker_setting_pub -f docker/Dockerfile .

Usage

Run llm_docker_setting_pub using one of the following methods:

Using pip:

❯ python server.py.sample

Using docker:

❯ docker run -it llm_docker_setting_pub

Testing

Run the test suite using the following command:

❯ pytest

Project Roadmap

  • Task 1: Implement feature one.
  • Task 2: Implement feature two.
  • Task 3: Implement feature three.

Contributing

Contributing Guidelines
  1. Fork the Repository: Start by forking the project repository to your GitHub account.
  2. Clone Locally: Clone the forked repository to your local machine using a git client.
    git clone https://github.com/<your-username>/llm_docker_setting_pub.git
  3. Create a New Branch: Always work on a new branch, giving it a descriptive name.
    git checkout -b new-feature-x
  4. Make Your Changes: Develop and test your changes locally.
  5. Commit Your Changes: Commit with a clear message describing your updates.
    git commit -m 'Implemented new feature x.'
  6. Push to GitHub: Push the changes to your forked repository.
    git push origin new-feature-x
  7. Submit a Pull Request: Create a PR against the original project repository. Clearly describe the changes and their motivations.
  8. Review: Once your PR is reviewed and approved, it will be merged into the main branch. Congratulations on your contribution!
  8. Review: Once your PR is reviewed and approved, it will be merged into the main branch. Congratulations on your contribution!

License

This project is licensed under the SELECT-A-LICENSE license. For more details, refer to the LICENSE file.


Acknowledgments

  • List any resources, contributors, inspiration, etc. here.
