See https://zielmicha.com/coding-with-llms
Right now the setup is rather rough - I hope to improve it over time. Honestly, you probably shouldn't try it yet, unless you are extremely motivated.
First, it works only on Linux - this is a hard requirement.
You'll need a virtualenv with some set of Python packages (you'd like a requirements.txt? you'll have to wait).
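If you want something concrete, a minimal sketch (the venv path is just an example):

python3 -m venv ~/mllm-venv
source ~/mllm-venv/bin/activate
# there's no requirements.txt yet, so pip install whatever imports fail when you run mllm_ui.py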
Then, you'll need a btrfs mount. If you don't actually like btrfs (like me), you are free to create it on a loopback device.
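If you go the loopback route, something like this should work (the size and paths are placeholders):

truncate -s 10G ~/mllm-btrfs.img
mkfs.btrfs ~/mllm-btrfs.img
sudo mkdir -p /mnt/mllm
sudo mount -o loop ~/mllm-btrfs.img /mnt/mllm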
Make a directory on that btrfs filesystem that MLLM will use to keep its state. Let's call it $state_dir. Run btrfs subvolume create $state_dir/base.
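Concretely, assuming the /mnt/mllm mount from the example above:

state_dir=/mnt/mllm/state
sudo mkdir -p "$state_dir"
sudo btrfs subvolume create "$state_dir/base"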
Then, in your project directory, create:
- an mllm-env.yaml file, similar to the mllm-env.yaml in this repo
- a todo.txt with the tasks you want the model to work on, in the following format:
- [1] please make my codebase nicer
  files: *.py
- [2] implement frobinification
  be careful to preserve the frogs
  files: frog.py, main.py
Then run:
python mllm_ui.py --port 5002 --env-config ~/your-app/mllm-env.yaml --task-file-path ~/your-app/todo.txt
and visit the web UI in your browser (with the flags above, it should be at http://localhost:5002)!