- Clone the repo and enter it
git clone https://github.com/nv78/Anote-PrivateGPT-Desktop
cd Anote-PrivateGPT-Desktop
First, compile the backend:
- Enter the backend folder
cd backend
- Create and activate a virtual environment
python -m venv venv
source venv/bin/activate
For Windows, use Command Prompt: .\venv\Scripts\Activate
- Install requirements
pip install -r requirements.txt
- Compile the backend with PyInstaller, bundling the database
pyinstaller --onefile app.py --add-data "database.db:."
- You should now have an executable named app in ./backend/dist. Copy it into ./appdist.
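For example, from inside the backend folder (assuming appdist sits at the repo root; the step above only says ./appdist):

```bash
# Copy the compiled backend into the Electron app's appdist folder
mkdir -p ../appdist
cp dist/app ../appdist/
```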
- Enter the frontend folder, install dependencies, and build the React app
cd frontend
npm install --force
npm run build
- Go to https://ollama.com/download and download Ollama for Mac.
- Once you have followed the installation instructions, run
ollama run llama2
ollama run mistral
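Once the Ollama daemon is running, you can sanity-check it from the terminal. This assumes Ollama's default local port, 11434; the app itself may talk to Ollama differently:

```bash
# Ask the locally running llama2 model for a short completion via Ollama's REST API
curl http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Reply with one word: ready?", "stream": false}'
```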
- In the repo's home directory (Anote-PrivateGPT-Desktop), install dependencies
npm install --force
- Then run the app
npm run start
To run the code:
- Open the backend folder in a terminal
cd backend
- Create and activate a virtual environment
python -m venv venv
source venv/bin/activate
For Windows, use Command Prompt: .\venv\Scripts\Activate
- Install PyInstaller
pip install pyinstaller
- Build the backend
To include the db: pyinstaller --onefile app.py --add-data "database.db:."
Note: you might have to add --hidden-import flask, e.g. pyinstaller --onefile app.py --hidden-import flask
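One way to tell whether the hidden-import workaround is needed: run the frozen binary directly and watch its startup output (the port and startup behavior depend on app.py):

```bash
# Run the compiled backend on its own; a ModuleNotFoundError for flask at startup
# means the --hidden-import flag above is required
./dist/app
```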
- Put the Flask app, which PyInstaller writes to backend/dist, into appdist
- Open the frontend folder in a terminal
cd ..
cd frontend
- Install dependencies and build the React app
npm install --force
npm run build
- Go back to the main folder
cd ..
- Install all dependencies and run Electron
npm install
npm start
- To package/bundle the app for Mac, run
npm run make
and for Linux, run
sudo npx electron-forge make --platform=linux --arch=x64
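Electron Forge writes its output under out/ by default, with the distributables under out/make, so you can check the result with:

```bash
# List the packaged distributables produced by electron-forge make
ls out/make
```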
Install private models (note: this should later be surfaced in the app's own installation instructions):
- Follow installation instructions at https://github.com/ollama/ollama
- In your terminal, run
ollama run llama2
(note: this could become a shell script that the app spawns as a child process)
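A minimal sketch of what that script could look like; the script name, the model list (taken from the steps above), and the idea that the app would spawn it are all assumptions based on the note:

```bash
#!/usr/bin/env bash
# pull_models.sh -- hypothetical helper that pre-pulls the local models the app
# expects, so it could be spawned as a child process on first launch.
set -euo pipefail

for model in llama2 mistral; do
  # `ollama pull` downloads the model without opening an interactive session
  ollama pull "$model"
done
```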