...should read "A Cypherpunk's Manifesto" by Eric Hughes.
See `scripts/` for a list of available tools, or run `npm run docs` to serve a local copy of the documentation. See `INSTALL.md` for a complete install guide.
Local settings should be provided by environment variables wherever possible, including:

- `SQL_DB_HOST` — host of the SQL server
- `SQL_DB_PORT` — port of the SQL server
- `SQL_DB_USERNAME` — username for the SQL user
- `SQL_DB_PASSWORD` — password for the SQL user
- `OLLAMA_HOST` — HTTP host for the Ollama server
- `OLLAMA_PORT` — HTTP port for the Ollama server
- `REDIS_HOST` — host of the Redis server
- `REDIS_PORT` — port of the Redis server
Settings can be configured locally through `settings/local.js` — care should be taken not to commit secrets; again, prefer environment variables.
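As a minimal sketch, `settings/local.js` might read the variables above like so (the fallback values here are assumptions for local development, not the project's documented defaults):

```js
// Hypothetical settings/local.js sketch: reads the environment
// variables above, with assumed fallbacks for local development.
module.exports = {
  db: {
    host: process.env.SQL_DB_HOST || 'localhost',
    port: parseInt(process.env.SQL_DB_PORT || '3306', 10),
    username: process.env.SQL_DB_USERNAME,
    password: process.env.SQL_DB_PASSWORD
  },
  ollama: {
    host: process.env.OLLAMA_HOST || 'localhost',
    port: parseInt(process.env.OLLAMA_PORT || '11434', 10)
  },
  redis: {
    host: process.env.REDIS_HOST || 'localhost',
    port: parseInt(process.env.REDIS_PORT || '6379', 10)
  }
};
```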
The project is primarily built in JavaScript, running Node.js on the server and leveraging React on the client side. The client is transpiled using Webpack and delivered as a complete bundle to the `assets/` directory. This directory can be served by a static web server, so long as update operations (and requests for JSON representations of hosted resources) are passed through to the backend HTTP server (served on port `3045` by default).
Running `npm start` will compile the UI using `scripts/build.js` to the `assets/` directory. You can serve this directory from any standard web server, but you will need to route HTTP requests with the `Accept: application/json` header to the backend server (port `3045`), in addition to WebSockets if you want real-time functionality.
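For example, a minimal static server with that routing might look like the following sketch (the `express` and `http-proxy-middleware` packages and the `8080` port are assumptions, not part of the project):

```js
// Hypothetical sketch: serve assets/ statically, forwarding JSON and
// WebSocket traffic to the backend on port 3045.
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();
const proxy = createProxyMiddleware({
  target: 'http://localhost:3045',
  ws: true // forward WebSocket upgrades for real-time functionality
});

// Requests that ask for JSON go to the backend; everything else is static.
app.use((req, res, next) => {
  const accepts = req.headers.accept || '';
  if (accepts.includes('application/json')) return proxy(req, res, next);
  next();
});

app.use(express.static('assets'));

const server = app.listen(8080);
server.on('upgrade', proxy.upgrade); // wire WebSocket upgrades through the proxy
```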
You can run the node without compiling the UI using `scripts/node.js` — this can aid in accelerating server-side development.
- Coordinator — `scripts/node.js`, the Node.js master process, managing:
  - Sensemaker — `services/sensemaker.js`, implements Fabric Service
  - AI Agents — `types/agent.js`
  - Trainer Agents — `types/trainer.js`
  - Worker Agents — `types/worker.js`
  - HTTPServer — `@fabric/http`
  - FabricNode — `@fabric/core`
- Sensemaker
  - AI Agents — connect to external resources, such as OpenAI, HuggingFace, or Ollama
  - Fabric — `@fabric/core`
  - Matrix — `@fabric/matrix`
  - Python HTTP Server — for models unsupported by Ollama
- Fabric
  - Services — implement a common API using `@fabric/core/types/service` (see the sketch after this list)
- Sensemaker — primary, single-core instance of the Coordinator
- Trainer — utilizes LangChain, etc. to generate, store, and retrieve embeddings
- PyTorch — initial training tools used for GPT-2 emulation
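As a rough sketch of that service pattern (the `@fabric/core` interface shown here is assumed from the import path above, not verified against the library):

```js
// Hypothetical Fabric-style service: the base class path comes from the
// list above; the constructor and lifecycle shown are assumptions.
const Service = require('@fabric/core/types/service');

class Example extends Service {
  constructor (settings = {}) {
    super(settings);
    this.settings = Object.assign({ name: 'example' }, settings);
  }

  async start () {
    // connect to upstream resources, begin processing events, etc.
    this.emit('ready');
    return this;
  }
}

module.exports = Example;
```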
LangChain is available through `services/trainer.js`, which also handles all general "training" operations, including the generation of embeddings.
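A minimal sketch of generating an embedding with LangChain's Ollama integration (the `@langchain/ollama` package and the model name are assumptions; this is not the `services/trainer.js` API):

```js
// Hypothetical sketch: generate an embedding via LangChain + Ollama.
const { OllamaEmbeddings } = require('@langchain/ollama');

async function main () {
  const embeddings = new OllamaEmbeddings({
    model: 'nomic-embed-text', // assumed embedding model
    baseUrl: `http://${process.env.OLLAMA_HOST}:${process.env.OLLAMA_PORT}`
  });

  const vector = await embeddings.embedQuery('What is Fabric?');
  console.log(vector.length); // dimensionality of the returned embedding
}

main();
```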
- Commit early, commit often
- Once a branch diverges, open a pull request (see also number 1)
- Regularly run `npm test` and `npm run report:todo`
- Knex is used to manage database schemata, including migrations both forward and backward (see the sketch after this list)
- Ollama is used to provide a standard API for interfacing with LLMs
- Fabric is used for connectivity between instances
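For instance, a forward/backward migration pair in Knex might look like this (table and column names are illustrative only, not the project's actual schema):

```js
// Hypothetical Knex migration: up() applies the change, down() reverts it.
exports.up = function (knex) {
  return knex.schema.createTable('documents', (table) => {
    table.increments('id').primary();
    table.string('title').notNullable();
    table.text('content');
    table.timestamps(true, true); // created_at / updated_at with defaults
  });
};

exports.down = function (knex) {
  return knex.schema.dropTableIfExists('documents');
};
```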
We use a custom Semantic UI theme, located in `libraries/semantic` — you can modify the theme and recompile the CSS styles using the `npm run build:semantic` command.
- You can use `scripts/node.js` to quickly run the service without building: `node scripts/node.js`
- Use `nodemon` to monitor for changes: `nodemon scripts/node.js`
- Re-build the UI when necessary: `npm run build`
- Re-build Semantic styling (CSS) when necessary: `npm run build:semantic`
You can pass `webpack` configuration options in `types/compiler.js` to tweak various settings, such as live reloading.
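For example, live reloading is controlled through webpack's standard `devServer` options (the shape of `types/compiler.js` shown here is an assumption):

```js
// Hypothetical excerpt of types/compiler.js: standard webpack options;
// only devServer is shown, the rest of the configuration is elided.
module.exports = {
  // ...
  devServer: {
    hot: true,        // hot module replacement
    liveReload: true  // full page reload when files change
  }
};
```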
All other configuration options for your local node live in `settings/local.js` — some important settings include:

- `email` — configures email settings
  - `enable` — boolean (`true` or `false`)
  - `host` — hostname for outbound email
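Put together, the documented fields would look like this in `settings/local.js` (the hostname value is a placeholder):

```js
// Example email block for settings/local.js using the fields above.
module.exports = {
  email: {
    enable: true,             // turn outbound email on or off
    host: 'smtp.example.com'  // hostname for outbound email
  }
};
```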
- semicolons are not optional
- explicit over implicit (prefer clarity over brevity)
- spaces after function names, not after calls (see the example below)
- no double spacing (maximum one empty line)
- newline at EOF
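For example, under these rules a function declaration takes a space before its parameter list, while a call site does not:

```js
// Follows the rules above: space after the function name in the
// declaration, none at the call, explicit semicolons throughout.
function add (a, b) {
  return a + b;
}

const sum = add(1, 2);
console.log(sum); // 3
```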