diff --git a/README.md b/README.md
index 929643b..bc717ea 100644
--- a/README.md
+++ b/README.md
@@ -8,79 +8,11 @@ You own little LLM in your matrix chatroom.
 
 ## Usage
 
-This project uses [jsonargparse](https://github.com/omni-us/jsonargparse/) to help with the command line arguments.
+This project is split in two parts: the client and the server.
 
-To see the available options, run:
+The server simply downloads an LLM and starts a llama-cpp-python server (which mimics an openai server).
 
-```bash
-nio_llm --help
-```
-
-To run the bot, you can either use command line arguments, environment variables or a config file. (or a mix of all three)
-
-### Command line arguments
-
-```bash
-nio_llm \
-  # required \
-  --room \
-  --password \
-  --username \
-  --preprompt \
-  # optional \
-  --device-id nio-llm \
-  --homeserver https://matrix.org \
-  --ggml-repoid TheBloke/stable-vicuna-13B-GGML \
-  --ggml-filename stable-vicuna-13B.ggmlv3.q5_1.bin \
-  --sync-timeout 30000
-```
-
-### Environment variables
-
-```bash
-# required
-export NIO_LLM_ROOM=
-export NIO_LLM_PASSWORD=
-export NIO_LLM_USERNAME=
-export NIO_LLM_PREPROMPT=
-
-# optional
-export NIO_LLM_DEVICE_ID=nio-llm
-export NIO_LLM_HOMESERVER=https://matrix.org
-export NIO_LLM_GGML_REPOID=TheBloke/stable-vicuna-13B-GGML
-export NIO_LLM_GGML_FILENAME=stable-vicuna-13B.ggmlv3.q5_1.bin
-export NIO_LLM_SYNC_TIMEOUT=30000
-
-nio_llm
-```
-
-### Config file
-
-Create a config file with the following content:
-
-```yaml
-# config_file.yaml
-
-# required
-room:
-password:
-username:
-preprompt:
-
-# optional
-device_id: nio-llm
-homeserver: https://matrix.org
-ggml_repoid: TheBloke/stable-vicuna-13B-GGML
-ggml_filename: stable-vicuna-13B.ggmlv3.q5_1.bin
-sync_timeout: 30000
-```
-
-Then run:
-
-```bash
-nio_llm --config config_file.yaml
-```
+The client connects to the matrix server and queries the llama-cpp-python server to create matrix messages.
 
 ## Special thanks
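
The new README text above only says that the server "downloads an LLM and starts a llama-cpp-python server". A minimal sketch of what that flow could look like, reusing the default model repo and filename shown in the removed section; the script itself is illustrative and not part of this diff, and it assumes the `llama-cpp-python[server]` extra is installed so that the `python -m llama_cpp.server` entry point is available:

```python
import subprocess

from huggingface_hub import hf_hub_download

# Repo id and filename taken from the defaults in the removed README section.
model_path = hf_hub_download(
    repo_id="TheBloke/stable-vicuna-13B-GGML",
    filename="stable-vicuna-13B.ggmlv3.q5_1.bin",
)

# Start the OpenAI-compatible llama-cpp-python server on the downloaded model.
subprocess.run(
    ["python", "-m", "llama_cpp.server", "--model", model_path],
    check=True,
)
```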
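Likewise, because the server mimics an openai server, the client side reduces to an HTTP call against an OpenAI-style endpoint before posting the reply into the chatroom. A hedged sketch, assuming the server listens on localhost:8000 and exposes `/v1/chat/completions`; host, port, prompt, and token limit are placeholders, not taken from this diff:

```python
import requests

# Placeholder endpoint; point it at wherever the llama-cpp-python server runs.
API_URL = "http://localhost:8000/v1/chat/completions"

response = requests.post(
    API_URL,
    json={
        "messages": [
            {"role": "system", "content": "You are a helpful bot in a matrix chatroom."},
            {"role": "user", "content": "Hello!"},
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
response.raise_for_status()

# OpenAI-style responses carry the generated text under choices[0].message.content;
# the matrix client would send this string back to the room.
print(response.json()["choices"][0]["message"]["content"])
```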