
Nio LLM


Your own little LLM in your Matrix chatroom.

Usage

This project uses jsonargparse to parse command line arguments.

To see the available options, run:

nio_llm --help

To run the bot, you can use command line arguments, environment variables, a config file, or any mix of the three (a combined example is shown at the end of this section).

Command line arguments

# the first four options are required, the rest are optional
nio_llm \
  --room <YOUR ROOM> \
  --password <YOUR PASSWORD> \
  --username <YOUR USERNAME> \
  --preprompt <YOUR PREPROMPT> \
  --device-id nio-llm \
  --homeserver https://matrix.org \
  --ggml-repoid TheBloke/stable-vicuna-13B-GGML \
  --ggml-filename stable-vicuna-13B.ggmlv3.q5_1.bin \
  --sync-timeout 30000
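
For illustration, here is the same command with made-up concrete values. The room alias and username follow the usual Matrix conventions; whether nio_llm expects a full Matrix ID or a bare localpart for --username is an assumption worth verifying:

# hypothetical values, for illustration only
nio_llm \
  --room "#my-chat:matrix.org" \
  --password "correct-horse-battery-staple" \
  --username "@my-llm-bot:matrix.org" \
  --preprompt "You are a friendly assistant living in a Matrix chatroom."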

Environment variables

# required
export NIO_LLM_ROOM="<YOUR ROOM>"
export NIO_LLM_PASSWORD="<YOUR PASSWORD>"
export NIO_LLM_USERNAME="<YOUR USERNAME>"
export NIO_LLM_PREPROMPT="<YOUR PREPROMPT>"

# optional
export NIO_LLM_DEVICE_ID=nio-llm
export NIO_LLM_HOMESERVER=https://matrix.org
export NIO_LLM_GGML_REPOID=TheBloke/stable-vicuna-13B-GGML
export NIO_LLM_GGML_FILENAME=stable-vicuna-13B.ggmlv3.q5_1.bin
export NIO_LLM_SYNC_TIMEOUT=30000

nio_llm
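
These exports only persist for the current shell session. A common pattern (plain shell, nothing specific to nio_llm) is to keep them in a file and source it before launching the bot:

# nio-llm.env (hypothetical filename)
export NIO_LLM_ROOM="<YOUR ROOM>"
export NIO_LLM_PASSWORD="<YOUR PASSWORD>"
export NIO_LLM_USERNAME="<YOUR USERNAME>"
export NIO_LLM_PREPROMPT="<YOUR PREPROMPT>"

# then, in your shell:
source nio-llm.env
nio_llm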

Config file

Create a config file with the following content:

# config_file.yaml

# required
room: <YOUR ROOM>
password: <YOUR PASSWORD>
username: <YOUR USERNAME>
preprompt: <YOUR PREPROMPT>

# optional
device_id: nio-llm
homeserver: https://matrix.org
ggml_repoid: TheBloke/stable-vicuna-13B-GGML
ggml_filename: stable-vicuna-13B.ggmlv3.q5_1.bin
sync_timeout: 30000
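
Note that YAML treats some characters specially. If your preprompt contains a colon or other YAML syntax, quote the value (made-up example):

preprompt: "You are a helpful assistant: answer concisely."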

Then run:

nio_llm --config config_file.yaml
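
Since all three sources can be mixed, one illustrative split keeps secrets in the environment, shared settings in the config file, and one-off overrides on the command line. The exact precedence between sources on conflicts is determined by jsonargparse, so it is worth confirming which source wins:

export NIO_LLM_PASSWORD="<YOUR PASSWORD>"
nio_llm --config config_file.yaml --sync-timeout 60000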

Special thanks