Mirror of https://github.com/Laurent2916/nio-llm.git (synced 2024-10-18 08:06:18 +00:00)
# Nio LLM

Your own little LLM in your Matrix chatroom.
## Usage

This project uses jsonargparse to handle command line arguments. To see the available options, run:

```shell
nio_llm --help
```

To configure the bot, you can use command line arguments, environment variables, or a config file (or any mix of the three).
### Command line arguments

```shell
# required: --room, --password, --username, --preprompt
# optional: --device-id, --homeserver, --ggml-repoid, --ggml-filename, --sync-timeout
nio_llm \
  --room <YOUR ROOM> \
  --password <YOUR PASSWORD> \
  --username <YOUR USERNAME> \
  --preprompt <YOUR PREPROMPT> \
  --device-id nio-llm \
  --homeserver https://matrix.org \
  --ggml-repoid TheBloke/stable-vicuna-13B-GGML \
  --ggml-filename stable-vicuna-13B.ggmlv3.q5_1.bin \
  --sync-timeout 30000
```

Note that shell comments cannot appear between `\` line continuations, so the required/optional split is listed above the command instead.
### Environment variables

```shell
# required
export NIO_LLM_ROOM=<YOUR ROOM>
export NIO_LLM_PASSWORD=<YOUR PASSWORD>
export NIO_LLM_USERNAME=<YOUR USERNAME>
export NIO_LLM_PREPROMPT=<YOUR PREPROMPT>

# optional
export NIO_LLM_DEVICE_ID=nio-llm
export NIO_LLM_HOMESERVER=https://matrix.org
export NIO_LLM_GGML_REPOID=TheBloke/stable-vicuna-13B-GGML
export NIO_LLM_GGML_FILENAME=stable-vicuna-13B.ggmlv3.q5_1.bin
export NIO_LLM_SYNC_TIMEOUT=30000

nio_llm
```
### Config file

Create a config file with the following content:

```yaml
# config_file.yaml

# required
room: <YOUR ROOM>
password: <YOUR PASSWORD>
username: <YOUR USERNAME>
preprompt: <YOUR PREPROMPT>

# optional
device_id: nio-llm
homeserver: https://matrix.org
ggml_repoid: TheBloke/stable-vicuna-13B-GGML
ggml_filename: stable-vicuna-13B.ggmlv3.q5_1.bin
sync_timeout: 30000
```
Then run:

```shell
nio_llm --config config_file.yaml
```
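The three sources can also be combined. A minimal sketch of one such mix, assuming jsonargparse's usual behavior where command line flags take precedence over the config file passed via `--config` (the placeholder values are illustrative, not defaults):

```shell
# keep the secret out of the config file by supplying it as an environment variable
export NIO_LLM_PASSWORD=<YOUR PASSWORD>

# take the remaining settings from the config file,
# overriding just the room on the command line
nio_llm --config config_file.yaml --room <ANOTHER ROOM>
```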