Mirror of https://github.com/Laurent2916/nio-llm.git, synced 2024-11-21 05:38:48 +00:00
🔥 simplify the README
This commit is contained in: parent 12080ad3a5, commit 8eda4825d9
README.md (74 changed lines)
@@ -8,79 +8,11 @@ Your own little LLM in your matrix chatroom.

Removed:
## Usage

This project uses [jsonargparse](https://github.com/omni-us/jsonargparse/) to help with the command line arguments.

To see the available options, run:

```bash
nio_llm --help
```

To run the bot, you can use command line arguments, environment variables, a config file, or any mix of the three.

### Command line arguments

```bash
# required: --room, --password, --username, --preprompt
# optional: --device-id, --homeserver, --ggml-repoid, --ggml-filename, --sync-timeout
nio_llm \
  --room "<YOUR ROOM>" \
  --password "<YOUR PASSWORD>" \
  --username "<YOUR USERNAME>" \
  --preprompt "<YOUR PREPROMPT>" \
  --device-id nio-llm \
  --homeserver https://matrix.org \
  --ggml-repoid TheBloke/stable-vicuna-13B-GGML \
  --ggml-filename stable-vicuna-13B.ggmlv3.q5_1.bin \
  --sync-timeout 30000
```
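
If you only want the required options, the same command reduces to the following (same flags as above, nothing new beyond what this README already shows):

```bash
# Minimal run: only the four required options, defaults for everything else.
nio_llm \
  --room "<YOUR ROOM>" \
  --password "<YOUR PASSWORD>" \
  --username "<YOUR USERNAME>" \
  --preprompt "<YOUR PREPROMPT>"
```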

### Environment variables

```bash
# required
export NIO_LLM_ROOM="<YOUR ROOM>"
export NIO_LLM_PASSWORD="<YOUR PASSWORD>"
export NIO_LLM_USERNAME="<YOUR USERNAME>"
export NIO_LLM_PREPROMPT="<YOUR PREPROMPT>"

# optional
export NIO_LLM_DEVICE_ID=nio-llm
export NIO_LLM_HOMESERVER=https://matrix.org
export NIO_LLM_GGML_REPOID=TheBloke/stable-vicuna-13B-GGML
export NIO_LLM_GGML_FILENAME=stable-vicuna-13B.ggmlv3.q5_1.bin
export NIO_LLM_SYNC_TIMEOUT=30000

nio_llm
```

### Config file

Create a config file with the following content:

```yaml
# config_file.yaml

# required
room: <YOUR ROOM>
password: <YOUR PASSWORD>
username: <YOUR USERNAME>
preprompt: <YOUR PREPROMPT>

# optional
device_id: nio-llm
homeserver: https://matrix.org
ggml_repoid: TheBloke/stable-vicuna-13B-GGML
ggml_filename: stable-vicuna-13B.ggmlv3.q5_1.bin
sync_timeout: 30000
```

Then run:

```bash
nio_llm --config config_file.yaml
```
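
Because the three sources can be mixed, a config file can hold the stable values while the environment or the command line overrides individual options. A small sketch; the exact precedence is assumed from jsonargparse's usual behavior (command-line arguments win), not stated in this README:

```bash
# Assumed sketch: secrets from the environment, defaults from the config file,
# and a one-off override on the command line.
export NIO_LLM_PASSWORD="<YOUR PASSWORD>"
nio_llm --config config_file.yaml --sync-timeout 60000
```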

Added:

This project is split in two parts: the client and the server.

The server simply downloads an LLM and starts a llama-cpp-python server (which mimics an OpenAI server).
The client connects to the matrix server and queries the llama-cpp-python server to create matrix messages.
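
For illustration, the server half can be exercised on its own. A minimal sketch, assuming llama-cpp-python is installed with its server extra; the model file name is the one this README already uses, while the port is llama-cpp-python's usual default, not something this commit specifies:

```bash
# Assumed sketch: serve a local GGML model over an OpenAI-compatible API.
pip install "llama-cpp-python[server]"
python -m llama_cpp.server \
  --model ./stable-vicuna-13B.ggmlv3.q5_1.bin \
  --port 8000
```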
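And because the server mimics the OpenAI API, the client's query step can be reproduced by hand. A hedged sketch; the endpoint path and port assume llama-cpp-python's defaults rather than anything stated in this commit:

```bash
# Assumed sketch: request a chat completion the way the client would
# before posting the model's reply into the Matrix room.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello from the chatroom!"}]}'
```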
## Special thanks