# Nio LLM


Your own little LLM in your Matrix chatroom.

## Usage

This project is split into two parts: the client and the server.

The server downloads an LLM and starts a llama-cpp-python server, which exposes an OpenAI-compatible API.
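Because the server speaks the OpenAI protocol, it can be queried with the openai Python library pointed at the local endpoint. The sketch below is an assumption-laden illustration, not this project's actual code: the base URL uses llama-cpp-python's default port 8000, the model name is hypothetical, and the openai 0.x API style is assumed.

```python
def build_messages(system_prompt: str, user_text: str) -> list:
    """Build the chat payload shape expected by /v1/chat/completions."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]


def ask(user_text: str, base_url: str = "http://localhost:8000/v1") -> str:
    """Query a running llama-cpp-python server (requires the server to be up).

    The base URL and model name are assumptions, not values from this repo.
    """
    import openai  # imported lazily so build_messages works without it

    openai.api_base = base_url
    openai.api_key = "not-needed"  # a local server ignores the key
    response = openai.ChatCompletion.create(
        model="local-model",  # hypothetical name; the server serves one model
        messages=build_messages("You are a helpful Matrix bot.", user_text),
    )
    return response["choices"][0]["message"]["content"]
```

With a server running locally, `ask("Hello!")` would return the model's reply as a string.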

The client connects to the Matrix server and queries the llama-cpp-python server to generate replies to room messages.
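A minimal sketch of such a client using the matrix-nio library is shown below. The mention-based trigger heuristic and all names (`run_client`, `bot_name`) are assumptions for illustration, not this project's actual trigger logic; the Matrix event handling follows matrix-nio's `AsyncClient` API.

```python
def should_respond(body: str, bot_name: str) -> bool:
    """Reply only when the message mentions the bot.

    A simple heuristic assumed for this sketch, not the project's real logic.
    """
    return bot_name.lower() in body.lower()


async def run_client(homeserver: str, user: str, password: str, bot_name: str):
    # matrix-nio is imported lazily so should_respond stays importable
    # without the dependency installed.
    from nio import AsyncClient, MatrixRoom, RoomMessageText

    client = AsyncClient(homeserver, user)
    await client.login(password)

    async def on_message(room: MatrixRoom, event: RoomMessageText):
        # Ignore our own messages and messages that don't mention the bot.
        if event.sender == client.user_id:
            return
        if not should_respond(event.body, bot_name):
            return
        reply = "..."  # here the client would query the llama-cpp-python server
        await client.room_send(
            room_id=room.room_id,
            message_type="m.room.message",
            content={"msgtype": "m.text", "body": reply},
        )

    client.add_event_callback(on_message, RoomMessageText)
    await client.sync_forever(timeout=30_000)  # milliseconds
```

The callback is registered once and fires on every text message the client sees during sync.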

## Special thanks