
# Nio LLM


Your own little LLM in your Matrix chatroom.

## Usage

This project is split into two parts: the client and the server.

The server simply downloads an LLM and starts a llama-cpp-python server, which mimics the OpenAI API.
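
As a rough sketch of what that amounts to (this is not the project's actual code; the Hugging Face repo, filename, and port are placeholder assumptions), one could download a GGUF model and launch the OpenAI-compatible server like this:

```python
import subprocess
import sys

from huggingface_hub import hf_hub_download

# Download a GGUF model from the Hugging Face Hub.
# The repo_id and filename below are placeholders; substitute the model
# you actually want to serve.
model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-Chat-GGUF",
    filename="llama-2-7b-chat.Q4_K_M.gguf",
)

# Start the OpenAI-compatible llama-cpp-python server on port 8000.
subprocess.run(
    [sys.executable, "-m", "llama_cpp.server", "--model", model_path, "--port", "8000"],
    check=True,
)
```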

The client connects to the Matrix homeserver and queries the llama-cpp-python server to generate Matrix messages.
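
A minimal sketch of that client loop, using matrix-nio and the openai library (again, not the project's actual code: the homeserver URL, bot user, password, and server port are all placeholder assumptions):

```python
import asyncio

from nio import AsyncClient, MatrixRoom, RoomMessageText
from openai import AsyncOpenAI

# Point the OpenAI client at the local llama-cpp-python server.
# The api_key is required by the library but unused by the local server.
llm = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")


async def main() -> None:
    # Placeholder homeserver and bot account.
    client = AsyncClient("https://matrix.org", "@mybot:matrix.org")
    await client.login("password")

    async def on_message(room: MatrixRoom, event: RoomMessageText) -> None:
        # Ignore the bot's own messages to avoid replying to itself.
        if event.sender == client.user_id:
            return
        # Ask the local LLM for a reply. The model name is typically
        # ignored by a single-model llama-cpp-python server.
        completion = await llm.chat.completions.create(
            model="local",
            messages=[{"role": "user", "content": event.body}],
        )
        # Post the reply back to the room as a plain text message.
        await client.room_send(
            room.room_id,
            message_type="m.room.message",
            content={
                "msgtype": "m.text",
                "body": completion.choices[0].message.content,
            },
        )

    client.add_event_callback(on_message, RoomMessageText)
    await client.sync_forever(timeout=30000)


asyncio.run(main())
```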

## Special thanks