
Features
this isn’t some half-baked “ai wrapper”. brainz is a full llm runtime — built for devs who actually like to own their stack instead of renting it. fine-tuning, memory, inference, agents… all under your control. no middlemen.
dynamic fine-tuning
training doesn’t wait. brainz learns while it talks.
what feeds it:
raw user prompts
logged feedback dumps
prompt frequency patterns
autonomous scoring agents picking what’s worth learning
no preprocessing, no restarts, no “come back later”. the runtime decides when to train. every single reply can be a training signal.
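roughly, the loop looks like this. a minimal sketch assuming a score-and-batch trigger; `TrainingQueue`, the threshold, and the batch size are illustrative, not brainz's actual api:

```python
# sketch of a reply-as-training-signal loop (illustrative names, not the real brainz api)
from dataclasses import dataclass, field

@dataclass
class TrainingQueue:
    threshold: float = 0.7          # minimum score a reply needs to become a training example
    batch_size: int = 32            # fine-tune once this many examples have accumulated
    examples: list = field(default_factory=list)

    def consider(self, prompt: str, reply: str, score: float) -> bool:
        """called after every reply; a scoring agent decides what's worth learning."""
        if score < self.threshold:
            return False
        self.examples.append({"prompt": prompt, "completion": reply})
        if len(self.examples) >= self.batch_size:
            self.fine_tune()
        return True

    def fine_tune(self):
        batch, self.examples = self.examples, []
        # hand the batch to whatever trainer runs locally (LoRA, full fine-tune, etc.)
        print(f"triggering fine-tune on {len(batch)} examples")

q = TrainingQueue(threshold=0.7, batch_size=2)
q.consider("how do i resume a crashed run?", "use the checkpoint you saved last", score=0.9)
q.consider("explain the memory schema", "vectors and tags live in postgres", score=0.85)  # hits batch_size, triggers fine_tune()
```

the point: training is just another consumer of the reply stream, gated by whatever scoring agent you plug in.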
semantic memory engine
memory that actually matters. no “chat history” gimmicks — real vector-based context recall.
how it works:
sentence-transformers for embeddings
top-k similarity search to pull the best matching past prompts
tagged + queryable memory logs
stored cleanly in postgres, ready for digging
result? long-term context that adapts tone, corrects misread intent, and grows smarter over time. it doesn't just answer, it remembers why.
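for a feel of the recall path, here's a minimal sketch with sentence-transformers and top-k similarity search; the model name and the in-memory list (standing in for the postgres-backed store) are placeholders, not the shipped code:

```python
# rough shape of recall: embed the incoming prompt, pull the top-k most similar past prompts
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # any sentence-transformers model works here

# stand-in for the tagged, queryable memory log that brainz keeps in postgres
memory = [
    "user asked how to resume training after a crash",
    "user prefers terse answers with code first",
    "user is deploying with docker compose on a single box",
]
memory_vecs = model.encode(memory, convert_to_tensor=True)

def recall(prompt: str, k: int = 2):
    """return the k best-matching past prompts with their similarity scores."""
    query_vec = model.encode(prompt, convert_to_tensor=True)
    hits = util.semantic_search(query_vec, memory_vecs, top_k=k)[0]
    return [(memory[h["corpus_id"]], h["score"]) for h in hits]

print(recall("how do I restart a fine-tune that died?"))
```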
autonomous agents
you’re not the only one improving the system. brainz runs its own watchers.
agents do stuff like:
spot low-effort or spammy prompts
auto-trigger fine-tuning (autotrainer)
rewrite messy inputs and sloppy outputs (feedbackloop agent)
keep the tone, structure, and accuracy on point
all scriptable, chainable, cli-ready. leave it running, it’ll evolve on its own.
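a chain can be as small as a few callables. sketch below, with made-up agent names (`spam_filter`, `feedback_rewrite`) standing in for the real autotrainer / feedbackloop modules:

```python
# how chaining watchers might look: each agent inspects or rewrites a record and passes it on
def spam_filter(record: dict) -> dict | None:
    # drop low-effort prompts so they never become training data
    if len(record["prompt"].split()) < 3:
        return None
    return record

def feedback_rewrite(record: dict) -> dict:
    # clean up sloppy input before it gets logged or scored
    record["prompt"] = " ".join(record["prompt"].split())
    return record

def run_chain(record: dict, agents) -> dict | None:
    for agent in agents:
        record = agent(record)
        if record is None:       # an agent vetoed the record, stop the chain
            return None
    return record

cleaned = run_chain({"prompt": "  how do i   wire up the cli   "}, [spam_filter, feedback_rewrite])
print(cleaned)
```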
dev-first stack
no black box bullshit. brainz is made to be torn apart and rebuilt.
you get:
fastapi backend
react + vite + tailwind frontend
dockerized deployment
postgres + sqlalchemy
cli tools for training + inference
live logs, memory, analytics
swap layers, script new ones, rip stuff out — it’s local-first and terminal-friendly.
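to show how thin the surface is, here's a minimal fastapi route in the shape a local inference endpoint could take; the `/infer` path, the request fields, and the `generate()` stub are assumptions for illustration, not brainz's actual schema:

```python
# minimal shape of a self-hosted inference route; swap generate() for the local model call
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="brainz-style local runtime")

class Prompt(BaseModel):
    text: str
    top_k_memory: int = 3   # how many past prompts to pull in as context

def generate(text: str, context: list[str]) -> str:
    # placeholder for the local model; nothing leaves the machine
    return f"echo: {text} (context items: {len(context)})"

@app.post("/infer")
def infer(prompt: Prompt):
    context: list[str] = []   # would come from the vector memory in postgres
    reply = generate(prompt.text, context)
    return {"reply": reply, "context_used": len(context)}
```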
privacy-first
your data. your rules. period.
✅ 100% self-hosted ✅ no api keys ✅ no cloud calls ✅ no telemetry ✅ no lock-in
everything runs on your hardware. stays there.