
Under The Hood

brainz isn’t some mystery box — it’s built like proper infrastructure. every layer is exposed, every call traceable, every piece swappable. run it in a container, rip it apart, rebuild it. your stack, your rules.


backend architecture

  • language: python 3.10+

  • core stack: fastapi + sqlalchemy + transformers + sentence-transformers

  • async-ready, tuned for llms. nothing blocking, nothing hidden.

brainz was designed for devs who hate waiting on “magic functions”. it’s clean, modular, and made to be torn open.
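roughly what a route looks like, as a minimal sketch: a plain huggingface text-generation pipeline behind an async fastapi endpoint, with the blocking model call pushed to a worker thread. the route path, model name, and parameters here are placeholders, not the actual brainz API.

```python
# minimal sketch, not the real brainz routes: the endpoint path, model name,
# and request shape are placeholders.
from fastapi import FastAPI
from fastapi.concurrency import run_in_threadpool
from transformers import pipeline

app = FastAPI()

# load once at startup; inference is CPU/GPU-bound, so it runs off the event loop
generator = pipeline("text-generation", model="gpt2")


@app.post("/generate")
async def generate(prompt: str, max_new_tokens: int = 64):
    # keep the loop free: the blocking model call goes to a worker thread
    result = await run_in_threadpool(generator, prompt, max_new_tokens=max_new_tokens)
    return {"text": result[0]["generated_text"]}
```

the point: model inference never sits on the event loop, which is what "async-ready" actually buys you.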


frontend stack

fast. reactive. no unnecessary bloat.

  • language: typescript

  • react – component-driven ui

  • vite – hot reload + fast builds

  • tailwind – minimal, utility-first styling

it feels native without trying to impress you with heavy animations. built for devs, not designers.


devops & infra

full docker-native build. no weird global python installs, no backend/frontend version hell.

docker-compose up → you’re live. works the same on your laptop or in prod. one command, no excuses.
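if you want a picture of the layout, here's a minimal compose sketch. service names, ports, and build paths are illustrative assumptions, not the shipped file.

```yaml
# minimal sketch, not the shipped file: service names, ports, and build
# paths are assumptions.
services:
  backend:
    build: ./backend
    env_file: .env        # MODEL_NAME lives here
    ports:
      - "8000:8000"
  frontend:
    build: ./frontend
    ports:
      - "5173:5173"
    depends_on:
      - backend
```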


model compatibility

brainz doesn’t care what transformer you throw at it, as long as it’s huggingface-compatible.

tested and running:

  • falcon

  • gpt-j

  • mistral

  • llama

  • anything else via AutoModelForCausalLM

switching? change MODEL_NAME in .env, hot reload, done.
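under the hood that swap is just the standard auto classes reading MODEL_NAME, roughly like this sketch (the fallback model id and device handling are assumptions, not brainz's exact loader):

```python
# sketch of the swap path: MODEL_NAME comes from .env / the environment,
# the auto classes do the rest. the fallback id and device_map are assumptions.
import os

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = os.getenv("MODEL_NAME", "mistralai/Mistral-7B-v0.1")

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",  # let accelerate spread the weights over whatever hardware you have
)
```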


flexible & extendable

it’s all layered:

  • swap out vector engines

  • replace tokenizer logic

  • extend agents

  • build adapters for custom models

  • rewrite memory scoring if you want

brainz doesn’t fight you — it’s built to be messed with.
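to make the layering concrete, here's a hypothetical sketch of the kind of seam you can swap behind. VectorEngine and its methods are made-up names, not brainz's real interfaces; the point is that anything matching the shape drops in.

```python
# hypothetical seam, not brainz's actual classes: any engine matching the
# protocol can be plugged in wherever vectors get stored and searched.
from typing import Protocol, Sequence


class VectorEngine(Protocol):
    def add(self, ids: Sequence[str], vectors: Sequence[Sequence[float]]) -> None: ...
    def search(self, vector: Sequence[float], k: int = 5) -> list[str]: ...


class InMemoryEngine:
    """toy drop-in: brute-force cosine search, fine for tests."""

    def __init__(self) -> None:
        self._store: dict[str, list[float]] = {}

    def add(self, ids, vectors):
        self._store.update(zip(ids, vectors))

    def search(self, vector, k=5):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
            return dot / norm if norm else 0.0

        ranked = sorted(self._store, key=lambda i: cosine(vector, self._store[i]), reverse=True)
        return ranked[:k]
```

swap the toy engine for whatever backend you actually run; callers only ever see the protocol.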
