
API Reference
raw, open, no bullshit. brainz exposes a clean rest api — same engine the cli + ui hit. hook it into your bots, dashboards, or custom scripts.
base url:
http://localhost:8000
swagger (auto-docs):
http://localhost:8000/docs
all endpoints live under:
/api/
POST /api/llm/query
– ask the brain
throw a prompt, get a response. zero latency overhead, no third-party nonsense.
request
{
"prompt": "explain rollups in ethereum"
}
response
{
"response": "rollups batch transactions off-chain to scale ethereum..."
}
curl example
curl -X POST http://localhost:8000/api/llm/query \
-H "Content-Type: application/json" \
-d '{"prompt": "what is solana?"}'
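same call from python, stdlib only. a sketch against the request/response shape shown above:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # default local brainz instance

def build_request(path: str, payload: dict) -> urllib.request.Request:
    # build a JSON POST request against the local brainz api
    return urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask(prompt: str) -> str:
    # POST to /api/llm/query and pull the "response" field out of the reply
    req = build_request("/api/llm/query", {"prompt": prompt})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

build_request is split out so the other POST endpoints can reuse it; swap BASE_URL if your instance runs elsewhere.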
POST /api/llm/train
– teach it new stuff
feed it a prompt → completion pair, live. no retrain scripts, no restarts, just on-the-fly fine-tuning.
request
{
"prompt": "what is slippage?",
"completion": "slippage = the difference between expected and actual execution price."
}
response
{
"status": "ok",
"message": "training sample processed."
}
optional flags (metadata)
vectorize prompt into memory
tag it for filtered searches
dry-run to simulate before committing
every sample you send = another neuron wired.
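a small python helper for building the training body. prompt and completion are the documented fields; the metadata kwargs (tags, dry_run, etc.) are assumptions standing in for the optional flags above, so check /docs for the real field names:

```python
import json

def build_train_sample(prompt: str, completion: str, **metadata) -> str:
    # serialize a prompt → completion pair for POST /api/llm/train.
    # extra keyword args ride along as metadata (field names here are
    # assumptions, not the confirmed schema)
    body = {"prompt": prompt, "completion": completion, **metadata}
    return json.dumps(body)
```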
POST /api/user/create
– create a user session
mostly for internal agent tracking or prompt provenance.
request
{
"username": "ani"
}
response
{
"username": "ani",
"api_key": "uuid-generated-here"
}
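in python, a sketch that builds the request (send it with urllib.request.urlopen against a running instance):

```python
import json
import urllib.request

def user_create_request(username: str,
                        base_url: str = "http://localhost:8000") -> urllib.request.Request:
    # build (but don't send) the POST /api/user/create request;
    # the reply carries the generated api_key
    return urllib.request.Request(
        base_url + "/api/user/create",
        data=json.dumps({"username": username}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```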
GET /api/system/logs
– watch it think
streams live logs. see what the system is doing in real time:
response
[2025-07-15 13:44] training started: 12 samples
[2025-07-15 13:45] vector memory updated
[2025-07-15 13:46] inference: prompt from cli
optional filters:
/api/system/logs?level=INFO&limit=100
perfect for watching agents fire or tracking retrain cycles.
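a tiny python helper for composing the filtered url (level and limit are the filters shown above; anything else you pass is up to the server to accept):

```python
from urllib.parse import urlencode

def logs_url(base_url: str = "http://localhost:8000", **filters) -> str:
    # build a GET /api/system/logs url with optional query filters,
    # e.g. level="INFO", limit=100
    query = urlencode(filters)
    return base_url + "/api/system/logs" + ("?" + query if query else "")
```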
auth (if you care)
out of the box → open & local-first. wanna lock it down? your call.
add middleware →
backend/api/middleware/auth.py
enable key/token-auth in
server.py
docker isolation / nginx rules for external setups
no vendor lock-in, no forced auth, just hooks if you need them.
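the check itself can stay tiny. a hypothetical sketch of the kind of gate you'd wire into backend/api/middleware/auth.py (the x-api-key header name and the key store are illustrative assumptions, not the project's actual code):

```python
def check_api_key(headers: dict, valid_keys: set) -> bool:
    # allow the request only if its x-api-key header (name is an
    # assumption) matches a known key; keys could come from
    # /api/user/create responses or an env var
    key = headers.get("x-api-key")
    return key is not None and key in valid_keys
```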
testing
hit swagger: http://localhost:8000/docs
or go manual:
curl / httpie (cli)
postman / insomnia (ui)
python scripts (requests)
same engine across api, cli, and ui. one brain, different doors.