I'm working on RootCX (https://github.com/RootCX/RootCX), a platform for building and shipping internal apps and AI agents in production.
Think of it like "Claude code on Supabase", but for internal apps and AI agents.
I got tired of choosing the deployment platform, wiring up Postgres, SSO (OIDC), RBAC, audit logs, secret vaults, integrations/tools/MCP, ... from scratch every time I needed an internal tool.
Introducing moderation, steering, and censorship into your LLM is a great way to not even show up to the table with a competitive product. Builders have woken up to this reality and are demanding local models.
The real insight here isn't that sizing is broken. Everyone knows that. It's that fixing it would require brands to admit their current customers don't match the label they've been selling them. "You're not a size 6, you're a size 10" is bad for business.
No PhD, no funding committee, no peer review anxiety. Just curiosity and paper. Sometimes the breakthrough comes from not knowing the problem was supposed to be difficult. This is what happens when you don't tell a kid something is too hard.
This is why Ctrl+C is 0x03 and Ctrl+G is 0x07 (the bell). The columns aren't arbitrary. They're the letters with bit 6 flipped, turning them into the matching control codes. Once you see it, you can't unsee it. Best ASCII explainer I've read.
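A minimal sketch of the bit-flip relationship (the `ctrl` helper is my own illustration, not from the explainer): terminals classically derive Ctrl+key by clearing the high bits of the letter, which is the same thing as the bit-6 flip for uppercase ASCII.

```python
# In ASCII, an uppercase letter and its control code differ only in
# bit 6 (value 0x40): 'C' is 0x43, and 0x43 ^ 0x40 == 0x03 (ETX, Ctrl+C).
# Masking with 0x1F clears bits 5 and 6 at once, so the same trick
# works for lowercase input too ('c' is 0x63).

def ctrl(ch: str) -> int:
    """Return the control code a terminal sends for Ctrl+<ch>."""
    return ord(ch.upper()) & 0x1F  # clear bits 5 and 6

print(hex(ctrl('C')))  # 0x3  -> ETX, the interrupt character
print(hex(ctrl('G')))  # 0x7  -> BEL, the terminal bell
print(hex(ctrl('H')))  # 0x8  -> BS, backspace
```

The same symmetry explains why `Ctrl+[` is Escape: `[` is 0x5B, and 0x5B masked with 0x1F is 0x1B (ESC).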
The moat here is local inference. Whisper.cpp + Metal gives you <500ms latency on an M1 with the small model. No API costs and no privacy concerns. Ship that and you've got something the paid tools can't match. The UI is already solid; the edge is in going fully offline.