The full system isn't open source yet; we're still deciding on licensing. But the benchmark repo has:
- Complete results (500/500 on LongMemEval)
- Raw logs showing each question/answer
- Comparison with baselines
Happy to answer questions about the approach. The core insight: intelligent context organization beats raw context volume. Memory extraction involves no LLM calls; retrieval is purely embedding-based, using RudraDB (https://rudradb.com).
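To make "embedding-based retrieval, no LLM calls" concrete, here's a minimal sketch of the general pattern: memories are stored as vectors and retrieval is a pure similarity search. This is a generic illustration, not RudraDB's actual API, and the bag-of-words embedder is a toy stand-in for a real embedding model; the `MemoryStore` class and its methods are hypothetical names.

```python
from collections import Counter
import math

def embed(text):
    # Toy embedding: L2-normalized bag-of-words counts.
    # A real system would use a learned sentence-embedding model here.
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {tok: c / norm for tok, c in counts.items()}

def cosine(a, b):
    # Sparse dot product; vectors are already unit-normalized.
    return sum(w * b.get(tok, 0.0) for tok, w in a.items())

class MemoryStore:
    """Stores memories as vectors; retrieval is similarity search only (no LLM calls)."""

    def __init__(self):
        self.items = []  # list of (text, vector) pairs

    def add(self, text):
        self.items.append((text, embed(text)))

    def retrieve(self, query, k=2):
        # Rank stored memories by cosine similarity to the query embedding.
        qv = embed(query)
        ranked = sorted(self.items, key=lambda item: cosine(qv, item[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = MemoryStore()
store.add("User's favorite color is blue")
store.add("Meeting scheduled for Friday at 3pm")
store.add("User lives in Berlin")
print(store.retrieve("favorite color", k=1))  # → ["User's favorite color is blue"]
```

The point of the sketch is the shape of the pipeline: extraction and retrieval are both deterministic vector operations, so cost and latency don't scale with an LLM's inference time.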
If you want to verify independently, I can provide API access.