Hacker News | new | past | comments | ask | show | jobs | submit | SafeDusk's comments

uv's script support enabled me to distribute an MCP client or server as a single file[0].

[0]: https://blog.toolkami.com/mcp-server-in-a-file/
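For reference, a minimal sketch of the pattern, assuming PEP 723 inline script metadata and a hypothetical `mcp` dependency (the actual server wiring is elided):

```python
# /// script
# requires-python = ">=3.11"
# dependencies = ["mcp"]  # assumption: whatever SDK the server actually needs
# ///
# `uv run server.py` reads the header above, builds an ephemeral virtualenv
# with the declared dependencies, and executes this file -- so this one file
# is the entire distribution.

def main() -> str:
    # placeholder for the real MCP server startup
    return "mcp server would start here"

if __name__ == "__main__":
    print(main())
```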


I think both Cursor and Cognition are going in the same direction as SWE-grep[0].

SWE-grep was able to hit ~700 tokens/s and Cursor ~300 tokens/s. It's hard to compare precision/recall and cost effectiveness, though, considering SWE-grep also adopted the "hack" of running on Cerebras.

I'm trying to kickstart an RL-based code search project called "op-grep" here[1]. Still pretty early, but I'm looking for collaborators!

[0]: https://cognition.ai/blog/swe-grep

[1]: https://github.com/aperoc/op-grep


excited to see how far you get with op-grep!


hehe thanks! as a self-taught AI engineer, it might take a while =D


One of the core design principles at https://github.com/aperoc/toolkami


Spam


Not sure man, I specifically stated this in my README way before this post: https://github.com/aperoc/toolkami/blob/main/README.md#comma....

I mean, it's not much, but the concept just resonates with me and I want to share it. Sad that I can't even share a simple opinion nowadays ...


Great to see progress being made here! I had tons of fun using AlphaEvolve to optimize Perlin noise[0].

[0]: https://blog.toolkami.com/alphaevolve-toolkami-style/


Thanks for sharing your blog! Very interesting work. 100% agree with your 3 criteria on the sweet spot for AI; most systems-performance problems fit right in.


I recommend reading Shopify CEO Tobi's try[0] for a good example of how Ruby's block behavior and metaprogramming make it easy to create a single-file shell wrapper.

[0]: https://github.com/tobi/try/blob/main/try.rb


what the fuck...


And I'm looking for a problem to spend my next decade on ...


Inspired by SWE-grep, I've started a repo to educate myself on it here https://github.com/aperoc/op-grep.

I've drafted an architecture, with the steps mainly as so:

1. Collect action (grep/glob/read) policies, either from usage logs or open datasets

2. Optimize by removing redundant actions or parallelizing

3. Train a model on the optimized action policy

4. Release the model as a single-file MCP tool

(Refer to the repo for a visual diagram of the architecture.)
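As a toy illustration of the optimization step, a minimal sketch (function and action names are hypothetical, not from the repo) that drops exact-duplicate actions from a collected trace while preserving order:

```python
# Each action is (tool, argument), e.g. ("grep", "train loop") or ("read", "model.py").
def optimize_trace(trace: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Remove redundant actions: a tool re-invoked with the same argument adds nothing."""
    seen: set[tuple[str, str]] = set()
    optimized = []
    for action in trace:
        if action in seen:
            continue  # redundant call, skip it
        seen.add(action)
        optimized.append(action)
    return optimized

raw = [("grep", "train"), ("read", "model.py"), ("grep", "train"), ("glob", "*.py")]
print(optimize_trace(raw))  # -> [('grep', 'train'), ('read', 'model.py'), ('glob', '*.py')]
```

A real version would also need to detect subsumed actions (e.g. a `read` made redundant by an earlier one) and mark independent actions for parallel execution.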

I've just released the base model and added `openai_forwarder.py` to start collecting action policies.

Looking for more eyes and contributors to make this a reality, thanks!


I like to think of subagents as "OS threads" with their own context, designed to hand tasks off to.

A good use case is Cognition/Windsurf's SWE-grep, which has its own model to grep code fast.

I was inspired by it, but too bad it's closed for now, so I'm taking a stab at an open version: https://github.com/aperoc/op-grep.
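To make the thread analogy concrete, a toy sketch (all names hypothetical): each subagent gets its own isolated context, like a thread gets its own stack, and only a compact result crosses back to the parent:

```python
from concurrent.futures import ThreadPoolExecutor

def subagent(task: str) -> str:
    # The subagent builds its context from scratch -- the parent's
    # conversation history is not shared, only the task description.
    context = {"task": task, "findings": []}
    context["findings"].append(f"searched code for: {task}")
    # Only a short summary returns to the parent, keeping its context small.
    return f"done: {task} ({len(context['findings'])} finding)"

# The parent hands off independent tasks and collects summaries.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(subagent, t) for t in ("locate auth module", "find failing test")]
    results = [f.result() for f in futures]
print(results)
```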


Kickstarting an exploratory open version here https://github.com/aperoc/op-grep since it doesn't look like they will do it.


This bears very little resemblance to SWE-grep, haha. At least fine-tune a small pre-trained LLM or something on a retrieval dataset. But no, this literally tries to train a small RNN from scratch to retrieve results given a natural language query...


Any plans to offer this as a tool/MCP server for other coding agents or is it going to be Windsurf exclusive?


we have other things in store that can be used by other coding agents; this one was tuned to use custom fast-search tools that kinda wouldn't be useful in other agents

