Hacker News | quinncom's comments

The README claims “Full feature parity with pi,” but I presume pz does not support pi’s extension/package ecosystem (since the extensions are all written in TS and would require bundling Node/Bun) – is that correct? One of pi’s highlights is its extensibility; if that’s not possible with pz, it should be clearly stated as a goal/non-goal.

I just learned yesterday that ChatGPT (and maybe others) can’t connect to an MCP server running on localhost; it needs an endpoint on the public internet. (I guess because the request comes from OpenAI’s servers?)

I’d rather not expose a private MCP server to the public internet, so ContextVM sounds like a step in the right direction. But I’m confused about how it is called: don’t OpenAI’s servers still need you to provide a public endpoint with a domain name and TLS? Or does it go over a Nostr relay instead?


Interesting, I didn't know about that. It could be for security reasons or to lock users into their platform tools, but it seems odd.

If you can still connect to a stdio MCP server, you can plug it into a remote MCP server exposed through ContextVM. You can do this using the CVMI CLI tool, or if you need custom features, the SDK provides the primitives to build a proxy. For example, using CVMI, you could run your server over Nostr. You can run an existing stdio server with the command `npx cvmi serve -- <your-command-to-run-the-server>` or a remote HTTP server with the command `npx cvmi serve -- http(s)://myserver.com/mcp`. This makes your server available through Nostr, and you will see the server's public key in your terminal.

Locally, you can then use the command `npx cvmi use <server-public-key>` to configure it as a local stdio server. The CLI binds both transports, Nostr <-> stdio, so your remote server will appear as a local stdio server. I hope this clarifies your question. For more details, see the documentation at https://docs.contextvm.org. Please ask if you have any other questions :)
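Putting the two halves of the workflow above together (the placeholders are as given; `myserver.com` is just an example host):

```shell
# Host side: expose a local stdio MCP server over Nostr
npx cvmi serve -- <your-command-to-run-the-server>

# ...or bridge an existing remote HTTP MCP server instead
npx cvmi serve -- https://myserver.com/mcp
# cvmi prints the server's Nostr public key in the terminal

# Client side: bind that Nostr-exposed server back to a local stdio transport
npx cvmi use <server-public-key>
```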


I like Readeck – https://codeberg.org/readeck/readeck

Open source. Self hosted or managed. Native iOS and Android apps.

Its Content Scripts feature allows custom JS scripts that transform saved content, which could be used to do URL rewriting.


The 2025 Stack Overflow Developer Survey asked participants which dev tools they want to use (“Desired”) and which tools they have used and want to keep using (“Admired”). Dividing these scores yields an “underrated” score, which reveals which tools may be hidden gems.

I've compiled a list using this method, filtering for tools admired by >60% and used by <20% of developers, then sorting by the “underrated” score.
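For anyone who wants to reproduce the ranking: a minimal sketch of the method in Python. The numbers are made-up illustrations, not actual survey data, and I'm assuming the “underrated” score is admired% divided by desired%.

```python
# Each tool: (name, admired %, desired %, used %) -- illustrative values only.
tools = [
    ("ToolA", 75.0, 30.0, 12.0),
    ("ToolB", 65.0, 50.0, 18.0),
    ("ToolC", 80.0, 70.0, 45.0),  # filtered out: used by >20% of devs
]

# Keep tools admired by >60% and used by <20%, score them, sort descending.
hidden_gems = sorted(
    ((name, round(admired / desired, 2))
     for name, admired, desired, used in tools
     if admired > 60 and used < 20),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, score in hidden_gems:
    print(name, score)
```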

The previous 2024 list is available here: https://news.ycombinator.com/item?id=41090759

In 2025 there were a total of 12 tools with an admired score >70%. In 2024 there were 41. Are we admiring our tools less? Or did we stop caring because AI is touching the tools instead of us?


Although I rarely hit my limit in my $20 a month Codex plan, I can imagine this would be very useful.

The issue I have more often is that I will start a conversation in ChatGPT, and realize an hour later that I needed all that context to be in Codex, so I’ll generally ask ChatGPT to give me a summary of all facts and a copy‑paste prompt for Codex. But maybe there is a way to extract the more useful content from a chat UI to an agent UI.


IMO an agent learns far more by watching the raw agentic flow than by reading a sanitized context dump: you can see exactly where the previous agent derailed and then patched itself. Give that a shot; handing over a spotless doc feels fake anyway.

Screen sharing to any remote API is a nonstarter for me. I don’t care if the API claims ZDR; Snowden’s revelations are still echoing. So, I appreciate that the app supports a custom endpoint for local models.

Which local models did you try? GLM-OCR seems like it would excel at this: https://huggingface.co/zai-org/GLM-OCR


I've got it installed with Qwen3-VL-4B running in LM Studio on my MBP M1 Pro. (Yes, the fans are running.) GLM-OCR didn't work because it returns all text on the screen, despite the instructions asking only for a summary.

Screenshots are summarized in ~28 seconds. Here's the last one:

> "The user switched to the Hacker News tab, displaying item 47049307 with a “Gave Claude photographic memory for $0.0002/screenshot” headline. The chat now shows “Sonnet 4.6” and a message asking “What have I been doing in the past 10 minutes?” profile, replacing prior Signal content. The satellite map background remains unchanged."

The “satellite map background remains unchanged” message appears in every summary (my desktop background is a random Google Maps satellite image that rotates every hour).

I would like to experiment with custom model instructions – for example, to ignore desktop background images.

Earlier in my testing it was sending screenshots for both of my displays at the same time, which was much slower, but now it's only sending screenshots of my main screen. Does MemoryLane only send screenshots for displays that have active windows?

Here's the first test of the MCP server in Claude – https://ss.strco.de/SCR-20260217-onbp.png – it works!


Update: I switched to Qwen3 VL 2B (`qwen3-vl-2b-instruct-mlx@bf16`), which is 2.5× faster than 4B (11s vs 28s per screenshot), and my meager M1 Pro is able to keep up without the fans spinning 100% of the time.

Small nudges to steer company culture regarding AI use:

- signal disclosure as a norm: whenever you use AI, say “BTW I used AI to write this”; when you don’t use AI, say “No AI used in this document”

- add an email footer to your messages that states you do not use AI because [shameful reasons]

- normalize anti-AI language (slop, clanker, hallucination, boiling oceans)

- celebrate human craftsmanship (highlight/compliment well written documentation, reports, memos)

- share AI-fail memes

- gift anti-AI/pro-human stickers

- share news/analysis articles about the AI productivity myth [0], AI-user burnout [1], reverse centaur [2], AI capitalism [3]

[0] https://hbr.org/2025/09/ai-generated-workslop-is-destroying-... [1] https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies... [2] https://pluralistic.net/2025/12/05/pop-that-bubble/ [3] https://80000hours.org/problem-profiles/extreme-power-concen...


OpenClaw is actually built on top of pi-mono (for its agent runtime, models, and tools):

https://docs.openclaw.ai/concepts/agent#pi-mono-integration

https://github.com/openclaw/openclaw/blob/main/docs/pi.md


Did you scroll through the pricing options? The largest Kimi plan is $199/month. “Much better” depends on how much usage is included vs. Anthropic plans/API costs.

From what I can see, this is agentic tooling that provides similar features to OpenClaw. It’s been on GitHub since June 2024 but never seemed to catch the hype train. Some stats comparing the popularity of the two:

  Agent Zero: 14k GH stars, 3k X followers
  OpenClaw: 197k GH stars, 314k X followers

