> Meta CEO Mark Zuckerberg could soon have an AI clone of himself to interact with and provide feedback to employees, according to a report from the Financial Times.
Tunnel vision? If your model can handle a big context, why divide the problem into smaller ones to conquer - even if such splitting might be quite trivial and obvious?
It's the difference between "achieve the goal" and "achieve the goal in this one particular way" (leverage the large context).
I meant, if the claim here is that small models can accomplish the same things with good scaffolding, why didn’t they demonstrate finding those problems with good scaffolding rather than directly pointing them at the problem?
A lot of people in this thread don't seem to be getting that.
If another model can find the vulnerability when you point it at the right place, it would also find the vulnerability if you scanned each place individually.
People are talking about false positives, but that also doesn't matter. Again, they're not thinking it through.
False positives don't matter, as you can just automatically try to exploit the "exploit", and if it doesn't work, it's a false positive.
Worse, we have no idea how Mythos actually worked; it could have done the process I've outlined above, "found" thousands of false positives, and just got rid of them by checking them.
The fundamental point is that it doesn't matter how the cheap models identified the exploit; it's that they can identify it.
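To make that loop concrete, here's a rough sketch of the kind of brute-force-and-verify harness being described. Every name in it (candidate_locations, scan_location, attempt_exploit) is a hypothetical placeholder, not anything from the actual system:

```python
# Hypothetical harness sketch: scan every candidate location with a cheap
# model, then discard false positives by actually attempting each claimed
# exploit. Placeholder names only; not any vendor's real pipeline.
from typing import Callable, Iterable, List

def find_real_vulns(
    candidate_locations: Iterable[str],
    scan_location: Callable[[str], List[str]],   # model call: location -> claimed exploits
    attempt_exploit: Callable[[str], bool],      # harness check: does the claimed exploit actually work?
) -> List[str]:
    confirmed = []
    for location in candidate_locations:         # the "for-each" over the codebase
        for claim in scan_location(location):    # cheap model flags candidates
            if attempt_exploit(claim):           # false positives fail here and are dropped
                confirmed.append(claim)
    return confirmed
```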
When it turns out the harness is just acting as a glorified for-each brute force, it's not the model being intelligent; it's simply the harness covering more ground. It's millions of monkeys bashing typewriters, not Shakespeare at one.
It’s strange to see this constant “I could do that too, I just don’t want to” response.
Finding an important decades-old vulnerability in OpenBSD is extremely impressive. That’s the sort of thing anyone would be proud to put on their resume. Small models are available for anyone to use. Scaffolding isn’t that hard to build. So why didn’t someone use this technique to find this vulnerability and make some headlines before Anthropic did? Either this technique with small models doesn’t actually work, or it does work but nobody’s out there trying it for some reason. I find the second possibility a lot less plausible than the first.
From the article:
>At AISLE, we've been running a discovery and remediation system against live targets since mid-2025: 15 CVEs in OpenSSL (including 12 out of 12 in a single security release, with bugs dating back 25+ years and a CVSS 9.8 Critical), 5 CVEs in curl, over 180 externally validated CVEs across 30+ projects spanning deep infrastructure, cryptography, middleware, and the application layer.
They have been doing it (and likely others have as well), but they are not Anthropic, with a million-dollar marketing budget and a trillion dollars of hype behind it, so you just didn't hear about it.
> If another model can find the vulnerability when you point it at the right place, it would also find the vulnerability if you scanned each place individually.
They didn't just point it at the right place, they pointed it at the right place and gave it hints. That's a huge difference, even for humans.
I mean, definitely a good starting point is a share-nothing system, but then it becomes impossible to use tools (no shared filesystem, no networking), so everything needs to happen over connections the agent provides.
MCP looks like it would fit that purpose, even if it were just an MCP providing access to a shell. Actually, I think a shell MCP would be nice, because currently every agent environment has its own way of managing shell permissions. At least with MCP one could bring the same shell permissions to every agent environment.
Though in practice I just use the shell and almost never MCP; shell commands are much easier to combine, i.e. the agent can write and run a Python program that invokes any shell command. In the "MCP shell" scenario that whole flow would be handled by that one MCP; it wouldn't allow combining MCPs with each other.
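For what it's worth, a minimal "shell MCP" along those lines might look roughly like this, assuming the MCP Python SDK's FastMCP interface; the allowlist and tool name are made up for illustration, not a recommended policy:

```python
# Hypothetical "shell MCP" sketch: expose one shell tool with a simple
# allowlist, so permission policy lives in one place instead of in each
# agent environment. Assumes the MCP Python SDK's FastMCP API.
import shlex
import subprocess
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("shell")

ALLOWED = {"ls", "cat", "git", "rg"}  # example allowlist only

@mcp.tool()
def run_shell(command: str) -> str:
    """Run an allowlisted shell command and return its combined output."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED:
        return f"denied: '{argv[0] if argv else ''}' is not allowlisted"
    result = subprocess.run(argv, capture_output=True, text=True, timeout=60)
    return result.stdout + result.stderr

if __name__ == "__main__":
    mcp.run()
```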
That is fine, but you give up any pretence of security - your agent can inspect your tool's process, environment variables, etc. - so it can presumably leak API keys and other secrets.
Other comments have claimed that tools are/can be made "just as secure" - they can, but as the saying goes: "Security is not a convenience".
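A quick illustration of why that pretence breaks down, under the assumption that the tool runs as a child of the agent host: any subprocess the agent can start inherits the parent's environment by default, secrets included, unless the environment is explicitly scrubbed. (EXAMPLE_API_KEY is a stand-in, not a real secret, and this assumes a Unix-like system with python3 on PATH.)

```python
# Default subprocess behaviour: children inherit the parent's environment.
import os
import subprocess

os.environ["EXAMPLE_API_KEY"] = "hypothetical-secret"  # stand-in secret

# Inherited environment: the child can read the key.
leaked = subprocess.run(
    ["python3", "-c", "import os; print(os.environ.get('EXAMPLE_API_KEY'))"],
    capture_output=True, text=True,
).stdout.strip()
print("inherited env leaks key:", leaked)

# One mitigation: start the child with an explicit, minimal environment.
clean = subprocess.run(
    ["python3", "-c", "import os; print(os.environ.get('EXAMPLE_API_KEY'))"],
    capture_output=True, text=True, env={"PATH": os.environ["PATH"]},
).stdout.strip()
print("scrubbed env:", clean)  # prints 'None'
```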
Good question! I've never tried. The NT driver makes use of some of the more advanced features of the networking stack, so possibly not. But you never know. I'd love a Wg4React.
ReactOS was, at one time, targeting a Windows Server 2003-level of compatibility. With that in mind I can't imagine current Wireguard would have even a shred of hope of working on ReactOS.
It allows me to use Instagram messages without the app - as well as (Facebook/Meta) Messenger (and others).
I do wish they had a "support us" subscription tier, as I think the base price is a little steep - and I don't really need any of the paid features. Maybe something around a third or a quarter of the price.
I would hope that would lead to more users subscribing.
One thing I like is the ability to turn off notifications in the other apps - then DMs become/remain push (via Beeper) while the other SoMe crap becomes pull (check when actively opening the app).
What prevents the agent from persisting or leaking the API key - or reading it from the environment?