A digital watch runs on billions of times more power than the signal we receive from Voyager 1. It blows my mind that we're still able to sense it.
From a quick search: Voyager 1 is 25B km from Earth and runs on 240 watts of power.
I'm no physicist, but I don't understand how a signal is detectable from that far. I'm also very impressed that Voyager can aim at Earth from that distance.
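Rough numbers help here. A back-of-the-envelope Friis link budget, using figures I'm assuming rather than official ones (~23 W X-band transmit power, a 3.7 m spacecraft dish, a 70 m DSN dish, 8.4 GHz, 55% aperture efficiency), lands somewhere around 10^-19 W received, and the DSN really can pull signals that faint out of the noise:

```python
import math

# Back-of-the-envelope link budget (Friis equation). All figures below
# are my assumptions for illustration, not official mission numbers.
c = 3.0e8                      # speed of light (m/s)
f = 8.4e9                      # assumed X-band downlink frequency (Hz)
lam = c / f                    # wavelength, ~3.6 cm
d = 2.5e13                     # 25 billion km, in metres

def dish_gain(diameter_m, efficiency=0.55):
    """Gain of a parabolic dish at wavelength lam (linear, not dB)."""
    return efficiency * (math.pi * diameter_m / lam) ** 2

p_tx = 23.0                    # assumed transmitter power (W)
g_tx = dish_gain(3.7)          # Voyager's high-gain antenna
g_rx = dish_gain(70.0)         # a DSN 70 m dish
p_rx = p_tx * g_tx * g_rx * (lam / (4 * math.pi * d)) ** 2
# With these assumptions p_rx is on the order of 1e-19 W.
```

Even if my assumed numbers are off by a couple of orders of magnitude, the received power is still a vanishingly small fraction of a watt, which is why the huge receive dish and narrow beam matter so much.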
The antenna has a beam width of 0.3°, so it only needs to aim that accurately in the general direction of Earth. In general, an antenna's pointing requirement doesn't get stricter or looser with distance; it depends only on the beam width.
I'm pretty sure, at that distance, it doesn't even matter anymore whether it is pointing at earth, the moon or the sun.
0.3° at 25B km is still a pretty large distance. Some random calculator online says that would be 1.309e+8 km, or 130,900,000 km. The Earth-Moon distance is roughly 384,000 km (about 240,000 miles), and 1 AU ≈ 149,597,900 km. So 0.3° is just under 1 AU, and at these scales 0.3° ≈ 1 AU. The footprint only gets bigger as Voyager gets further away. So essentially, just point at the Sun and it'll hit Earth.
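Those figures check out with a two-line small-angle calculation (the 25B km distance and 0.3° beam width are taken from the comments above):

```python
import math

d_km = 25e9                              # Voyager 1 distance (from above)
footprint_km = d_km * math.radians(0.3)  # beam diameter where it reaches us
au_km = 1.4959787e8                      # 1 astronomical unit
moon_km = 3.84e5                         # Earth-Moon distance
# footprint_km is ~1.309e8 km: just under 1 AU, and hundreds of
# Earth-Moon distances wide.
```

At 0.3° the small-angle approximation (tan x ≈ x) is exact to far more digits than any of the inputs.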
Voyager is so far away that, from its perspective, Earth is barely moving. There's also practically no force acting on Voyager. So compared to the distance between them and the 0.3° beam width, both are hanging pretty still.
It does have an AACS (Attitude and Articulation Control Subsystem), which tracked the Sun and a bright star (Canopus) to orient itself earlier in the mission.
A quick search indicates it still fires its thrusters about once per hour to keep pointing the right way. The biggest problem seems to be that the fuel lines feeding those thrusters are clogging.
It's nice to see people putting effort into tackling things from the human side outside of phishing awareness campaigns and annual training. Even CrowdStrike noted in their annual report that something like 70% of successful attacks were interactive intrusions without malware.
I'm on my phone and can't dive deep right now, but are you able to create detections in SIEMs to identify these kinds of users and behaviors based on this research?
Same. When I try to get it to do a simple loop (e.g. take a screenshot, click next, repeat) it'll work for about five iterations (out of a hundred or so desired) then say, "All done, boss!"
I'm hoping Anthropic's browser extension is able to do some of the same "tricks" that Claude Code uses to gloss over these kinds of limitations.
Claude is extremely poor at vision compared to Gemini and ChatGPT. I think Anthropic severely overfit their evals to coding/text use cases. Maybe naively adding browser use would work, but I'm a bit skeptical.
I have a completely different experience. Pasting a screenshot into CC is my de facto go-to, and more often than not it leads to CC understanding what needs to be done.
I have better success with asking for a short script that does the million iterations than asking the thing to make the changes itself (edit: in IDEs, not in the browser).
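Same here: for the screenshot/click-next loop described above, a tiny driver script is deterministic in exactly the way the agent isn't. A minimal sketch, where `take_screenshot` and `click_next` are hypothetical stand-ins for whatever automation calls you'd actually use (e.g. pyautogui's `screenshot()` and `click()`):

```python
# Deterministic driver loop: the iteration count lives in code, not in
# the model's patience. take_screenshot/click_next are hypothetical
# stand-ins for your real automation library.
def run_batch(n_iterations, take_screenshot, click_next):
    captures = []
    for _ in range(n_iterations):
        captures.append(take_screenshot())
        click_next()
    return captures

# Stubbed usage: all 100 iterations actually run, with no early
# "All done, boss!" at iteration five.
shots = run_batch(100,
                  take_screenshot=lambda: "png-bytes",
                  click_next=lambda: None)
```

You can still have the LLM *write* this script; you just don't let it *be* the loop.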
I'm wondering if they are using vanilla claude or if they are using a fine-tuned version of claude specifically for browser use.
RL fine-tuning LLMs can have pretty amazing results. We did GRPO training of Qwen3:4B to do the task of a small action model at BrowserOS (https://www.browseros.com/) and it was much better than running vanilla Claude, GPT.
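For context on what GRPO training actually optimizes: the core trick is group-relative advantages. You sample several completions per prompt and normalize each completion's reward against the rest of its group, so no separate value model is needed. A rough sketch of just the advantage step (names and the epsilon are my own choices, not BrowserOS's code):

```python
def grpo_advantages(rewards, eps=1e-8):
    """Normalize each completion's reward against its group
    (all completions sampled for the same prompt)."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    return [(r - mean) / (var ** 0.5 + eps) for r in rewards]

# Three completions for one prompt scored 1, 2, 3: the middle one gets
# advantage 0, the others get symmetric positive/negative values.
adv = grpo_advantages([1.0, 2.0, 3.0])
```

These advantages then weight the policy-gradient update, which is what lets a 4B model specialize so effectively on a narrow task like browser actions.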
I think I struggle to see much difference between 80s and synthwave? It wasn't until you pointed it out that I noticed any real differences. It might be like musical genres where the deeper you explore a genre, the more you can bifurcate the genre.
Presenting where you have to be exactly on the content with no deviation is hard. To do that without sounding like a robot is very hard.
Presenting isn't that hard if you know your content thoroughly, and care about it. You just get up and talk about something that you care about, within a somewhat-structured outline.
Presenting where customers and the financial press are watching and parsing every word, and any slip of the tongue can have real consequences? Yeah, um... find somebody else.
I've run Qwen3 4B on my phone. It's not the best, but it's better than the old GPT-3.5. It also has a reasoning mode, and in reasoning mode it's better than the original GPT-4 and the original GPT-4o, but not the latest GPT-4o. I get usable speed, though it's not really comparable to most cloud-hosted models.
I'm on Android, so I've used Termux + Ollama, but if you don't want to set that up in a terminal, or want a GUI, PocketPal AI is a really good app for both Android and iOS. It lets you run Hugging Face models.
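For the Termux route, the setup is short. This is a rough sketch assuming the Termux app from F-Droid and its current package repo (package and model names may drift between releases):

```shell
# Inside Termux: install and start Ollama, then pull the 4B model.
pkg update && pkg install ollama
ollama serve &          # start the local server in the background
ollama run qwen3:4b     # downloads the model on first run, then chats
```

The first `run` downloads a few GB, so do it on Wi-Fi; after that everything is local.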
What speaks to me about this is how it was presented before LLMs, yet many concepts still apply. For example
> Learn tools, and use tools, but don't accept tools. Always distrust them; always be alert for alternative ways of thinking.
In his closing remarks he says, "The most dangerous thought you can have as a creative person is to think you know what you're doing," because you stop being open and receptive to new ways of thinking and doing things, much like programmers shunned FORTRAN because they were comfortable programming in binary.
I considered the section about programs interrogating one another to accomplish some goal [0] the most evocative idea. I'll admit my limited fantasies resembled something like Swagger's OpenAPI, or HATEOAS.
When I heard about Anthropic's Model Context Protocol [1], it reminded me of this talk. I feel pretty skeptical that LLM-based systems are apt to craft a pidgin with a tool, as that seems like the kind of interaction that would use up lots of the context window. I'll grant that I've heard of people working around that by having their LLM leave a summary of its session [2] to bootstrap the next, fresh session. I guess one could leave a pidgin dictionary suited to the LLM's training data, as well.
[0] intro starts at 13:13, regarding arpanet, but description starts at 13:53
> FORTRAN was proposed by Backus and friends, and again was opposed by almost all programmers. First, it was said it could not be done. Second, if it could be done, it would be too wasteful of machine time and capacity. Third, even if it did work, no respectable programmer would use it -- it was only for sissies!
- Richard Hamming, The Art of Doing Science and Engineering
Tangentially, I haven't received a Pig Butchering opening text ("hey") in quite a while. A quick scan through my Spam & Blocked doesn't show much either, just a lot of political spam. Did something happen to improve the situation, or am I just lucky?
It's one more step on the path to A Young Lady's Illustrated Primer. Still a long way to go, but it's a burden off my shoulders to be able to ask stupid questions without judgment or assumptions.
The "Recents" list doesn't contain files I've recently interacted with, and I'm still not sure what qualifies a file to be listed there.