MattSayar's comments | Hacker News

I cannot right-click and create a new file.

The "Recents" doesn't contain files I've recently interacted with, and I'm still not sure what qualifies a file to be listed in Recents.


The power that runs a digital watch is billions of times greater than the power of the signal we receive from Voyager 1. It blows my mind that we're still able to detect it.

https://public.nrao.edu/ask/how-strong-is-the-signal-from-th...


From a quick search: Voyager 1 is 25B km from earth and runs on 240 watts of power.

I'm no physicist, but I don't understand how a signal is detectable from that far away. I'm also very impressed that Voyager can aim at Earth from that distance.


The antenna has a beam width of 0.3°, so it only needs to aim that accurately in the general direction of Earth. In general, the pointing accuracy an antenna needs doesn't change as it gets closer or farther away; it depends only on its beam width.

I'm pretty sure, at that distance, it doesn't even matter anymore whether it is pointing at earth, the moon or the sun.


Can you provide some details on how we on earth are able to pick up such a signal amongst all the noise?


According to https://gigazine.net/gsc_news/en/20240604-voyager-1-photons-... they have a really big 70m dish which collects about 60 RF photons per second per square meter.
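For scale, a quick back-of-the-envelope on what that flux means in watts (assuming Voyager's X-band downlink at ~8.4 GHz, which the article doesn't state):

    import math

    PLANCK = 6.626e-34      # J*s
    freq_hz = 8.4e9         # assumed X-band downlink frequency
    flux = 60               # photons/s per m^2, from the article
    dish_radius_m = 35      # 70 m DSN dish

    area = math.pi * dish_radius_m ** 2             # ~3850 m^2
    photons_per_sec = flux * area                   # ~2.3e5 photons/s
    power_w = photons_per_sec * PLANCK * freq_hz    # ~1.3e-18 W
    print(f"{power_w:.3g} W ({10 * math.log10(power_w / 1e-3):.0f} dBm)")

So under those assumptions the entire 70 m dish is collecting on the order of an attowatt.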


The transcript from this podcast has some answers:

https://www.nasa.gov/podcasts/invisible-network/bonus-dsn-yo...


0.3 degrees is pretty narrow :) I would not consider that "in the general direction".


0.3° at 25B km is still a pretty large distance. Some random online calculator says that works out to 1.3090e+8 km, or 130,900,000 km. The Earth-Moon distance is roughly 384,000 km, and 1 AU ≈ 149,597,900 km, so 0.3° spans just under 1 AU; at astronomical scales, 0.3° is essentially 1 AU. And the footprint only gets bigger as Voyager gets farther away. So essentially, just point at the Sun and it'll hit Earth.
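A quick sanity check on that arithmetic, using only the numbers above:

    import math

    dist_km = 25e9      # Voyager 1's distance from Earth
    beam_deg = 0.3      # antenna beam width

    # Small-angle approximation: footprint ~= distance * angle (in radians)
    footprint_km = dist_km * math.radians(beam_deg)
    au_km = 149_597_870
    print(f"{footprint_km:.4g} km = {footprint_km / au_km:.2f} AU")
    # 1.309e+08 km = 0.87 AU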


Of course, the sun will amplify the radio waves!


What the huh? That's not even funny if I tilt my head sideways and squint at it.


You didn't watch The Three-Body Problem, I gather :)


I'm glad someone got it :)


The apparent size of the moon is 0.5 degrees. So 0.3 degrees is not _that_ narrow. You can point your finger at the moon.


How does it locate Earth to 0.3° accuracy while they're both moving?


Voyager is so far away that, from its perspective, Earth is barely moving, and there's essentially no force acting on Voyager. So practically speaking, compared to the distance between them and the 0.3° beam width, both are hanging pretty still.

It does have the AACS (Attitude and Articulation Control Subsystem), which tracked the Sun and a bright star (Canopus) to orient the spacecraft earlier in the mission.

A quick search indicates it still fires about one thruster puff per hour to keep pointing the right way. The biggest problem seems to be that the fuel lines feeding those thrusters are clogging.


It's nice to see people putting effort into tackling things from the human side outside of phishing awareness campaigns and annual training. Even CrowdStrike noted in their annual report that something like 70% of successful attacks were interactive intrusions without malware.

I'm on my phone and can't dive deep right now, but are you able to create detections in SIEMs to identify these kinds of users and behaviors based on this research?


Same. When I try to get it to do a simple loop (e.g. take a screenshot, click next, repeat), it'll work for about five iterations (out of a hundred or so desired) and then say, "All done, boss!"

I'm hoping Anthropic's browser extension is able to do some of the same "tricks" that Claude Code uses to gloss over these kinds of limitations.


Claude is extremely poor at vision compared to Gemini and ChatGPT. I think Anthropic severely overfit their evals to coding/text use cases. Maybe naively adding browser use would work, but I'm a bit skeptical.


I have a completely different experience. Pasting a screenshot into CC is my de facto go-to, and more often than not it leads to CC understanding what needs to be done.


Is it overfitting if it makes them the best at those tasks?


This has been exactly my experience using all the browser based tools I've tried.

ChatGPT's agents get the furthest but even then they only make it like 10 iterations or something.


I have better success with asking for a short script that does the million iterations than asking the thing to make the changes itself (edit: in IDEs, not in the browser).
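For the screenshot/click-next loop upthread, the script version really is only a few lines. A sketch with pyautogui (the button coordinates and iteration count are placeholders):

    import time
    import pyautogui  # pip install pyautogui

    NEXT_BUTTON = (1200, 800)   # placeholder screen coordinates of the "next" button

    for i in range(100):        # the ~hundred iterations the agent never finishes
        pyautogui.screenshot(f"page_{i:03d}.png")   # capture the current page
        pyautogui.click(*NEXT_BUTTON)               # advance to the next one
        time.sleep(1)                               # give the page time to render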


If you need precision, that's the way to go, and it's usually cheaper and faster too.


I'm wondering if they're using vanilla Claude or a version of Claude fine-tuned specifically for browser use.

RL fine-tuning of LLMs can have pretty amazing results. We did GRPO training of Qwen3:4B to act as a small action model at BrowserOS (https://www.browseros.com/), and it was much better than running vanilla Claude or GPT.
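For a sense of the general shape, here's a minimal sketch using Hugging Face TRL's GRPOTrainer; this is not BrowserOS's actual setup, and the dataset and reward function are stand-ins:

    from datasets import Dataset
    from trl import GRPOConfig, GRPOTrainer

    # Toy browser-action prompts; the real training data isn't public
    train = Dataset.from_list([{"prompt": "Click the 'Submit' button on the page"}])

    def reward_fn(completions, **kwargs):
        # Stand-in reward: prefer completions that emit a parseable action
        return [1.0 if c.strip().startswith("CLICK(") else 0.0 for c in completions]

    trainer = GRPOTrainer(
        model="Qwen/Qwen3-4B",          # TRL also accepts an already-loaded model
        reward_funcs=reward_fn,
        args=GRPOConfig(output_dir="qwen3-4b-grpo"),
        train_dataset=train,
    )
    trainer.train()

The interesting work is all in the reward function: GRPO samples a group of completions per prompt and reinforces the ones the reward scores highest, so a verifiable "did the action succeed" signal goes a long way.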


Hopefully one of those "tricks" involves training a model on examples of browser use.


I think I struggle to see much difference between '80s and synthwave. It wasn't until you pointed it out that I noticed any real differences. It might be like musical genres, where the deeper you explore a genre, the more finely you can split it.


Presenting is hard


Presenting where you have to be exactly on the content with no deviation is hard. To do that without sounding like a robot is very hard.

Presenting isn't that hard if you know your content thoroughly, and care about it. You just get up and talk about something that you care about, within a somewhat-structured outline.

Presenting where customers and the financial press are watching and parsing every word, and any slip of the tongue can have real consequences? Yeah, um... find somebody else.


What's your experience with the quality of LLMs running on your phone?


I've run Qwen3 4B on my phone; it's not the best, but it's better than the old GPT-3.5. It also has a reasoning mode, and in reasoning mode it's better than the original GPT-4 and the original GPT-4o, but not the latest GPT-4o. I get usable speed, but it's not really comparable to most cloud-hosted models.


I'm on Android, so I've used termux+ollama, but if you don't want to set that up in a terminal or you want a GUI, PocketPal AI is a really good app for both Android and iOS. It lets you run Hugging Face models.
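Once ollama is serving in termux, anything on the phone can hit its local HTTP API (default port 11434). A minimal sketch, assuming qwen3:4b has already been pulled:

    import json
    import urllib.request

    # Ollama's default local generate endpoint
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "qwen3:4b",
            "prompt": "Why is the sky blue?",
            "stream": False,
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])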


As others have said, around GPT-3.5 level, so three or four years behind today's SOTA, at reasonable (but not quick) speed.


What speaks to me about this is how it was presented before LLMs, yet many of the concepts still apply. For example:

> Learn tools, and use tools, but don't accept tools. Always distrust them; always be alert for alternative ways of thinking.

In his closing remarks he says, "The most dangerous thought you can have as a creative person is to think you know what you're doing," because you stop being open and receptive to new ways of thinking and doing things, much like programmers shunned FORTRAN because they were comfortable programming in binary.


Bret Victor's idealism is the antithesis of the horrible future that all the capital owners and at least half the programmers are preaching now.


I considered the section about programs interrogating one another to accomplish some goal [0] the most evocative idea. I'll admit that my limited fantasies resembled something like Swagger's OpenAPI or HATEOAS.

When I heard about Anthropic's Model Context Protocol [1], it reminded me of this talk. I'm pretty skeptical that LLM-based systems are apt to craft a pidgin with a tool, as that seems like the kind of interaction that would use up lots of the context window. I'll grant that I've heard of people working around that by having their LLM leave a summary of its session [2] to bootstrap the next, fresh session; I guess one could leave a pidgin dictionary suited to the LLM's training data as well (see the sketch after the links).

[0] intro starts at 13:13, regarding arpanet, but description starts at 13:53

[1] https://modelcontextprotocol.io/docs/learn/architecture

[2] https://news.ycombinator.com/item?id=44661223
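For reference, MCP messages are plain JSON-RPC 2.0; a tool invocation looks roughly like this (the tool name and arguments here are made up):

    import json

    # Shape of an MCP "tools/call" request; "search_library" is hypothetical
    call = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "search_library",
            "arguments": {"query": "light pens"},
        },
    }
    print(json.dumps(call, indent=2))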

<edit to add> Also, Bret Victor's team was able to explore the light pens mentioned in the talk at his Dynamicland research group/lab [3].

[3] https://dynamicland.org/archive/2015/Dynamic_Library


Programmers of assembly code (not binary) shunned FORTRAN? Got a source for that?


> FORTRAN was proposed by Backus and friends, and again was opposed by almost all programmers. First, it was said it could not be done. Second, if it could be done, it would be too wasteful of machine time and capacity. Third, even if it did work, no respectable programmer would use it -- it was only for sissies!

- Richard Hamming, The Art of Doing Science and Engineering


In some sense... I bet there are more people writing assembly than FORTRAN today.


Doubtful. Fortran is big in HPC and has modern versions.


I'll take that bet.


That was obviously before FORTRAN existed.


Because, of course, Real Programmers use FORTRAN.

https://homepages.inf.ed.ac.uk/rni/papers/realprg.html


Tangentially, I haven't received a Pig Butchering opening text ("hey") in quite a while. A quick scan through my Spam & Blocked doesn't show much either, just a lot of political spam. Did something happen to improve the situation, or am I just lucky?


I get between 5-10 per day, so I’d say you’re just lucky.


Are you recently divorced? Just curious if public filings like divorce decrees would make someone a target for pig butchering scams.


No


The cost to operate is so low I can’t see why they’d bother targeting?


It's one more step on the path to A Young Lady's Illustrated Primer. Still a long way to go, but it's a burden off my shoulders to be able to ask stupid questions without judgment or assumptions.

