hasperdi's comments | Hacker News

AFAIK Zuck got mad and restructured the whole department.

What's likely is that there won't be anything open or significant coming out of them anymore.


"getting Zuck'd"

Another thing... they alter the localStorage & sessionStorage prototype, wrapping the native methods with a wrapper that prevents keys not in their whitelist from being set.

You can try this by opening devtools and running the following; the key never actually gets stored:

  localStorage.setItem('hi', 123)
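
A minimal sketch of how such a wrapper could work (the whitelisted key names below are made up for illustration; the real code may patch things differently):

  // Hypothetical whitelist; the real one is whatever keys the site allows.
  const ALLOWED = new Set(['session_token', 'locale']);
  const nativeSetItem = Storage.prototype.setItem;
  // Patching Storage.prototype covers both localStorage and sessionStorage.
  Storage.prototype.setItem = function (key, value) {
    if (!ALLOWED.has(key)) return;  // silently drop non-whitelisted keys
    return nativeSetItem.call(this, key, value);
  };
  // After this, localStorage.setItem('hi', 123) is a no-op,
  // while localStorage.setItem('locale', 'en') still persists.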

Say you were the Mayor of London, and being a great mayor you had your priorities 100% correct.

Can you guarantee that something like this will never happen on your watch?


He funded Comma.ai, so he does understand the problem domain & complexity.


As I understand, Comma.ai is focused on driver-assistance and not fully autonomous self-driving.

The features listed on Wikipedia are lane centering, cruise control, driver monitoring, and assisted lane change.[1]

The article I linked to from Starsky addresses how the first 90% is much easier than the last 10% and even cites "The S-Curve here is why Comma.ai, with 5–15 engineers, sees performance not wholly different than Tesla’s 100+ person autonomy team."

To give an example of the difficulty of the last 10%: I saw an engineer from Waymo give a talk about how they had a whole team dedicated to detecting emergency vehicle sirens and acting appropriately. Both false positives and false negatives could be catastrophic so they didn't have a lot of margin for error.

[1] https://en.wikipedia.org/wiki/Openpilot#Features


Speaking as a user of Openpilot and a Comma device, it is exactly what the Wikipedia article describes. In other words, it's a Level 2 ADAS.

My point was that he had more than a naive / "pedestrian-level" (pun?) understanding of the problem domain, since he worked on the Comma.ai project for quite some time, even though the device only solves maybe 40% of the autonomous driving problem.


On the other hand, it depends on what kind of movie you're making and who the target group is.

Say you're making children's videos like Cocomelon or Bluey in 3D; you don't need all these nice things.

In the end, movies are about the stories, not just pretty graphics.


> In the end, movies are about the stories, not just pretty graphics.

The great people at Pixar and DreamWorks would be a bit offended. Over the past three or so decades they have pushed every aspect of rendering to its limits: water, hair, atmospheric effects, reflections, subsurface scattering, and more. Watching a modern Pixar film is a visual feast. Sure, the stories are also good, but the graphics are mind-bendingly good.


I've always wondered why they have never done a remaster of Toy Story.


Because they also want to tell new stories. It’s never just about the graphics, unlike most modern video game “remasters”, for example.


I don't know about that. I watched Avatar 3 last month and I liked the experience but the story was forgettable.

People don't pay 45 eurodollars for IMAX because they like the story.


The YouTube channel Calum made a documentary about it: https://m.youtube.com/watch?v=zR0M7KjnJTE , and Mustard did too: https://m.youtube.com/watch?v=pW0eZRoQ86g


What you said is possible by feeding the output of speech-to-text tools into an LLM. You can prompt the LLM to make sense of what you're trying to achieve and create sets of actions. With a CLI it’s trivial, you can have your verbal command translated into working shell commands. With a GUI it’s slightly more complicated because the LLM agent needs to know what you see on the screen, etc.

That CLI bit I mentioned earlier is already possible. For instance, on macOS there’s an app called MacWhisper that can send dictation output to an OpenAI‑compatible endpoint.
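
As a rough sketch of that CLI flow (Node 18+; the endpoint URL, model name, and prompt below are placeholders, not what MacWhisper or any particular tool actually uses):

  // dictate-to-shell.mjs: turn a transcribed sentence into a shell command.
  // usage: node dictate-to-shell.mjs "show the five largest files here"
  const transcript = process.argv.slice(2).join(' '); // e.g. output of a speech-to-text tool
  const res = await fetch('http://localhost:8080/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'local-model', // placeholder model name
      messages: [
        { role: 'system',
          content: 'Translate the user request into one shell command. Reply with the command only.' },
        { role: 'user', content: transcript },
      ],
    }),
  });
  const data = await res.json();
  console.log(data.choices[0].message.content); // print, don't execute

Printing the command instead of executing it keeps a human in the loop; piping it straight into a shell would be the fully hands-free version.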


Handy can post-process with LLMs too! It's just currently hidden behind a debug menu as an alpha feature (ctrl/cmd+shift+d).


I was just thinking about building something like this; looks like you beat me to the punch, so I will have to try it out. I'm curious whether you can give commands just as well as dictate wording you want cleaned up. I could see a model getting confused between editing the dictated input into text to be inserted and responding to it as a command. Sorry if that's unclear; it might be better if I just try it.


I’d just try it and fork handy if it doesn’t work how you want :)


Not exactly trackballs, but the Magic Trackpad can be considered an alternative. Or a roller mouse slim (crazy expensive).


Nice, I want one! Assuming it works great (Keychron products usually do)


But where?


"Best" is a relative term. It depends on the purpose / criteria

