"A sound banker, alas, is not one who foresees danger and avoids it, but one who, when he is ruined, is ruined in a conventional and orthodox way along with his fellows, so that no one can really blame him." — J. M. Keynes
The thing is that, as far as I can tell, a ZKP of age involves a state or similar attestor issuing an ID/wallet that can be queried for age without revealing identity.
But the attestor has to have certainty about the age of the person it issues IDs to. That raises obvious questions.
What states are going to accept private attestors? What states are going to accept other states as attestors? What state won't start using its issued ID/wallet for any purpose it sees fit?
This system seems likely to devolve into national Internets populated only by those IDs. All of that can happen without ZKPs being broken.
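To make the flow concrete: the core of the scheme described above is that the issuer sees full identity once, then hands out a credential containing only an age predicate, which a verifier can check without learning who the holder is. This is a toy sketch of that flow, not real ZK cryptography; it uses an HMAC under an issuer-held key as a stand-in for a proper signature or zero-knowledge proof, and all names (issue_credential, verify, etc.) are my own illustrations.

```python
# Toy model of the attestation flow: issuer sees identity, emits only
# an age predicate; verifier checks the predicate, never the identity.
# HMAC with an issuer key is a STAND-IN for a real signature/ZK proof.
import hmac, hashlib, os
from datetime import date

ISSUER_KEY = os.urandom(32)   # in reality: the issuer's private signing key

def issue_credential(name, dob, today):
    """Issuer sees full identity but emits only (nonce, predicate, tag)."""
    age = (today.year - dob.year) - ((today.month, today.day) < (dob.month, dob.day))
    over_18 = age >= 18
    nonce = os.urandom(16)    # fresh randomness: credentials are unlinkable
    msg = nonce + (b"over_18" if over_18 else b"under_18")
    tag = hmac.new(ISSUER_KEY, msg, hashlib.sha256).digest()
    return {"nonce": nonce, "over_18": over_18, "tag": tag}   # no identity inside

def verify(cred):
    """Verifier learns only the age predicate, not who the holder is."""
    msg = cred["nonce"] + (b"over_18" if cred["over_18"] else b"under_18")
    want = hmac.new(ISSUER_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(cred["tag"], want)

cred = issue_credential("Alice Example", date(1990, 5, 1), date(2024, 6, 1))
print(verify(cred), cred["over_18"])   # True True
```

Note what the sketch makes visible: the trust problem sits entirely in `issue_credential` — the verifier math is trivial, but everything depends on the issuer honestly checking the birth date, which is exactly the attestor question raised above.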
OK, suppose Claude in X many years could write a drop-in replacement for every single one of those things. Would you raise your rates in the meantime too?
Why would a manager solve an IC's problems for them? Solving problems is generally the job of the IC. If an IC doesn't have the ability to solve a given problem, the manager should let them talk to a different IC with that skill.
LLMs are trained on human behavior as exhibited on the Internet. Humans break rules more often under pressure and sometimes just under normal circumstances. Why wouldn't "AI agents" behave similarly?
The one thing I'd say is that humans have some idea which rules in particular to break while "agents" seem to act more randomly.
It can also be an emergent behavior of any "intelligent" agent (and we don't know what intelligence is). This is an open philosophical problem; I don't think anyone has a conclusive answer.
Maybe, but there's no reason to think that's the case here rather than the models just acting out typical corpus storylines: the Internet is full of stories with this structure.
The models don't have stress responses, nor the biochemical markers that promote them, nor any evolutionary reason to have developed them in training. But the corpus they are trained on does have a lot of content about how people act under those conditions.
I use Firefox to access ChatGPT. I wouldn't want AI suggestions or slop appearing on random pages.
The idea that LLMs have been successful and useful for significant things is naturally confused with the idea that LLMs need to be bolted onto literally everything.
I dunno I just see so much cool shit in the world today. I see Waymo cars driving themselves around. LLMs are still wildly revolutionary. My TV is the tits. There's so much good happening but there's this massive undercurrent of negativity that's hard to reconcile.
Personally I'm doing something that brings me more happiness than most of my activities for the last twenty years - I'm providing direct aid to the unhoused and the impoverished. And I know there are people who are doing things that are their passion unrelated to that.
But I have to say, your list is remarkable for being about things, not people. You're amazed by the cool stuff available to some people for a lot of money, and some things that are pretty cheap. But if X percent of the population can't pay their rent with their income, cool stuff for sale is hardly going to help them. And indeed that statement itself is a strong illustration of how insulated people are from the conditions others live with.
It's something I note a lot about detractors on HN. They don't think in people, they think in "things". And the "important things" are ones they take for granted. If they secured a house from the tech boom of the 10's before everything went to shit, then they are completely insulated from the idea that "things are harder to get".
Then there's the job aspect. I guess Hacker News skews older millennial/Gen X, so they are on the sticky side of this "sticky job market". If they are done job hopping, they won't feel how bad it is out there.
I feel like it's such a cultural thing here in the US. There is a pervasive culture of individualism and operating wholly within one's own means. Need help? Don't ask your neighbor for help, ask Our LLM (tm) for only $9.99 a month! I'm being hyperbolic of course.
This is leaking out of the US, too. The cultural gap between people who have money, access to the Internet, and the ability to speak English, and the people living within a mile of them who have none of these things, has never been wider.
The point of these LLMs is to do things that computers were bad at.
That's a good point imo but we achieved this stuff by at least 2022 when ChatGPT was released. The thing about these giant black boxes is that they also fail to do things that directly human-written software ("computers") does easily. The inability to print text onto generated images or do general arithmetic is important. And sure, some of these limits look like "limits of humans". But it is important to avoid jumping from "they do this human-thing" to "they're like humans".
That claim just reads like he's concocted two sides for his position to be the middle ground between. I did that in essays in high school, but I try to be better than that now.
There are a lot of approximation methods involved in training neural networks. But the main point is that, while learning calculus is challenging, calculating the derivative of a function at a point using algorithmic differentiation is extremely fast and exact: nearly as exact as calculating the function's value itself, and inherently more efficient than finite-difference approximations to the derivative. Algorithmic differentiation is nearly "magic".
But remember, that is for taking the derivative at a single data point. What's hard is the average derivative over the entire set of points, and that's where sampling and approximations (SGD, etc.) come in.
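The "exact derivative at a point" claim above is easy to demonstrate with forward-mode algorithmic differentiation via dual numbers. This is an illustrative sketch (the `Dual` class and function names are my own, not from any library): the derivative falls out of the product and chain rules applied mechanically, with no step-size approximation at all.

```python
# Minimal forward-mode algorithmic differentiation with dual numbers:
# a value a + b*eps where eps**2 == 0; the b component carries the derivative.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def d_sin(x):
    # chain rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def f(x):
    return x * x + d_sin(x)        # f(x) = x^2 + sin(x)

def derivative(fn, x0):
    return fn(Dual(x0, 1.0)).dot   # seed dx/dx = 1

x0 = 1.5
exact = 2 * x0 + math.cos(x0)      # analytic f'(x) = 2x + cos(x)
print(abs(derivative(f, x0) - exact) < 1e-12)  # True: exact to machine precision
```

Evaluating `f` on dual numbers costs only a small constant factor more than evaluating `f` itself, which is the "nearly as exact as calculating the function's value" point. The SGD part of the comment is then just this derivative averaged over a sampled mini-batch of data points rather than the whole set.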