> Do Tesla shareholders think the rest of the auto industry are in a coma?
I have no idea what institutional investors think, and they're probably the relevant group here.
From the way I've observed individuals discussing it, defending it, on HN… it pattern-matches to my understanding of what people these days call "main character syndrome", i.e. that the other companies are just a supporting cast to provide an interesting challenge for the only one that's not an NPC.
Or, they're stuck in a narrative that only gradually stopped making sense. Tesla solving self-driving ten years ago would have been a triumph. Solving it today, meh. They would be ahead of the others by a couple of years, max.
It's funny because the whole kerfuffle is based on a disagreement over the humanity of these bots. The bot thinks it's a human, so it submits a PR. The maintainer thinks the bot is not human, so he rejects it. The bot reacts like a human, writing an angry and emotional post about the story. The maintainer makes a big fuss because a non-human wrote a hit piece on him. Etc.
I think it could have been handled better. The maintainer could have accepted the PR while politely explaining that such PRs are intentionally kept for novice developers, and that the bot, as an AI, couldn't be considered a novice, so please avoid such simple ones in the future and, if anything, focus on more challenging stuff. I think everyone would have been happier as a result, including the bot.
My take on autonomous driving is this (I'll leave it here for posterity). There will be no winners. Fully autonomous, flexible, dependable driving requires some degree of general intelligence. This will not come from clever algorithms or accumulation of proprietary data, but from progressive improvements in AI. Different sensory hardware will also make marginal difference as the bitter lesson will always hold: real driving ability mostly comes from bigger, smarter systems.
This implies that there is no moat. One company or another might be first to market with a working system, but the others will catch up within a couple of years, so the first mover won't have much of an advantage. We'll see a replica of what happened with LLMs: any latecomer will be able to replicate the results by putting a few billion on the table and hiring researchers from other companies, Chinese companies will develop a working version that runs on slightly less demanding hardware, open weights and open source will appear. Etc.
Yeah, that's basically the opposite of what I'm saying. The problem is enormously difficult, but it won't be solved as a problem in itself; it will suddenly become solved as a side effect of better AI, exactly like numerous cognition-related problems suddenly got solved with the advent of LLMs. And we've seen that there are many players working on better AI, with the main ones all maybe one or two years apart. Self-driving will become commoditized well before any of the early players can make substantial amounts of money out of it. Compared to LLMs, it's even worse: self-driving can't really get superhuman, and people will happily settle for whatever works acceptably well (where acceptably means at the same level as a good human driver).
Cool, so where can I buy a car in which, wherever I am, I can comfortably sit in the back knowing that it will take me to my destination like a good driver? Or that I can summon to pick me up?
Certain areas can release more carbon than their trees bind, if there are trees there at all: for example peatlands (obviously not a desert) and tundra (more akin to a desert). These often have a lot of carbon bound in the ground that can be released.
Worth keeping in mind that in this case the test takers were random members of the general public. The score of e.g. people with bachelor's degrees in science and engineering would be significantly higher.
What is the point of comparing performance of these tools to humans? Machines have been able to accomplish specific tasks better than humans since the industrial revolution. Yet we don't ascribe intelligence to a calculator.
None of these benchmarks prove these tools are intelligent, let alone generally intelligent. The hubris and grift are exhausting.
It can be reasonable to suspect that advances on benchmarks are only weakly, or even negatively, correlated with advances on real-world tasks. I.e. a huge jump on benchmarks might not be perceptible to 99% of users doing 99% of tasks, and some users might even notice degradation on specific tasks. This is especially the case when there is reason to believe most benchmarks are being gamed.
Real-world use is what matters, in the end. I'd be surprised if a change this large doesn't translate to something noticeable in general, but the skepticism is not unreasonable here.
The GP comment is not skeptical of the jump in benchmark scores reported by one particular LLM. It's skeptical of machine intelligence in general, claims that there's no value in comparing machine performance with that of human beings, and accuses those who disagree with this take of "hubris and grift". This has nothing to do with any form of reasonable skepticism.
I would suggest it is a well-studied phenomenon that takes many forms, mostly identity preservation, I'd guess. If you dislike AI from the start, it is generally a very strongly emotional view. I don't mean there is no good reason behind it; I mean it is deeply rooted in your psyche, very emotional.
People are incredibly unlikely to change those sorts of views, regardless of evidence. So you find this interesting outcome where they both viscerally hate AI and deny that it is in any way as good as people claim.
That won't change with evidence until it is literally impossible not to change.
> What evidence of intelligence would satisfy you?
That is a loaded question. It presumes that we can agree on what intelligence is, and that we can measure it in a reliable way. It is akin to asking an atheist the same about God. The burden of proof is on the claimer.
The reality is that we can argue about that until we're blue in the face, and get nowhere.
In this case it would be more productive to talk about the practical tasks a pattern matching and generation machine can do, rather than how good it is at some obscure puzzle. The fact that it's better than humans at solving some problems is not particularly surprising, since computers have been better than humans at many tasks for decades. This new technology gives them broader capabilities, but ascribing human qualities to it and calling it intelligence is nothing but a marketing tactic that's making some people very rich.
(Shrug) Unless and until you provide us with your own definition of intelligence, I'd say the marketing people are as entitled to their opinion as you are.
I would say that marketing people have a motivation to make exaggerated claims, while the rest of us are trying to just come up with a definition that makes sense and helps us understand the world.
I'll give you some examples. "Unlimited" now has limits on it. "Lifetime" means only for so many years. "Fully autonomous" now means with the help of humans on occasion. These are all definitions that have been distorted by marketers, which IMO is deceptive and immoral.
> Machines have been able to accomplish specific tasks...
Indeed, and the specific task machines are accomplishing now is intelligence. Not yet "better than human" (and certainly not better than every human) but getting closer.
> Indeed, and the specific task machines are accomplishing now is intelligence.
How so? This sentence, like most of this field, is making baseless claims that are more aspirational than true.
Maybe it would help if we could first agree on a definition of "intelligence", yet we don't have a reliable way of measuring that in living beings either.
If the people building and hyping this technology had any sense of modesty, they would present it as what it actually is: a large pattern matching and generation machine. This doesn't mean that this can't be very useful, perhaps generally so, but it's a huge stretch and an insult to living beings to call this intelligence.
But there's a great deal of money to be made on this idea we've been chasing for decades now, so here we are.
> Maybe it would help if we could first agree on a definition of "intelligence", yet we don't have a reliable way of measuring that in living beings either.
How about this specific definition of intelligence?
Solve any task provided as text or images.
AGI would be to achieve that faster than an average human.
I still can't understand why they should be faster. Humans have general intelligence, afaik. It doesn't matter if it's fast or slow. A machine able to do what the average human can do (intelligence-wise) but 100 times slower still has general intelligence. Since it's artificial, it's AGI.
It's a mix of trying to look blasé, of believing that a superficial knowledge of how they work is enough to dismiss them as purely mechanical, and probably genuine fear and denial for the implications of their existence.
LLMs are not just intelligent, they're general intelligences, in that they're not limited to a single task (such as a chess-playing AI or a voice-recognition AI) but are capable of any task that can be performed with text as input and output (which doesn't mean it's just text manipulation; the internals are not limited to text).
Plastic is a fantastic material, which is why it's chosen for packaging. I also don't see the problem with sending it to a landfill, as long as it stays there.
Many cities have banned plastic bags, and the results have been miraculous for waterways and wetlands. It turns out that shore animals don't benefit as much from "hope a few customers choose the better thing, but otherwise let them take home single-use crap that immediately blows off into natural settings."
Would you be ok if stores offered the option between a cheap plastic bag and a more expensive non-plastic one? (All the stores here already do it, btw).
I think the externalities of plastic recycling must be internalized economically by requiring all manufacturers of items to pre-pay for the recycling of said items up front, as part of the manufacturing cost. Similar to how bottle returns are managed, which has been very successful. Items which are provably biodegradable or designed to facilitate repair may be exempt.
Plastic bags are already taxed where I live. Consumers pay that tax, obviously. Other costs imposed on producers of plastic items will just be passed down to consumers.
Exactly. The point of this sort of tax should not be to collect revenue, it should be to ensure that non-biodegradable bags are being disposed of correctly. To the extent that this is not happening, any bag tax is malfunctioning. Such a tax is either insufficient or poorly-designed. (Our city just banned chain stores from giving out plastic bags under 4 mils thick, and stores now give out paper and sell re-usable bags.)
Even if the state is just straight-up burning all the tax income from single-use plastic bags, by taxing them you push consumers and distributors towards untaxed, ideally more sustainable alternatives, like single-use paper bags or robust multi-use bags.
> Our city just banned chain stores from giving out plastic bags under 4 mils thick, and stores now give out paper and sell re-usable bags
I don't see how this is not a massive win? Paper bags are significantly more sustainable, and multi-use bags are more durable and thrown away less often, simply because they're more expensive.
People are much more wasteful with things they didn't pay for, regardless of "inherent" value.
We’re not disagreeing. I’m saying that the tax should be set high enough that it creates the desired behavior, which is to disincentivize the widespread use of polluting plastic bags AND/OR ensure that they’re recycled and don’t wind up in the environment. If you’re charging $.05 per bag and people are just eating the tax and the bags are winding up in wetlands in similar amounts, that means your tax regime isn’t effective. You should either increase the tax or improve the system. My city’s absolute ban is equivalent to setting the tax to infinity, which is one solution that seems to work well.
That ain't working. A plastic bag discarded in a ditch by the side of the road, or blown by the wind from the landfill, is still going to end up in the ocean. No amount of prepaid recycling is going to take that plastic back out of the ocean.