I expect that management at Meta will have about as much success willing "superintelligence" into existence as Boeing management had willing a safe 737 MAX into existence.
No amount of fantastical thinking is going to coax AGI out of a box of inanimate binary switches --- aka, a computer as we know it.
Even with billions and billions of microscopic switches operating at extremely high speed consuming an enormous share of the world's energy, a computer will still be nothing more than a binary logic playback device.
Expecting anything more is to defy logic and physics and just assume that "intelligence" is a binary algorithm.
> Expecting anything more is to defy logic and physics.
What logic and physics are being defied by the assumption that intelligence doesn't require the specific biological machinery we are accustomed to?
This is a ridiculous comment to make; you do nothing to actually prove the claims you're making, which are even stronger than the claims most people make about the potential of AGI.
The logic and physics that make a computer what it is --- a binary logic playback device.
By design, this is all it is capable of doing.
Assuming a finite, inanimate computer can produce AGI is to assume that "intelligence" is nothing more than a binary logic algorithm. Currently, there is no logical basis for this assumption --- simply because we have yet to produce a logical definition of "intelligence".
Of all people, programmers should understand that you can't program something that is not defined.
> By design, this is all it is capable of doing. Assuming a finite, inanimate computer can produce AGI is [...]
Humans are also made up of a finite number of tiny particles moving around that would, on their own, not be considered living or intelligent.
> [...] we have yet to produce a logical definition of "intelligence". Of all people, programmers should understand that you can't program something that is not defined.
There are multiple definitions of intelligence, some mathematically formalized, usually centered around reasoning and adapting to new challenges.
There are also a variety of definitions for what makes an application "accessible", most not super precise, but that doesn't prevent me from improving the application in ways such that it gradually meets more and more people's definitions of accessible.
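For the "mathematically formalized" part mentioned above, one concrete example (my gloss, not something the commenter cited) is Legg and Hutter's "universal intelligence" measure, which scores an agent by its expected reward across all computable environments, weighted toward simpler ones:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where E is the set of computable reward environments, K(mu) is the Kolmogorov complexity of environment mu, and V^pi_mu is the expected cumulative reward the agent pi earns in mu. Whether that captures what everyone means by "intelligence" is debatable, but it is a precise definition.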
Are you a programmer? Are you familiar with Alan Turing [0]?
What do you mean by finite, are you familiar with the halting problem? [1]
What does "inanimate" mean here? Have you seen a robot before?
Imprecise language negates your entire argument. You need to very precisely express your thoughts if you are to make such bold, fundamental claims.
While it's great that you're taking an interest in this subject, you're clearly speaking from a place of great ignorance, and it would serve you better to learn more about the things you're criticizing before making inflammatory, ill-founded claims. Especially when you start trying to tell a field expert that they don't know their own field.
Using handwavy words you don't seem to understand such as "finite" and "inanimate" while also claiming we don't have a "logical definition" (whatever that means) of intelligence just results in an incomprehensible argument.
Human intelligence seems likely to be a few tricks we just haven't figured out yet. Once we figure it out, we'll probably remark on how simple a model it is.
We don't have the necessary foundation to get there yet. (Background context, software/hardware ecosystem, understanding, clues from other domains, enough people spending time on it, etc.) But one day we will.
At some point people will try to run human-level AGI intelligences on their Raspberry Pi. I'd almost bet that will be a game played in the future: run a human-level AGI on as low-spec a machine as possible.
I also wonder what it would be like if the AGI / ASI timeline coincide with our ability to do human brain scans at higher fidelity. And that if they do line up, that we might try replicating our actual human thoughts and dreams on our future architectures as we make progress on AGI.
If those timelines have anything to do with one another, then when we crack AGI, we might also be close to "human brain uploads". I wouldn't say it's a necessary precondition, but I'd bet it would help if the timelines aligned.
And I know the limits of detection right now and in the foreseeable future are abysmal. So AGI and even ASI probably come first. But it'd be neat if they were close to parallel.
The article doesn't say anything along those lines as far as I can tell - it focuses on scaling laws and diminishing returns ("If you want to get linear improvements, you need exponential resources").
I generally agree with the article's point, though I think "Will Never Happen" is too strong a conclusion. On the other hand, I don't think the idea that simple components ("a box of inanimate binary switches") fundamentally cannot combine to produce complex behaviour is well-founded.
Your criterion, by the sound of it, is determinism, i.e. the lack of randomness.
What if I had an external source of true randomness? That's very easy to add. In fact, current AI algorithms have a temperature parameter that can easily use true randomness if you want them to.
Would you suddenly change your mind and say ‘OK, now it can be AGI!’ because I added a nuclear-decay-based random number generator to my AI model?
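To make that concrete, here's a minimal sketch (illustrative only, not any particular framework's API) of temperature-based sampling where the uniform draw can come from any entropy source, including a hardware RNG:

    import math
    import random
    import secrets  # OS entropy; stands in here for a "true" hardware RNG

    def sample_with_temperature(logits, temperature=1.0, rng=None):
        """Pick a token index from raw logits at the given temperature."""
        # Temperature 0 (or below): fully deterministic, just take the argmax.
        if temperature <= 0:
            return max(range(len(logits)), key=lambda i: logits[i])
        # Softmax with temperature (subtract the max for numerical stability).
        m = max(logits)
        weights = [math.exp((l - m) / temperature) for l in logits]
        total = sum(weights)
        probs = [w / total for w in weights]
        # Invert the CDF with a single uniform draw in [0, 1).
        draw = rng() if rng is not None else random.random()
        cumulative = 0.0
        for i, p in enumerate(probs):
            cumulative += p
            if draw < cumulative:
                return i
        return len(probs) - 1

    # "True" entropy source: OS entropy via secrets here; a hypothetical
    # nuclear-decay RNG would slot in the same way. The model doesn't care
    # where the randomness comes from.
    true_uniform = lambda: secrets.randbelow(10**9) / 10**9
    print(sample_with_temperature([2.0, 1.0, 0.1], temperature=0.8, rng=true_uniform))

The sampling math is identical either way; only the source of the uniform draw changes.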
I hope to preprint my paper for your review on arxiv next week titled:
"A Novel Bridge from Randomness in Stochastic Data to, Like, OMG I'm SO Randomness in Valley Girl Entropy"
We will pay dearly for overloading that word. Good AGI will be capable of saying the most random things! Not, really, no. I mean, they'll still be pronounceable, I'm guessing?
So is the “binary” nature of today’s switches the core objection? We routinely simulate non-binary, continuous, and probabilistic systems using binary hardware. Neuroscientific models, fluid solvers, analog circuit simulators, etc., all run on the same “binary switches,” and produce behavior that cannot meaningfully be described as binary, only the substrate is.
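As a toy illustration of that point, here's a short sketch (arbitrary parameter values, purely illustrative) of a continuous, probabilistic system simulated on exactly that binary substrate:

    import random

    def simulate_leaky_integrator(steps=1000, dt=0.01, tau=0.1, noise=0.5):
        """Euler-integrate dx/dt = -x/tau + noisy drive."""
        x = 0.0
        trajectory = []
        for _ in range(steps):
            drive = random.gauss(0.0, noise)      # probabilistic input
            x += dt * (-x / tau + drive)          # continuous-valued update
            trajectory.append(x)
        return trajectory

    traj = simulate_leaky_integrator()
    print(min(traj), max(traj))   # real-valued outputs, nothing "binary" about them

The behaviour at this level is continuous and stochastic; only the arithmetic underneath is built from binary switches.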
I tend to wonder whether people who make claims like that are confusing intelligence with consciousness. The claim as stated above could be a summary of a certain aspect of the hard problem of consciousness: that it's not clear how one could "coax consciousness out of a box of inanimate binary switches" - but the connection to "intelligence" is dubious. Unless of course one believes that "true" intelligence requires consciousness.
Intelligence can be expressed in higher-order terms that the binary gates running the underlying software are not required to account for.
Quarks don't need to account for atomic physics. Atomic physics don't need to account for chemistry. Chemistry doesn't need to account for materials science. It goes on and on. It's easy to look at a soup of quarks and go, "there's no way this soup of quarks could support my definition of intelligence!", but you go up the chain of abstraction and suddenly you've got a brain.
Scientists don't even understand yet where subjective consciousness comes into the picture. There are so many unanswered questions that it's preposterous to claim you know the answers without proof that extends beyond a handwavy belief.
We already have irrefutable evidence of what can reasonably be called intelligence, from a functional perspective, from these models. In fact in many, many respects, the models outperform a majority of humans on many kinds of tasks requiring intelligence. Coding-related tasks are an especially good example.
Of course, they're not equivalent to humans in all respects, but there's no reason that should be a requirement for intelligence.
If anything, the onus lies on you to clarify what you think can't be achieved by these models, in principle.
90% of TSLA income is from EV sales. Everything else is just Musk's grandiose and increasingly absurd predictions which have a long history of falling short of reality.
Yes. Most companies play these financial games to some extent.
I lumped government subsidies in with EV sales since they are related. Trump wiped these out.
Robotaxi and robots are the fantasy category. They are not currently income producers and may not be for years to come. His robot demos have been widely panned as fake.
Meanwhile, the story of Jensen and Musk continued onward with custom chips to support FSD v1; Jensen's personal delivery of the DGX-1 to OpenAI served as the catalyst for the relationship, iirc.
The problem for nvidia is ... where do you go from here and still have spectacular performance improvements?
Nvidia got extremely lucky again and again and again. What specifically did it is that, right in time, non-Nvidia researchers learned to train on smaller floating-point bit widths, which Nvidia raced to support. And great, well done! There is a list of ironies, though ... for example, it's Google DeepMind that made the Turing generation of cards viable for Nvidia. However, the new-floating-point-formats train has arrived at its last station, the NXFP4 station. There is no FP3 and no FP2 to go to. Is there a new train to get on? I'm not aware of one.
Nvidia's argument is "Blackwell easily doubles Ada performance!" ... but that is deceptive. The actual improvement is that Blackwell NXFP4 (4-bit) is more than double Ada FP8 (8-bit) performance in ops. That's the train that has arrived at its last station. Go back further and the same is true, just with larger and larger FP formats, starting at FP32 (single precision). Aside from a small FP64 detour, and a few "oopses" where formats they chose turned out useless or unstable and were quickly abandoned, that's the story of Nvidia in ML.
Comparing FP32, for example, you don't see big improvements: the 4090 does 83 FP32 TFLOPS, the 5090 does 104 FP32 TFLOPS. Given the power requirements involved, that's actually a regression. If you're stuck at a fixed format width, Nvidia's story breaks down and Ada cards beat Blackwell cards in performance per watt. FP32: the 4090 is at 5.44 W per FP32 TFLOP, the 5090 at 5.5. FP8, same story: the 4090 is at 0.681 W per FP8 TFLOP, the 5090 at 0.686. The new memory effectively still buys some improvement, but not much.
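As a quick sanity check on those per-watt numbers (a rough sketch: the TFLOPS figures are the ones quoted above, while the board-power values of roughly 450 W for the 4090 and 575 W for the 5090, and the FP8 throughput figures, are my assumptions from typical published specs, not from this comment):

    # Back-of-the-envelope perf-per-watt check; power and FP8 figures are assumed.
    cards = {
        "4090 (Ada)":       {"power_w": 450, "fp32_tflops": 83,  "fp8_tflops": 661},
        "5090 (Blackwell)": {"power_w": 575, "fp32_tflops": 104, "fp8_tflops": 838},
    }
    for name, c in cards.items():
        print(name,
              f"{c['power_w'] / c['fp32_tflops']:.2f} W/FP32-TFLOP,",
              f"{c['power_w'] / c['fp8_tflops']:.3f} W/FP8-TFLOP")

That works out to roughly 5.4 vs 5.5 W per FP32 TFLOP and 0.68 vs 0.69 W per FP8 TFLOP: essentially flat performance per watt at a fixed format width, which is the point being made.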
Will the next generation after Blackwell, using the same floating-point format as the previous generation, be a 10% improvement, subject to further diminishing returns, and stuck there until ... well, until we find something better than silicon? I should point out that 10% is generous, because at FP8, on a per-watt basis at equivalent floating-point widths, Blackwell is actually not an improvement over Ada at all.
Plus, Blackwell is ahead of the competition ... but only by one generation. If Nvidia doesn't get on a new train, the next generation of AMD cards will match the current Nvidia generation. Then the next TPU generation will match Nvidia.
Really bad AI code is no different than really bad sushi.
It can be a lot harder to tell if it's bad.
You can either vet it thoroughly or you can consume it and see what happens.
Business pressures often lead to the latter approach. Managers who would balk at bad sushi pressure developers to consume AI code that is known to often be "bad" --- and then blame developers for the effects.