>For general computer usage, SSDs really were a once in a generation "holy shit, this upgrade makes a real difference" thing.
I only ever noticed it on my Windows partition. IIRC on my Linux partition it was hardly noticeable, because Linux is far better at caching disk contents than Windows, and Linux in general can boot surprisingly fast even on HDDs if you only install the modules you actually need, so that autoconfiguration doesn't waste time probing dozens of modules in search of the best one.
Maybe not on an SSD, but it definitely helps a lot on an HDD by virtue of generating far less disk traffic. The kernel's method for figuring out which modules to load is effectively to load every module that might be compatible with a given device in series, ask each one for its opinion, unload it, and then, once it has a list of all (self-reported) compatible modules for the device, pick one and reload it.
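The matching half of that is easy to illustrate. A rough sketch (my own illustration in Python, not the kernel's actual code; the device path is just an example): every device exports a "modalias" string in sysfs, and each module ships glob-style alias patterns in modules.alias, so finding the candidate drivers is essentially pattern matching:

    # Illustration only: list the modules whose declared aliases match one device's modalias.
    # Assumes a typical Linux layout; the real machinery is udev + modprobe, not this script.
    import fnmatch
    from pathlib import Path

    def candidate_modules(device_modalias: str, alias_db: Path) -> list[str]:
        matches = []
        for line in alias_db.read_text().splitlines():
            parts = line.split()
            # lines look like: alias pci:v00001AF4d00001000sv*sd*bc*sc*i* virtio_pci
            if len(parts) == 3 and parts[0] == "alias":
                _, pattern, module = parts
                if fnmatch.fnmatch(device_modalias, pattern):
                    matches.append(module)
        return matches

    kernel = Path("/proc/sys/kernel/osrelease").read_text().strip()
    modalias = Path("/sys/bus/pci/devices/0000:00:02.0/modalias").read_text().strip()  # example device
    print(candidate_modules(modalias, Path("/lib/modules") / kernel / "modules.alias"))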
IDK; at the time I was using Gentoo, where it's natural not to have more modules than necessary because part of the installation process involves generating your own kernel configuration.
Even though it's not the normal way to install Debian, there ought to be some way to build your own custom kernels and modules without interference from the package manager (or you can just run it all manually and hope you don't end up in a conflict with apt); IIRC the upstream kernel tree even has a make target for packaging a build as .debs. Gentoo is the only distro where it's mandatory, but I'm pretty sure this is supported on just about every distribution, since the maintainers would need it anyway.
AI witch-hunts are definitely a problem. The only tell you can actually rely on is when the AI says something so incredibly stupid that it clearly fails to understand not just what it's talking about, but the very meaning of the words themselves.
E.g., metaphors that make no sense or fail to contribute any meaningful insight, or extremely clichéd phrases ("it was a dark and stormy night...") used seriously rather than for self-deprecating humor.
My favorite example of an AI tell was a YouTube video about serial killers I was listening to for background noise, which started one of its sentences with "but what at first seemed to be an innocent night of harmless serial murder quickly turned to something sinister."
Which is unfortunate, because pre-AI, "but what at first seemed to be an innocent night of harmless serial murder quickly turned to something sinister" would just have been a funny bit of writing.
This whole thing pisses me off so much. I would be fine with absolute anarchy in which copyright and patents no longer exist, but these same dickheads have spent the last 30 years terrorizing the entire planet with lawsuits and DRM over downloaded Metallica CDs, and even now they don't actually want to reform the copyright system, just grant themselves a special exception, because everything is supposed to work unconditionally in their favor regardless of circumstances.
I still don't understand why standardized testing gets so much pushback. Having the students do their work in a controlled environment is the obvious solution to AI and many other problems related to academic integrity.
It's also the only way that students can actually be held to the same standards. When I was a freshman in college with a 3.4 high school GPA, I was absolutely gobsmacked by how many kids with perfect >= 4.0 GPAs couldn't pass the simple algebra test that the university administered to all undergraduates as a prerequisite for taking any advanced mathematics course.
Well, for one thing, people learn differently, and comparing a "standard" test result just measures how much crap someone has managed to cram into their brain. I compare it to people memorizing trivia for Jeopardy. What needs to be tested and taught instead is critical thinking. Yes, a general idea of history and other subjects is important, but again, the point is teaching people to think about those subjects, not just memorizing a bunch of dates that will be forgotten the day after the test.
You cannot possibly do any higher-level analysis of a subject if you don't even know the base facts. It's the equivalent of saying you don't need to know your times tables to do physics. Like, theoretically it's possible to look up 4x6 every time you need to do arithmetic, but why would you not just memorize it?
If you don't even know that the American Civil War ended in 1865, how could you do any meaningful analysis of its causes, its downstream implications, or its relationship to other events?
More important than knowing what 4x6 is, is understanding what multiplication is, why division is really the same operation, and understanding the commutative, associative, and distributive properties of operations. All of this comes as a result of repeated drilling of multiplication problem sets. Once this has been assimilated, you can move on to more abstract concepts that build on that foundation, and at that point, sure, you can use a calculator to work out the product of two integers as a convenience.
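To make that concrete with toy numbers (a quick worked example, nothing more):

    4 \times 6 = 6 \times 4 = 24                            % commutative
    (4 \times 6) \times 2 = 4 \times (6 \times 2) = 48      % associative
    4 \times (6 + 1) = 4 \times 6 + 4 \times 1 = 28         % distributive
    24 \div 6 = 4 \ \text{because}\ 4 \times 6 = 24         % division is the inverse question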
That's an argument against ranking students in a way which can potentially determine their entire lives, not an argument against standardized testing. The alternative to standardized testing is GPA which is an extremely subjective metric masquerading as some objective datapoint. Kids from different schools don't even have the same curricula let alone the same grading standards.
Also, the teachers have a vested interest in giving the highest grades they can to as many students as they can without making it obvious that they aren't actually grading them fairly. I don't mean this as an accusation against anybody or some sort of insult against teachers as a whole; I merely mean to point out that this is what they are incentivized to do, by virtue of the fact that they are indirectly grading themselves when they grade their students.
Nah. Goodhart's law is literally just "if you play a matrix game, don't announce your pick in advance". It is not a real law, or at least not different from common sense. (By matrix game I mean what wiki calls a "Normal form game"[0], e.g. rock-paper-scissors or the prisoner's dilemma.)
In education, regarding exams, Goodhart's law just means that you should randomize your test questions instead of telling the students the questions before the exam. Have a wide set of questions, randomize them. The only way for students to pass is to learn the material.
A randomized standardized test is not more susceptible to Goodhart's law than a randomized personal test. The latter however has many additional problems.
That's not even remotely true. A randomized standardized test will still have some domain that it chooses its questions from and that domain will be perfectly susceptible to Goodhart's Law. It is already the case that no one is literally teaching "On the SAT you're going to get this problem about triangle similarity and the answer is C." When a fresh batch of students sits down in front of some year's SATs the test is still effectively "randomized" relative to the education they received. But that randomization is relative to a rigid standardized curriculum and the teaching was absolutely Goodhart'd relative to that curriculum.
"The only way for students to pass is to learn the material."
Part of Goodhart's law in this context is precisely that it overdetermines "the material" and there is no way around this.
I wish Goodhart's law was as easy to dodge as you think it is, but it isn't.
I do not believe schooling is purely an exercise in knowledge transfer, especially grade school.
School needs to provide opportunities to practice applying important skills like empathy, tenacity, self-regulation, creativity, patience, collaboration, critical thinking, and others that cannot be assessed using a multiple choice quiz taken in silence. When funding is tied to performance on trivia, all of the above suffers.
I think that's probably the most beautiful AI-generated post that was ever generated. The fact that he posted it shows that either he didn't read it, didn't understand it, or thought it would be fun to show how the AI implementation was inferior to the one it was 'inspired' by.
Personally I would have used those credits to generate hentai, but to each his own, I suppose.
In the post where you had it respond to accusations of plagiarism and it responded by posting snippets of code which were obviously plagiarized and confidently asserted that they were not, what was your prompt? I ask because I felt its response was oddly tone-deaf even by LLM standards. I'm guessing that instead of giving it a neutral prompt such as "respond to this comment" you gave it something more specific such as "defend yourself against these accusations"?
I'm used to seeing them contradict themselves and say things that are obviously not true but usually when confronted they will give in and admit their mistake rather than dig a deeper hole.
Because people have been glazing this dumbass and feeding his ego for his entire adult life. I've met Musk cultists, and "cultist" really is more of an accurate description than an insult for some of these people.
If you have it in your head that he's our generation's equivalent to Newton or Euler, then it's natural to want to be his assistant or apprentice for the sake of participating in (what you think to be) a historically significant event.
Why does every major Javascript vulnerability come off as something that would be easily avoided by not doing obviously stupid things (in this case automatically updating packages with no authentication, testing or oversight)?
Maybe I'm just being ai-phobic or whatever but I strongly suspect the original article is written by grok based on how it goes off on bizarre tangents describing extremely complicated metaphors that are not only inaccurate but also wouldn't in any way be insightful even if they were accurate.
I'm skeptical that they're actually capable of making something novel. There are thousands of hobby operating systems and video game emulators on GitHub for it to train off of, so it's not particularly surprising that it can copy somebody else's homework.
I remain confused but still somewhat interested as to a definition of "novel", given how often this idea is wielded in the AI context. How is everyone so good at identifying "novel"?
For example, I can't wrap my head around how a) a human could come up with a piece of writing that inarguably reads as "novel", while b) an AI could be guaranteed not to be able to do the same, under the same standard.
Generally novel either refers to something that is new, or a certain type of literature. If the AI is generating something functionally equivalent to a program in its training set (in this case, dozens or even hundreds of such programs) then it by definition cannot be novel.
This is quite a narrow view of how the generation works. AI can extrapolate from the training set and explore new directions. It's not just cutting pieces and gluing together.
Calling it “exploring” is anthropomorphising. The machine has weights that yield meaningful programs given specification-like language. It’s a useful phenomenon but it may be nothing like what we do.
Do you have any concrete examples you'd care to share? While this new wave of AI doesn't have unlimited powers of extrapolation, the post we're commenting on is asserting that this latest AI from Google was able to extrapolate solutions to two of AI's oldest problems, which would seem to contradict an assertion of "very limited".
Positively not. It is pure interpolation and not extrapolation. The training set is vast and supports an even vaster set of possible traversal paths; but they are all interpolative.
Same with diffusion and everything else. It is not extrapolation that you can transfer the style of Van Gogh onto a photograph; it is interpolation.
Extrapolation might be something like inventing a style: how did Van Gogh do that?
And, sure, the thing can invent a new style, as a mashup of existing styles. Give me a Picasso-like take on Van Gogh and apply it to this image ...
Maybe the original thing there is the idea of doing that; but that came from me! The execution of it is just interpolation.
This is no knock against you at all, but in a naive attempt to spare someone else some time: remember that, based on this definition, it is impossible for an LLM to do novel things, and more importantly, you're not going to change how this person defines a concept as integral to one's being as novelty.
I personally think this is a bit tautological of a definition, but if you hold it, then yes LLMs are not capable of anything novel.
I think you should reverse the question: why would we expect LLMs to even have the ability to do novel things?
It is like expecting a DJ remixing tracks to output original music, and then being confused that the DJ is not actually playing the instruments on the recorded music and so can't do something new beyond the interpolation. I love DJ sets, but it wouldn't be fair to the DJ to expect them to know how to play the sitar just because they open the set with a sitar sample interpolated with a kick drum.
A lot of musicians these days are using sample libraries instead of actually holding real instruments in their hands. It’s not just DJs or electronic producers. It’s remarkable that Brendan Perry of Dead Can Dance, for example, who played guitar and bass as a young man and once amassed a collection of exotic instruments from around the world, built recent albums largely out of instrument sample libraries. One of technology’s effects on culture that maybe doesn’t get talked about as much as outright electronic genres.
Kid Koala does jazz solos on a disc of 12 notes, jumping the track back and forth to get different notes.
I think that, along with the sitar player, is still interpolating. The notes are all there on the instrument. Even without an instrument, it's still interpolating: the space that music and sound can occupy is all well known wave math. If you draw a Fourier-transform view, you could see one chart with all zeros and a second with all +infinity, and all music and sound is going to sit somewhere between the two.
I don't know that "just interpolation" is all that meaningful to whether something is novel or interesting.
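(The "well known wave math" here is basically Fourier synthesis: any recording can be written as a sum of sinusoids, so every possible sound already sits somewhere in that coefficient space.)

    x(t) = \sum_k A_k \sin(2\pi f_k t + \varphi_k)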
If he plucked one of the 13 strings of a koto, we wouldn't say he is just remixing the vibration of the koto. Perhaps we could say that, if we had justification. There is a way of using a musical instrument as just a noise maker to produce its characteristic sounds.
Similarly, a writer doesn't just remix the alphabet, spaces, and punctuation symbols. A randomly generated soup of those symbols could be thought of as their remix, in a sense.
The question is, is there a meaning being expressed using those elements as symbols?
Or is the mixing all there is to the meaning? I.e., the result says "I'm a mix of this stuff and nothing more".
If you mix Alphagetti and Zoodles, you don't have a story about animals.
That is not strictly true, because being able to transfer the style of Van Gogh onto an arbitrary photographic scene is novel in a sense, but it is interpolative.
Mashups are not purely derivative: the choice of what to mash up carries novelty: two (or more) representations are mashed together which hitherto have not been.
uhhh can it? I've certainly not seen any evidence of an AI generating something not based on its training set. It's certainly smart enough to shuffle code around and make superficial changes, and that's pretty impressive in its own way but not particularly useful unless your only goal is to just launder somebody else's code to get around a licensing problem (and even then it's questionable if that's a derived work or not).
Honest question: if AI is actually capable of exploring new directions why does it have to train on what is effectively the sum total of all human knowledge? Shouldn't it be able to take in some basic concepts (language parsing, logic, etc) and bootstrap its way into new discoveries (not necessarily completely new but independently derived) from there? Nobody learns the way an LLM does.
ChatGPT, to the extent that it is comparable to human cognition, is undoubtedly the most well-read person in all of history. When I want to learn something I look it up online or in the public library but I don't have to read the entire library to understand a concept.
You have to realize AI is trained the same way one would train an auto-completer.
There's no cognition. It's not taught language, grammar, etc. None of that!
It's only seen a huge amount of text, which allows it to recognize answers to questions. Unfortunately, it appears to work, so people see it as the equivalent of sci-fi movie AI.
I agree and that's the case I'm trying to make. The machine-learning community expects us to believe that it is somehow comparable to human cognition, yet the way it learns is inherently inhuman. If an LLM was in any way similar to a human I would expect that, like a human, it might require a little bit of guidance as it learns but ultimately it would be capable of understanding concepts well enough that it doesn't need to have memorized every book in the library just to perform simple tasks.
In fact, I would expect it to be able to reproduce past human discoveries it hasn't even been exposed to, and if the AI is actually capable of this then it should be possible for them to set up a controlled experiment wherein it is given a limited "education" and must discover something already known to the researchers but not the machine. That nobody has done this tells me that either they have low confidence in the AI despite their bravado, or that they already have tried it and the machine failed.
There’s a third possible reason which is that they’re taking it as a given that the machine is “intelligent” as a sales tactic, and they’re not academic enough to want to test anything they believe.
Is it? I only see a few individuals, VCs, and tech giants overblowing LLMs capabilities (and still puzzled as to how the latter dragged themselves into a race to the bottom through it). I don't believe the academic field really is that impressed with LLMs.
No it's not. I work on AI, and what these things do is much, much more than a search engine or an autocomplete. If an autocomplete passed the Turing test, you'd dismiss it because it's still an autocomplete.
The characterization you are regurgitating here is from laymen who do not understand AI. You are not just mildly wrong but wildly uninformed.
Well, I also work on AI, and I completely agree with you. But I've reached the point of thinking it's hopeless to argue with people about this: It seems that as LLMs become ever better people aren't going to change their opinions, as I had expected. If you don't have good awareness of how human cognition actually works, then it's not evidently contradictory to think that even a superintelligent LLM trained on all human knowledge is just pattern matching and that humans are not. Creativity, understanding, originality, intent, etc, can all be placed into a largely self-consistent framework of human specialness.
To be fair, it's not clear human intelligence is much more than search or autocomplete. The only thing that's clear here is that LLMs can't reproduce it.
Yes, but colloquially this characterization you see used by laymen is deliberately used to deride AI and dismiss it. It is not honest about the on-the-ground progress AI has made, and it's not intellectually honest about the capabilities and weaknesses of AI.
I disagree. The actual capabilities of LLMs remain unclear, and there's a great deal of reasons to be suspicious of anyone whose paycheck relies on pimping them.
The capabilities of LLMs are unclear but it is clear that they are not just search engines or autocompletes or stochastic parrots.
You can disagree. But this is not an opinion. You are factually wrong if you disagree. And by that I mean you don’t know what you’re talking about and you are completely misinformed and lack knowledge.
The long term outcome if I'm right is that AI abilities continue to grow and it basically destroys my career and yours completely. I stand not to benefit from this reality, and I state it because it is reality. LLMs improve every month. It's already at the point where if you're not vibe coding, you're behind.
Let me be utterly clear. People with your level of programming skill who incorporate AI into their workflow are in general significantly more productive than you. You are a less productive, less effective programmer if you are not using AI. That is a fundamental fact. And all of this was not true a year ago.
Again if you don’t agree then you are lost and uninformed. There are special cases where there are projects where human coding is faster but that is a minority.
Isn't that what's going on with synthetic data? The LLM is trained, then is used to generate data that gets put into the training set, and then gets further trained on that generated data?
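Something like this, as I understand it; a minimal sketch where every callable is a hypothetical placeholder, not any vendor's actual pipeline:

    # One synthetic-data round: generate with the current model, filter, then retrain on the output.
    from typing import Callable

    def self_training_round(
        generate: Callable[[str], str],      # current model: prompt -> completion
        keep: Callable[[str], bool],         # quality filter / verifier
        train: Callable[[list[str]], None],  # updates the model on accepted examples
        prompts: list[str],
    ) -> list[str]:
        synthetic = [generate(p) for p in prompts]
        accepted = [s for s in synthetic if keep(s)]
        train(accepted)  # the model is now partly trained on its own outputs
        return accepted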
You didn’t have to read the whole library because your brain has been absorbing knowledge from multiple inputs your entire life. AI systems are trying to temporally compress a lifetime into the time of training. Then, given that these systems have effectively a single input method of streams of bits, they need immense amounts of it to be knowledgeable at all.
OK, but by that definition, how many human software developers ever develop something "novel"? Of course, the "functionally equivalent" term is doing a lot of heavy lifting here: How equivalent? How many differences are required to qualify as different? How many similarities are required to qualify as similar? Which one overrules the other? If I write an app that's identical to Excel in every single aspect except that instead of a Microsoft Flight Simulator easter egg, there's a different, unique, fully playable game that can't be summed up with any combination of genre labels, is that 'novel'?
I think what matters is the ability. Not every human has produced (or even can produce) something novel in their life, but there are humans who have, time after time.
Meanwhile, depending on how you rate LLM's capabilities, no matter how many trials you give it, it may not be considered capable of that.
At any point prior to the final output it can garner huge starting-point bias from ingested reference material, up to and including whole solutions to the original prompt minus some derivations. This is effectively akin to cheating for humans, as we can't bring notes to the exam. Since we do not have a complete picture of where every part of the output comes from, we are at a loss to explain whether it indeed invented it or not. The onus is and should be on the applicant to ensure that the output wasn't copied (show your work), not on the graders to prove that it wasn't copied. No less than what would be required of a human.
Ultimately it boils down to what it means to 'know' something, whether a photographic memory is, in fact, knowing something, or rather derivations based on other messy forms of symbolism.
It is nevertheless a huge argument, as both sides have a mountain of bias in either direction.
Not that specifically, but I certainly have the capability to create my own OS without having to refer to the source code of existing operating systems. Literally "creating a Linux" is a bit on the impossible side, because it implies compatibility with an existing kernel despite the constraints prohibiting me from referring to the source of that existing kernel (maybe possible if I had some clean-room RE team that would read through the source and create a list of requirements without including any source).
If we're all on the same page regarding the origins of human intelligence (i.e., that it does not begin with Satan tricking Adam and Eve into eating the fruit of a tree they were specifically instructed not to touch), then it necessarily follows that any idea or concept was new at some point and had to be developed by somebody who didn't already have an entire library of books explaining the solution at his disposal.
For the Linux thought-experiment you could maybe argue that Linux isn't totally novel, since its creator was intentionally mimicking the behavior of an existing well-known operating system (also, IIRC, he had access to the Minix source), and maybe you could even argue that those predecessors stood on the shoulders of their own proverbial giants, but if we keep kicking the ball down the road we eventually reach a point where somebody had an idea which was not in any way inspired by somebody else's existing idea.
The argument I want to make is not that humans never create derivative or unoriginal works (that obviously cannot be true) but that humans have the capability to create new things. I'm not convinced that LLMs have that same capability; maybe I'm wrong but I'm still waiting to see evidence of them discovering something new. As I said in another post, this could easily be demonstrated with a controlled experiment in which the model is bootstrapped with a basic yet intentionally-limited "education" and then tasked with discovering something already known to the experimenters which was not in its training set.
>Did you create anything that you are certain your peers would recognize as more "novel" than anything a LLM could produce?
Yes, I have definitely created things without first reading every book in the library and memorizing thousands of existing functionally equivalent solutions to the same problem. So have you, assuming I'm not actually debating an LLM right now.
If the model can map an unseen problem to something in its latent space, solve it there, map back, and deliver an ultimately correct solution, is it novel? Genuine question; 'novel' doesn't seem to have a universally accepted definition here.
> For example, I can't wrap my head around how a) a human could come up with a piece of writing that inarguably reads "novel" writing, while b) an AI could be guaranteed to not be able to do the same, under the same standard.
The secret ingredient is the world outside, and past experiences from the world, which are unique for each human. We stumble onto novelty in the environment. But AI can do that too: AlphaGo's move 37 is an example; much stumbling around leads to discoveries even for AI. The environment is the key.
A system of humans creates bona fide novel writing. We don’t know which human is responsible for the novelty in homoerotic fanfiction of the Odyssey, but it wasn’t a lizard. LLMs don’t have this system-of-thinkers bootstrapping effect yet, or if they do it requires an absolutely enormous boost to get going
> Didn't some fake AI country song just get on the top 100?
No
Edit: to be less snarky, it topped the Billboard Country Digital Song Sales Chart, which is a measure of sales of the individual song, not streaming listens. It's estimated it takes a few thousand sales to top that particular chart and it's widely believed to be commonly manipulated by coordinated purchases.
Because we know that the human only read, say, fifty books since they were born, and watched a few thousand videos, and there is nothing in them which resembles what they wrote.
Doing something novel is incredibly difficult through LLM work alone. Dreaming and hallucinating might eventually make something novel possible, but it has to be backed up by rock-solid base work. We aren't there yet.
The working memory it holds is still extremely small compared to what we would need for regular open ended tasks.
Yes there are outliers and I'm not being specific enough but I can't type that much right now.
I believe they can create a novel instance of a system from a sufficient number of relevant references - i.e. implement a set of already-known features without (much) code duplication. LLMs are certainly capable of this level of generalization due to their huge non-relevant reference set. Whether they can expand beyond that into something truly novel from a feature/functionality standpoint is a whole other, and less well-defined, question. I tend to agree that they are closed systems relative to their corpus. But then, aren't we? I feel like the aperture for true novelty to enter is vanishingly small, and cultures put a premium on it vis-a-vis the arts, technological innovation, etc. Almost every human endeavor is just copying and iterating on prior examples.
Almost all of the work in making a new operating system or a gameboy emulator or something is in characterizing the problem space and defining the solution. How do you know what such and such instruction does? What is the ideal way to handle this memory structure here? You know, knowledge you gain from spending time tracking down a specific bug or optimizing a subroutine.
When I create something, it's an exploratory process. I don't just guess what I am going to do based on my previous step and hope it comes out good on the first try. Let's say I decide to make a car with 5 wheels. I would go through several chassis designs, different engine configurations until I eventually had something that works well. Maybe some are too weak, some too expensive, some are too complicated. Maybe some prototypes get to the physical testing stage while others don't. Finally, I publish this design for other people to work on.
If you ask the LLM to work on a novel concept it hasn't been trained on, it will usually spit out some nonsense that either doesn't work or works poorly, or it will refuse to provide a specific enough solution. If it has been trained on previous work, it will spit out something that looks similar to the solved problem in its training set.
These AI systems don't undergo the process of trial and error that suggests it is creating something novel. Its process of creation is not reactive with the environment. It is just cribbing off of extant solutions it's been trained on.
Here's a thought experiment: if modern machine learning systems existed in the early 20th century, would they have been able to produce an equivalent to the theory of relativity? How about advance our understanding of the universe? Teach us about flight dynamics and take us into space? Invent the Turing machine, Von Neumann architecture, transistors?
If yes, why aren't we seeing glimpses of such genius today? If we've truly invented artificial intelligence, and on our way to super and general intelligence, why aren't we seeing breakthroughs in all fields of science? Why are state of the art applications of this technology based on pattern recognition and applied statistics?
Can we explain this by saying that we're only a few years into it, and that it's too early to expect fundamental breakthroughs? And that by 2027, or 2030, or surely by 2040, all of these things will suddenly materialize?
>Here's a thought experiment: if modern machine learning systems existed in the early 20th century, would they have been able to produce an equivalent to the theory of relativity? How about advance our understanding of the universe? Teach us about flight dynamics and take us into space? Invent the Turing machine, Von Neumann architecture, transistors?
Only a small percentage of humanity are/were capable of doing any of these. And they tend to be the best of the best in their respective fields.
>If yes, why aren't we seeing glimpses of such genius today?
Again, most humans can't actually do any of the things you just listed. Only our most intelligent can. LLMs are great, but they're not (yet?) as capable as our best and brightest (and in many ways lag behind the average human) in most respects, so why would you expect such genius now?
> LLMs are great, but they're not (yet?) as capable as our best and brightest (and in many ways lag behind the average human) in most respects, so why would you expect such genius now?
I'm not expecting novel scientific theories today. What I am expecting are signs and hints of such genius. Something that points in the direction that all tech CEOs are claiming we're headed in. So far I haven't seen any of this yet.
And, I'm sorry, I don't buy the excuse that these tools are not "yet" as capable as the best and brightest humans. They contain the sum of human knowledge, far more than any individual human in history. Are they not intelligent, capable of thinking and reasoning? Are we not at the verge of superintelligence[1]?
> we have recently built systems that are smarter than people in many ways, and are able to significantly amplify the output of people using them.
If all this is true, surely we should be seeing incredible results produced by this technology. If not by itself, then surely by "amplifying" the work of the best and brightest humans.
And yet... All we have to show for it are some very good applications of pattern matching and statistics, a bunch of gamed and misleading benchmarks and leaderboards, a whole lot of tech demos, solutions in search of a problem, and the very real problem of flooding us with even more spam, scams, disinformation, and devaluing human work with low-effort garbage.
>I'm not expecting novel scientific theories today. What I am expecting are signs and hints of such genius.
Like I said, what exactly would you be expecting to see with the capabilities that exist today? It's not a gotcha, it's a genuine question.
>And, I'm sorry, I don't buy the excuse that these tools are not "yet" as capable as the best and brightest humans.
There's nothing to buy or not buy. They simply aren't. They are unable to do a lot of the things these people do. You can't slot an LLM in place of most knowledge workers and expect everything to be fine and dandy. There's no ambiguity on that.
>They contain the sum of human knowledge, far more than any individual human in history.
It's not really the total sum of human knowledge, but let's set that aside. Yeah, so? Einstein, Newton, von Neumann: none of these guys were privy to some super-secret knowledge their contemporaries weren't, so it's obviously not simply a matter of more knowledge.
>Are they not intelligent, capable of thinking and reasoning?
Yeah, they are. And so are humans. So were the peers of all those guys. So why were only a few able to see the next step? It's not just about knowledge, and intelligence comes in degrees; it's a gradient.
>If all this is true, surely we should be seeing incredible results produced by this technology. If not by itself, then surely by "amplifying" the work of the best and brightest humans.
Yeah and that exists. Terence Tao has shared a lot of his (and his peers) experiences on the matter.
>And yet... All we have to show for it are some very good applications of pattern matching and statistics, a bunch of gamed and misleading benchmarks and leaderboards, a whole lot of tech demos, solutions in search of a problem, and the very real problem of flooding us with even more spam, scams, disinformation, and devaluing human work with low-effort garbage.
> Like I said, what exactly would you be expecting to see with the capabilities that exist today ?
And like I said, "signs and hints" of superhuman intelligence. I don't know what that looks like since I'm merely human, but I sure know that I haven't seen it yet.
> There's nothing to buy or not buy. They simply aren't. They are unable to do a lot of the things these people do.
This claim is directly opposed to claims by Sam Altman and his cohort, which I'll repeat:
> we have recently built systems that are smarter than people in many ways, and are able to significantly amplify the output of people using them.
So which is it? If they're "smarter than people in many ways", where is the product of that superhuman intelligence? If they're able to "significantly amplify the output of people using them", then all of humanity should be empowered to produce incredible results that were previously only achievable by a limited number of people. In hands of the best and brightest humans, it should empower them to produce results previously unreachable by humanity.
Yet all positive applications of this technology show that it excels at finding and producing data patterns, and nothing more than that. Those experience reports by Terence Tao are prime examples of this. The system was fed a lot of contextual information and, after being coaxed by highly intelligent humans, was able to find and produce patterns that were difficult for humans to see. This is hardly the showcase of intelligence that you and others think it is. Including those highly intelligent humans, some of whom have a lot to gain from pushing this narrative.
We have seen similar reports by programmers as well[1]. Yet I'm continually amazed that these highly intelligent people are surprised that a pattern finding and producing system was able to successfully find and produce useful patterns, and then interpret that as a showcase of intelligence. So much so that I start to feel suspicious about the intentions and biases of those people.
To be clear: I'm not saying that these systems can't be very useful in the right hands, and potentially revolutionize many industries. Ultimately, many real-world problems can be modeled as statistical problems where a pattern recognition system can excel. What I am saying is that there's a very large gap between the utility of such tools and the extraordinary claims that they have intelligence, let alone superhuman and general intelligence. So far I have seen no evidence of the latter, despite the overwhelming marketing euphoria we're going through.
> Well it's a good thing that's not true then
In the world outside of the "AI" tech bubble, that is very much the reality.
Were they the best of the best? Or were they just in the right place at the right time to be exposed to a novel idea?
I am skeptical of this claim that you need a 140 IQ to make scientific breakthroughs, because you don't need a 140 IQ to understand special relativity. It is a matter of motivation and exposure to new information. The vast majority of the population doesn't benefit from working in some niche field of physics in the first place.
Perhaps LLMs will never be at the right place and the right time because they are only trained on ideas that already exist.
>Were they the best of the best? or were they just at the right place and time to be exposed to a novel idea?
It's not an "or" but an "and". Being at the right place and time is a necessary precondition, but it's not sufficient. Newton stood on the shoulders of giants like Kepler and Galileo, and Einstein built upon the work of Maxwell and Lorentz. The key question is, why did they see the next step when so many of their brilliant contemporaries, who had the exact same information and were in similar positions, did not? That's what separates the exceptional from the rest.
>I am skeptical of this claim that you need a 140 IQ to make scientific breakthroughs, because you don't need a 140 IQ to understand special relativity.
There is a pretty massive gap between understanding a revolutionary idea and originating it. It's the difference between being the first person to summit Everest without a map, and a tourist who takes a helicopter to the top to enjoy the view. One requires genius and immense effort; the other requires following instructions. Today, we have a century of explanations, analogies, and refined mathematics that make relativity understandable. Einstein had none of that.
It's entirely plausible that sometimes one genius sees the answer all alone (I'm sure it happens), but it's also definitely a common theme that many people, or a subset of society as a whole, start having similar ideas around the same time. In many cases where a breakthrough is attributed to one person, if you look more closely you'll often find some sort of team effort or societal groundswell.
Of course they can come up with something novel. They're called hallucinations when they do, and that's something that can't be in their training data, because it's not true/doesn't exist. Of course, when they do come up with totally novel hallucinations, suddenly being creative is a bad thing to be "fixed".
I've been thinking about that a lot too. Fundamentally it's just a different way of telling the computer what to do, and if it seems like telling an LLM to make a program is less work than writing it yourself, then either your program is extremely trivial or there are dozens of nearly identical, redundant programs in the training set.
If you're actually doing real work you have nothing to fear from LLMs, because any prompt specific enough to produce a given computer program is going to be comparable in complexity and effort to having written it yourself.