The quip about IQ tests might be true for common-range IQ tests, but tests aimed at very high IQ, like the Ultra test [0], are untimed and unsupervised.
Also, #13 is my favorite of the instructions. Sometimes the questions that GPT suggests are surprisingly insightful. My custom prompt basically has an on/off option for it though like:
> If my request ends with $q then at the end of your response, provide three follow-up questions worded as if I'm asking you. Format in bold as Q1, Q2, and Q3. Place two line breaks ("\n") after each question for spacing unless I've uploaded a photo.
I don't see any hint of AI being used here, but rather a handcrafted computer vision algorithm. Can anyone more involved in the matter elaborate on whether an actual AI model was used?
I joke, but not. I'm a researcher, and AI has been a pretty ambiguous term for years, mostly because intelligence is still not well defined. Unfortunately I think it's become less well defined in the last few years (whereas before that it was getting better defined) via the (Fox) Mulder Effect.
Based on what is said in the article, it seems like a VERY simple algorithm. It clusters the pixels in the image by color and reports any small blobs of unusual color. That's not AI by any of the stupid definitions we've come up with recently.
To me the fundamental difference is that AI is trained, algorithms are not. There's no training here; it's a simple frequency count looking for outliers. While it's an approach a human would take, the human is doing it in a very different fashion. And the human is much more sensitive to form, while this is much more sensitive to color.
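Something like this minimal sketch, assuming the article's description is the whole story (the bucket size and rarity threshold here are made-up values, not anything from the piece):

```python
import numpy as np

def flag_unusual_colors(image, bucket=32, rarity=0.001):
    # Quantize each RGB channel so near-identical colors share a bucket.
    # `bucket` and `rarity` are illustrative guesses, not published values.
    quant = (image // bucket).reshape(-1, 3)
    _, inverse, counts = np.unique(
        quant, axis=0, return_inverse=True, return_counts=True)
    # A color is an outlier if it covers less than `rarity` of the frame.
    rare = counts < rarity * quant.shape[0]
    return rare[inverse].reshape(image.shape[:2])  # True = unusual pixel
```

No model, no weights: just counting.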
They are definitely right that our (I am a hiker) gear tends to stand out against nature. Not only is it generally in colors that do not appear in any volume in nature, but almost nothing in the plant and mineral kingdoms is of uniform color. A blob of uniform color is in all probability either a monochromatic animal (like the sheep their system detects) or man-made.
What surprises me about this is that it hasn't been tried before.
This really gets at one of my issues with the term "AI". There is a very scientific, textbook definition of what Artificial Intelligence is; however, the term carries baggage from sci-fi.
Using a term like "AI" to describe this is like using the term "Food" to describe pickles. Poor analogy, but "AI" is just so vast that most lay readers, or those not familiar with the phrase from regular computer science discussions, aren't grounded in its implications.
I feel that we as an industry need to do better: use terms more responsibly and know our audience. There is a big difference between a clustering algorithm that detects pixels and flags them and a conscious, self-aware system. However, both of those things are "AI", and the two have very different consequences.
Sure there is training - most practical algorithms have dozens of tunable parameters: bucket size, thresholds, camera settings, image normalization settings, and so on. It may not be 175 billion weights, but it still needs plenty of training data.
I've participated in a hobby robot competition in the past, which required a simple-sounding vision component: find a bright orange object on green grass in bright sunlight, and very roughly estimate its distance. We had to gather 200+ training images and manually label each of them to get any sort of decent performance.
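The core of such a detector fits in a dozen lines; the hard part is tuning it. A rough OpenCV sketch (the HSV bounds and the camera constant are placeholders, not the values we actually ended up with):

```python
import cv2
import numpy as np

frame = cv2.imread("field.jpg")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# Keep only "orange" pixels; these bounds are illustrative guesses.
mask = cv2.inRange(hsv, np.array([5, 120, 120]), np.array([20, 255, 255]))
# OpenCV 4 signature: returns (contours, hierarchy).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
    # Apparent size shrinks with distance, so distance ~ k / h for a
    # camera-dependent constant k (500.0 here is a made-up stand-in).
    distance = 500.0 / h
```

Every one of those magic numbers had to be fought for with labeled images.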
This is the list of discussion topics from the 1955 proposal for the Dartmouth workshop on artificial intelligence, where the term was first introduced:
The following are some aspects of the artificial intelligence problem:
1. Automatic Computers
If a machine can do a job, then an automatic calculator can be programmed to simulate the machine. The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have.
2. How Can a Computer be Programmed to Use a Language
It may be speculated that a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture. From this point of view, forming a generalization consists of admitting a new word and some rules whereby sentences containing it imply and are implied by others. This idea has never been very precisely formulated nor have examples been worked out.
3. Neuron Nets
How can a set of (hypothetical) neurons be arranged so as to form concepts. Considerable theoretical and experimental work has been done on this problem by Uttley, Rashevsky and his group, Farley and Clark, Pitts and McCulloch, Minsky, Rochester and Holland, and others. Partial results have been obtained but the problem needs more theoretical work.
4. Theory of the Size of a Calculation
If we are given a well-defined problem (one for which it is possible to test mechanically whether or not a proposed answer is a valid answer) one way of solving it is to try all possible answers in order. This method is inefficient, and to exclude it one must have some criterion for efficiency of calculation. Some consideration will show that to get a measure of the efficiency of a calculation it is necessary to have on hand a method of measuring the complexity of calculating devices which in turn can be done if one has a theory of the complexity of functions. Some partial results on this problem have been obtained by Shannon, and also by McCarthy.
5. Self-Improvement
Probably a truly intelligent machine will carry out activities which may best be described as self-improvement. Some schemes for doing this have been proposed and are worth further study. It seems likely that this question can be studied abstractly as well.
6. Abstractions
A number of types of "abstraction" can be distinctly defined and several others less distinctly. A direct attempt to classify these and to describe machine methods of forming abstractions from sensory and other data would seem worthwhile.
7. Randomness and Creativity
A fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness. The randomness must be guided by intuition to be efficient. In other words, the educated guess or the hunch include controlled randomness in otherwise orderly thinking.
So, no, the fundamental difference is not that "AI is trained, algorithms are not". Some hand-crafted algorithms fall under the purview of AI research. Modern examples are graph-search algorithms like MCTS and A*.
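A* is a good litmus test: it's entirely hand-written, with no training anywhere, yet it appears in every AI textbook. A minimal sketch (`neighbors` and `heuristic` are whatever the problem supplies):

```python
import heapq
import itertools

def a_star(start, goal, neighbors, heuristic):
    # Classic best-first search; the heuristic must not overestimate
    # the remaining cost, or the returned path may not be optimal.
    tie = itertools.count()  # tie-breaker so the heap never compares nodes
    frontier = [(heuristic(start), next(tie), 0, start, [start])]
    best = {}
    while frontier:
        _, _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best and best[node] <= cost:
            continue  # already reached this node more cheaply
        best[node] = cost
        for nxt, step in neighbors(node):
            heapq.heappush(frontier, (cost + step + heuristic(nxt),
                                      next(tie), cost + step, nxt, path + [nxt]))
    return None  # goal unreachable
```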
I mean, if something as traditional as simple clustering is AI, then so is linear regression, and Excel spreadsheets have been doing AI/ML for the past two decades.
At some point we just have to stop with the breathless hype. I'm sure labelling it as AI gets more clicks and exposure so I know exactly why they do it. Still, it's annoying.
At least until recently, any introductory machine learning course would teach linear regression and clustering, the latter as an example of unsupervised learning.
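And "doing ML" in the linear regression sense really is this small (toy numbers, ordinary least squares via numpy):

```python
import numpy as np

# Fit y = w0 + w1 * x by ordinary least squares.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # first column = intercept
y = np.array([1.0, 3.0, 5.0])
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
# weights == [1.0, 2.0]: intercept 1, slope 2 (exact for this toy data)
```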
Back then we still called things image classifiers or machine learning, and when you said AI most people probably had an image of Arnold Schwarzenegger or Cortana flash in their mind.
Maybe? I am currently going through 'Artificial Intelligence: A Modern Approach' by Russell & Norvig, and from a historical perspective alone, it seems vision would qualify.
It is just that the language drifted a little, the way it did with "cyber" meaning something else to post-90s kids. So now AI seems to be mostly associated with LLMs, but not that long ago, AI seemed to cover almost any use of an algorithm.
I am not an expert in the field at all. I am just looking at stuff for personal growth.
I thought that AGI covered that. AGI to my mind doesn’t have to surpass human thinking. It just has to be categorically the same as it (it can be less powerful, or more). It has to be general. A chess machine in a box which can’t do anything else is not general.[1]
I’ve always been fine with calling things AI even though they are all jumbles of stats nonsense that wouldn’t be able to put their own pants on. Does a submarine swim? No, but that’s just the metaphor that the most vocal adherents are wedded to (at the hips). The metaphor doesn’t harm me. And to argue against it is like Chomsky trying to tell programming language designers that programming languages being languages is just a metaphor.
[1] EDIT: In other words it can be on the level of a crow. Or a dog. Just something general. Something that has some animalistic-like intelligence.
I think the point of the Wikipedia article is that human categories are flexible, and they get redefined to suit human ego needs regardless of what's happening in the objective outside world.
Say that you have a closed system that largely operates without human intervention - for example, the current ad fraud mess, where you have bots pretending to be humans that don't actually exist to inflate ad counts, all of which gets ranked higher by the ML ad models because it inflates their engagement numbers, but it's all to sell products that don't really work anyway, so that the company can post better revenue numbers to Wall Street and unload the shares on prop trading bots and index funds that are all investing algorithmically anyway. On some level, this is a form of "intelligence" even though it doesn't put pants on.

For that matter, many human societies don't put pants on, nor do my not-quite-socialized preschool kids. It's only the weight of our collective upbringing, coupled with a desire to feel intelligent, that leads us to equate putting pants on with intelligence. Plenty of people don't put pants on and consider themselves intelligent as well. And the complexity of what computers actually do is often well beyond the complexity of what humans do.
I often like to flip the concept of "artificial intelligence" on its head and instead think about "natural stupidity". Sure, the hot AI technologies of the moment are basically just massive matrix computations that statistically predict what's likely to come next given all the training data they've seen before. Humans are also basically just massive neural networks that respond to stimulus and reward given all the training data they've seen before. You can make very useful predictions about, say, what is going to get a human to click on a link or open their wallet using these AI technologies. And since we too are relatively predictable human machines that are focused on material wealth and having enough money to get others to satisfy our emotions, this is a very useful asset to have.
> I think the point of the Wikipedia article is that human categories are flexible, and they get redefined to suit human ego needs regardless of what's happening in the objective outside world.
I know what the point is. Of course computer scientists that make AI (whatever that means) want to be known for making Intelligence. And they get cross when the marvel of yesterday becomes a humdrum utility.
As you can see, this part cuts both ways:
> > and they get redefined to suit human ego needs
> Say that you have a closed system that largely operates without human intervention [...] On some level, this is a form of "intelligence" even though it doesn't put pants on. [...] And the complexity of what computers actually do is often well beyond the complexity of what humans do.
I bet your AI of choice could write a thesis on how putting pants on is a stupid social construct. Yet if it is incapable of doing it, it would just be a bunch of hot air.
> I often like to flip the concept of "artificial intelligence" on its head and instead think about "natural stupidity".
This philosophy tends to go with the territory.
> Sure, the hot AI technologies of the moment are basically just massive matrix computations that statistically predict what's likely to come next given all the training data they've seen before. Humans are also basically just massive neural networks that respond to stimulus and reward given all the training data they've seen before.
“Basically” doing some heavy lifting here.
This is obviously false. We would have gone extinct pretty much immediately if we had to tediously train ourselves from scratch. We have instincts as well.
“But that’s just built-in training.” Okay, now we’re back to it not basically being stimulus-responses to training data they’ve seen before. So what’s the point, when it’s not basically just that?
> You can make very useful predictions about, say, what is going to get a human to click on a link or open their wallet using these AI technologies. And since we too are relatively predictable human machines that are focused on material wealth and having enough money to get others to satisfy our emotions, this is a very useful asset to have.
Yes. Humans have wants and needs and act in ways consistent with cause and effect. E.g. as the clueless “consumer subject” against billions of dollars of marketing money and AI owned by those same marketing departments.
Amazingly: Humans are what you allow them to be.
We could treat all humans according to Skinner Box theory. We could treat them as if Skinner’s stimulus-response theories are correct and only allow them to act inside that framework. That would (again, amazingly) confirm that Skinner was right all along.
Any organism can express itself maximally only in a maximally free setting. A free dog is a dog; a chained human might only be a dog.
The only difference is that humans have words that they can express through their mouthholes about what kind of future they want. If they want to be humans (i.e. human ego needs, sigh) or if they want to be the natural stupidity subjects of the artificial intelligence.
Or they don’t care because they don’t think AI will ever be able to put its pants on.
They might have needed to learn what a good difference threshold and cluster size is. It's hardly ML the way fine-tuning CLIP embeddings is, but there are few solid differences: both explore visual embedding spaces with learned values. Granted, the cluster thresholds are more likely to be manually tuned, but both are embedding spaces, with the main difference being dimensionality.
It was vague of Wired to use "AI" in the title, but it's more confusing to say "A previous headline on this piece incorrectly stated that the drone software used AI." - and not obviously correct either.
No, the problem is that the human programmers are the ones doing the learning. That's not artificial intelligence, that's just regular human learning.
The algorithm is: identify pixels that are chromatically different from the surrounding pixels. And that's it. That's not AI, that's an algorithm. Any changes come from the human programmers manually changing the algorithm, not from any self-increased capabilities acquired through machine learning, etc.
A lot of people do class rule-based systems under the umbrella of AI. When I was a kid, I'd run Alicebot on my pocket computer - definitely "artificial" "intelligence", and well before any of this modern fancy machine learning stuff! Definitely lots of human work. People have different ways of understanding words, and AI is a term that is not well defined, to say the least.
Machine learning was widely considered to be a subset of AI, until it got a big resurgence almost 2 decades ago. Now some people use the terms interchangeably.
Deep learning is just a subset of AI, a field that has officially been a thing since 1956. A chess engine is smarter than any human at chess, yet it's just classical search.
I'm so tired of this argument. AI is a blurry term as it's used in the world. Who the fuck cares if this is "officially AI" or not? Can we just stop having this discussion?
> I get that there would be risk, but if it was under the supervision of professors (who hopefully are good at building, not just lecturing theory) […]
Good joke here! If you are actually serious, please tell us which university you encountered where the majority of professors actually did something productive in computer science.
Sorry if that came across as rude. From personal experience, I have not met any professor in my life who could lead such a project, let alone with students who don't have any work experience… And you imply there should be multiple such professors for the project to succeed.
> The US government is so powerful, they are the only country that enforces a draconian global taxation scheme on any citizen or person who has ever held a US green card […]
While it may be true that they are the only ones able to enforce it effectively, there are some other countries with citizenship-based taxation. According to Wikipedia[0], these currently are:
Hungary, Eritrea, Myanmar and Tajikistan
Some other countries have similar policies for tax havens.
We have a number of proprietary network appliances present in all connected locations that require unhampered L2 communication (for mostly dumb reasons I think, but what can you do...), unfortunately.
One quote that I find funny from today’s point of view:
As we approach the present, corresponding to a personal computer, the graph really should become more complicated since one consequence of computers becoming super-cheap is that increasingly, they are being embedded in other equipment. The modern automobile is but one example. And it remains to be seen how general-purpose the current wave of palm-sized computers will be with their stylus inputs.
Sounds like a problem that should be (rather easily) fixable in the operating system, no?
If the emergency call doesn’t go through, try the call over a different network.
This would also mitigate problems we see from time to time where emergency calls don’t work because the uplink to the emergency call center was impacted either physically or by a bad software update.
[0] https://megasociety.org/admission/ultra/