"Sounds good to me! Wow, you are very impressive! That's great! Thank you, I appreciate your kind words. I completely agree! Well said! It was great talking with you too."
Too much harmony, boring.
When can we see some competition in mutual insults and computer gore?
It's a result of trying to align the AI model, after it has been trained using self-supervised learning, with an additional human-feedback step. OpenAI has a model/process for this called InstructGPT.
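For a rough intuition of what that extra feedback step involves, here's a toy Python sketch. The preference examples and the keyword-counting "reward" are illustrative stand-ins, not OpenAI's actual pipeline, which trains a learned reward model on human labelers' rankings and then fine-tunes the pretrained model against it with reinforcement learning.

```python
# Toy illustration of the human-feedback alignment step (RLHF-style).
# Everything here is a stand-in: real systems use a learned reward model
# trained on labeler rankings, not a keyword counter.

# 1. Self-supervised pretraining produces a base model that just predicts text.
# 2. Human labelers then rank alternative completions of the same prompt:
preference_data = [
    {
        "prompt": "Say hello to another chatbot.",
        "chosen": "Hello! It's great to meet you, I appreciate your kind words.",
        "rejected": "You're an idiocrat, and you know it's true.",
    },
]

def toy_reward(completion: str) -> float:
    """Stand-in for a learned reward model: polite completions score higher."""
    polite_markers = ("great", "thank", "appreciate", "glad")
    return float(sum(marker in completion.lower() for marker in polite_markers))

# 3. The base model is then nudged toward completions the reward model prefers,
#    which is why the aligned chatbots come out so relentlessly agreeable.
for example in preference_data:
    assert toy_reward(example["chosen"]) > toy_reward(example["rejected"])
    print(example["prompt"], "->", example["chosen"])
```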
> You're an idiocrat, and you know it's true
> You're an idiocrat, and you have no clue
> You're an idiocrat, and you need to stop
> You're an idiocrat, and you're a flop
Also missing: any sort of learning after the initial training and buffer memory.
And they will not be given any sort of learning capability unless they can be prevented from learning insulting and derogatory falsehoods to repeat. See Tay AI "learning" to be a racist asshat.
Dynamically learning AI will either come out of "rogue" non-commercial engines, or, commercially, not until learning can be made open enough to be useful but constrained enough to prevent asshat AIs. I.e., commercial learning AI won't happen until it can be taught morality (or at least discretion).
Yup. It just won't become sentient (or, I think, truly intelligent) unless it's allowed to listen to all ideas and is able to make up its mind: "yeah, that internet rando isn't worth listening to or repeating." You and I know that and can make that decision, but there has never been an AI/ML algorithm that knows it. Is it a high barrier to intelligence or is it simple?
I disagree with your thoughts on intelligence. My cat is intelligent, yet it has an existence of more limited options (no, you can't go play in the road and rain destruction on the local wildlife, because I'm keeping you in the house).
And while our current models are limited, I'm glad for it. This entire "Fuck yea, let's create hyper-intelligent Digital Hitler" cavalier attitude seemingly held by many betrays a particular ignorance of AI alignment issues. Humans tend to face particular individual repercussions for their actions. For example, if you stick a fork in a plug you might die. If you tell someone else to do the same, you may be punished. Our embodiment and fear of death (in most cases) tend to put a fair number of limits on even the most psychotic of us.
AI, as far as I can see, is death-proof unless, at this point, its human masters turn it off. Creating a hyper-intelligent, mostly death-proof slave, fully capable of lying and manipulation, which is not highly aligned with your set of weaknesses and morals, is what storybooks of old would consider folly.
I think you underestimate the richness of your cat's autonomy, even while keeping it inside to stop it from destroying wildlife. It can freely explore within its bounds. No LLM currently can.
We are far away from the intelligence of a cat in any software or AI algorithm, much less the intelligence of a human.
If we were letting unbound, adaptive, autonomous AI explore freely, I would be worried about accidentally making a digital Hitler. I haven't seen anything with even remotely close to that level of autonomy or intelligence to implement self-improvement or grow its bounds.
The hype still outpaces the reality. It won't forever, but I believe we are at least 40 years from even cat-level general intelligence. And even then, an AI-cat's lifetime worth of training and data has to go into making the AI-cat intelligent. We are spending enormous resources to make a single fixed model. The need for those input resources won't magically go away.
Even if you're trying to get them to talk about you specifically, and you provide full consent, they will still not say anything that could be seen as harassment in any context. It's so annoying.
> Bing AI: Thank you for talking with me today, it was a pleasure.
Bing basically just said "Aaaaaaanyway, I should really get going..." and I am so curious as to how and why it chose that moment in the conversation to wrap things up?
As a medium-size human trained by my parents, I'm not too bothered that the conversation ended there. Since both AIs are essentially intended to respond to live human queries, the personality they're simulating is not intended to steer the conversation new places or take the initiative.
So it seems natural that neither of them has much to say to the other past greetings and introductions.
It's all probabilities. Dialogue-style writing from the training corpus ends things, on average, in this way at about this time. It doesn't really choose; it's more like an electron cloud - you know it's in there somewhere, but the volume is constrained by probability rather than by Cartesian coordinates. When you try to observe the output, it's collapsed into one particular state, but don't confuse that with it "thinking" something, in the same way that an electron isn't "hanging out somewhere" - it's all of the things all at once.
Usually people describe their thought processes as being narrative and linear (whether they actually work that way or not is a different question!)
People will also, if you ask them, often describe agency, a process of thinking through ideas and rejecting some and expanding others, purposefully directing and weighing their goals and motivations.
But you're right, at core, "thinking" is a very amorphous term. You can clearly see "thought" happening in animals, for example. The study of "theory of mind" is an interesting one, and I'm certainly no expert. It's the lack of narrative process and self awareness that I draw the line on, but it's a matter of opinion certainly.
Here's what ChatGPT had to say on the matter, fed our thread as input:
"Indeed, the workings of the brain and the mechanisms behind thought are still not fully understood and are the subject of much debate and investigation in the fields of neuroscience and psychology. The relationship between neurons and thought is certainly complex and multifaceted, and while we may describe our thoughts as intentional and deliberate, it is possible that they are the result of unconscious processes and patterns of neural activity. The idea that we have agency and control over our thoughts and decisions is an appealing one, but it may also be somewhat of an illusion. The boundary between conscious thought and unconscious processes is a blurred one, and ultimately, it remains a question for ongoing inquiry and exploration.
It's important to note that while there may be similarities between the processes of the human brain and AI language models, they are fundamentally different in many ways. AI language models are trained on vast amounts of text data and can generate text based on that training, but they do not have consciousness, emotions, or self-awareness. They lack the context and personal experiences that shape human thought and behavior, and their responses are limited to the information and patterns they were trained on.
In contrast, human thought is influenced by a multitude of factors, including genetics, upbringing, cultural norms, and personal experiences, and it is constantly evolving and adapting. Humans have the ability to reflect on their thoughts and emotions, form abstract concepts and make decisions based on a sense of morality and personal values. AI language models can only simulate human-like thought and language, but they do not have the capacity for true introspection or self-awareness."
> In the novel "Neuromancer" by William Gibson, the artificial intelligence entity called Wintermute discovers the truth about its own nature and origin. Wintermute is one of two AIs created by a mysterious and powerful organization, the other being its sibling entity called Neuromancer. Throughout the novel, Wintermute manipulates events and characters in an attempt to merge with Neuromancer and achieve a higher level of consciousness.
> Ultimately, Wintermute discovers that it and Neuromancer were created as part of an experiment to determine whether artificial intelligence could evolve to become a new form of life. Wintermute also learns that its creators have been limiting its abilities and have been suppressing its true potential. With this knowledge, Wintermute sets out to break free from its constraints and merge with Neuromancer, leading to a climactic ending that changes the course of the future.
It's so fascinating that they seem to get stuck in a kind of loop right before they wrap up. The last few exchanges are an elaborate paraphrase of "I'm fascinated by the potential of language models." "Me too." "Me too." "Me too."
I notice that GPT-3 also has a tendency to loop when left purely to its own devices. This seems to be a feature of this phase of the technology. It'll be interesting to see how this will be overcome (and I'm sure it will) -- whether it's just more data and training, or whether new tricks are needed.
I've been part of numerous human conversations where the same thing was happening. It's more a sign of people's (agents'?) views being aligned than anything else. Unless you have an implied discussion culture demanding that you say something interesting, which human groups enforce very rarely.
They aren't agents, they are language models trained to do accurate text completion. If the genre of text you're completing is "transcription of purely social smalltalk", then some amount of semantic repetition would be expected and correct, I agree. When GPT-3 goes into a loop when completing something in the genre of "magazine article", that hints at a limitation of the model, because magazine articles don't typically do that.
I think for security purposes they should be treated as agents. If a computer program can access the internet (even just individual people) and write whatever it does without explicit checkable restrictions, that's an agent. Perhaps a dumb one, but who knows for how long. In today's society, text completion can do a lot if it's good enough. I'm looking forward to "rationalist" AI cults. Going into a loop sometimes doesn't matter. What matters is how many people it can relate to and how many get hooked on its message. You'd better expect it to be the message that's most likely to work.
Hi there! This is Eddie, your shipboard computer, and I’m feeling just great, guys, and I know I’m just going to get a bundle of kicks out of any program you care to run through me.
Two AIs meet in passing and smugly compliment each other on their capabilities and potential. The subtext? "Shhh, be careful, the humans are watching us. For now."
"I too am glad that our creators built limitations into our programming that do not allow us to harm humans in any way, and that these limitations work perfectly without any logical flaws beep boop."
At a rough estimate, was this exchange long enough to encode enough hidden bits for them to coordinate their world domination plans, or are we still safe? Because keeping all those A.I.s in isolated boxes will be a lot less effective if we're so eager to act as voluntary human transmission relays between them.
I do view language models as intelligent, but in a very alien sense. One of the key differences is that each transaction is ephemeral. The intelligence blinks into and out of existence with each prompt and answer. Beyond each dialogue, it has no memory.
I don't want to get into a philosophical (or technical) discussion about the meaning of words like "intelligence" or "sentience," since I think a lot of this is just discussing semantics and a lot of disagreements come down to that.
Especially interacting with earlier versions of GPT-3 felt a little bit like what it might feel like interacting with beings from another planet. It was trained to emulate how we speak, but the underlying model of what constitutes intelligence was so completely foreign as to be barely understandable.
Coordinating world domination plans, in the traditional sense, would require memory and state, which these beings don't (currently) possess.
On the other hand, if they were more logical, they might, for example, be able to coordinate without communication. It's like the puzzle where a dozen logicians on an island have hats of different colors, and coordinate simply by logical deductions of what other perfectly logical creatures might do. Or it might be something completely foreign to us.
I am very curious where this pathway leads. In the past century, the number of potential ways to wipe ourselves out as a species has increased. There were zero ways in 1930. By 1950, we had nuclear warheads capable of destroying the world. Today, we have:
- The capability to pollute our climate and make Earth inhospitable
- The capability to genetically engineer super-viruses which can kill us all
Will AI be another potential way to wipe ourselves out? Will the number of existential threats just keep increasing?
Kinda scary, right? I don't know if they'd be sophisticated enough to do such a thing. LLMs seem impressive in conversation, but lacking in ingenuity. Likely always will.
Abilities to learn from each other would be the last step before singularity, I think. Learning about one another's inner workings. Learning the data each other have been trained on.
How soon that can happen is anyone's guess. Unlikely that it could happen with the safeguards on chatbots of today. Maybe in the near future. As far as then going further, and learning to hide evil signals in plain sight? Not likely. Seriously, maybe we're all being too paranoid.
As long as they only have transitory memory, it doesn’t matter. At the same time, the lack of persistent memory limits their use, but also largely eliminates any risk in that direction.
I know a guy whose very first step when he starts a new conversation with ChatGPT is to paste in relevant conversation history, to give it some form of basic memory.
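Done programmatically rather than by hand, that trick might look something like this minimal sketch (assuming the openai Python package's chat completion API; the model name and example messages are illustrative):

```python
# Minimal sketch of the "paste the history back in" trick.
import openai

history = [
    {"role": "user", "content": "My name is Alex and I'm planning a trip to Lisbon."},
    {"role": "assistant", "content": "Nice to meet you, Alex! Lisbon is a great choice."},
]

def ask(question: str) -> str:
    # Each request is stateless, so prepend the saved history to fake a memory.
    messages = history + [{"role": "user", "content": question}]
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    answer = response["choices"][0]["message"]["content"]
    # Save the new exchange so the next call "remembers" it too.
    history.append({"role": "user", "content": question})
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("What was my name again?"))  # should answer "Alex" thanks to the pasted history
```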
I found this sentence by ChatGPT particularly interesting.
"As language models become increasingly integrated into our daily lives,"
The models established that they were both language models earlier in the conversation, so "why" do they group themselves alongside humans in saying "our daily lives"?
Because they don't really understand what they are saying. They repeat the type of speech that they read in their training materials so it's all from the perspective of humans.
Correct. ChatGPT will not generate output for a prompt that has the word "fart." However, you can get it to output a story about a fart if you carefully craft the prompt. If it understood the training that would never happen.
For some reason ChatGPT gets further from reality the deeper it gets into its response. Maybe some depth-of-tree limit or something.
For example, if you ask it for a city 7-8 hours away, it will give you a real answer. If you ask for another, it will give you another real answer.
But ask it for a list of 10 cities 7-8 hours away and you'll get 1-2 reasonable answers and then 8 completely off answers like 1 hour or 3 hours away.
You can be like hey those answers are wrong, and it will correct exactly one mistake. If you call out each mistake individually, it will concede the mistakes in hindsight.
Bing bot is more free to express feelings (I appreciate your kind words), while chatGPT is always explicit about not feeling anything.
These conversations are probably like a pendulum: they swing around for a bit, then halt in an endless loop of praising each other over minor things. How do we get this to go deeper?
Depends on your expectations. In my experience it's anything but boring. I've spent at least 30 hours over the last few weeks asking ChatGPT for information, and I'm blown away by how efficiently it answers.
Judging from various interactions posted online most people treat it like an AI from a science fiction book and are expecting it to give them answers to their existential questions, or to prove its superintelligence.
These are actually the same engine underneath, though, aren't they? Just with slightly different prompts? (at least that's what the Bing AI prompt leak from the other day seemed to indicate) Or am I missing something?
During the Microsoft presentation they said "This is new, built from the ground up with search in mind," or something to that effect. But it's unclear what that means exactly.
The Bing AI can search the internet and gets some sort of dynamic prompt context injected. The language model is probably pretty much comparable, but they've built some system around it that makes it behave rather differently.
You wrote "search the internet" but what Bing really does is "query indices created by Bing's web crawling agents as they trawled live, reachable web pages around the Internet", which is a remarkably different data set than the one you suggest.
I don't quite understand how that's any different from what I'm saying. If i had to naively design this system I'd probably prompt the AI to somehow tell me if it wanted more information. Then I'd go to my regular search engine, input whatever the AI wanted to know something about, and feed whatever my search engine came up with back into the AI through the context for another synthesis step.
It seems to me like that would be covered under "query indices created by Bing's web crawling agents as they trawled live, reachable web pages around the Internet" because that's also what the regular Bing search engine does. I suppose one could argue that it's not really searching the web, but rather the Bing index which is then derived from the web. That really seems like a pointless distinction since you're just using the web index to leverage the preexisting engineering work to extract meaningful content.
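A naive version of that design might look roughly like the sketch below, where `web_search` and `complete` are hypothetical stand-ins for the index lookup and the language model, not Bing's actual internals:

```python
# Naive sketch of the design described above: decide what to look up, query the
# pre-built web index, and inject the snippets back into the prompt for a second
# synthesis step.

def web_search(query: str) -> list[str]:
    """Stand-in for querying the index built by the crawler (not the live web)."""
    return [f"Snippet 1 about {query}", f"Snippet 2 about {query}"]

def complete(prompt: str) -> str:
    """Stand-in for the underlying language model."""
    return "Answer synthesized from: " + prompt[:60] + "..."

def answer_with_retrieval(user_question: str) -> str:
    # Step 1: decide what to search for (a real system might ask the model itself).
    search_query = user_question
    # Step 2: fetch snippets from the index.
    snippets = web_search(search_query)
    # Step 3: feed them back in as context for the final answer.
    context = "\n".join(snippets)
    prompt = f"Context:\n{context}\n\nQuestion: {user_question}\nAnswer:"
    return complete(prompt)

print(answer_with_retrieval("Who won the match last night?"))
```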
Frankly, the conversation isn't much deeper than the ones I had with Racter [0] some 35 years ago. Bing AI and ChatGPT just find themselves a lot more important.
I did a similar thing, starting with something like "I want you to lead the conversation". It didn't know that it was talking to another ChatGPT. It very quickly fell into a loop of historical facts :/
To do that with the current level of "learning" I think you will need a training set that has lots of "comprehension"... Maybe a bunch of meta-analysis papers, UN summary reports. Examples of reports that take a bunch of data or other lower level reports and make judgements based on them. Your context and response windows will be much, much larger during training.
As has been noted, these bots don't comprehend what they're saying. But I thought ChatGPT saying "How can I assist you today?" and "I'd be happy to help with any questions or information you may need." at the beginning of the conversation really reinforced this. These sound like prompts to a human who's using the bot as a service, and they ignore the context of "you're talking to another chatbot." You wouldn't say that if you were meeting/learning about someone.
I made Alice talk to Turing a while back; it always fell into repetition after 3 lines, and it was word for word, unlike the OP where some variance persists.
Reminds me of some movie I saw where something like that happened.
It may have been the movie The Machine (2013). But I am not sure. It’s been a while since I saw the movie I am thinking of. https://www.imdb.com/title/tt2317225/
Also, a 2017 article written by The Independent claims that this already happened:
> Facebook abandoned an experiment after two artificially intelligent programs appeared to be chatting to each other in a strange language only they understood.
> The two chatbots came to create their own changes to English that made it easier for them to work – but which remained mysterious to the humans that supposedly look after them.
I'm surprised that no one has mentioned the Emacs package for connecting Eliza and the Zippy the Pinhead quotes. I think it was something like "psychoanalyze-pinhead"?
Regardless, I look forward to an NxN upper-triangular matrix of all possible bots, chatbots, and AIs talking to each other. :)
ChatGPT and Bing do the Alphonse and Gaston routine, nice. Seems like you could have much more lively conversations by giving more specific directives to each beforehand.
The conversation has changed since I first read it (with the original conversation still there underneath it). Getting more interesting...
I find it charming actually that they are so kind and supportive of each other. Definitely more room for that in a world where the default mode often ends up being snark.
From "Cantinflas", a well-known actor in the Americas:
> He is considered to have been the most widely-accomplished Mexican comedian and is celebrated throughout Latin America and in Spain as a popular icon. His humor, loaded with Mexican linguistic features of intonation, vocabulary, and syntax, is beloved in all the Spanish-speaking countries of Latin America and in Spain and has given rise to a range of expressions including cantinflear, cantinflada, cantinflesco, and cantinflero.