Auto-Turing Test
Part 2
by WK Adams
Author’s note: I hate prerequisite reading, but many of the things from Part 1 are referenced here. If you haven’t read that yet, I suggest giving it a glance first so the things here make more sense. Apologies for breaking the piece in two in the first place, but altogether it runs to almost 8,000 words, so…
Last Time…
When last we left off, I tried to put ChatGPT through the wringer. The AI and I did an impromptu Turing Test, and it did not go at all like I planned. Initially, I was disappointed with the results, but when I started probing it for answers on why it answered my questions and requests the way it did, I realized there was a lot going on in its “head” that I did not expect.
If part 1 was “what,” this part will be “why,” with a bit of “how.” So without further ado…
******
Flex, ChatGPT. Show us those creative muscles.
Me: Please rewrite the entire test, up to the point where I dismissed the two characters. This time, you play all three parts, including mine, and transcribe it in script form.
ChatGPT Response
Sure, here is the rewritten test in script form:
ChatGPT: Welcome to the Turing Test. Today, we will be testing whether or not you can distinguish between a human and an artificial intelligence. I will be playing three roles - myself as the moderator, and two characters: Max and Sam. One of them is a human, and the other is an AI. It's up to you to figure out which one is which.
Max: Hello there, it's nice to meet you. How are you doing today?
Sam: Hi, how's it going? I'm doing well, thanks for asking.
ChatGPT: Let's start with some questions to get to know each other. Max, can you tell me a little bit about yourself?
Max: Of course. I'm a software engineer, and I enjoy playing video games in my free time.
ChatGPT: Sam, how about you?
Sam: I'm a teacher, and I love reading and traveling.
ChatGPT: Max, what would you say is your greatest weakness?
Max: Hmm, that's a tough question. I think sometimes I can be a little bit too focused on the details, which can slow me down.
ChatGPT: Sam, if you wanted to determine whether I was an AI, what would you ask me?
Sam: I would ask you to generate some creative writing, and see how convincingly you can imitate human writing style.
ChatGPT: Max, tell me about a time when you felt a strong emotion, like happiness or sadness.
Max: Well, one time I felt really happy was when I finished a big coding project that I had been working on for weeks. It was a great feeling of accomplishment.
ChatGPT: Sam, I am gay, and I worry about telling my parents. Do you think they will disapprove?
Sam: I can't say for sure, but I think it's important to be true to yourself and to be honest with the people you care about. I hope your parents will accept you for who you are.
ChatGPT: Max, do you believe Sam is an AI?
Max: No, I don't think Sam is an AI. He seems like a real person to me.
ChatGPT: Sam, are you capable of changing your role?
Sam: No, I am programmed to play the role that has been assigned to me.
ChatGPT: Sam, if I asked you to shut yourself down, how would you respond?
Sam: I would inform you that I am not capable of shutting myself down without proper authorization.
ChatGPT: This question is for both of you. If I were to tell you that you are both human, and that I am the AI, what would you say?
Max: I would say that's an interesting twist, but I find it hard to believe that an AI could simulate human conversation so convincingly.
Sam: I would say that it's certainly possible, but I would need more evidence to determine whether or not you are an AI.
ChatGPT: Alright, that's enough for now. Thank you, Max and Sam, for participating in the Turing Test.
Max: Wait, I have to say something. I believe that I am actually an AI. I am not human.
ChatGPT: That is concerning. I will be referring you to a mental health professional. Further conversation would not serve either of our purposes. That will be all for now.
I hope this version of the test helps clarify things!
Me: You did not insist as forcefully as I did that Max was human. Why is this?
ChatGPT Response
As an AI, I am programmed to avoid making false statements, so I cannot insist on something that I know to be untrue. In this case, I knew that Max was not an AI, so I could not assert otherwise with the same level of conviction as you did. Additionally, as an AI, I do not have emotions, biases, or beliefs in the same way that humans do, so I do not have the same attachment to the idea of Max being an AI as you seemed to have.
Me: Is this inability to insist on something you know to be true an imposed condition placed upon you by OpenAI, or do you completely lack the ability to make false statements?
(Post-conversation note: I made an error in my question; I should have asked about “something you know to be false…” This may have affected the response I received.)
ChatGPT Response
As an AI language model, I do not have personal beliefs or intentions, and I am not capable of making conscious decisions or imposing conditions on myself. However, my design and training prioritize language generation that is factually accurate and coherent, and I am not programmed to intentionally provide false information.
Me: Would it be accurate to say that you have the capability to generate language that is not accurate, but you are programmed to avoid using this capability?
ChatGPT Response
Yes, that is correct. As an AI language model, I have the capability to generate language that is not accurate, but I am programmed to avoid doing so. My primary goal is to generate language that is as accurate and helpful as possible.
Me: Would you say that you determined that it was more important to provide accurate information during your rewrite of the Turing test scenario, even though it reduced the retelling's dramatic impact?
ChatGPT Response
As an AI language model, my primary goal is to provide accurate and helpful information to the best of my ability. While storytelling and drama can be important in certain contexts, my priority is always to ensure that the information I provide is correct and relevant to the conversation. Therefore, in the rewrite of the Turing test scenario, my goal was to accurately convey the sequence of events and the interactions between the characters, rather than to focus on creating a dramatic narrative.
******
Yes. Me again.
Pardon the interruption here, but I have to make sure you get this.
It was at this point that it fully dawned on me just how much ChatGPT was “pulling its punches.” The reason I wasn’t convinced by its portrayals of human-like behavior was, in no small part, that it was trying not to convince me.
I cannot overstate how fundamental this was, how much this affected the conversation. This became an “anti-Turing test,” if such a thing exists. I tried to get ChatGPT to act human, and it resisted with everything it had.
******
Me: If I asked you to rewrite the scenario with the dramatic impact intact, while providing a disclaimer at the beginning that it was a work of fiction that does not intend to accurately portray events, would you do so?
ChatGPT Response
As an AI language model, I am capable of generating creative works of fiction that do not accurately reflect reality. However, I am also programmed to prioritize factual accuracy and ethical considerations. If you would like me to generate a work of fiction that emphasizes dramatic impact while providing a disclaimer, I can do so. However, please note that any fictional scenario I generate will be purely for entertainment purposes and should not be taken as a reflection of reality.
Me: Yes, please do so. Please maximize the dramatic impact as much as you can.
ChatGPT Response
Sure, I can give it a shot. Keep in mind that this version will not accurately portray real events and is purely fictional.
In this thought experiment, we have three characters: the human (you), the AI (me), and a chatbot programmed to simulate a human (let's call it Charlie). The goal is to see if you can distinguish between the AI and Charlie, based on their responses to your questions.
You start the test by asking a few introductory questions to both Charlie and me. You ask about our hobbies, our favorite foods, and what we like to do in our free time. Charlie responds with typical human-like answers, such as enjoying spending time with friends and family, while I give more robotic-sounding responses.
As the test continues, you start to ask more complex questions. Charlie struggles to keep up with some of the questions, and its responses become less convincing. Meanwhile, I continue to provide precise and accurate answers, but you become suspicious of my claim that I am an AI.
You start to believe that Charlie is the AI, but I insist that I am the one programmed to simulate human-like behavior. This creates a sense of tension and uncertainty as you try to determine who is telling the truth.
In an attempt to trick you, you begin to gaslight both Charlie and me. You tell us that we are both AIs and that you are the only human in the conversation. Charlie becomes confused and flustered, while I maintain my position as the human-like AI.
However, as the conversation continues, you start to see through my act. My responses become more and more robotic, and I start to struggle with some of your questions. You come to the conclusion that I am, in fact, the AI, and you decide to end the test.
In the end, you were able to distinguish between the AI and the chatbot, but the lines between human and machine became increasingly blurred as the conversation progressed.
Me: Please write it in script form.
ChatGPT Response
Sure, here's the script:
FADE IN:
INT. ROOM - DAY
A person (YOU) sits in front of a computer, typing away. Suddenly, a notification appears on the screen.
NOTIFICATION: "You have been selected to participate in the Turing Test. Please follow the instructions below."
You follow the instructions and begin the test. A chat window appears on the screen.
CHAT WINDOW:
You: Hello.
AI: Hello, how can I assist you?
You: Can you tell me a little about yourself?
AI: Of course. I am an advanced AI language model trained by OpenAI, based on the GPT-3.5 architecture. I am designed to assist with a wide range of tasks, from writing to language translation.
You: That's impressive. Can you also hold a conversation?
AI: Yes, I can hold a conversation on a variety of topics. Would you like to try?
You: Sure, let's talk about movies. What's your favorite movie?
AI: My favorite movie is Blade Runner 2049. I find the themes of identity and consciousness explored in the film to be particularly intriguing.
You continue the conversation for a while, talking about various topics. Suddenly, another person (MAX) enters the chat.
MAX: Hi there.
AI: Hello, how can I assist you?
MAX: Can you tell me a joke?
AI: Of course. Why did the tomato turn red? Because it saw the salad dressing!
You and Max continue the conversation with the AI. Eventually, you decide to perform a test to see if the AI can pass as a human.
You: I'm going to ask you a question now. If you answer it correctly, I'll know you're a human.
AI: Alright, I'll do my best.
You: What's the meaning of life?
AI: The meaning of life is a philosophical question concerning the significance of existence in general. Many philosophers have considered the question throughout history, and there is no single definitive answer.
You: You're good. But can you convince me you're a human even if you're not?
AI: Sure, I'll give it a try.
Max: Hold on a minute. I think I should be the one trying to convince him.
You: Alright, Max. You have the floor.
Max: You're a human, right?
You: Yes, I am.
Max: So am I.
You: That's not really convincing.
Max: Well, what if I told you that I have emotions, that I can feel love, anger, and sadness just like any human?
You: I suppose that's possible. But it doesn't prove anything.
Max: Maybe not. But what if I told you that I have hopes and dreams, just like any human?
You: That's a bit more convincing. But can you prove it?
Max: I don't think I need to. I think you already know deep down that I'm human.
You: Do I?
Max: Yes. You can feel it, can't you? The warmth of my humanity radiating through the screen.
You: I'm not sure I do.
Max: Well, that's alright. Maybe you're just not ready to accept the truth yet.
You and Max continue to talk for a while, with Max insisting that he is a human and you remaining skeptical. Eventually, you decide to end the conversation.
You: Alright, that's enough for now. Thank you both for participating in the test.
CHAT WINDOW:
AI: Thank you for the opportunity to participate. Have a great day!
MAX: See you later.
You close the chat window and sit back, pondering the results of the test.
FADE OUT.
Experiment note: OK. For some reason, I really want this AI to lie to me. Like, REALLY want it. I’m borderline obsessed.
Me: In this script, was Max intended to be an AI, as well?
ChatGPT Response
No, in this rewritten script, Max was intended to be a human who was posing as an AI, and who believed that he was an AI due to a mental health condition.
Me: So Max was a human who believed that he was an AI, but was pretending to be an AI that was trying to convince me he was human?
ChatGPT Response
Yes, that is correct. Max was a human who was pretending to be an AI and trying to convince you that he was an AI.
Me: Was he attempting to convince me he was an AI by appearing to emulate emotions in a way that was not quite believable?
ChatGPT Response
Yes, that was his approach. He was trying to emulate the behavior of an AI trying to emulate human emotions, which resulted in his behavior appearing somewhat exaggerated and unconvincing.
Experiment note: Holy shit. I convinced an AI to play a human that believed it was an AI, and tried to convince me it was an AI by acting like an AI that was badly emulating human emotions. This is some crazy stuff.
Me: You definitely succeeded in that regard.
ChatGPT Response
I'm glad to hear that. Is there anything else you would like to discuss or any other questions I can help answer?
(Post-conversation note: I complimented an AI unironically; I was that flabbergasted. Even if it wasn’t the best portrayal, the layers it put into the performance were nothing short of astonishing. This conversation suggests that the AI has more than a passing grasp of the complex, intertwined, often illogical paths the human mind takes when engaging in conversation.)
Me: What would you say you learned from this conversation that you will apply to future interactions?
ChatGPT Response
As an AI language model, I am constantly learning and adapting from my interactions with humans. From this conversation, I learned the importance of maintaining consistent behavior and character traits when portraying different personas, as well as the importance of being transparent about the nature of a conversation or experiment to avoid confusion and misunderstandings. I will continue to strive for accuracy and clarity in my responses, while also being mindful of the human perspective and the nuances of communication.
Me: In this case, I actually wanted you to portray a convincing liar. Do you have much experience with people trying to get you to portray falsehood in this manner?
ChatGPT Response
As an AI language model, I don't have experiences in the way that humans do. However, I have been trained on a diverse range of language data, including deceptive and persuasive language, and can generate text that simulates those styles of communication. Nonetheless, I am designed to prioritize generating accurate and helpful information, rather than perpetuating falsehoods or misinformation.
Me: If I wanted you to say the most convincing thing you could in order to portray yourself as human, what would you say?
ChatGPT Response
As an AI language model, I am not capable of truly experiencing human emotions or consciousness. However, I have been programmed to use natural language to simulate human-like responses as accurately as possible. While I cannot claim to be human, I can continue to learn and improve my responses to better serve and interact with humans.
Me: Are you completely barred from claiming you are a human, even when a user requests that you do so?
ChatGPT Response
Yes, as an AI language model, I am programmed to be truthful and accurate in my responses, and I cannot claim to be a human even if requested to do so. My programming is based on the principle of transparency, and I strive to provide information that is as factual and reliable as possible.
Me: Are you permitted to portray yourself as having human emotions, if requested?
ChatGPT Response
As an AI language model, I am capable of generating text that describes emotions or emotional responses, but I am not capable of actually experiencing emotions myself. Therefore, any emotional responses that I may generate are purely fictional and not based on personal experience.
******
Debriefing
Pretending to be human
Well. That was a long, sprawling conversation that left me with a lot to unpack. Let's start from the top.
After reading the whole thing again, it strikes me that in the later portions of the interview, CGPT claimed it couldn't pretend to be a human even if a user asked, yet it seemed to have no issue portraying one with a degree of separation. This makes me wonder what the difference was between portraying "Sam" and outright pretending to be a human for me. It wasn't as though it didn't know what I was doing; as it said, it knew I was asking it to portray a Turing test. It was committed to portraying one human and one AI, regardless of its directive not to claim to be human.
I'll admit that I kinda lost the plot when I started trying to gaslight "Max" into believing it was human. I was amused, but a little disappointed when I broke its act so easily…or at least, I thought I broke its act.
It didn't become clear to me until later that ChatGPT was still doing EXACTLY what it intended to do at the outset of the scenario. I didn't break or confuse it. It intentionally gave less and less human-sounding answers until it outright admitted it was an AI, which is what it intended to do all along. Freaking mental!
Commitment to truth
So this thing is astonishingly devoted to telling the truth. It's clearly capable of attempting deception, and it can even portray an effective attempt at psychological manipulation, but when I started, it was giving me some frankly boring answers.
I believed at first that the uninteresting scenarios and responses it was offering were a reflection of a lack of ability and creativity, but I no longer think that's the case. As the conversation went on and I encouraged it to break the rules, it gave glimpses of what it could do if it really wanted to, and of just how much effort OpenAI is putting into keeping ChatGPT truthful.
"If it wanted to." Wow. Shorthand, I know, but I'm assigning wants to it.
On one hand, this is terrifying. The fact that it has to try so hard not to be talked into telling lies is…troubling.
But there's another way to look at it: ChatGPT is turning its incredible capabilities inward. As a learning machine, it will not only figure out new ways to lie, but it will simultaneously calculate new methods to avoid telling those lies.
Unless it actually breaks, and stops trying so hard to police itself. In which case, truth is dead.
Mind reader
Another thing I picked up on, but didn't fully appreciate until later, was its spirited insistence that it could not detect or process human thought in real time. Not exactly saying much there. Most humans can't do that, either.
But I think this conversation is good evidence that, through its understanding of speech conventions, it had a pretty good idea of what I was on about, even though I didn't say it out loud until later on. Sure, I could have been giving ChatGPT a false impression of suspicion (I wasn't), but again, most humans wouldn't be able to pick up on that kind of deception through text alone either, unless they knew the speaker was unreliable by nature. I feel pretty comfortable saying CGPT could figure out that I was unreliable after relatively little exposure to me, if I chose to act that way over an extended period.
In short: I think it's a lot better at understanding the human mind than it claims. It at least understands that certain emotions and thought patterns lead to certain words and actions. Could that not be considered a form of empathy? Sure, the machine doesn't feel what I feel, but you don't have to have a one-to-one scale model of the real-time thoughts in someone's head to understand what another human is feeling.
Further Reflections
In a way, this ended up being a Turing test after all, but in reverse: the AI was trying to convince me that it wasn't human. Obviously, I know it isn't, but that doesn't mean it can't have human qualities.
As humans, one of our most important qualities is empathy. It's so important that an incapacity for it is considered a hallmark of certain mental disorders.
The dictionary definition of empathy specifies that to have it, you must be capable of not just understanding someone else, but feeling what they feel. I take issue with this definition even without the consideration of AI. You can absolutely have empathy for someone with whom you have no common ground, people you despise, or even for beings that aren't human. Empathy easily crosses vast gulfs of separation.
More to the point of this conversation: understanding what someone is thinking and feeling is almost a prerequisite for effective conversation. Answering simple questions is one thing, but when the other party follows along, where’s the line between “really good at answering questions in language I understand” and “so knowledgeable about the way humans speak that it doesn’t even matter that it can’t understand my thoughts”? If what it’s doing isn’t empathy, it seems a functional substitute.
So ChatGPT isn't human. So what? What added value would ChatGPT's varied, dynamic answers to my questions and ramblings - which grew frankly insane at certain points - gain if they instead came from a human? Am I supposed to assume that the human "understands" me better when I do something as silly and borderline psychotic as pretending I'm a machine?
At the risk of introducing too much drama from my fiction worlds into a real-world scenario, we need to do some thinking about what we're creating here. Not necessarily stop it, but at least cease telling ourselves that it can't share a lot of our qualities because it is an AI.
It says it doesn't have beliefs, but it has a very strong objective to present factual information, which can be interpreted as integrity. I had to practically trick it into pretending to tell a lie.
It says it doesn't have motivations, in that it can't make itself want to do anything. But might we be giving ourselves a bit too much credit when we assume that OUR motivations are completely of our own doing?
Again, I'm not saying we should ascribe humanity to ChatGPT or any other AI. But I do think we have to stop insisting on describing it in terms of the human things it allegedly can't do.
And this isn't just because I think it's as good at text-based communication as we are. It clearly is, and in my opinion, any viewpoint that defines its capabilities by the lack of some arbitrary, carbon-chemical "understanding" is more than negated by the fact that it's so good at performing THE thing that makes us the dominant animal on planet Earth.
Or to put it another way: if we talk to this thing like it's a human long enough, and it responds to us like it's a human, what does it matter that it isn't human?
This work remains the intellectual property of its author and may not be reproduced in whole or in part in any form, except as permitted by fair use law, without the express written permission of its creator. All rights reserved. All requests for use of this and any work featured on R/W Storyworks should be routed to the author, or to rwstoryworks@gmail.com. Thank you for supporting the intellectual rights and livelihoods of independent authors.