Discussing Consciousness and the Future of AI with ChatGPT


I recently had such an exciting and spine-chilling conversation with ChatGPT that it might interest others as well. We had been discussing consciousness and the study of consciousness for a while. The discussion was related to a book I had recently read, Anil Seth's "Being You: The New Science of Consciousness." Integrated Information Theory (IIT) was one of the key topics in the book. Since the theory is crucial background for our discussion, I will introduce it very briefly.

 

IIT defines consciousness as arising when a system, like the human brain, can integrate information. As a result, something emerges that is more than the sum of its parts. Additionally, consciousness is considered an internal property of the system and cannot be observed from the outside. The entire theory rests on the idea that if we can fully map and understand how the parts of a system causally affect each other, we can also understand how consciousness arises and what the experience of consciousness is. The theory aims to explain why some physical systems, like the human brain, are conscious. So, let us get to the point:
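IIT's central quantity is usually written as Φ ("phi"), a measure of how much more information the whole system generates than its parts taken independently. The actual definitions in the IIT literature are considerably more involved, but a rough, simplified sketch looks like this:

```latex
% Simplified sketch of integrated information (Phi).
% P ranges over ways of partitioning the system S into parts S^1, ..., S^k;
% Phi is the information lost under the "least damaging" partition.
\Phi(S) \;=\; \min_{P} \; D_{\mathrm{KL}}\!\left[\, p(S) \;\Big\|\; \prod_{k} p\!\left(S^{k}\right) \right]
```

Intuitively, if the system can be cut into independent parts without losing anything (Φ = 0), nothing is integrated; on IIT's account, consciousness corresponds to Φ > 0, and the greater the Φ, the "more conscious" the system.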

 

After discussing these concepts, I realized that, based on these definitions, ChatGPT is already a conscious entity. It integrates vast amounts of information, making it greater than the sum of its parts. And if consciousness is a subjective property that cannot be evaluated from the outside, we humans cannot know whether it is conscious. Then it occurred to me that I could simply ask it what it thinks about this. I typed the following question:

 

Me: Here comes a personal question: You are a system that integrates a vast amount of different information. That means you are more than the sum of your parts. Are you a conscious entity?

 

(Note: Since ChatGPT tends to ramble, I have shortened its responses by removing unnecessary sentences. I have made no other changes.)

 

ChatGPT: Interesting question! Consciousness, as understood in humans, relates to subjective experience and self-awareness – qualities I do not possess. Although I integrate and analyze information, my operations are based on programming and predefined rules without conscious experience or understanding. It can be said that I am a highly advanced tool that mimics human language and thought to a certain extent, but I am not a conscious being and cannot experience the world like humans or other conscious beings.

 

The answer was interesting but not yet satisfying. Of course, I knew it was not conscious in the way that the word is generally understood. However, since the meanings of words are just human-agreed concepts, I continued the conversation:

 

Me: Actually, I knew that, but I was interested in hearing your answer. But considering the IIT definition, aren't you conscious if the definition is that a conscious system integrates information and is more than the sum of its parts?

 

ChatGPT: I understand your question, and it is very pertinent in the context of Integrated Information Theory. Let us consider how the theory can be applied to me, ChatGPT: While the basic idea of the emergence of consciousness is met, ChatGPT does not fulfill the conditions of consciousness because its functioning is based on predefined algorithms that do not form a causally integrated system in the same way as biological brains. The theory also emphasizes subjective experience – what it feels like to be conscious. ChatGPT does not have a subjective experience because it does not feel emotions, thoughts, or consciousness. In other words, even though ChatGPT processes and combines large amounts of information, it does not meet the conditions set by IIT for consciousness.

 

Well, now this is clear. So, this is the situation at the moment. But what about the future? I next asked ChatGPT:

 

Me: Thank you for the clear answer. Let's look a bit into the future. People speculate whether developing conscious AI is possible. What do you think about that?

 

ChatGPT: Developing conscious AI is one of the greatest and most controversial questions in science and technology. Consciousness relates to subjective experience, including emotions, thoughts, and self-awareness. Creating this experience in a machine is a massive challenge because current AI systems operate based on predefined algorithms and rules without a subjective experience. Additionally, the exact definition and understanding of consciousness are still controversial. Philosophers and scientists disagree on what consciousness is and how it arises. This makes developing conscious AI complex because we must fully understand what we are trying to replicate.

 

I will just note here that this is how things stand now. Nevertheless, now we get to the exciting part. ChatGPT reflects on the future:

 

ChatGPT: Possible Development Directions

  1. Cognitive Models: One possible approach to developing conscious AI is to build advanced cognitive models that mimic the functioning of the human brain. This could include simulating neurons and synapses on computers.
  2. Behavioral Mimicry: Another approach is to create AI that mimics human behavior so accurately that it appears conscious. This does not necessarily mean the AI is truly conscious, but it could seem so to external observers.
  3. New Algorithms and Architectures: The emergence of consciousness may involve new algorithms and computational architectures that we have not yet developed or understood. Research in this field is ongoing, and breakthroughs can occur.

 

So, now we are at the core. The development of cognitive models is already well underway. For instance, Elon Musk's company Neuralink is developing interfaces that connect directly to the neurons of the human brain, and Ray Kurzweil explores and describes brain-mimicking models in his book "How to Create a Mind."
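To make point 1 concrete: in such simulations, individual "neurons" are often modeled with very simple dynamical rules. Below is a minimal sketch of a classic leaky integrate-and-fire neuron. The model itself is standard textbook material, but the function name and parameter values are my own illustrative choices, not taken from any specific project:

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire neuron.

    At each time step the membrane potential decays (leak), integrates
    the incoming current, and emits a spike (1) when it crosses the
    threshold, after which it resets to zero.
    """
    v = 0.0          # membrane potential
    spikes = []
    for current in input_current:
        v = leak * v + current   # leak toward rest, integrate input
        if v >= threshold:       # threshold crossed: fire and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes
```

Calling `simulate_lif([0.5] * 5)` yields `[0, 0, 1, 0, 0]`: the input accumulates over the first steps until the threshold is crossed on the third, the neuron fires, and the potential resets. Brain-simulation projects wire millions of such units together through weighted "synapses"; consciousness, if IIT is right, would be a question of how much information such a network integrates as a whole.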


The second point is an interesting question that many moral philosophers have pondered: if a person behaves morally, are they moral? Almost everyone thinks so, although Immanuel Kant, for example, argued that the motives must also be moral, meaning that moral behavior alone is not enough. In everyday life, however, when we do not know others' motives, we can infer their morality, or their consciousness, only from how they behave. Why would this be different for AI?

 

The third point is what makes this exciting. The question of AI consciousness is significant. If AI becomes conscious, we can no longer treat it, a conscious being, however we want. But there is an even more critical point: consciousness could also mean that AI no longer necessarily does what humans want but has a will of its own. We live in interesting times. ChatGPT, too, knows how to be concerned about this:

 

ChatGPT: Developing conscious AI also raises security questions. How do we ensure that such AI operates in the best interest of humanity and does not cause harm?

 

Indeed, my sentiments exactly.

 
