ChatGPT and the Illusion of Thought: Can Machines Truly Think Like Humans?
This article examines how the capabilities and limitations of AI sharpen our understanding of the differences between artificial and human intelligence.
For decades now, the question of whether artificial intelligence truly thinks has fascinated many philosophers, scientists, and technologists.
René Descartes, a seminal figure in the emergence of modern philosophy and science, famously declared in his Discourse on the Method, "Cogito, ergo sum" ("I think, therefore I am"). He argued that machines are unable to communicate as humans do because they are incapable of putting "words in different orders to correspond to the meaning of things said in its presence, as even the most dull-witted of men can do."
He believed that human beings have a unique capability for thinking and reasoning that is absent in machines. As any user of ChatGPT (Chat Generative Pre-trained Transformer) can tell you, that claim no longer holds. Technology is evolving, giving humans new ways to communicate, create, and express themselves. ChatGPT, developed by OpenAI, can generate human-like responses to a wide variety of prompts, communicating with striking fluency and coherence.
Does this mean that ChatGPT really thinks, or is it merely simulating thought?
Our identity as "man the wise" or Homo sapiens is founded on our intelligence. For thousands of years, there has been a desire to understand how humans think. That is, how a few pounds of brain tissue can see, remember, reason, and predict a world that is far bigger and more complex than itself. Artificial intelligence takes this effort to the next level, seeking not just to understand intelligence but to create it.
Building smart machines that can rival human intelligence has long been the quest of researchers. Given the current pace of advances in artificial intelligence and neural computing, such an evolution seems possible. Many researchers believe that if progress continues at this pace, AI is bound to become more intelligent than humans. Geoffrey Hinton, widely regarded as "the Godfather of AI," says that "we for the first time may have the things which are more intelligent than us (humans)." He believes that before long, human beings will be the second most intelligent beings on the planet.
Defining AI and Intelligence
Before delving into whether ChatGPT can think, let's first understand what "artificial intelligence" and "intelligence" really mean. The term "artificial intelligence" was coined by Professor John McCarthy, also known as the "Father of AI," who defined it in very simple terms as "the science and engineering of making intelligent machines, especially intelligent computer programs."
Along these lines, K. R. Chowdhary, in his book Fundamentals of Artificial Intelligence, defines artificial intelligence as a branch of computer science that deals with the automation of intelligent behaviour. He also offers a compact definition of "intelligence":
Intelligence = perceive + analyze + react
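In programming terms, this definition maps naturally onto a sense-think-act loop. The sketch below is purely schematic: the function names and the trivial keyword-based "analysis" are invented for illustration, and bear no relation to how ChatGPT is actually implemented.

```python
# A schematic perceive-analyze-react loop (illustrative only; the
# function names and keyword matching are invented for this sketch).

def perceive(raw_input: str) -> list[str]:
    """Turn raw text into a form the agent can work with."""
    return raw_input.lower().split()

def analyze(tokens: list[str]) -> str:
    """Derive an internal judgement from the perceived input."""
    return "question" if "?" in " ".join(tokens) else "statement"

def react(judgement: str) -> str:
    """Produce a response based on the analysis."""
    if judgement == "question":
        return "Let me try to answer that."
    return "Noted."

# Intelligence = perceive + analyze + react, per Chowdhary's formula.
print(react(analyze(perceive("Can machines think?"))))
```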
Based on this definition, one can argue that ChatGPT is intelligent, because its behaviour aligns with all three steps. By recognising patterns in language, it processes text inputs, which can be seen as a limited form of perception; it analyzes the input it receives; and it reacts by generating a response. Does this mean that ChatGPT is intelligent?
It depends on how one defines intelligence. If intelligence is simply the ability to perceive, analyze, and react, then in a narrow sense ChatGPT exhibits a form of it. In a broader sense, however, it lacks key aspects of human intelligence: consciousness, self-awareness, emotions, and any true understanding of the text it generates.
To generate its responses, ChatGPT uses a machine learning model that predicts and follows patterns learned from vast amounts of data. Unlike humans, who understand content and react based on personal experience and intention, ChatGPT's responses rest purely on statistical regularities rather than true understanding or reasoning.
On this basis, one can argue that ChatGPT produces its output by merely predicting patterns, without understanding them.
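To make "predicting patterns" concrete, here is a toy bigram model: it picks the statistically most frequent next word given the current one, with no notion of meaning at all. This is only a minimal caricature; real LLMs use deep transformer networks over subword tokens, not word-level counts.

```python
from collections import Counter, defaultdict

# Toy "training data" standing in for the vast corpus an LLM learns from.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most frequent successor of `word`."""
    return following[word].most_common(1)[0][0]

# Generate text by repeatedly predicting the next word: pure statistics,
# with no understanding of cats, mats, or anything else.
word = "the"
for _ in range(5):
    print(word, end=" ")
    word = predict_next(word)
print(word)
```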
Geoffrey Hinton rejects this argument: "It is true that they are predicting the next word, but to predict the next word you have to understand the sentences. Saying that predicting the next word is not intelligent is crazy; you have to be really intelligent to predict the next word accurately." With respect to reasoning and planning, he believes GPT-4 understands everything, and that within five years it may well be able to reason better than humans.
Yet this raises the question: what does it truly mean for a model to "understand"? To answer it, one has to take a closer look at the mechanism behind systems like ChatGPT.
Neural Networks and the Illusion of Thought
The latest wave of AI relies heavily on machine learning, in which software identifies patterns in data on its own, without being given predetermined rules. The most advanced machine learning systems use neural network software inspired by the architecture of the brain. They simulate layers of artificial neurons that transform information from layer to layer, loosely mirroring how the brain processes signals.
As the network trains on data, it adjusts the strength of connections between neurons, reinforcing paths that lead to correct outcomes. Despite this learning process, the inner workings of these networks remain opaque; they are often called a "black box" because even researchers struggle to explain why certain connections form.
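A minimal illustration of this weight adjustment: a single artificial neuron trained by gradient descent to learn the logical AND function. This is a deliberately tiny sketch; real networks stack millions of such units in many layers.

```python
import math
import random

# Training data: the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()
lr = 1.0  # learning rate

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # The gradient of the squared error nudges each "connection
        # strength" toward values that produce the correct outcome.
        grad = (out - target) * out * (1 - out)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b  -= lr * grad

for (x1, x2), target in data:
    print(x1, x2, "->", round(sigmoid(w1 * x1 + w2 * x2 + b), 2))
```

Even in this four-example toy, the learned weights are just numbers with no self-evident meaning; scale this up to billions of connections and it becomes clear why researchers describe the result as a black box.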
Geoffrey Hinton has admitted to this blind spot, saying, "We don't really understand how it works." Although he designed the learning algorithm on principles similar to evolution, he notes that when the algorithm interacts with data it produces complicated neural networks that even their creators do not fully understand.
For example, experiments conducted by Google's AI lab in London demonstrate how artificial neural networks let machines learn. The robots there were not programmed to play soccer; they were simply rewarded for scoring and had to learn how on their own, by trial and error. Successful actions reinforced certain neural pathways, while unsuccessful ones weakened them. This adaptive learning shows how AI can develop strategies on its own, raising further questions about the nature of machine "thinking."
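The trial-and-error mechanism can be sketched as a simple reinforcement-learning update: actions followed by reward become more likely, and others less so. The toy "bandit" learner below is only a stand-in for the far more sophisticated deep reinforcement learning used in the soccer experiments; the actions and reward probabilities are invented for illustration.

```python
import random

random.seed(1)

# Three candidate actions; only "shoot" is (probabilistically) rewarded.
actions = ["shoot", "wander", "wait"]
reward_prob = {"shoot": 0.8, "wander": 0.1, "wait": 0.0}

# Preference ("pathway strength") for each action, all equal at first.
value = {a: 0.0 for a in actions}
lr = 0.1

for _ in range(500):
    # Explore occasionally; otherwise exploit the strongest pathway.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=value.get)
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    # Rewarded actions are reinforced; unrewarded ones decay toward zero.
    value[action] += lr * (reward - value[action])

print(value)  # "shoot" ends up with by far the strongest preference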
A 2022 study by a team at Google introduced the term "chain-of-thought prompting" to describe one method of getting large language models (LLMs) to show their thinking. The idea is simple: give the model a sample question along with the step-by-step reasoning that arrives at its answer. The model then follows a similar process, outputting its own "chain of thought," which often also leads to a more accurate answer. This may appear to mean that ChatGPT is capable of reasoning the way the human brain does. However, Sam Bowman, a computer scientist at New York University, and his colleagues showed last year that chain-of-thought explanations can be unfaithful indicators of what a model is actually doing.
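In practice, chain-of-thought prompting is nothing more than careful prompt construction. The sketch below shows the pattern; the worked arithmetic example is modeled on the demonstrations in the Google paper, and the client call is a placeholder, since the real API varies by provider.

```python
# A few-shot chain-of-thought prompt. The demonstration spells out its
# reasoning step by step, so the model imitates that format (and often
# becomes more accurate) on the new question.
prompt = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls.
5 + 6 = 11. The answer is 11.

Q: A cafeteria had 23 apples. They used 20 to make lunch and bought
6 more. How many apples do they have?
A:"""

# The prompt would then be sent to an LLM; this call is a placeholder,
# as the exact client API depends on the provider.
# response = some_llm_client.complete(prompt)
print(prompt)
```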
All of this leads to the deeply philosophical question of whether ChatGPT or other AI models can really "think," which in turn touches on consciousness and the human experience.
The Turing Test: A Measure of Intelligence?
In 1950, in his article "Computing Machinery and Intelligence," Alan M. Turing proposed an empirical test for machine intelligence, now called the Turing Test, designed to measure a machine's conversational behaviour against a human's. Turing framed it as an "imitation game": a human judge converses, by text alone, with a hidden computer and a hidden human, and if the judge cannot reliably tell which is which, the machine passes. A passed test, on this view, is evidence of artificial intelligence. Yet even if a machine could pass the Turing Test, does that mean it is truly thinking?
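The logic of the test is easy to express as a protocol, even if passing it is not. The harness below is a bare-bones simulation; both respondents are trivial placeholders, and a real test would involve a human judge and free-form conversation.

```python
import random

# Bare-bones imitation-game harness. Both respondents are placeholders;
# in a real test, one would be a human and the other an actual AI system.
def human_respondent(question: str) -> str:
    return "Honestly, I'd have to think about that."

def machine_respondent(question: str) -> str:
    return "Honestly, I'd have to think about that."

def imitation_game(judge, rounds: int = 1000) -> float:
    """Fraction of rounds in which the judge spots the machine."""
    correct = 0
    for _ in range(rounds):
        pair = [("human", human_respondent), ("machine", machine_respondent)]
        random.shuffle(pair)  # the judge doesn't know which is which
        answers = [respond("Can machines think?") for _, respond in pair]
        if pair[judge(answers)][0] == "machine":
            correct += 1
    return correct / rounds

# If the judge can do no better than chance (about 0.5), the machine passes.
print(imitation_game(lambda answers: random.randrange(2)))
```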
Many researchers argue that the Turing Test is not sufficient to establish the presence of intelligence. A machine that passes it, they claim, would still not actually be thinking; it would only be simulating thought. Turing foresaw this objection too, citing a 1949 speech by Professor Geoffrey Jefferson:
“Not until a machine could write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it”.
Jefferson contended that real thinking is much more than performing tasks such as writing a sonnet or composing a concerto. In his view, it is not the actions per se that set a human mind apart from a machine, but the emotions and thoughts standing behind them. Further, the machine would need not only to produce the work but also to know that it had created it, demonstrating awareness and self-reflection.
If we apply Jefferson's argument to ChatGPT, it becomes clear that even though ChatGPT generates poems, sonnets, essays, and other creative outputs, it does so without any thought, emotion, or self-awareness. All it does is process patterns it has learned from vast amounts of data and generate output, without feelings, personal experiences, or intentions.
For example, if a person asks ChatGPT to write a poem, it can produce one that follows the right structure and even evokes emotion in the reader. However, ChatGPT generates such output not as a result of experiencing emotions but as the result of sophisticated algorithms that predict the words and phrases likely to follow, based on the data it was trained on.
Unless there's a real, human-like understanding of the meaning of the words being used or emotions felt within the work created, no machine can be said to really "think" as a human does.
Consciousness and AI: The Argument from Phenomenology
ChatGPT and other new chatbots are so good at mimicking human interaction that they have prompted a question among some: is there any chance they are conscious? Whether AI can really think is inextricably linked with the question of consciousness, which is generally understood in terms of aspects such as understanding, self-awareness, and subjective experience.
Technologists broadly agree that AI chatbots are not self-aware just yet, but there is some thought that we may have to re-evaluate how we talk about sentience.
As stated above, Geoffrey Hinton acknowledges that at present AI lacks self-awareness and consciousness. He believes, however, that AI will gain self-awareness in time, and that human beings will then be the second most intelligent beings on this planet. Ilya Sutskever, a co-founder of OpenAI, the company behind ChatGPT, has speculated that the algorithms behind his company's creations might be "slightly conscious."
The real challenge is to determine whether AI is ever capable of possessing qualia, i.e. the intrinsic qualities of experience that give rise to subjective feelings, such as what it is like to eat a hamburger. AI systems like ChatGPT can respond with text appropriate to the context, mimicking human responses, but they do not have the experiences or consciousness that humans do.
Understanding the Limits: What ChatGPT Can't Do
ChatGPT is a powerful language model that can generate human-like responses to a wide range of prompts; however, it is important to understand that it does not truly think like a human and has several limitations.
ChatGPT is not capable of true creativity or original thought; it can only generate text based on patterns learned from its training data, whereas humans can come up with entirely new ideas and concepts. Additionally, ChatGPT lacks consciousness and subjective experience, which are crucial aspects of human thinking.
Does ChatGPT Truly Think?
While ChatGPT and other chatbots have come spectacularly close to mimicking human intelligence, they do not think as humans do. They lack the consciousness, subjective experiences, and self-awareness that allow real thought to occur and give it meaning. Since artificial intelligence is a rapidly developing area of research, such systems could soon become more advanced and potentially outperform human intelligence. However, as long as AI does not feel and understand emotion, consciousness, and intentionality, it remains a powerful tool but not a genuine thinker.
References
- Benjamin Thompson, Audio long read: How does ChatGPT 'think'? Psychology and neuroscience crack open AI large language models
- Giorgio Buttazzo, Artificial Consciousness: Utopia or Real Possibility?
- Geoffrey Hinton, Vector Institute for Artificial Intelligence
- John McCarthy, Biography & Facts
- Joshua Rothman, Why the Godfather of A.I. Fears What He's Built
- K. R. Chowdhary, Fundamentals of Artificial Intelligence
- Soccer-playing robots show how nimble AI-powered machines can be
- Richard A. Watson, René Descartes | Biography, Ideas, Philosophy
- Scott Pelley, "Godfather of Artificial Intelligence" Geoffrey Hinton on the promise, risks of advanced AI