IT’S BEEN HAILED as a cultural phenomenon and, at the moment, it’s all anyone can talk about. It’s ChatGPT, the language model created by OpenAI, a San Francisco-based artificial-intelligence company. We touched on the subject briefly before, but will take a more in-depth look today at the fastest-growing internet service ever (it reached 100 million users in January 2023, just two months after its launch at the end of November 2022).
IN THIS ARTICLE:
What exactly is ChatGPT?
What does it do?
What can you use it for?
Limitations
Implications
What exactly is it?
At its core, ChatGPT (GPT is short for Generative Pre-trained Transformer) is a large language model, which in turn is a type of neural network. To break it down further, a neural network is a type of machine learning algorithm modelled after the structure and function of the human brain, trained here on vast quantities of text (the GPT-3 model behind ChatGPT has some 175 billion parameters). The technology has been around a long time (since the 1980s), but the real advancement came in 2017, when a team of Google researchers invented the transformer, a neural-network architecture that can track where each word or phrase appears in a sequence. This was an important breakthrough because in language, the meaning of a word often depends on the words that come before or after it (in other words, context). By tracking this contextual information, transformers can process longer strings of text and pin down the meanings of words more accurately.
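The mechanism that lets each word “look at” its context is called attention, and its core can be sketched in a few lines of NumPy. This is a simplified illustration of the idea, not OpenAI’s actual implementation: each word is represented as a vector, and the model computes a weight for every pair of positions, so each word’s output is a context-aware blend of the others.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Simplified self-attention: every position attends to every other.

    Q, K, V are (seq_len, d) matrices of query/key/value vectors.
    The output mixes the value vectors according to contextual relevance.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)            # pairwise relevance of positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax per row
    return weights @ V, weights

# Toy example: a "sentence" of 3 words, each a 4-dimensional vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out, weights = scaled_dot_product_attention(x, x, x)
# Each row of `weights` sums to 1: a distribution over context positions,
# telling each word how much to draw from every other word.
```

In a real transformer this operation is repeated across many layers and “heads”, but the principle is the same: meaning is assembled from weighted context rather than from each word in isolation.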
What does it do?
ChatGPT uses state-of-the-art machine learning techniques to generate human-like responses to natural language queries and conversations. In other words, it’s a robot that talks like a human. It is an artificial intelligence program designed to understand what people say in written form and answer back in a conversational manner, much like a human would. As mentioned, it has been trained on a massive amount of textual data, which allows it to generate coherent and contextually appropriate responses to a wide range of queries and topics. Strictly speaking, what it produces are statistically probable outputs: text that reads like humanlike language and thought.
Furthermore, the dialogue format makes it possible for ChatGPT to have a conversation with you, like a real person would, and even answer follow-up questions. If it gets something wrong, it will try to correct itself; if you say something that’s not true, it might challenge you on it; and if you ask it to do something inappropriate, it can refuse.
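Under the hood, follow-up questions work because the whole conversation so far is passed back to the model on every turn. A minimal sketch of that idea, using the role-tagged message format OpenAI’s chat API popularised (the message contents here are made-up examples, and no live API call is made):

```python
# A conversation is just a growing list of role-tagged messages.
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# The model's reply is appended before the next user turn...
conversation.append(
    {"role": "assistant", "content": "The capital of France is Paris."}
)

# ...so a follow-up like this one arrives with the context it needs.
conversation.append(
    {"role": "user", "content": "And what is its population?"}
)

# "its" is only resolvable because the earlier turns are still present;
# send only the last message and the pronoun has nothing to refer to.
```

This is why long chats feel coherent: the model isn’t remembering you between turns so much as re-reading the transcript each time.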
The third iteration of the technology, GPT-3, is even more impressive than its predecessors in its ability to produce human-like text. GPT-3 can answer questions, summarise documents, generate stories in different writing styles, and translate between English, French, Spanish, Japanese and more. Its imitation of human behaviour is so convincing that it can be unsettling at times.
What can you use it for?
Its own explanation of what it could be used for includes: providing quick and accurate answers to a wide range of questions (useful for students, researchers, and anyone who needs information on a specific topic); language translation (English, Spanish, French, German, Italian, Portuguese, Dutch, Russian, Japanese, Chinese, Korean, Arabic, and more); creative writing (it can generate creative writing prompts and even complete stories, useful for writers looking for inspiration or seeking to improve their writing skills); mental health support (by engaging in conversations and providing resources for individuals experiencing mental health issues, which can be particularly useful for people who are hesitant to seek help from a human therapist).
So far, P has been using it for recipes (spicy lentils, which, by the way, were really good); counting calories for various food items and specific dishes; and a few other things. I’ve used it to generate synonyms and definitions while writing, and for drafting some business emails. On the whole, I haven’t really found many uses for it yet, but P seems to come up with new ideas every day.
Limitations
ChatGPT is not connected to the internet, and it can produce incorrect answers, which means that if you want to use it in your work, you will need to fact-check rigorously. It has limited knowledge of the world and of events after 2021, and may sometimes produce harmful instructions or biased content. ChatGPT will even occasionally make up facts, what OpenAI describes as hallucinations.
Another thing to understand is what happens when two or more people ask it the exact same question. ChatGPT’s responses are determined by algorithms and data designed to generate the most appropriate and accurate response possible, based on the input provided; but its answers are sampled with a degree of built-in randomness, so even an identical question can produce noticeably different wording from one person, or one attempt, to the next. And if the context or phrasing of the question differs, the response will usually differ too, to reflect the nuances of the input. Something to remember if you’re thinking of using it to generate writing for your work.
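The variety in its answers comes from how the next word is chosen. A toy sketch of the idea (hypothetical word scores, not OpenAI’s code): the model assigns a score to every candidate next word, and a “temperature” setting controls whether the top score is always taken or a weighted random draw is made.

```python
import numpy as np

def sample_next_token(logits, temperature, seed):
    """Pick the next token from model scores (logits).

    temperature == 0 -> always the single highest-scoring token (deterministic);
    temperature > 0  -> weighted random draw, so identical prompts can diverge.
    """
    logits = np.asarray(logits, dtype=float)
    if temperature == 0:
        return int(np.argmax(logits))
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # softmax over scaled scores
    probs /= probs.sum()
    return int(np.random.default_rng(seed).choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.5]   # made-up scores for three candidate words
greedy = [sample_next_token(logits, 0, seed) for seed in range(5)]
sampled = [sample_next_token(logits, 1.0, seed) for seed in range(5)]
# `greedy` always picks token 0; `sampled` can vary with the random draw.
```

Chat services typically run with a non-zero temperature, which is why the same prompt rarely yields the same essay twice.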
Implications
On March 8, 2023, Noam Chomsky wrote an opinion piece in The New York Times, The False Promise of ChatGPT, which voiced concerns that “the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge”.
The article goes on to say:
“These programs have been hailed as the first glimmers on the horizon of artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.
That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments. The Borgesian revelation of understanding has not and will not — and, we submit, cannot — occur if machine learning programs like ChatGPT continue to dominate the field of A.I.”
The general idea of the article is that while ChatGPT is great at searching for patterns in a large amount of data and generating the most probable answer in response, it lacks reasoning and the ability to create explanations. Understanding language properly is not easy, and it is not something a machine can learn just by being exposed to a lot of information. ChatGPT is also incapable of moral thinking; it is not conscious or self-aware, and it cannot hold personal perspectives or form its own opinions or beliefs. The main idea of machine learning is to describe and predict things, without trying to explain why they happen or how they work in the real world. In other words, ChatGPT is limited by its inability to provide causal explanations of how the world works, which is a critical component of true intelligence.