
Explainer: What you should know about ChatGPT and other AI Large Language Models

Over the past year, there’s been increasing debate about the nature and classification of Large Language Models (LLMs) like ChatGPT, an artificial intelligence chatbot developed by OpenAI and released in November 2022. Are these systems truly representative of artificial intelligence (AI)? Do they pose a threat to humans? The answers, as with many things in the complex world of technology, are not as straightforward as they might seem.

What is a Large Language Model?

An LLM is a type of computer program that’s been trained to understand and generate human-like text. It’s a product of a field in computer science called AI, specifically a subfield known as natural language processing (NLP). ChatGPT (which is built on a family of models that includes GPT-3, GPT-3.5, and GPT-4) is currently the most popular and widely used LLM.

If you’ve ever started typing a text message on your smartphone and it suggests the next word you might want to use (predictive text) or corrects a misspelled word (autocorrect), you’ve used a basic form of a language model. LLMs apply that concept on a larger and more complex scale.

An LLM is trained on a broad and diverse range of internet text. It then uses a machine learning process, including advanced statistical analysis, to identify patterns in the data and uses that information to generate responses for a human user. The training sets are enormous. The older, free version of ChatGPT (GPT-3.5) was trained on the equivalent of over 292 million pages of documents, or roughly 499 billion words. The model uses 175 billion parameters (the numerical connection weights in its neural network, which are adjusted during training).
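To make the idea of “learning patterns from text” a little more concrete, here is a deliberately tiny sketch in Python. It is not how ChatGPT actually works (ChatGPT relies on a massive neural network rather than simple word counts), but it illustrates the basic principle behind predictive text: count which words tend to follow which, then use those counts to guess the next word. The sample sentences and function names here are invented purely for illustration.

    from collections import defaultdict, Counter

    # A deliberately tiny "training set." Real LLMs are trained on hundreds
    # of billions of words of internet text, not a few sentences.
    training_text = (
        "the cat sat on the mat . "
        "the dog sat on the rug . "
        "the cat chased the dog ."
    )

    # Count how often each word follows each other word (a simple "bigram" model).
    next_word_counts = defaultdict(Counter)
    words = training_text.split()
    for current_word, following_word in zip(words, words[1:]):
        next_word_counts[current_word][following_word] += 1

    def predict_next_word(word):
        """Return the word that most often followed `word` in the training text."""
        candidates = next_word_counts.get(word)
        if not candidates:
            return "<unknown>"
        return candidates.most_common(1)[0][0]

    # Given a word, the "model" predicts a likely continuation based only on
    # patterns it has seen before.
    print(predict_next_word("the"))  # -> "cat" (tied with "dog"; "cat" was seen first)

A real LLM does something loosely analogous, but instead of counting word pairs it adjusts billions of parameters so that, given everything typed so far, it can predict a plausible next word, and then the word after that, and so on.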

When you interact with a large language model, you can input a piece of text, like a question or a statement (known as a “prompt”), and the model will generate a relevant response based on what it has learned during its training. For example, you can ask it to write essays, summarize long documents, translate languages, or even write poetry.
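For readers curious what “inputting a prompt” looks like when a programmer uses an LLM, here is a brief sketch using OpenAI’s Python library. This is an illustration under stated assumptions, not an official recipe: it assumes version 1.x of the openai package is installed, that an API key is set in the environment, and that the example model name is available to your account.

    # Sketch only: assumes `pip install openai` (v1.x) and that the
    # OPENAI_API_KEY environment variable holds a valid API key.
    from openai import OpenAI

    client = OpenAI()  # picks up the API key from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model name; availability varies
        messages=[
            {
                "role": "user",
                "content": "Summarize the history of the printing press in two sentences.",
            }
        ],
    )

    # The generated text comes back inside a structured response object.
    print(response.choices[0].message.content)

Behind that one function call, the model is performing the same kind of next-word prediction described above, repeated over and over until a complete response has been generated.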

The output produced by such models can often be astoundingly impressive. But LLMs can also produce “hallucinations,” a term for generated content that is nonsensical or unfaithful to the provided source content. LLMs do not have an understanding of text like humans do and can sometimes make mistakes or produce outputs that range from erroneous to downright bizarre. LLMs also don’t have beliefs, opinions, or consciousness—they merely generate responses based on patterns they’ve learned from the data they were trained on.

In short, an LLM is a sophisticated tool that can help with tasks involving text, from answering questions to generating written content.

Are LLMs truly AI?

Before considering whether LLMs qualify as AI, we need to define how the term AI is being used. In broad terms, AI refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning, reasoning, problem-solving, perception, and the ability to use human languages. The key term is simulation. AI systems do not have consciousness, so they cannot perform such rational functions as thinking or understanding, nor possess such attributes as emotions and empathy.

In the strictest sense, LLMs like GPT-3 fall under the umbrella of AI, specifically the subgroup known as generative AI. LLMs learn from large datasets, recognize patterns in human language, and generate text that mirrors human-like understanding. However, there’s a distinction to be made between what is often referred to as “narrow AI” and “general AI.”

Narrow AI systems, also known as weak AI, are designed to perform a specific task, like language translation or image recognition. Although they may seem intelligent, their functionality is limited to the tasks they’ve been programmed to do. ChatGPT and similar LLMs fall into this category.

In contrast, general AI, also referred to as strong AI, represents systems that possess the ability to understand, learn, adapt, and implement knowledge across a broad range of tasks, much like a human being. This level of AI, which would essentially mirror human cognitive abilities, has not yet been achieved. Some Christians believe that AI will never reach that level because God has not given man the power to replicate human consciousness or reasoning abilities in machines.

While LLMs are a form of AI, they don’t possess a human-like understanding or consciousness. They don’t form beliefs, have desires, or understand the text they generate. They analyze input and predict an appropriate output based on patterns they’ve learned during training.

Are LLMs a threat?

LLMs are a category of tools (i.e., devices used to perform a task or carry out a particular function). Like almost all tools, they can and will be used by humans in ways that are both positive and negative. 

Many of the concerns about AI are misdirected, since they are fears based on “general AI.”  This type of concern is reflected in science fiction depictions of AI, where machines gain sentience and turn against humanity. However, current AI technology is nowhere near achieving anything remotely reflecting sentience or true consciousness. LLMs are also not likely to be a threat in the way that autonomous weapons systems can be. 

This is not to say that LLMs do not pose a danger; they do, in ways that are similar to social media and other internet-related functions. Some examples are:

Deepfakes: Generative AI can create very realistic fake images or videos, known as deepfakes. These could be used to spread misinformation, defame individuals, or impersonate public figures for malicious intent.

Phishing attacks: Phishing is the fraudulent practice of sending emails or other messages purporting to be from reputable companies in order to induce individuals to reveal personal information such as passwords and credit card numbers. AI can generate highly personalized phishing emails that are much more convincing than traditional ones, potentially leading to an increase in successful cyber attacks.

Disinformation campaigns: AI could be used to generate and spread false news stories or misleading information on social media to manipulate public opinion.

Identity theft: In 2021 alone, 1,434,698 Americans reported identity theft, with 21% of the victims reporting they had lost more than $20,000 to such fraud. AI could be used to generate convincing fake identities for fraudulent purposes.

While there are also many positive uses for generative AI, ongoing work in AI ethics and policy is needed to limit and prevent such malicious uses.

As the ERLC’s Jason Thacker says, a Christian philosophy of technology is wholly unique in that it recognizes 1) that God has given humanity certain creative gifts and the ability to use tools, and 2) that how we use these tools forms and shapes us. “Technology then is not good or bad, nor is it neutral,” says Thacker. “Technology, specifically AI, is shaping how we view God, ourselves, and the world around us in profound and distinct ways.”

See also: Why we (still) need a statement of principles for AI


