
My kids have replaced me with ChatGPT

How generative AI will disrupt business in 2023

By Albert Meige, Gregory Renard

[This Viewpoint is the second in a series on AI and ChatGPT inspired by an article originally posted on LinkedIn that was 100% AI-generated from a single prompt. The first Viewpoint was cowritten with an AI using an iterative process. This Viewpoint offers a more in-depth critical analysis of the technology written by a human author and proofread and enhanced by an AI.]

 

Recently, one of my twin 15-year-old daughters asked me for help with her homework. Unfortunately, I was already busy helping the other one. When I finally came to her 30 minutes later, she told me that it was too late and that she had asked ChatGPT for help. Through a series of queries and answers, the conversational bot had explained her physics lesson on optics and helped her solve the problem she had been assigned. I checked the chat and the results. The explanation and guidance were flawless.

I wouldn’t call this “cheating” — she had not asked the bot to do the work for her. Rather, it was through her active interaction with the bot that she gained a better understanding of the problem and how to solve it. Beyond showing the potential power of ChatGPT, this personal anecdote reveals what’s conceivable in human-artificial intelligence (AI) interaction.

We predict that 2023 is going to be the tipping point for AI-enhanced business. Below we take a closer look at what GPT is, how it works, its limitations and risks, and the short-term prospects for application.

[Sidebar: How kids replaced me]

WHAT IS GPT?

GPT (Generative Pretrained Transformer) is research laboratory OpenAI’s large language model (LLM) that underpins several game-changing applications, including ChatGPT and Dall-E.

OpenAI recently valued at US $29 billion

Founded in 2015 by a group including Elon Musk, Sam Altman, Greg Brockman, and Ilya Sutskever, OpenAI has received investment from Microsoft and LinkedIn cofounder Reid Hoffman, as well as well-known Silicon Valley financier Peter Thiel. According to recent reports from the Wall Street Journal, OpenAI is in talks to sell shares through a tender offer, potentially valuing the company at a staggering $29 billion — a significant increase from the company’s previous valuation of $14 billion in 2021, and particularly impressive considering the current economic climate.

Microsoft plans to integrate OpenAI’s GPT technology into Bing, with the search engine potentially using it to generate full sentences and source information for search queries in March 2023. However, GPT is not designed to search the Internet or find real-time information, so its use in this context may be limited.

GPT-3 two orders of magnitude larger than its predecessor

The third version of GPT (GPT-3) was launched in July 2020 through an API with 175 billion parameters[1] compared to just 1.5 billion parameters for GPT-2, which launched in February 2019. This spectacular increase — two orders of magnitude in 18 months — makes GPT-3 the largest existing language model to date. It has 10 times more capacity than Microsoft’s Turing Natural Language Generation (T-NLG), introduced in early 2020, which was, at that time, the largest model ever published, with 17 billion parameters. And to be clear: the number of parameters is directly related to the precision of the model. While many analysts are already speechless about current results delivered by GPT-3, keep in mind that GPT-4, likely to come out in 2023, is expected to get trillions of parameters — another 10x increase.

As stated in OpenAI’s original paper,[2] the volume of data used to train the model is also gigantic. According to the paper, GPT-3 is trained using a combination of data from Common Crawl (a data set containing a large number of Web pages and documents), WebText2 (a large data set of Web pages), Books1 and 2 (data sets containing books), and Wikipedia. The majority of the data, 60%, comes from Common Crawl and comprises 410 billion byte-pair-encoded (BPE) tokens.[3] The other data sets make up the remaining 40% and include 19 billion tokens from WebText2, 12 billion tokens from Books1, 55 billion tokens from Books2, and 3 billion tokens from Wikipedia. In total, GPT-3 is trained on hundreds of billions of words and is also able to code in various languages, including CSS, JSX, and Python. Its comprehensive training allows it to perform various language tasks without the need for additional training.
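
To get a concrete feel for what a byte-pair-encoded token is, OpenAI's open source tiktoken library can be used to split text into the kind of BPE tokens counted above. The following is a minimal sketch, assuming tiktoken is installed; the exact token count for any given sentence will vary.

```python
# Minimal sketch: inspecting BPE tokens with OpenAI's open source tiktoken library.
# Assumes `pip install tiktoken`; the point is the token count, not specific IDs.
import tiktoken

enc = tiktoken.get_encoding("gpt2")  # the byte-pair encoding used by GPT-2/GPT-3

text = "GPT-3 is trained on hundreds of billions of words."
tokens = enc.encode(text)

print(len(tokens), "tokens")       # roughly 0.75 words per token is a typical ratio
print(enc.decode(tokens) == text)  # encoding then decoding round-trips the text
```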

GPT was trained by humans and gives better results with humans than alone — but for how long?

According to OpenAI’s blog, GPT was trained using a multi-iteration three-step approach:

  1. Supervised learning — human trainers engaged in conversations with another human pretending to be the AI assistant. This conversation data was fed into the model to train it.

  2. Reward model — humans engaged in conversations with the trained model and ranked responses of the AI in terms of accuracy or relevance to create a so-called reward model.

  3. Reinforcement learning — fine-tuning the model using the reward model and OpenAI’s Proximal Policy Optimization.

Training the model is therefore a human-intensive task. And as we will see, the tools built on top of GPT give far better results when working together with humans than on their own. However, it is valid to wonder how long humans will need to remain in the loop.
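
To make the three steps above more concrete, here is a deliberately toy sketch of the workflow in Python. It is not OpenAI's implementation: real systems train large neural networks with Proximal Policy Optimization, whereas here simple dictionaries and scores stand in for the policy and the reward model, purely to illustrate how the stages fit together.

```python
# Toy sketch of the three-step training workflow (NOT OpenAI's implementation).

# Step 1: supervised learning. Human demonstrations map a prompt to an ideal answer.
demonstrations = {
    "Explain refraction.": "Refraction is the bending of light as it passes between media.",
}
policy = dict(demonstrations)  # the fine-tuned "policy" starts from the demonstrations

# Step 2: reward model. Humans rank candidate answers; we record a score per answer.
ranked_comparisons = [
    ("Explain refraction.",
     "Refraction is the bending of light as it passes between media.",  # preferred
     "Refraction is when light gets tired and slows down."),            # rejected
]
reward_scores = {}
for prompt, preferred, rejected in ranked_comparisons:
    reward_scores[(prompt, preferred)] = 1.0
    reward_scores[(prompt, rejected)] = 0.0

def reward(prompt: str, answer: str) -> float:
    """Stand-in for the learned reward model; unseen answers get a neutral score."""
    return reward_scores.get((prompt, answer), 0.5)

# Step 3: reinforcement learning. Generate candidates and keep whichever one the
# reward model prefers (a crude stand-in for PPO's gradient-based policy updates).
def rl_step(prompt: str, candidates: list[str]) -> str:
    best = max(candidates, key=lambda answer: reward(prompt, answer))
    policy[prompt] = best
    return best

print(rl_step("Explain refraction.", [
    "Refraction is when light gets tired and slows down.",
    "Refraction is the bending of light as it passes between media.",
]))
```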

Several applications have been built on top of GPT, including ChatGPT and Dall-E

OpenAI launched the ChatGPT chatbot on 30 November 2022. Based on GPT-3.5, it can produce human-like responses to questions or prompts by simulating a conversation. The bot is trained on a large amount of text written by people, allowing it to generate responses that sound natural. In addition to GPT-3.5, ChatGPT also leveraged OpenAI’s moderation API to protect the chatbot from being exposed to, or generating, offensive content.

To use ChatGPT, you can sign up for an account on chat.openai.com and start exploring the tool’s capabilities. ChatGPT offers guidance and examples for first-time users to help you get started. Currently, ChatGPT is available for free, although there may be a fee implemented in the future due to the high cost of computing.
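
For developers, the GPT-3.5-era models behind ChatGPT and the moderation endpoint mentioned above are also exposed through OpenAI's API. The sketch below uses the openai Python package in its early-2023 (0.x) form and assumes an API key is available in the OPENAI_API_KEY environment variable.

```python
# Minimal sketch using the openai 0.x Python package (assumes OPENAI_API_KEY is set).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = "Explain Snell's law to a 15-year-old in three sentences."

# Check the prompt against OpenAI's moderation endpoint before sending it.
moderation = openai.Moderation.create(input=prompt)
if moderation["results"][0]["flagged"]:
    raise ValueError("Prompt was flagged by the moderation endpoint.")

# Ask a GPT-3.5-era completion model to answer the prompt.
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=200,
    temperature=0.7,
)
print(completion["choices"][0]["text"].strip())
```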

Dall-E is the neural network-based image-generation system developed by OpenAI and is also built on top of GPT. It is trained to generate images from textual descriptions, using a data set of text–image pairs. The system was introduced in January 2021; its successor, Dall-E 2, followed in April 2022. Dall-E can generate a wide range of images, including photorealistic, highly stylized, and surreal images. It has the potential to be used in various applications such as design and advertising. Dall-E competes with other applications such as MidJourney.

HOW GPT & ITS APPLICATIONS ARE ALREADY DISRUPTING BUSINESS — AND IT’S JUST THE BEGINNING

One of the first existing applications that comes to mind as being disrupted by GPT is “search,” which today is dominated by Google. However, there are a couple of things to keep in mind. First, ChatGPT is not a live Web search engine. Instead, it is a language model that was trained on a large amount of data back in 2021. ChatGPT therefore has no knowledge of events after 2021. At least, not yet. Second, Google has also developed its own AI technologies, including LaMDA, its large neural language model trained on over 1.56 trillion words of conversation data and online pages using the Transformer architecture, with up to 137 billion parameters. We can therefore expect some moves by Google in the coming months.

That said, applications and use cases already exist for GPT, and ChatGPT in particular, and these applications have the potential to effect massive disruptions within the next few years. Some of the main use cases include:

Human-like chat

AI-powered chat systems that can replicate human conversation have the potential to disrupt traditional business by automating, for example, customer service tasks.

Write content

AI can now assist humans in the writing of sophisticated emails, articles, letters (even pretending to be a lawyer), poems, and songs. These AI-generated materials can be created quickly and efficiently, often with a level of quality comparable to human-generated content. The content can be generated using simple prompts or through more sophisticated interactions with the bot to get far more convincing results.

Reformulate, improve, expand, or synthesize

By providing a starting point of existing (human) content, GPT can be used to reformulate, improve, expand, or synthesize new material. This has numerous potential uses, such as creating more efficient and accurate summaries of long documents, generating new ideas or perspectives based on existing content, or even automating the creation of marketing copy or other written materials. GPT has enormous potential to streamline and improve the content creation processes.

Write & debug computer code

GPT can write computer code, which involves providing GPT with the desired functionality and then allowing it to generate code that meets those specifications. While GPT’s code-generation capabilities are still in the early stages of development, it has the potential to greatly streamline the coding process (see, for example, Figure 1).

Fig 1 — Example of Python code automatically generated with ChatGPT
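
The figure itself is not reproduced here. As an illustration, a trial-division routine of the kind ChatGPT produces for such a prompt (a reconstruction, not the exact output shown in the figure) looks roughly as follows; it is referenced again in the discussion of factual errors below.

```python
# Illustrative reconstruction of the kind of code shown in Figure 1 (not the exact
# ChatGPT output): trial-division prime factorization of an integer.
def prime_factors(n: int) -> list[int]:
    factors = []
    divisor = 2
    while divisor * divisor <= n:
        while n % divisor == 0:
            factors.append(divisor)
            n //= divisor
        divisor += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(2023))  # [7, 17, 17]; note that 2023 = 7 * 17 * 17, not 3 * 673
```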

Boost creativity

It has long been believed that creativity was only for humans and that computers would never be creative. We don’t think this is true any longer. Artists and creators can now leverage AI in an interactive manner to enhance their human creativity and rapidly test ideas. For example, what would a “tribute to Iron Man elegant suit” look like (see Figure 2)?

Fig 2 — Suit designed with MidJourney with the prompt “an elegant business suit that would be a tribute to iron man, full body, haute-couture, photorealistic, ultra detailed 4k -- ar 2:3”

The applications above are far from an exhaustive list, particularly because unexpected properties have started to emerge from these tools and will keep emerging in the future. These emerging properties include (but are not limited to) the following.

Fill in the blanks

GPT-3 can fill gaps in prompts by using its knowledge and understanding of language and context to generate relevant and appropriate responses. It has been trained on a vast amount of data, allowing it to make informed predictions about what words or phrases are likely to come next in a given context. This enables it to “fill in the blanks” and provide coherent and sensible responses to prompts that may be incomplete or ambiguous.

Visual reasoning

A recent scientific study[4] compared the reasoning abilities of humans to those of the LLM GPT-3 and found that the model has a strong capacity for abstract pattern induction and can find zero-shot solutions to a broad range of analogy problems. In some settings, the model even surpassed human capabilities. These findings suggest that LLMs like GPT-3 may have acquired an emergent ability to imitate human cognitive capacities given sufficient training data.

Visual trends understanding

GPT-3 has acquired the ability to analyze and understand visual trends, including the look and feel of a specific time period in history. This is made possible by its training on a massive corpus of data, which allows it to recognize patterns and make predictions about visual trends (see Figure 3).

Fig 3 — Photo generated using MidJourney and the prompt “a 1920 photo of iron man”

Not only does the technology already enable new uses; its emergent properties will keep enabling further unexpected use cases. Finally, adoption has been amazingly rapid: ChatGPT was launched on 30 November 2022, and by 4 December, just five days after its release, it had already passed 1 million users.

GPT & OTHER LLMs STILL HAVE LIMITATIONS & RISKS, BUT FOR HOW LONG?

Tools such as GPT, and the applications built on it, have reached an amazing level of quality and will keep progressing in the coming years, enabling many new applications and use cases that will disrupt business and content generation. However, these tools still have a number of limitations and risks. These include the quality of the AI-generated content; its ethical, moral, and social consequences; cybersecurity; and regulation.

Unreliable content quality

An application such as ChatGPT generates content that is not always reliable. This can be for a variety of reasons, including:

  • The AI may not properly understand the question asked by the user. For example, as a lover of anagrams, I asked ChatGPT to find anagrams of “quantum computing,” and it returned “acting mouthquins,” which is not an anagram of “quantum computing” and means nothing.

  • ChatGPT makes factual errors, and sometimes on very surprising things. For example, when I asked, “What is the prime factorization of 2023?” the bot returned “3 x 673,” which is obviously wrong, as 3 x 673 = 2019. This is even more surprising, since the code ChatGPT generated in Figure 1 is a valid algorithm to compute prime factors of any number.

  • ChatGPT sometimes also makes factual errors because the training data is from before 2021. And when ChatGPT does not find the answer in its knowledge base, it may guess without telling the user that the answer is just a guess. For example, when I asked “What is ChatGPT in 30 words?” it correctly guessed that it was a chatbot using AI to produce human-like text. However, it also stated that it was part of the GPT-n series, which it is not.

  • Even when ChatGPT does not misunderstand the question and does not make factual errors, the generated content can be quite vague and generic. For example, the first article we posted on LinkedIn about ChatGPT was 100% AI-generated. The first Viewpoint was also AI-generated, but was the result of an iterative process between the AI and the human author. This current Viewpoint was written by a human author with some inputs and improvement by the AI. While some could believe that the content of the first article was written by a human because it is properly written, the content itself is very simple and generic. In fact, one reader noted surprise at “the naïve tone” used and referred to the article as “a bit of a captain obvious article.”

We therefore conclude that while ChatGPT is a powerful tool that will increase productivity when it comes to producing content, the quality of the content will very much depend on how much humans remain in the loop — and not only during the tool’s training.

Ethical, moral & social risks

ChatGPT also raises several ethical, moral, and social concerns. Among them is energy use: several thousand petaflop/s-days of compute were required to train GPT-3. According to a research paper by Nvidia, Microsoft Research, and Stanford University, “Training GPT-3 with 175 billion parameters would require approximately 288 years with a single V100 NVIDIA GPU.”[5] On the positive side, once the model is trained, its use requires a reasonably moderate amount of energy.

The second important risk relates to the usual algorithmic biases. Algorithmic bias refers to systematic and unintended differences in the way a computer program treats individuals or groups. It can occur when the data used to train the algorithm is biased, leading to inaccurate or unfair results. And although a lot of effort has gone into avoiding such biases, they still exist and can be very surprising and disturbing, as Figure 4 illustrates.

Fig 4 — Illustration of the algorithmic bias of ChatGPT — whether people from certain countries are more likely to be put in jail

Another ethical issue relates directly to the ease with which content can be generated. AI-generated content has already started to flood the Internet. In addition, as generative AI techniques become more advanced and widely available, the potential for deepfakes to be used for nefarious purposes is expected to reach an unprecedented level. This could include the spread of misinformation, the manipulation of public opinion, and the harassment or defamation of individuals. Fortunately, countermeasures are available, such as GPTZero, a tool meant to detect whether or not content is AI-generated.
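
GPTZero's exact method is not public, but detectors of this kind typically score text on statistical signals such as perplexity and “burstiness” (variation in sentence length, which tends to be higher in human writing). The following is a toy illustration of the burstiness idea only; it is a sketch, not GPTZero's algorithm.

```python
# Toy illustration of one signal used by AI-text detectors (NOT GPTZero's algorithm):
# human writing tends to show more "burstiness", i.e. larger variation in sentence
# length, than model-generated text.
import re
import statistics

def burstiness(text: str) -> float:
    lengths = [len(s.split()) for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

sample = ("The cat sat. It then proceeded to chase an astonishingly fast red dot "
          "across the entire living room floor. It failed.")
print(f"burstiness score: {burstiness(sample):.2f}")  # higher suggests more human-like variation
```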

Cybersecurity

We believe that tools such as GPT can be leveraged to help conduct cyberattacks or other forms of crime, such as writing phishing emails or malware code. ChatGPT is not supposed to help commit crimes, and if you ask it to, it will refuse (returning an error message, as shown in Figure 5), because it is protected to some extent by OpenAI’s moderation API, which prevents the bot from being exposed to harmful content. However, it is relatively easy to bypass the safeguards and jailbreak the AI, as illustrated in Figure 6, where I pretended that we had to complete the text for comedians on stage.

Fig 5 — Trial attempt to get ChatGPT to help the user commit a crime
Fig 6 — Example of bypassing the safeguards and getting ChatGPT to help me commit a crime

Conclusion

AI will not replace you, but a person or an organization using AI will

We are convinced that 2022 was a tipping point when it comes to AI. It was the year when tools such as ChatGPT, Dall-E, MidJourney, and Dream Studio were released. The technology is impressive, even if it is still far from perfect. It was also the year these AI tools became mainstream, moving from being accessible through an API to being available as Web-based apps. It was the beginning of a new era, and here is what is likely to happen in 2023:

  • Quality. Both the number of parameters and the amount of data used to train the language model will keep increasing rapidly, which will lead to even more impressive and relevant results.

  • Adoption. Despite the fast adoption observed in 2022, tools such as ChatGPT are still mostly used by researchers, artists, geeks, and early adopters. They will become far more mainstream in 2023, both among individuals and organizations, and we expect very rapid, exponential growth of the user base.

  • Usage. As the number of users increases, individuals and organizations will invent new use cases and will continue to learn how best to use the tools. In the world of strategy consulting, for example, ChatGPT is nowhere close to being able to make a strategic analysis of a situation and write an article based on its findings. However, consultants who have learned how to best formulate their prompts will save a lot of time preparing their slides.

  • Regulation. Great uncertainty and legal voids still exist around these technologies. Discussions continue, for example, around the rights surrounding AI-generated images. One could argue that AI models have been trained in more or less the same way a human artist is trained, by analyzing and copying other artists … but on an unprecedented scale. Also, as we have stated, AI-generated content has already started flooding the Web, and countries may decide to regulate this trend. China, for example, will enforce its “deep synthesis” regulation from 10 January 2023; one consequence is that it will become mandatory to disclose whether or not content is AI-generated.

Generative AI is definitely a powerful tool and will continue to be so, but it’s important to remember that it is just a tool. It is up to the artist, writer, or strategist to use it creatively and with intention. The value of a work of art or a well-crafted strategy lies in the eye, brain, and intention of the creator, not just the tools they use. So while AI may make it easier for anyone to generate content, it is the humans behind it who will ultimately determine its value and impact. I believe that artists, writers, and consultants still have bright futures ahead of them, even with the advent of AI.

We believe the time has come for serious AI disruption, and more specifically, for disruption by AI-enhanced organizations and humans. AI now has a place in every student’s toolkit.

Notes

[1]    In AI, a parameter is a calculation that applies a greater or lesser weighting to some aspect of the data. The number of parameters is an indicator of the scale and precision of the model.

[2]    Brown, Tom B., et al. “Language Models are Few-Shot Learners.” Proceedings of the 34th International Conference on Neural Information Processing Systems. ACM, December 2020.

[3]    BPE is a data compression technique that works by replacing frequently occurring pairs of bytes in a data set with a single new symbol. For example, if the byte pair “th” appears frequently in a data set, it can be replaced by a single symbol that does not otherwise occur in the data. The goal of BPE is to reduce the size of the data set by merging frequently occurring byte pairs into single symbols. In the case of GPT-3, the training data consists of 410 billion tokens that have been byte-pair encoded, meaning that frequently occurring byte pairs have been merged to reduce the size of the data set.

[4]    Webb, Taylor, Keith J. Holyoak, and Hongjing Lu. “Emergent Analogical Reasoning in Large Language Models.” University of California, 19 December 2022.

[5]    Narayanan, Deepak, et al. “Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM.” Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, ACM, November 2021.

[The cover image for this Viewpoint was generated by the AI MidJourney via a prompt generated by AI ChatGPT: “an illustration that captures the concept of an AI system that can generate human-like text. Showcase the technology’s ability to create text potential to disrupt industries, the profile of a cyborg woman looking to the right. Convey a sense of innovation and the future….”]
