
The hype about generative AI is coming to an end – and now the technology could actually be useful

Less than two years ago, the launch of ChatGPT set off a wave of hype around generative AI, with some saying the technology would spark a fourth industrial revolution and completely reshape the world as we know it.

In March 2023, Goldman Sachs predicted that 300 million jobs would be lost or degraded due to AI. It seemed like a massive change was about to happen.

Eighteen months later, generative AI is not changing the business world. Many projects using the technology are being canceled, such as McDonald’s attempt to automate drive-thru ordering, which went viral on TikTok after producing comical failures. Government efforts to develop systems to aggregate public input and calculate welfare entitlements have suffered the same fate.

So what happened?

The AI hype cycle

Like many new technologies, generative AI follows a path known as the “Gartner hype cycle,” first described by the American technology research firm Gartner.

This widely used model describes a recurring process in which the initial success of a technology leads to inflated public expectations that ultimately fail to materialize. The initial “peak of inflated expectations” is followed by a “trough of disillusionment,” then a “slope of enlightenment,” which finally reaches a “plateau of productivity.”


[Figure: The Gartner hype cycle. The Conversation, CC BY]

A Gartner report published in June listed most generative AI technologies as either at the peak of inflated expectations or still trending upward. The report argued that most of these technologies are still two to five years away from being fully productive.

Many convincing prototypes of generative AI products have been developed, but their practical implementation has been less successful. A study published last week by the American think tank RAND showed that 80% of AI projects fail – more than twice as many as non-AI projects.

Shortcomings of current generative AI technology

The RAND report lists many difficulties with generative AI, ranging from high investment requirements in data and AI infrastructure to a lack of the necessary human talent. However, the unusual nature of GenAI’s limitations poses a critical challenge.

For example, generative AI systems can solve some highly complex university admissions tests, but fail at very simple tasks. This makes it very difficult to assess the potential of these technologies, leading to false confidence.

If it can solve complex differential equations or write an essay, it should also be able to take simple drive-thru orders, right?

A recent study has shown that the capabilities of large language models such as GPT-4 do not always match expectations. In particular, in cases where the stakes are high and incorrect answers can have catastrophic consequences, more powerful models performed significantly worse.

These results suggest that these models can inspire false confidence in their users. Because they answer questions fluently, people may reach optimistic conclusions about their abilities and use the models in situations for which they are not suited.

Experience from successful projects shows that it is difficult to get a generative model to follow instructions. For example, Khan Academy’s Khanmigo tutoring system often revealed the correct answers to questions even though it was instructed not to do so.

Why is the hype about generative AI not over yet?

There are several reasons for this.

First, despite its challenges, generative AI technology is improving rapidly, with scale and size being the main drivers of this improvement.

Research shows that the size of language models (number of parameters) as well as the amount of data and computational power used for training contribute to improved model performance. In contrast, the architecture of the neural network driving the model appears to have minimal impact.
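The scaling relationship described above is often summarised as a power law: predicted loss falls smoothly as parameter count grows, but with diminishing returns. The sketch below illustrates that shape in Python; the constants `n_c` and `alpha` are hypothetical placeholders chosen only to show the curve, not measured values from any particular study.

```python
def scaling_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Toy power-law scaling curve: loss L(N) = (n_c / N) ** alpha.

    n_c and alpha are illustrative placeholders, not fitted constants.
    """
    return (n_c / n_params) ** alpha

# Each 10x increase in parameters lowers the predicted loss,
# but by a smaller absolute amount each time (diminishing returns).
sizes = [1e9, 1e10, 1e11, 1e12]
losses = [scaling_loss(n) for n in sizes]

drops = [a - b for a, b in zip(losses, losses[1:])]
assert all(d > 0 for d in drops)        # loss keeps falling with scale
assert drops[0] > drops[-1]             # but each step helps less
```

Curves of this shape are why labs keep training ever-larger models: gains keep coming, yet each increment of improvement costs more than the last.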

Large language models also exhibit so-called emergent abilities, that is, unexpected skills on tasks for which they were not trained. Researchers have reported that new abilities “emerge” when models reach a certain critical “breakthrough size.”

Studies have shown that sufficiently complex language models can develop the ability to think analogically and even reproduce optical illusions experienced by humans. The exact causes of these observations are controversial, but there is no doubt that language models are becoming more sophisticated.

So AI companies continue to work on larger and more expensive models, and tech companies like Microsoft and Apple are banking on the return on their existing investments in generative AI. According to a recent estimate, generative AI needs to generate $600 billion in annual revenue to justify current investments—and that number is expected to rise to $1 trillion in the coming years.

The biggest winner of the generative AI boom is currently Nvidia, the largest maker of the chips that are powering the generative AI arms race. A proverbial shovel maker in the gold rush, Nvidia recently became the most valuable publicly traded company in history, with its share price tripling in a single year to reach a valuation of $3 trillion in June.

What happens next?

As the hype around AI slowly dies down and we go through a phase of disillusionment, more realistic strategies for the introduction of AI are also emerging.

First, AI is being used to assist humans rather than replace them. A recent survey of American companies found that they primarily use AI to improve the quality of their products (58%), increase efficiency (49%), and reduce labor costs (47%).

Second, we are also seeing a rise in smaller (and cheaper) generative AI models that are trained on specific data and deployed locally to reduce costs and optimize efficiency. Even OpenAI, which is leading the race to build ever-larger models, has released the GPT-4o Mini model to reduce costs and improve performance.

Third, we see a strong focus on providing AI competency training and educating the workforce on how AI works, its potential and limitations, and best practices for using AI ethically. We will likely need to learn (and relearn) how to use various AI technologies for years to come.

Ultimately, the AI revolution will look more like an evolution. Its use will increase over time and gradually change and transform human activities. And that is much better than replacing them.
