2023 was the year AI went mainstream. It was also the year we started to panic.

Artificial intelligence (AI) went mainstream in 2023. It’s been a long time coming, but there’s still a long way to go for the technology to match people’s sci-fi fantasies about human-like machines.

The catalyst for a year of AI fanfare was ChatGPT. The chatbot gave the world a glimpse of recent advances in computing, even if not everyone figured out how it works or what to do with it.

“I would call this a turning point,” said pioneering AI scientist Fei-Fei Li.

“Hopefully 2023 will be remembered in history for the profound changes in technology, as well as the awakening of the public. It also shows how messy this technology is.”

It was a year for people to discover “what this is, how to use it, what the impact is: all the good, the bad and the ugly,” she said.

AI Panic

The first AI panic of 2023 came shortly after New Year’s Day, when classrooms reopened and schools from Seattle to Paris began blocking ChatGPT.

Teenagers were already asking the chatbot, launched in late 2022, to write their essays and complete take-home tests.

The big AI language models behind technologies like ChatGPT work by repeatedly guessing the next word in a sentence after having “learned” the patterns from a huge trove of human-written works.

They often get the facts wrong. But the results seemed so natural that they sparked curiosity about upcoming advances in AI and its potential use for deception.

Concerns grew as this new cohort of generative AI tools – spitting out not just words but novel images, music and synthetic voices – threatened the livelihoods of anyone who writes, draws, strums or codes for a living.

It prompted strikes by Hollywood writers and actors and legal challenges by visual artists and best-selling authors.

Some of the most esteemed scientists in the field of AI warned that the technology’s unbridled progress was moving toward outsmarting humans and possibly threatening human existence, while other scientists called their concerns overblown or drew attention to more immediate risks.

By spring, AI-generated deepfakes (some more convincing than others) had jumped into US election campaigns, where one falsely showed Donald Trump hugging the country’s former top infectious disease expert.

The technology made it increasingly difficult to distinguish between real and fabricated images of war in Ukraine and Gaza.

By the end of the year, the AI crisis had shifted to ChatGPT’s own maker, San Francisco-based OpenAI, nearly destroyed by corporate turmoil around its charismatic CEO, and to a government meeting room in Belgium, where exhausted political leaders from across the European Union emerged after days of intense talks with an agreement on the world’s first major legal safeguards for AI.

The new EU AI law will take a few years to come into full effect, and other legislative bodies – including the US Congress – are still a long way from enacting their own.

Too much hype?

There is no doubt that the commercial AI products introduced in 2023 incorporated technological achievements that were not possible in earlier stages of AI research, dating back to the mid-20th century.

But the latest wave of generative AI is at the peak of its hype, according to market research firm Gartner, which has tracked what it calls the “hype cycle” of emerging technologies since the 1990s. Imagine a wooden roller coaster climbing its highest hill, about to hurtle down into what Gartner describes as a “trough of disillusionment” before settling back to reality.

“Generative AI is right on top of inflated expectations,” said Gartner analyst Dave Micko. “There are massive claims by generative AI vendors and producers around their capabilities, their ability to deliver those capabilities.”

Google came under fire this month for editing a demonstration video of its most capable artificial intelligence model, called Gemini, in a way that made it look more impressive and human-like.

Micko said that leading AI developers are pushing certain ways of applying the latest technology, most of which align with their existing product lines, whether search engines or workplace productivity software. That doesn’t mean the world will use it that way.

“As much as Google, Microsoft, Amazon and Apple would love for us to adopt the way they think about their technology and how they deliver it, I think adoption actually comes from the bottom up,” he said.

Is it different this time?

It’s easy to forget that this is not the first wave of AI commercialization. Computer vision techniques developed by Li and other scientists helped classify a huge database of photographs, recognizing individual objects and faces, and helped guide autonomous vehicles. Advances in voice recognition made voice assistants like Siri and Alexa an integral part of many people’s lives.

“When we launched Siri in 2011, it was at the time the fastest-growing consumer app and the only major AI app that people had ever experienced,” said Tom Gruber, co-founder of Siri Inc., which Apple bought and made an integral feature of the iPhone.

But Gruber believes what’s happening now is the “biggest wave ever” in AI, unleashing new possibilities and dangers.

“We’re surprised that we could accidentally find this amazing language skill by training a machine to play solitaire on the Internet,” Gruber said. “It’s an amazing thing.”

The dangers could come quickly in 2024, when major national elections in the United States, India and elsewhere could be inundated with AI-generated deepfakes.

In the long term, the rapidly improving language, visual perception and step-by-step planning capabilities of AI technology could supercharge the vision of a digital assistant, but only if it is given access to the “inner loop” of our digital lives, Gruber said.

“They can manage your attention, as in saying: ‘You should watch this video. You should read this book. You should respond to this person’s communication,’” Gruber said.

“That’s what a real executive assistant does. And we could have it, but with a really big risk to personal information and privacy.”
