2023: the year of the AI boom

Digital generated image of hand with dark skin tone going through portal and touching robotic hand.

Generative artificial intelligence hit the scene in 2023 and quickly became the next big thing in tech. ChatGPT, an advanced chatbot created by OpenAI, a former nonprofit turned tech-industry unicorn, was at the center of the enthusiasm for AI. The company had a busy year, including sparking an AI arms race and co-founder Sam Altman's near ouster.

Here's a look at how generative AI took over the tech industry in 2023:

The start of the AI 'gold rush'

OpenAI wasn't expecting ChatGPT to be "much more than a passing curiosity among AI obsessives on Twitter," Charlie Warzel wrote for The Atlantic, but it surpassed expectations quickly. Within the first five days of its debut, 1 million users signed up. The advanced chatbot was supposed to be "the software equivalent of a concept car," Warzel added. "Instead, it became one of the most popular applications in the history of the internet." Other generative AI apps gained popularity, and ChatGPT's viral success fueled a swift pivot in Silicon Valley, signaling the beginning of an AI arms race. "The AI 'gold rush' is here," The Washington Post proclaimed at the start of the year.

Generative AI isn't limited to chatbots or text generators. The internet has been flooded with AI-generated portraits, music and videos. The realm of possibilities with the budding technology seems nearly limitless, grabbing the attention of investors. This year, over 1 in 4 dollars invested in American startups went to an AI-related company, per data from Crunchbase. The AI gold rush also helped make Nvidia, which creates microchips needed to run AI, a trillion-dollar company.

With AI advancing rapidly, big tech companies had to move swiftly to capitalize on the momentum. After falling behind in recent years, Microsoft made a deal with OpenAI that "allowed the computer giant to leap over such rivals as Google and Amazon," said The New Yorker. After investing more than $3 billion since 2019, Microsoft reached another $10 billion deal with OpenAI in January. Over the last year, the company has integrated ChatGPT into its search engine, Bing, and released a fleet of AI chatbots called Office Copilots for its other products. Google executives declared a "code red" in response to ChatGPT and started fast-tracking their own AI projects. This led to a less-than-stellar debut of Google's chatbot Bard, which the company admitted wasn't ready to be publicly available. Elon Musk, who also helped found OpenAI, introduced Grok, which he described as an AI chatbot with a "rebellious streak." Meta seemingly abandoned the metaverse and released its own chatbots on Instagram and Facebook in an attempt to court Gen Z users.

Experts sound the alarm about 'societal-scale risks'

While the AI arms race forged ahead rapidly with little-to-no guardrails, it wasn't long before the excitement turned to fear. People began to wonder whether these advanced apps would someday steal jobs and make human employees obsolete. ChatGPT prompted musings about the death of the high school English class. Creatives started pushing back against AI companies using their work to train their programs without permission. Several authors banded together to file lawsuits against Google and OpenAI, accusing them of using a trove of pirated books to train their large language models. Musicians pushed back against AI-generated impersonations. AI even played a significant role in this year's Hollywood writers' strike.

Experts also warned that the lack of regulation and the swift integration of generative AI everywhere could threaten humanity. In the more immediate sense, people worry that the underdeveloped technology is prone to "hallucinating," or presenting false information as fact. In the wrong hands, AI could also help perpetuate disinformation, which many see as a threat to democracy. There is also an undercurrent of discrimination and bias that has some civil rights activists wary of the technology.

Some of the sternest warnings came from some of the industry's most prominent players. In March, about a thousand AI industry leaders, computer scientists and tech industry VIPs signed an open letter warning that AI was moving too fast, with too few regulations. The group included Elon Musk, Apple co-founder Steve Wozniak, AI pioneer Yoshua Bengio and Stability AI CEO Emad Mostaque. They called for companies to "immediately pause for at least six months the training of AI systems more powerful than GPT-4," or else "governments should step in and institute a moratorium."

A few months later, Geoffrey Hinton, known as the "godfather of AI" for his pioneering work on neural networks, retired from his position at Google to join the growing chorus of experts warning about the risks AI could pose to humanity. "It is hard to see how you can prevent the bad actors from using it for bad things," Hinton told The New York Times. He signed another one-line open letter released by the Center for AI Safety, a nonprofit organization, which warned that the "risk of extinction from AI" was on par with other "societal-scale risks, such as pandemics and nuclear war."

Politicians worldwide began taking steps to create regulations to help mitigate AI's risks. After months of closed-door meetings, President Biden unveiled an executive order to develop guidelines for safely working with AI. In November, 28 countries gathered in the United Kingdom for a two-day AI summit held by U.K. Prime Minister Rishi Sunak. Still, with the technology spreading so rapidly, regulators are struggling to keep up.