Elon Musk wants to use AI to summarize the news on Twitter and ... it's not a terrible idea?

  • Elon Musk wants to combine Twitter and Grok, his AI engine, and create a news machine.

  • But Grok won't look for "news" — it will look for "things people are saying on Twitter about news."

  • There are several problems with this approach! But it may be the future, regardless.

What do you think of when you think of "news"?

I'm both old and in the news business. So when I think of "news," it's usually something that comes from an organization that specializes in distributing, and sometimes sourcing and verifying, facts about current events. You know: a newspaper or a website or a TV show/network.

Elon Musk thinks news is something different: It's what people talk about on the service formerly known as Twitter.

And that's the vision he's using to build a news service at X, the company formerly known as Twitter, using Grok, his homegrown AI chatbot.

Musk's idea, he tells journalist Alex Kantrowitz, is that the best way to learn about the news isn't by reading/listening to the news, but by listening to what people say about the news.

Conversation on X will make up the core of Grok's summaries — or, really, almost all of it. Musk said Grok will not look directly at article text, and will instead rely solely on social posts. "It's summarizing what people say on X," he said.

And just to make it clear, a Musk employee confirms to Kantrowitz that these are indeed his marching orders: "Igor Babuschkin, a technical staff member working at Musk's xAI, said his team is focused on 'making Grok understand the news purely from what is posted on X.'"

Look. I know that "understanding the news purely from what is posted on the company formerly known as Twitter" is not going to give a lot of us comfort. Definitely not in the Elon Musk era of the company formerly known as Twitter.

But … I kinda like it? In theory?

Let's be clear: Understanding what is happening in the world based solely on what people say on X, or any other social media platform, is Not A Good Idea. But consuming commentary about what people say about what's happening in the world isn't a terrible idea. Maybe even a good one?

And, more practically: That kind of commentary consumption actually is the way many people learn about what's happening in the world. Even if you're a Serious News Consumer (thank you!), the bulk of the information you're getting likely isn't directly from a primary news source, but from someone who has aggregated or at least repeated what a primary news source says. It's basic economics: It's very expensive to go find news for yourself, and very cheap to talk about things that are in the news, or to package and present news other people have procured. That's why even large, well-funded news outfits — take, for instance, CNN — spend most of their time discussing and debating things we've already heard about, instead of presenting you with new things.

And while there are plenty of use cases where generative AI doesn't do a great job, it does seem quite useful at summarizing existing information, particularly when it's already been typed up. So why not summarize commentary?

The to-be-sures: Yes, you'd be foolish to rely on an Elon Musk-run AI machine for factual information.

But to be honest, that caveat applies to any AI machine at the moment. Last week, for instance, I asked Google's AI (not its much maligned Gemini but the one Google has started inserting into some people's phones whether they want it or not) a question about World War II and the Tower of London and it confidently gave me an answer about Big Ben instead.

So let's assume that any generative AI answer about anything should be deemed a starting point at best — something that may or may not be right and definitely requires a fact-check before you use it to inform a consequential decision. Just like you should if your source was "thing I read on the internet" or "thing I heard on a podcast."

Which gets to the other problem with Musk's solution, as Kantrowitz points out: Right now, Musk is barely even trying to tell you about the original source of the information he's summarizing.

When I asked Grok to "tell me about Elon Musk's plan to summarize news using grok," it provided me with a very cogent summary of Kantrowitz's piece. But to find the source of that summary, I needed to scroll to the bottom of the entry, then all the way to the right, past other people's tweets with zero information about Musk's plans, to find Kantrowitz's tweet linking to his original article.

That's a lousy way to give people access to more information. It's also lousy for publishers who are still spending effort — like Kantrowitz — to find new information. It means Musk gets the benefit of their work and they get next to nothing — barely even a link — in return.

Alas, I think that's the way we're headed with AI in general: Despite efforts to negotiate or sue Big AI, most publishers are headed to a world where Big AI engines provide increasingly complete answers to queries and give users little incentive to head to original sources to learn more.

It would certainly be nice if Grok gave Kantrowitz more prominent billing when it provides an answer, and it may or may not get around to doing that, depending on Musk's feelings at any given moment.

But any media company that doesn't have a plan, or at least a hope, for dealing with AI news — beyond wishing for a check or a court order — is going to be in trouble regardless.

Read the original article on Business Insider