A New Zealand shooting video hit YouTube every second this weekend

YouTube joined the race to remove graphic footage.

In the 24 hours after the mass shooting in New Zealand on Friday, YouTube raced to remove videos that were uploaded as fast as one per second, reports The Washington Post. While the company will not say how many videos it removed, it joined Facebook, Twitter and Reddit in a desperate attempt to remove graphic footage from the shooter's head-mounted camera.

The speed at which the videos were uploaded forced YouTube to take unprecedented measures. Under standard protocol, YouTube's software flags troublesome content, which human moderators then review. But because the system was inundated, the company let the AI software both flag and remove content it suspected was problematic. As Neal Mohan, YouTube's chief product officer, told The Washington Post, the trade-off was that non-problematic content got swept up and deleted, too.
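To make that trade-off concrete, here is a minimal, purely illustrative Python sketch of the two moderation modes described above. Every name and threshold in it is hypothetical; nothing here reflects YouTube's actual systems. The point is simply that skipping the human-review step buys speed at the cost of false positives.

```python
# Hypothetical sketch of the two moderation modes described above.
# None of these names or thresholds come from YouTube; they only
# illustrate the flag-for-review vs. auto-remove trade-off.

FLAG_THRESHOLD = 0.5         # assumed: send to human review above this score
AUTO_REMOVE_THRESHOLD = 0.8  # assumed: delete immediately above this score

def classify(video):
    """Stand-in for an ML model that scores how likely a video is to violate policy."""
    return video["score"]  # a real system would run inference here

def moderate(video, surge_mode, review_queue):
    score = classify(video)
    if surge_mode and score >= AUTO_REMOVE_THRESHOLD:
        # Surge mode: remove without waiting for a human, accepting
        # that some non-violating videos get swept up too.
        return "removed"
    if score >= FLAG_THRESHOLD:
        # Standard protocol: flag the video and let a human moderator decide.
        review_queue.append(video)
        return "pending_review"
    return "published"

queue = []
print(moderate({"id": "a", "score": 0.9}, surge_mode=False, review_queue=queue))  # pending_review
print(moderate({"id": "a", "score": 0.9}, surge_mode=True, review_queue=queue))   # removed
print(moderate({"id": "b", "score": 0.3}, surge_mode=True, review_queue=queue))   # published
```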

When that wasn't enough, YouTube also disabled the option to search for "recent uploads." Both that search feature and the human review step remain suspended. As an added challenge, many of the videos were altered in ways that made it hard for YouTube's AI to recognize them. And while YouTube tries to direct users to authoritative news sources during crises, for hours after the attack, footage could be found simply by searching "New Zealand."
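One likely reason altered copies are hard to catch is fingerprint drift: a re-encode, crop, mirror, or watermark changes a video's fingerprint, so matching has to tolerate distortion. The toy "average hash" below is a hypothetical illustration of that idea, not YouTube's matcher: a mild brightness shift leaves a frame's fingerprint untouched, while a mirrored copy drifts completely out of range.

```python
# Toy perceptual "average hash" on an 8x8 grayscale frame, to illustrate
# why matching altered copies must tolerate distortion. Purely
# illustrative; not how YouTube's systems work.

def average_hash(frame):
    """One bit per pixel: 1 if the pixel is brighter than the frame's mean."""
    flat = [p for row in frame for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A fake 8x8 frame with a bright region on the left half.
original = [[200] * 4 + [30] * 4 for _ in range(8)]
brighter = [[min(255, p + 20) for p in row] for row in original]  # mild brightness shift
mirrored = [list(reversed(row)) for row in original]              # horizontally flipped copy

h0 = average_hash(original)
print(hamming(h0, average_hash(brighter)))  # 0  -> still matches
print(hamming(h0, average_hash(mirrored)))  # 64 -> misses entirely
```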

YouTube has been working to improve its system to flag problematic content for years. In 2017, Google announced it would hire 10,000 YouTube content moderators. At that time, its AI could help take down 70 percent of violent, extremist content within eight hours of upload. But as we saw after the Parkland shooting last year, even the company's human moderation still needs work. Unfortunately, this is an ongoing issue, as mass shootings and extremist content continue to spread around the globe. For the time being, neither Facebook, YouTube, Twitter nor Reddit can offer a true solution.