Microsoft's Chief Legal Officer Calls for Content Moderation, Collaboration in Tech After New Zealand Mass Shooting

Brad Smith, president of Microsoft Corp., speaks during a presentation on affordable housing in Bellevue, Washington, on Jan. 17. Photo: Chona Kasinger/Bloomberg

Microsoft president and chief legal officer Brad Smith called for stronger content moderation across the tech industry after videos of a mass shooting in New Zealand went viral online.

In a blog post Sunday, Smith said "words alone are not enough" to combat violent and hateful posts on tech platforms. Earlier this month, a gunman killed 50 people at mosques in Christchurch, New Zealand, livestreaming the attack. Platforms scrambled to remove the video, which was shared millions of times in edited versions.

Smith said some Microsoft services were used to share the video.

"Across the tech sector, we need to do more," he wrote. "Especially for those of us who operate social networks or digital communications tools or platforms that were used to amplify the violence, it’s clear that we need to learn from and take new action based on what happened in Christchurch."

Microsoft is "exploring additional steps" to flag and remove violent, extremist content, he added. Smith proposed that the tech industry as a whole collaborate to prevent extremism and violence on its platforms, as YouTube, Facebook, Twitter and Microsoft did two years ago when they formed the Global Internet Forum to Counter Terrorism.

Tech companies could team up to create or enhance artificial intelligence tools that can "identify and apply digital hashes" to pre-existing videos and new or edited versions of violent content, Smith said. The industry could also create a collaborative "major event" crisis protocol, he proposed, that would outline coordinated efforts to remove violent content and increase communication.
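The "digital hashes" Smith describes are typically perceptual hashes, which change only slightly when a video is re-encoded, cropped or watermarked, so near-duplicate frames can be matched by bit distance rather than exact equality. As a rough illustration only, not Microsoft's or the forum's actual system, a minimal difference hash ("dHash") over a single frame might look like the following Python sketch; the frame sampling, match threshold and database of known hashes are all assumptions.

```python
from PIL import Image


def dhash(image: Image.Image, hash_size: int = 8) -> int:
    """Perceptual difference hash: compares adjacent pixel brightness.

    Small edits, re-encodes or watermarks flip only a few bits, so
    near-duplicates can be matched by Hamming distance, not equality.
    """
    # Shrink and grayscale the frame; fine detail is deliberately lost.
    img = image.convert("L").resize((hash_size + 1, hash_size), Image.LANCZOS)
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


if __name__ == "__main__":
    # Demo with a synthetic frame; in practice frames would be sampled
    # from an uploaded video and known_bad fetched from a shared
    # industry database (both assumptions, not a real interface).
    frame = Image.new("L", (640, 360), color=128)
    h = dhash(frame)
    known_bad = [0x4F3A91C2D5E07B18]  # placeholder value, not real data
    if any(hamming(h, bad) <= 10 for bad in known_bad):
        print("Frame matches a known violent video; route for review.")
    else:
        print(f"No match (hash={h:016x}).")
```

In a real pipeline, hashes for many sampled frames would be checked against an industry-shared list, which is roughly the concept behind the shared hash database the Global Internet Forum to Counter Terrorism's members already maintain.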

"This would enable all of us to share information more quickly and directly, helping each platform and service to move more proactively, while simultaneously ensuring that we avoid restricting communications that are in the public interest, such as reporting from news organizations," Smith, who is currently in New Zealand, wrote Sunday. Microsoft declined to provide additional comment.

Calls for tighter content moderation policies have not come without concern. Some legal scholars, including Annemarie Bridy, a law professor and affiliate scholar at Stanford University's Center for Internet and Society, said tightly regulating speech on platforms can lead to over-censorship or confusion about where to draw the line.

What some users consider extreme speech may strike others as activism, she said. While many people can agree that violent videos should be removed, she said restrictive content moderation policies become complicated when posts are more divisive. She offered other suggestions for tech companies aiming to stop hateful or violent ideas from spreading, such as changing their recommendation engines.

"Maybe we would need to rely less heavily on content moderation and takedowns if platforms began to look a little more architecturally at the mechanisms they're providing for spreading speech and the kind of speech they're incentivizing people to provide and consume," Bridy said.

