Meta responds to EU misinformation concerns regarding Israel-Hamas conflict

The company has created a new operations center with experts fluent in Hebrew and Arabic.


Meta has shared an updated content monitoring action plan as the devastating Israel-Hamas war continues. It follows a stern letter from Thierry Breton, the European Union's (EU) internal market commissioner, to Meta CEO Mark Zuckerberg about misinformation concerns (such as deepfakes) and compliance with the EU's Digital Services Act (DSA). The company had 24 hours to respond.

In its statement, Meta said that it created a new operations center staffed with experts fluent in Hebrew and Arabic: "Since the terrorist attacks by Hamas on Israel on Saturday, and Israel's response in Gaza, expert teams from across our company have been working around the clock to monitor our platforms while protecting people's ability to use our apps to shed light on important developments happening on the ground." Meta claims this setup lets it remove content and fight misinformation faster.

Meta reportedly took action on more than 795,000 pieces of content in Hebrew or Arabic in the three days following the terrorist attack by Hamas, either removing the posts or marking them as disturbing. Across those two languages, seven times more content was removed daily for violating its Dangerous Organizations and Individuals policy than in the two months leading up to the conflict.

Hamas is listed under Meta's Dangerous Organizations and Individuals policy and banned from all of the company's platforms, as is any content praising the terrorist group. However, "social and political discourse," such as news articles and general discussion, is allowed.

Further actions by Meta include restricting certain hashtags that are regularly associated with content that violates its policies and removing any content that clearly identifies a hostage (though blurred images are allowed). The company has also lowered the threshold at which its monitoring technology takes action, with the aim of reducing the chances that harmful content is recommended to users. "We want to reiterate that our policies are designed to give everyone a voice while keeping people safe on our apps," Meta's statement continued. "We apply these policies regardless of who is posting or their personal beliefs, and it is never our intention to suppress a particular community or point of view."

Whether these steps will satisfy Breton is unclear. He sent a similar letter to X owner Elon Musk, and although X released an outline of updated policies, the EU has decided to move forward with an investigation into the platform's compliance with the DSA.