Harvard and MIT Fund Deepfake Detection, Government Transparency AI Tools


Last week, Harvard’s Berkman Klein Center for Internet & Society and the MIT Media Lab announced seven winners of their first "AI and the News: An Open Challenge" competition. Each winner received a grant to develop technology that addresses misinformation.

The two Boston-area schools teamed up and funded the challenge through their joint Ethics and Governance of AI Initiative, said Tim Hwang, director of the initiative at the Berkman Klein Center and the MIT Media Lab. The initiative, launched two years ago, takes a broad view of AI and ethics.

The AI and the News Open Challenge spun off from the initiative to fund experimental solutions to misinformation that likely wouldn’t have received backing from conventional sources.

Legal Robot

Of those seven organizations, Legal Robot Inc. will receive a $100,000 grant to create a database of city, county and state government contracts.

“It was great to get some funding and have a good rationale for putting it out there on the public domain,” said Legal Robot founder and CEO Dan Rubins. “We aren’t going to make any money off of this product, but it’s just a good thing to do with AI.”

Rubins said the idea for the database of government contracts sprang from the Panama Papers, the client files leaked from the breached law firm Mossack Fonseca.

“The place we got the idea is the Panama Papers. The journalists there used graph databases to understand the link between all the entities; that's the idea of what we are doing,” Rubins said. However, he added, the project would “rely on public documents received through legal means” and not documents obtained through breaches.
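The graph-database approach Rubins describes treats agencies and vendors as nodes and shared contracts as edges, so connections surface by traversal rather than by reading documents one at a time. A minimal sketch of that idea, using only a plain adjacency structure (the agencies, vendors, and contracts below are invented for illustration):

```python
from collections import defaultdict

# Each contract links a government entity to a vendor; storing the
# links as a graph lets us ask "who is connected to whom" directly.
edges = defaultdict(set)

def add_contract(agency, vendor):
    """Record that an agency and a vendor appear on the same contract."""
    edges[agency].add(vendor)
    edges[vendor].add(agency)

# Invented example data.
add_contract("City of Springfield", "Acme Paving LLC")
add_contract("Springfield County", "Acme Paving LLC")
add_contract("City of Springfield", "Beta IT Services")

def linked_entities(name):
    """All entities directly linked to the given agency or vendor."""
    return sorted(edges[name])

print(linked_entities("Acme Paving LLC"))
# ['City of Springfield', 'Springfield County']
```

A production system would use a dedicated graph database so that multi-hop questions (for example, two vendors sharing an agency) stay fast at scale; the traversal idea is the same.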

The database will allow users to track the terms of government contracts and the government’s dealings with companies. Legal Robot will also create data visualizations in partnership with TLM Works, a web development training program in which inmates at San Quentin State Prison write software.

Rubins acknowledged that using prison labor may sound alarming, but noted that the prisoners represent a demographic rarely seen in a programming field that has been called out repeatedly for its lack of diversity.

“I think that’s important because these typically aren’t the people represented in building AI,” Rubins said. He noted those inmates are more representative of society “and not just white males that live in Silicon Valley.”

To be sure, Legal Robot isn’t new to automation or simplifying data. The startup's products include a DMCA Safe Harbor Bot that autonomously checks a website’s compliance with the Digital Millennium Copyright Act and registers and maintains the site’s designated agent with the U.S. Copyright Office. Last year, the company teamed up with MuckRock Foundation to post the contracts of California government agencies for the nonprofit’s “Dark Patterns in Public Contracts” project, Rubins added.

RIT's Deepfake Detection

Harvard and MIT also gave $100,000 to the Rochester Institute of Technology (RIT) to create deepfake-detection software to help journalists identify fraudulent videos. Cybersecurity experts and lawyers have recently sounded the alarm over deepfakes and the litigation implications they may pose.

Matt Wright, director of RIT's Center for Cybersecurity and leader of the team creating the tool, said deepfakes are becoming “scarily good” but technology can be used to combat the impostor videos.

“Detecting deepfakes is a challenge we are up against,” Wright said. “We believe that at least with existing technology, we think there are subtle, but detectable differences with how the mouth moves and how that connects with the sound of video that we can leverage and use deep learning technology to do the detection.”
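The intuition Wright describes can be illustrated with a toy calculation: extract a mouth-movement signal from the video frames and an audio-energy signal from the soundtrack, then measure how well they track each other. In a genuine clip the two correlate; in a doctored one the correlation can drop. The per-frame numbers below are invented, and a real detector would use learned deep-learning features rather than raw correlation, but the idea is the same:

```python
# Toy audio-visual consistency check. Both signals are invented
# per-frame measurements; RIT's actual tool would rely on deep
# learning features, not a simple correlation like this.

def pearson(xs, ys):
    """Pearson correlation between two equal-length signals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

mouth_openness = [0.1, 0.8, 0.9, 0.2, 0.1, 0.7]  # per frame, from video
audio_energy   = [0.2, 0.9, 0.8, 0.1, 0.2, 0.8]  # per frame, from audio

score = pearson(mouth_openness, audio_energy)
print(f"sync score: {score:.2f}")  # near 1.0 suggests consistent lip/audio sync
```

A low score on a segment would flag it as a candidate for the “subtle, but detectable differences” Wright mentions between mouth movement and sound.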

The stakes can be dire, Wright said: from a national security perspective, believable deepfakes could cause a national panic.

“If you create audio or video of the president saying anything they want the president to say ... you could do the national equivalent of screaming fire in a crowded theater and creating all types of havoc or changing the course of an election,” Wright noted.

Wright said his team plans to create a website where anyone can analyze a video’s images and audio to detect if it has been doctored. The website would offer the user a timeline marking where suspicious editing may have occurred.
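The timeline output described above can be sketched simply: given a per-second authenticity score for a video (the scores and threshold below are invented), flag the contiguous spans that dip below a threshold as possible edits.

```python
# Hypothetical sketch of a tamper timeline: scores are invented
# per-second authenticity values, where lower means more suspicious.

def suspicious_spans(scores, threshold=0.5):
    """Return (start, end) second ranges where scores fall below threshold."""
    spans, start = [], None
    for t, s in enumerate(scores):
        if s < threshold and start is None:
            start = t                      # a suspicious run begins
        elif s >= threshold and start is not None:
            spans.append((start, t))       # the run ends at second t
            start = None
    if start is not None:
        spans.append((start, len(scores))) # run extends to end of video
    return spans

scores = [0.9, 0.8, 0.3, 0.2, 0.9, 0.9, 0.4, 0.9]
print(suspicious_spans(scores))  # [(2, 4), (6, 7)]
```

A website like the one Wright describes would render these spans as highlighted regions on the video’s scrub bar, pointing users at the moments worth scrutinizing.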