‘Deepfake challenge’. Who is really speaking in that video?


Technology firms and academics are joining forces to launch a “deepfake challenge” to improve tools for detecting videos and other media manipulated by AI (Artificial Intelligence).


The initiative includes US$10 million from Facebook and aims to curb what is seen as a major threat to the integrity of online information.

The rise of deepfakes has been driven by recent advances in machine learning. It has long been possible for movie studios to manipulate images and video with software and computers, but algorithms capable of capturing and re-creating a person’s likeness have now been used to build point-and-click tools for pasting one person’s face onto someone else.

So, in an election campaign, someone’s facial features could be ‘painted’ by AI onto another person saying, well, anything they want, giving the impression of a politician saying things they never said.

The effort is being supported by Microsoft and the industry-backed Partnership on AI and includes academics from the Massachusetts Institute of Technology, Cornell University, University of Oxford, University of California-Berkeley, University of Maryland and University at Albany.



It represents a broad effort to combat the dissemination of manipulated video and audio as part of misinformation campaigns.

“The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer,” said Facebook chief technical officer Mike Schroepfer.

Schroepfer said deepfake techniques, which present realistic AI-generated videos of people doing and saying fictional things, “have significant implications for determining the legitimacy of information presented online. Yet the industry doesn’t have a great data set or benchmark for detecting them.”

The challenge is the first project of a committee on AI and media integrity created by the Partnership on AI, a group whose mission is to promote beneficial uses of artificial intelligence and is backed by Apple, Amazon, IBM and other tech firms and non-governmental organizations.

Terah Lyons, executive director of the Partnership, said the new project is part of an effort to stem AI-generated fakes, which “have significant, global implications for the legitimacy of information online, the quality of public discourse, the safeguarding of human rights and civil liberties, and the health of democratic institutions.”

Facebook said it was offering funds for research collaborations and prizes for the challenge, and would also enter the competition, but not accept any of the prize money.

Oxford professor Philip Torr, one of the academics participating, said new tools are “urgently needed to detect these types of manipulated media.

“Manipulated media being put out on the internet, to create bogus conspiracy theories and to manipulate people for political gain, is becoming an issue of global importance, as it is a fundamental threat to democracy,” Torr said in a statement.

SOURCE: Agence France-Presse

