Adobe’s new tool could help slow the spread of fake news by detecting manipulated images. The company has been working with UC Berkeley researchers to develop a neural network that can spot fake photos and work backward to restore their original appearance. Adobe and UC Berkeley claim that their artificial intelligence tool can spot doctored images 99 percent of the time. Last year, Adobe’s engineers created an AI tool that detects edited media created by splicing, cloning, and removing objects.
The team working on the project trained a convolutional neural network (CNN) to spot changes made with Photoshop’s Face Aware Liquify feature, which was designed to alter people’s eyes, mouths, and other facial features. When put to the test, the neural network detected altered images up to 99 percent of the time. In comparison, people shown the same photos spotted the changes only 53 percent of the time. The tool was also able to revert images to what it predicted was their original state.
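Adobe has not published the detector’s code, but the basic pipeline a CNN classifier follows is straightforward: slide learned filters over the image (convolution), summarize the responses (pooling), and map the result to a probability-like score. The sketch below, in plain Python, is purely illustrative and not Adobe’s model; the kernel, weights, and toy images are assumptions chosen to show the mechanics. A real detector would learn thousands of kernel weights from pairs of original and Liquify-warped faces.

```python
# Illustrative sketch (NOT Adobe's model): convolution -> pooling ->
# sigmoid score, the three core operations of a CNN-style detector.
import math

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a grayscale image (list of lists)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def global_max_pool(feature_map):
    """Collapse a feature map to its single strongest response."""
    return max(max(row) for row in feature_map)

def detect_score(image, kernel, weight=1.0, bias=0.0):
    """Sigmoid over the pooled feature: a probability-like 'edited' score."""
    pooled = global_max_pool(conv2d(image, kernel))
    return 1.0 / (1.0 + math.exp(-(weight * pooled + bias)))

# A hand-picked horizontal-gradient kernel: warping tools leave local
# distortions that show up as unusual gradient patterns. In practice
# this kernel would be learned, not fixed.
kernel = [[1, 0, -1],
          [1, 0, -1],
          [1, 0, -1]]

smooth = [[5, 5, 5, 5] for _ in range(4)]   # flat region: zero response
edged  = [[9, 9, 0, 0] for _ in range(4)]   # sharp transition: strong response

print(detect_score(smooth, kernel))  # 0.5 (pooled response is 0)
print(detect_score(edged, kernel))   # close to 1.0
```

The toy detector scores the uniform patch at exactly 0.5 (no evidence either way) and the patch with a sharp gradient near 1.0; a trained network does the same thing with far richer filters and a learned decision layer.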
“While we are proud of the impact that Photoshop and Adobe’s other creative tools have made on the world, we also recognize the ethical implications of our technology,” the company said in a blog post. “Fake content is a serious and increasingly pressing issue.”
Adobe says the new research is part of a wider effort to fight manipulation across images, videos, audio, and documents. “This is an important step in being able to detect certain types of image editing, and the undo capability works surprisingly well,” said Gavin Miller, head of Adobe Research. “Beyond technologies like this, the best defense will be a sophisticated public who know that content can be manipulated — often to delight them, but sometimes to mislead them.”