
Did you know AI sees through the looking glass?


Things are different on the other side of the mirror.


Text is backward. Clocks run counterclockwise. Cars drive on the wrong side of the road. Right hands become left hands.

Intrigued by how reflection changes images in subtle and not-so-subtle ways, a team of Cornell University researchers used artificial intelligence to investigate what sets originals apart from their reflections. Their algorithms learned to pick up on unexpected clues such as hair parts, gaze direction and, surprisingly, beards -- findings with implications for training machine learning models and detecting faked images.


"The universe is not symmetrical. If you flip an image, there are differences," said Noah Snavely, associate professor of computer science at Cornell Tech and senior author of the study, "Visual Chirality," presented at the 2020 Conference on Computer Vision and Pattern Recognition, held virtually June 14-19. "I'm intrigued by the discoveries you can make with new ways of gleaning information."


Differentiating between original images and reflections is a surprisingly easy task for AI: a basic deep learning algorithm can quickly learn to classify whether an image has been flipped, with 60% to 90% accuracy depending on the kinds of images used to train it. Many of the clues it picks up on are difficult for humans to notice.
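To make the setup concrete, here is a minimal sketch of that flipped-or-not classification task, written in PyTorch (the paper does not mandate any particular framework): each training image contributes an unflipped copy labeled 0 and a mirrored copy labeled 1, and a small convolutional network learns to tell them apart. The `FlipDataset` class, the tiny model, and the training loop are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (not the authors' exact setup): train a small CNN to
# predict whether an image has been horizontally flipped. Each original
# image yields label 0; its mirrored copy yields label 1.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF
from torch.utils.data import Dataset, DataLoader

class FlipDataset(Dataset):
    """Wraps a list of image tensors; index parity decides the flip."""
    def __init__(self, images):          # images: list of (C, H, W) tensors
        self.images = images

    def __len__(self):
        return 2 * len(self.images)

    def __getitem__(self, i):
        img = self.images[i // 2]
        if i % 2 == 1:                   # odd indices get the mirrored copy
            return TF.hflip(img), 1
        return img, 0

model = nn.Sequential(                   # deliberately simple classifier
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                    # logits: [original, flipped]
)

def train(images, epochs=5):
    loader = DataLoader(FlipDataset(images), batch_size=32, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
```

How far above chance such a classifier lands depends heavily on the data: text, clocks and faces carry strong chirality cues, while symmetric scenes carry almost none.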

To gain insight into how the algorithm makes these decisions, the team developed technology that creates a heat map indicating which parts of the image the algorithm attends to.
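One common way to build such a heat map is a class-activation approach like Grad-CAM, sketched below against the toy classifier above. The paper describes its own heat-map technique, so treat this as an illustration of the general idea rather than the authors' method; the `gradcam_heatmap` function and its arguments are assumptions for the example.

```python
# Sketch of a Grad-CAM-style heat map (a common visualization technique;
# the paper's exact method may differ). It weights the chosen conv
# layer's activations by the gradient of the "flipped" logit, so bright
# regions are the ones pushing the model toward "this image is mirrored".
import torch
import torch.nn.functional as F

def gradcam_heatmap(model, conv_layer, image, target_class=1):
    acts, grads = {}, {}
    h1 = conv_layer.register_forward_hook(
        lambda m, i, o: acts.update(a=o))
    h2 = conv_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    try:
        logits = model(image.unsqueeze(0))                 # (1, 2)
        logits[0, target_class].backward()
    finally:
        h1.remove(); h2.remove()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)    # (1, C, 1, 1)
    cam = F.relu((weights * acts["a"]).sum(dim=1))         # (1, h, w)
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[1:],
                        mode="bilinear", align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze()     # (H, W) in [0, 1]
```

For the toy model above, `conv_layer` would be `model[2]`, its last convolutional layer.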


The researchers were intrigued by the algorithm's tendency to focus on faces, which don't seem obviously asymmetrical. "In some ways, it left more questions than answers," Snavely said.

They then conducted another study focusing on faces and found that the heat map lit up on areas including hair part, eye gaze -- most people, for reasons the researchers don't know, gaze to the left in portrait photos -- and beards.


Examining how these reflected images differ from the originals could reveal information about possible biases in machine learning that might lead to inaccurate results.


"This leads to an open question for the computer vision community, which is, when is it OK to do this flipping to augment your dataset, and when is it not OK?" "I'm hoping this will get people to think more about these questions and start to develop tools to understand how it's biasing the algorithm."


Understanding how reflection changes an image could also help researchers use AI to identify images that have been faked or doctored -- an issue of growing concern on the internet.

"This is perhaps a new tool or insight that can be used in the universe of image forensics, if you want to tell if something is real or not,"


The research was supported in part by philanthropists Eric Schmidt, former CEO of Google, and Wendy Schmidt.
