Try out Fawkes, the new pixel cloaking tool that helps your photos evade artificial intelligence and deep learning facial recognition software
Privacy is one of the fundamental rights of any individual. However, with the development of deep learning tools and facial recognition software, it has become virtually impossible to escape prying eyes. We post our images on social media, and they are instantly indexed and logged. Those very images can later be used by facial recognition software to identify us.
University of Chicago researchers have teamed up to build a tool that protects your images from being identified by facial recognition software. The research team of Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, and Ben Zhao has developed a tool called “Fawkes.” The researchers say that Fawkes is software designed to “help individuals inoculate their images against unauthorized facial recognition models.” The tool, developed by the SAND Lab at the University of Chicago, allows users to control how their images are used online and thus avoid facial recognition software.
The Fawkes tool may have been named after the famous Guy Fawkes, aka Guido Fawkes, who took part in the failed Gunpowder Plot of 1605 after having earlier fought for the Spanish. Guy Fawkes became famous after the Anonymous hacker group adopted a stylized mask of his face as a symbol of their anonymity and privacy.
The Fawkes tool works through an algorithm that slightly alters the pixels of an image. This minor alteration doesn’t change the way the image looks to the human eye, but it is enough to fool present-day facial recognition software into treating it as a different person. Fawkes works at the pixel level, introducing imperceptible “cloaks” to photos before they are uploaded to the Internet. These cloaks are enough to mislead the facial models used by deep learning systems, image scrapers, and facial recognition software.
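To make the idea concrete, here is a minimal sketch of a bounded pixel perturbation in Python with NumPy. This is not the Fawkes algorithm — Fawkes computes targeted, feature-space perturbations against recognition models — but it illustrates the core notion of changing every pixel by an amount too small for a human to notice; the `cloak_image` function and `epsilon` budget here are hypothetical names chosen for the example.

```python
import numpy as np

def cloak_image(image, epsilon=3, seed=0):
    """Add a small, bounded per-pixel perturbation to an image array.

    Illustrative only: Fawkes crafts its perturbations adversarially
    against face-recognition feature extractors; this sketch just adds
    random noise capped at +/- epsilon to show that a tiny per-pixel
    change leaves the image visually identical.
    """
    rng = np.random.default_rng(seed)
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
    # Clip so values stay valid 8-bit pixel intensities.
    cloaked = np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)
    return cloaked

# A small random array stands in for a real photo (H x W x RGB).
photo = np.random.default_rng(1).integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
cloaked = cloak_image(photo)
# Every pixel differs from the original by at most epsilon.
print(int(np.max(np.abs(cloaked.astype(int) - photo.astype(int)))))
```

With `epsilon=3` out of a 0–255 intensity range, the change is invisible to a person, yet a real cloaking attack placed in those few bits of headroom can shift the features a recognition model extracts.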
The researchers noted that during experiments, Fawkes provided high levels of protection against facial recognition models, regardless of how the models were trained. “We experimentally demonstrate that Fawkes provides 95+% protection against user recognition regardless of how trackers train their models. Even when clean, uncloaked images are ‘leaked’ to the tracker and used for training, Fawkes can still maintain an 80+% protection success rate,” the researchers say.
The researchers noted that they have tested the Fawkes tool against Microsoft Azure Face API, Amazon Rekognition, and Face++. Fawkes was successful in fooling Azure Face training endpoint 100 percent of the time. It scored 34% of the time against Amazon Rekognition’s similarity score system — rising to 100% when more robust cloaking is put into place. When set against Face++, the original success rate was 0%, but when strengthened cloaking was introduced, this rose to 100%.
The Fawkes tool perturbs just enough pixels to misdirect facial recognition software. You can try out Fawkes for free by downloading the version for your operating system: Fawkes tool for Apple macOS, Fawkes tool for Windows 7/8.1 and Windows 10, or Fawkes tool for Linux.
The research team will be presenting a paper (PDF) on the Fawkes cloaking tool at USENIX Security 2020. If you want to be a part of the Fawkes project as a developer, you can visit their GitHub repository for the source code.