
Gender and racial bias in Amazon’s facial recognition tool


Amazon’s facial analysis algorithm struggled to identify the gender of women in general, and of darker-skinned women in particular, according to a study published by the MIT Media Lab.

The MIT Media Lab tested Amazon’s Rekognition tool and found evidence of gender and racial bias. The study, published last week, was based on a test conducted in 2018. By comparison, software developed by IBM and Microsoft produced relatively better results.

The Rekognition tool misidentified women as men one-fifth of the time and classified darker-skinned women as men 31% of the time. Amazon has been a frontrunner in facial recognition algorithms and has courted powerful customers around the US, including police departments and Immigration and Customs Enforcement (ICE).

Amazon disputed MIT’s test, which was conducted in 2018, saying that the researchers did not use the updated version of Rekognition. Amazon also pointed out that the MIT paper did not mention the confidence threshold — i.e., the minimum precision that Rekognition’s predictions must achieve to be considered “correct” — used in the test. The paper was authored by Joy Buolamwini and Inioluwa Deborah Raji.

“Using an up-to-date version of Amazon Rekognition with similar data downloaded from parliamentary websites and the Megaface dataset of [1 million] images, we found exactly zero false positive matches with the recommended 99 [percent] confidence threshold,” Matt Wood, general manager of Deep Learning and AI at Amazon Web Services, told VentureBeat.
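For context on what that threshold does: Rekognition’s face-matching API takes the confidence threshold as a parameter, and candidate matches scoring below it are simply not reported. Below is a minimal sketch of applying the recommended 99% threshold using the boto3 SDK; the image file names are hypothetical placeholders, and AWS credentials are assumed to be configured.

```python
import boto3

# Rekognition client; assumes AWS credentials are already configured.
client = boto3.client("rekognition")

# Hypothetical image files: any two face photos would do.
with open("face_a.jpg", "rb") as a, open("face_b.jpg", "rb") as b:
    source_bytes, target_bytes = a.read(), b.read()

# SimilarityThreshold is the confidence threshold in question:
# candidate matches scoring below it are not returned as matches at all.
response = client.compare_faces(
    SourceImage={"Bytes": source_bytes},
    TargetImage={"Bytes": target_bytes},
    SimilarityThreshold=99,  # the 99% setting Amazon recommends
)

for match in response["FaceMatches"]:
    print(f"Match with similarity {match['Similarity']:.1f}%")
```

Raising the threshold trades recall for precision: at 99%, borderline matches that would count as errors at a lower setting never appear in the results, which is the crux of Amazon’s objection to the test.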

He said that the algorithm the researchers tested performs facial analysis (which spots faces in images and assigns generic attributes to them) rather than facial recognition (which matches an individual face against faces in videos and images).
“Facial analysis is usually used to help search a catalog of photographs,” he said.

“Facial recognition is a distinct and different feature from facial analysis and attempts to match faces that appear similar. This is the same approach used to unlock some phones, or authenticate somebody entering a building or by law enforcement to narrow the field when attempting to identify a person of interest.”
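This distinction is visible in Rekognition’s own API, where facial analysis and facial recognition are separate calls. A rough sketch using the boto3 SDK follows; the photo file name and the collection ID are hypothetical, and the face collection would need to be created and populated beforehand.

```python
import boto3

client = boto3.client("rekognition")

with open("photo.jpg", "rb") as f:  # hypothetical input photo
    image = {"Bytes": f.read()}

# Facial ANALYSIS: estimate generic attributes (gender, age range, etc.)
# of each detected face. This is the feature the MIT study measured.
analysis = client.detect_faces(Image=image, Attributes=["ALL"])
for face in analysis["FaceDetails"]:
    gender = face["Gender"]
    print(f"Predicted gender: {gender['Value']} "
          f"({gender['Confidence']:.1f}% confidence)")

# Facial RECOGNITION: match the face against a pre-built face collection.
# "my-collection" is a hypothetical collection ID created elsewhere.
matches = client.search_faces_by_image(
    CollectionId="my-collection",
    Image=image,
    FaceMatchThreshold=99,  # same confidence-threshold idea as above
)
for match in matches["FaceMatches"]:
    print(f"Matched face {match['Face']['FaceId']} "
          f"at {match['Similarity']:.1f}% similarity")
```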

He added that Amazon continuously seeks input and feedback to improve the technology, and supports the creation of third-party evaluations, datasets, and benchmarks.

Previous studies have found the same problem in Amazon’s facial recognition tool. In a test conducted by the ACLU, Rekognition incorrectly matched 28 members of Congress with police mugshots. Such faulty results can be blamed on biased training datasets; IBM, by contrast, collected a curated dataset to boost accuracy.

The paper noted: “The potential for weaponization and abuse of facial analysis technologies cannot be ignored nor the threats to privacy or breaches of civil liberties diminished even as accuracy disparities decrease.”

About the author

Vibhuti Kushwaha

Vibhuti Kushwaha has been writing for a long time and enjoys covering health and technology. Her background in computer science helps her write in depth about technology, and she writes engaging, valuable health-related articles.
