Algorithmic Justice League

From Wikipedia, the free encyclopedia
Algorithmic Justice League
Abbreviation: AJL
Formation: 2016
Founder: Joy Buolamwini
Purpose: AI activism
Location: Cambridge, Massachusetts, U.S.
Website: https://www.ajlunited.org/

The Algorithmic Justice League (AJL) is a digital advocacy organization based in Cambridge, Massachusetts. Founded by computer scientist Joy Buolamwini in 2016, AJL aims to use research and art to raise awareness of the social implications of artificial intelligence and the implicit biases of facial recognition technology,[1] and to drive more ethical use of AI through open online seminars and meetings.[2] Buolamwini and AJL were featured in the 2020 Netflix documentary Coded Bias.[3]

History

Buolamwini founded the Algorithmic Justice League in 2016 after a personal experience with biased facial detection software: the software could not detect her "highly melanated" face until she donned a white mask. AJL was formed to expose the ubiquity of such bias in artificial intelligence and the threat it poses to civil rights.[4] Early AJL campaigns focused primarily on face recognition software, while recent campaigns have dealt more broadly with questions of equitability and accountability in AI, including algorithmic bias, algorithmic decision-making, algorithmic governance, and algorithmic auditing.

AJL is one of a community of organizations working toward similar goals, alongside Data & Society, Data for Black Lives, and the Distributed Artificial Intelligence Research Institute (DAIR).[5]

Activities

Facial recognition

In 2018, founder Buolamwini collaborated with AI ethicist Timnit Gebru to release a landmark study on racial and gender bias in facial recognition algorithms. Their research, titled Gender Shades, determined that facial analysis software released by IBM and Microsoft was less accurate when analyzing dark-skinned and feminine faces than when analyzing light-skinned and masculine faces.[6][7][8] The work has been frequently cited by advocates and researchers since its publication, with over 1,000 academic citations as of December 2020. Its publication was accompanied by the launch of the Safe Face Pledge, an initiative designed with the Georgetown Center on Privacy & Technology that urged face recognition developers to self-regulate.[9] The Gender Shades project and subsequent advocacy by AJL and similar groups led multiple tech companies, including Amazon and IBM, to address bias in the development of their algorithms and even to temporarily bar police use of their products in 2020.[10][11]
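In essence, the Gender Shades audit is a disaggregated evaluation: a classifier's accuracy is computed separately for each intersectional subgroup (skin type × gender) rather than in aggregate, which makes subgroup gaps visible. A minimal sketch of such an audit in Python follows; the field names and records are illustrative assumptions, not the study's actual data or code:

    from collections import defaultdict

    def disaggregated_accuracy(records):
        """Accuracy per (skin_type, gender) subgroup.

        Each record is a dict with 'skin_type', 'gender', 'label',
        and 'prediction' keys (an illustrative schema, not the study's).
        """
        totals, correct = defaultdict(int), defaultdict(int)
        for r in records:
            group = (r["skin_type"], r["gender"])
            totals[group] += 1
            correct[group] += int(r["prediction"] == r["label"])
        return {g: correct[g] / totals[g] for g in totals}

    # Illustrative records only:
    records = [
        {"skin_type": "darker", "gender": "female", "label": "F", "prediction": "M"},
        {"skin_type": "lighter", "gender": "male", "label": "M", "prediction": "M"},
    ]
    print(disaggregated_accuracy(records))
    # {('darker', 'female'): 0.0, ('lighter', 'male'): 1.0}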

A research collaboration involving AJL led to the release of a white paper in May 2020 calling for the creation of a federal office to regulate government use of face recognition.[12] In July, AJL joined the ACLU and the Georgetown University Law Center in calling for a federal moratorium on face recognition technology.[13]

Speech recognition

In March 2020, AJL released a spoken word project, titled Voicing Erasure, that addresses racial bias in speech recognition algorithms. The piece was performed by numerous female and non-binary researchers in the field, including Ruha Benjamin, Sasha Costanza-Chock, Safiya Noble, and Kimberlé Crenshaw.[14]

Algorithmic governance

In 2019, Buolamwini represented AJL at a congressional hearing of the US House Committee on Science, Space, and Technology, discussing the societal and ethical implications of AI.[15][16]

Olay Decode the Bias campaign

In September 2021, Olay collaborated with AJL and ORCAA to audit Olay's Skin Advisor System for potential harmful bias against women of color.[17] The audit found that the system's skin-age estimates were more accurate for lighter skin tones and for users aged 30–39; that its "best zone" and "improvement zone" recommendations were not well personalized across users; that it relied on the Fitzpatrick Skin Type and ITA skin-classification scales, both of which skew toward lighter skin tones; that it was designed more for women than for other gender identities; and that its data-consent practices could be improved.[18] As a result, Olay took steps toward inclusivity, including sending 1,000 girls to Black Girls CODE camp to encourage them to pursue STEM careers.
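For context, the Individual Typology Angle (ITA) mentioned above is a colorimetric measure computed from CIELAB coordinates, with higher angles corresponding to lighter skin. A minimal sketch of the standard formula, ITA = arctan((L* − 50) / b*) in degrees, is shown below; the category thresholds are an assumption drawn from the general ITA literature, not taken from the ORCAA report:

    import math

    def ita_degrees(L_star: float, b_star: float) -> float:
        """Individual Typology Angle: arctan((L* - 50) / b*), in degrees."""
        return math.degrees(math.atan2(L_star - 50.0, b_star))

    def ita_category(ita: float) -> str:
        """Commonly cited ITA skin-tone categories (assumed thresholds)."""
        if ita > 55: return "very light"
        if ita > 41: return "light"
        if ita > 28: return "intermediate"
        if ita > 10: return "tan"
        if ita > -30: return "brown"
        return "dark"

    # Example: L* = 65, b* = 15 gives arctan(1) = 45 degrees -> "light"
    print(ita_category(ita_degrees(65.0, 15.0)))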

CRASH project

In July 2020, AJL launched the Community Reporting of Algorithmic System Harms (CRASH) Project.[19] The project began in 2019, when Joy Buolamwini and Camille François met at the Bellagio Center Residency Program, and aimed to bring together people interested in helping to improve AI systems and counter racial bias.[19] In June 2021, AJL reported that it had consolidated its findings from its bug bounty research into a brief submitted to a platform then under development; AJL states that this platform would let people report algorithmic harms, helping companies develop improved AI systems.[20]

Voicing Erasure project

In March 2020, the Algorithmic Justice League released the Voicing Erasure project, a spoken-word piece performed by various women researchers and built on research led by Allison Koenecke.[21] Automated speech recognition (ASR) systems identify spoken language and convert it into text.[21] The project addresses racial bias in speech recognition algorithms by examining ASR systems developed by Amazon, Apple, Google, IBM, and Microsoft: interviews conducted with 42 white speakers and 73 Black speakers were transcribed to compare the performance of the five commercial systems. The results showed racial disparities in all five, with an average word error rate of 0.35 for Black speakers versus 0.19 for white speakers, suggesting that the systems rely heavily on training data of English as spoken by white Americans.
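The word error rate (WER) figures above measure the minimum number of word substitutions, deletions, and insertions needed to turn the reference transcript into the system's output, divided by the length of the reference. A minimal sketch of the standard computation (not the study's code):

    def word_error_rate(reference: str, hypothesis: str) -> float:
        """WER = (substitutions + deletions + insertions) / reference length,
        computed with a standard edit-distance dynamic program over words."""
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edit distance between ref[:i] and hyp[:j]
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i
        for j in range(len(hyp) + 1):
            dp[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                               dp[i][j - 1] + 1,         # insertion
                               dp[i - 1][j - 1] + cost)  # substitution
        return dp[len(ref)][len(hyp)] / len(ref)

    # One substitution in a four-word reference -> 0.25
    print(word_error_rate("the quick brown fox", "the quick brown box"))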

References

  1. ^ "Learn More - The Algorithmic Justice League". www.ajl.org. Retrieved 12 December 2020.
  2. ^ Metz, Rachel. "These high school students are fighting for ethical AI". CNN. Retrieved 15 November 2021.
  3. ^ Lee, Jennifer (8 February 2020). "When Bias Is Coded Into Our Technology". NPR.org. Retrieved 12 December 2020.
  4. ^ Trahan, Erin (18 November 2020). "Documentary 'Coded Bias' Unmasks The Racism Of Artificial Intelligence". www.wbur.org. Retrieved 12 December 2020.
  5. ^ Tiku, Nitasha (2 December 2021). "Google fired its star AI researcher one year ago. Now she's launching her own institute". Washington Post. Retrieved 27 December 2021.
  6. ^ Buolamwini, Joy; Gebru, Timnit (2018). "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification" (PDF). Proceedings of the 1st Conference on Fairness, Accountability and Transparency. 81: 77–91. Retrieved 12 December 2020.
  7. ^ "Gender Shades". gendershades.org. Retrieved 12 December 2020.
  8. ^ Buell, Spencer (23 February 2018). "MIT Researcher: AI Has a Race Problem, and We Need to Fix It". Boston Magazine. Retrieved 12 December 2020.
  9. ^ "Announcement - Safe Face Pledge". Safe Face Pledge. 11 December 2018. Archived from the original on 20 January 2021. Retrieved 8 February 2021.
  10. ^ Hao, Karen (12 June 2020). "The two-year fight to stop Amazon from selling face recognition to the police". MIT Technology Review. Retrieved 12 December 2020.
  11. ^ Meyer, David (9 June 2020). "IBM pulls out of facial recognition, fearing racial profiling and mass surveillance". Fortune. Retrieved 12 December 2020.
  12. ^ Burt, Chris (8 June 2020). "Biometrics experts call for creation of FDA-style government body to regulate facial recognition". Biometric Update. Retrieved 12 December 2020.
  13. ^ Rodrigo, Chris Mills (2 July 2020). "Dozens of advocacy groups push for Congress to ban facial recognition technology". TheHill. Retrieved 12 December 2020.
  14. ^ Johnson, Khari (1 April 2020). "Algorithmic Justice League protests bias in voice AI and media coverage". VentureBeat. Retrieved 11 December 2020.
  15. ^ Quach, Katyanna (22 May 2019). "We listened to more than 3 hours of US Congress testimony on facial recognition so you didn't have to go through it". www.theregister.com. Retrieved 12 December 2020.
  16. ^ "Artificial Intelligence: Societal and Ethical Implications | House Committee on Science, Space and Technology". science.house.gov. Retrieved 12 December 2020.
  17. ^ "Decode the Bias & Face Anything | Women in STEM | OLAY". www.olay.com. Retrieved 2021-11-09.
  18. ^ "ORCAA's Report". www.olay.com. Retrieved 2021-11-09.
  19. ^ a b "Algorithmic Vulnerability Bounty Project (AVBP)". www.ajl.org. Retrieved 2021-11-16.
  20. ^ League, Algorithmic Justice (2021-08-04). "Happy Hacker Summer Camp Season!". Medium. Retrieved 2021-11-16.
  21. ^ a b "Voicing Erasure". www.ajl.org. Retrieved 2021-11-16.
