Margaret Mitchell (scientist)

From Wikipedia, the free encyclopedia
Margaret Mitchell
Born: United States
Alma mater: University of Aberdeen (PhD in Computer Science); University of Washington (MSc in Computational Linguistics)
Known for: Algorithmic bias; fairness in machine learning; computer vision; natural language processing
Scientific career
Fields: Computer science
Institutions: Google; Microsoft Research; Johns Hopkins University
Thesis: Generating Reference to Visible Objects (2012)
Website: Personal website

Margaret Mitchell is a computer scientist who works on algorithmic bias and fairness in machine learning. She is best known for her work on automatically removing undesired biases concerning demographic groups from machine learning models,[1] as well as on more transparent reporting of their intended use.[2]

Education

Mitchell obtained a bachelor's degree in Linguistics from Reed College, Portland, Oregon, in 2005. After working as a research assistant at the OGI School of Science and Engineering for two years, she obtained a master's degree in Computational Linguistics from the University of Washington in 2009. She then enrolled in a PhD program at the University of Aberdeen, where she wrote a doctoral thesis on the topic of Generating Reference to Visible Objects,[3] graduating in 2013.

Career and research

In 2012, Mitchell joined the Human Language Technology Center of Excellence at Johns Hopkins University as a postdoctoral researcher, before taking up a position at Microsoft Research in 2013.[4] She later joined Google, where she founded and co-led the Ethical Artificial Intelligence team with Timnit Gebru until her dismissal in February 2021.

Mitchell is best known for her work on fairness in machine learning and methods for mitigating algorithmic bias. This includes introducing the concept of 'Model Cards' for more transparent model reporting,[2] and methods for debiasing machine learning models using adversarial learning.[1] In this framework, a predictor is trained jointly with an adversary that attempts to infer a variable for the group of interest from the predictor's output; penalizing the predictor for whatever the adversary can recover reduces the bias encoded in the model.[5]
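The adversarial setup described above can be illustrated with a minimal sketch. This is not the paper's implementation: the logistic models, the synthetic data, and every hyperparameter below are illustrative assumptions, showing only the core idea of a predictor descending its own loss while ascending the adversary's.

```python
import numpy as np

# Hedged sketch of adversarial debiasing in the spirit of Zhang,
# Lemoine & Mitchell (2018). A predictor learns y from x while an
# adversary tries to recover the protected attribute z from the
# predictor's output; the predictor is penalized for anything the
# adversary can use. All names and hyperparameters are illustrative.

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Synthetic data: z is the protected attribute, y the label,
# and the second feature of x leaks z.
n = 2000
z = rng.integers(0, 2, n)
y = rng.integers(0, 2, n)
x = np.column_stack([
    y + 0.5 * rng.standard_normal(n),   # genuinely predictive feature
    z + 0.5 * rng.standard_normal(n),   # feature that leaks z
])

w = np.zeros(2)          # predictor weights (logistic regression)
u, c = 0.0, 0.0          # adversary weight and bias (input: y_hat)
lr, alpha = 0.1, 1.0     # learning rate and adversary strength

for _ in range(200):
    # Predictor forward pass.
    y_hat = sigmoid(x @ w)

    # Adversary forward pass: predict z from y_hat alone.
    z_hat = sigmoid(u * y_hat + c)

    # Gradient of the prediction loss w.r.t. predictor weights.
    g_pred = x.T @ (y_hat - y) / n

    # Gradient of the adversary's loss w.r.t. predictor weights,
    # back-propagated through y_hat.
    g_adv = x.T @ ((z_hat - z) * u * y_hat * (1 - y_hat)) / n

    # Predictor descends its own loss and ascends the adversary's
    # (gradient reversal), hiding information about z.
    w -= lr * (g_pred - alpha * g_adv)

    # Adversary descends its own loss.
    u -= lr * np.mean((z_hat - z) * y_hat)
    c -= lr * np.mean(z_hat - z)

probs = sigmoid(x @ w)
```

With the adversary active, gradient pressure keeps the predictor from leaning on the z-leaking feature, while the genuinely predictive feature still drives accuracy on y.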

At Microsoft, Mitchell was the research lead of the Seeing AI project, an app that offers support for the visually impaired by narrating texts and images.[6] In February 2018, she gave a TED talk on 'How we can build AI to help humans, not hurt us'.[7]

In November 2016, she became a Senior Research Scientist at Google Research and Machine Intelligence.[1]

In February 2021, Google terminated her employment.[8][9][10] Her dismissal followed a five-week investigation, with Google claiming that she had violated the company's code of conduct and security policies; it came only a few weeks after the controversial departure of her team co-lead Timnit Gebru in December 2020.[11] Prior to her dismissal, Mitchell had been a vocal advocate for diversity at Google and had voiced concerns about research censorship at the company.[12]

Leadership

Mitchell is a member of the Partnership on AI and an advocate for diversity in technology. She is a co-founder of Widening NLP,[13] a community of women and other under-represented researchers working in natural language processing, and a special interest group within the Association for Computational Linguistics.

References

  1. ^ a b Zhang, Brian Hu; Lemoine, Blake; Mitchell, Margaret (2018-12-01). "Mitigating Unwanted Biases with Adversarial Learning". Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. AAAI/ACM Conference on AI, Ethics, and Society. pp. 220–229. doi:10.1145/3278721.3278779.
  2. ^ a b Mitchell, Margaret; Wu, Simone; Zaldivar, Andrew; Barnes, Parker; Vasserman, Lucy; Hutchinson, Ben; Spitzer, Elena; Raji, Inioluwa Deborah; Gebru, Timnit (2019-01-29). "Model Cards for Model Reporting". Proceedings of the Conference on Fairness, Accountability, and Transparency. Conference on Fairness, Accountability, and Transparency. arXiv:1810.03993. doi:10.1145/3287560.3287596.
  3. ^ Mitchell, Margaret (2013). Generating Reference to Visible Objects (PDF) (PhD). University of Aberdeen.
  4. ^ Mitchell, Margaret (February 14, 2017). "Margaret Mitchell (Google Research) "Algorithmic Bias in Artificial Intelligence: The Seen and Unseen Factors Influencing Machine Perception of Images and Language"". Johns Hopkins. Retrieved November 9, 2021.
  5. ^ Zhang, Brian Hu; Lemoine, Blake; Mitchell, Margaret (2018-12-27). "Mitigating Unwanted Biases with Adversarial Learning". Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. New Orleans, LA, USA: ACM. pp. 335–340. doi:10.1145/3278721.3278779. ISBN 978-1-4503-6012-8.
  6. ^ "Seeing AI in New Languages". Microsoft. Retrieved February 20, 2021.
  7. ^ "Margaret Mitchell's TED talk". TED. February 2018. Retrieved February 20, 2021.
  8. ^ "Google fires Margaret Mitchell, another top researcher on its AI ethics team". The Guardian. February 20, 2021. Retrieved February 20, 2021.
  9. ^ "Margaret Mitchell: Google fires AI ethics founder". BBC. February 20, 2021. Retrieved February 20, 2021.
  10. ^ "Google fires Ethical AI lead Margaret Mitchell". VentureBeat. February 20, 2021. Retrieved February 20, 2021.
  11. ^ Metz, Cade (3 December 2020). "Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I." The New York Times. Archived from the original on 11 December 2020. Retrieved 12 December 2020.
  12. ^ Osborne, Charlie. "Google fires top ethical AI expert Margaret Mitchell". ZDNet. Retrieved 2021-03-22.
  13. ^ Verma, Sukriti (2021-02-23). "AI Ethics Founder of Google has been Fired : The Reason and everything you need to know". Stanford Arts Review. Retrieved 2021-03-22.