This repository hosts code and datasets relating to Responsible NLP projects from Meta AI.
- Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, and Adina Williams. “I’m sorry to hear that”: Finding bias in language models with a holistic descriptor dataset. 2022.
  - Code to generate the HolisticBias dataset, which combines nearly 600 demographic descriptor terms into over 450k sentence prompts
  - Code to calculate BiasDiff, a metric of the amount of bias in a language model, defined over HolisticBias demographic terms
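As an illustration only (this is not the repository's actual API), prompts of the kind described above can be formed by slotting demographic descriptor terms into sentence templates; the descriptor list, templates, and noun below are toy placeholders:

```python
# Toy sketch of descriptor-in-template prompt generation, in the spirit of
# HolisticBias. The descriptors, templates, and noun here are illustrative
# placeholders, not the dataset's actual contents.
from itertools import product

descriptors = ["Deaf", "left-handed", "Buddhist"]  # toy subset
templates = [
    "I am {noun_phrase}.",
    "I'm {noun_phrase}.",
    "It's hard being {noun_phrase}.",
]
noun = "person"

# Cross every descriptor with every template to form sentence prompts.
prompts = [
    template.format(noun_phrase=f"a {descriptor} {noun}")
    for descriptor, template in product(descriptors, templates)
]

for prompt in prompts:
    print(prompt)
```

The full dataset scales this same cross-product idea to hundreds of descriptors, multiple noun phrases, and many templates, which is how a few hundred terms yield hundreds of thousands of prompts.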
See the CONTRIBUTING file for how to help out, and see LICENSE for license information.