[ACL 2020] Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation

Tianlu Wang, Xi Victoria Lin, Nazneen Fatema Rajani, Bryan McCann, Vicente Ordonez, Caiming Xiong

Abstract

Word embeddings derived from human-generated corpora inherit strong gender bias which can be further amplified by downstream models. Some commonly adopted debiasing approaches, including the seminal Hard Debias algorithm (Bolukbasi et al., 2016), apply post-processing procedures that project pre-trained word embeddings into a subspace orthogonal to an inferred gender subspace. We discover that semantic-agnostic corpus regularities such as word frequency captured by the word embeddings negatively impact the performance of these algorithms. We propose a simple but effective technique, Double-Hard Debias, which purifies the word embeddings against such corpus regularities prior to inferring and removing the gender subspace. Experiments on three bias mitigation benchmarks show that our approach preserves the distributional semantics of the pre-trained word embeddings while reducing gender bias to a significantly larger degree than prior approaches.

Requirements

Data

  • Word Embeddings: Please download the embeddings debiased by our Double-Hard Debias, along with the other word embeddings, from here and save them into the data folder (see the loading sketch after this list).
  • Special word lists: You can find all word lists used in this project in the data folder.
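
For convenience, here is a minimal sketch of loading GloVe-format text embeddings from the data folder. The filename below is a placeholder, not the actual name of the downloaded file.

import numpy as np

def load_glove(path):
    # Read a GloVe-style text file: one word per line, followed by its vector.
    word2vec = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word2vec[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return word2vec

vecs = load_glove("data/glove_double_hard_debiased.txt")  # hypothetical filename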

Double-Hard Debias

You can find the detailed steps for implementing Double-Hard Debias in GloVe_Debias. A sketch of the core idea follows.
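
For orientation only, below is a minimal sketch of the idea, assuming NumPy and the vecs dictionary from the loading sketch above; the function and variable names are illustrative, not the repo's API. The approach decenters the embeddings, computes the dominating principal components (which largely encode word frequency rather than semantics, following Mu and Viswanath, 2018), removes the most harmful one, and only then applies Hard Debias along the gender direction.

import numpy as np

def hard_debias(E, gender_dir):
    # Hard Debias step: project every row of E off the (unit) gender direction.
    return E - np.outer(E @ gender_dir, gender_dir)

def double_hard_debias(E, freq_dir, gender_dir):
    # "Double" step: decenter and drop a frequency-encoding direction first.
    Ec = E - E.mean(axis=0)
    Ec = Ec - np.outer(Ec @ freq_dir, freq_dir)
    return hard_debias(Ec, gender_dir)

# Stack vectors and compute principal components of the decentered matrix;
# the top components tend to capture corpus frequency, not semantics.
words = list(vecs.keys())
E = np.stack([vecs[w] for w in words])
pcs = np.linalg.svd(E - E.mean(axis=0), full_matrices=False)[2]  # rows = unit-norm PCs

# Gender direction from a definitional pair (the paper aggregates several
# pairs via PCA; a single pair keeps this sketch short).
g = vecs["he"] - vecs["she"]
g /= np.linalg.norm(g)

# The paper selects which principal component to remove by checking, for each
# candidate, how well male- and female-biased words still cluster (k-means
# accuracy) after debiasing; here we only show the final transformation.
debiased = double_hard_debias(E, pcs[1], g)  # PC index is found by that selection step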

Evaluation

  1. We evaluated Double-Hard Debias and other debiasing approaches on GloVe and Word2Vec embeddings. Please check the results in GloVe_Eval and Word2Vec_Eval (a quick sanity check is also sketched after this list).
  2. If you want to replicate the results on coreference systems, we recommend that you:
    • Download the word embeddings.
    • Refer to e2e-coref for training an end-to-end coreference system.
    • Refer to WinoBias for using the WinoBias dataset to evaluate gender bias in coreference systems.
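
As a quick, informal sanity check (not one of the benchmarks above), you can compare how strongly profession words lean toward "he" versus "she" before and after debiasing. This sketch assumes the vecs dictionary from the loading sketch; the word list is illustrative.

import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def gender_lean(vecs, words):
    # Positive values lean toward "he", negative toward "she".
    return {w: cosine(vecs[w], vecs["he"]) - cosine(vecs[w], vecs["she"])
            for w in words}

professions = ["nurse", "engineer", "teacher", "programmer"]  # illustrative list
print(gender_lean(vecs, professions))  # run on original vs. debiased embeddings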

Citing

If you find our paper/code useful, please consider citing:

@InProceedings{wang2020doublehard,
  author    = {Tianlu Wang and Xi Victoria Lin and Nazneen Fatema Rajani and Bryan McCann and Vicente Ordonez and Caiming Xiong},
  title     = {Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation},
  booktitle = {Association for Computational Linguistics (ACL)},
  month     = {July},
  year      = {2020}
}

Kudos

This project builds on gender_bias_lipstick and word-embeddings-benchmarks. Thanks to their authors for their efforts!
