
# Interpreting Language Models Through Knowledge Graph Extraction

**Idea:** How do we interpret what a language model learns at various stages of training? Language models have recently been described as open knowledge bases. We generate knowledge graphs by extracting relation triples from masked language models at sequential training epochs, or across architecture variants, to examine the knowledge acquisition process.
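As a rough illustration of the probing step, the sketch below masks the object of a relation and lets a pretrained masked LM fill it in; each prediction is a candidate object for a (subject, relation, object) triple. This is a minimal sketch of the idea, not the repository's extraction code, and the cloze prompt is a hypothetical example (the real prompts come from the datasets below).

```python
# Minimal sketch: probe a masked LM for a relation's object.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Hypothetical cloze prompt; SQuAD / Google-RE supply the real sentences.
predictions = unmasker("Dante was born in [MASK].")
for p in predictions[:3]:
    # Each filled token is a candidate object for the triple
    # ("Dante", "born in", p["token_str"]).
    print(p["token_str"], round(p["score"], 3))
```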

**Datasets:** SQuAD, Google-RE (3 flavors)

**Models:** BERT, RoBERTa, DistilBERT, and RoBERTa trained from scratch

**Authors:** Vinitra Swamy, Angelika Romanou, Martin Jaggi

This repository is the official implementation of the NeurIPS 2021 XAI4Debugging paper titled "Interpreting Language Models Through Knowledge Graph Extraction". Found this work useful? Please cite our paper.

## Quick Start Guide

### Pretrained Model (BERT, DistilBERT, RoBERTa) -> Knowledge Graph

1. Install requirements and clone the repository:
```bash
git clone https://github.com/epfml/interpret-lm-knowledge.git
pip install git+https://github.com/huggingface/transformers
pip install textacy
cd interpret-lm-knowledge/scripts
```
2. Generate knowledge graphs and dataframes:
```bash
python run_knowledge_graph_experiments.py <dataset> <model> <use_spacy>
```
   e.g. `squad Bert spacy`, or `re-place-birth Roberta`

Optional parameters:

```
dataset=squad      # one of "squad", "re-place-birth", "re-date-birth", "re-place-death"
model=Roberta      # one of "Bert", "Roberta", "DistilBert"
extractor=spacy    # one of "spacy", "textacy", "custom"
```

See the `run_lm_experiments` notebook for examples.
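For a sense of what the `textacy` extractor option involves conceptually, here is a minimal sketch that pulls subject-verb-object triples out of text with textacy's public SVO extractor. The repository's own extraction code may differ, and the example sentences are hypothetical.

```python
# Sketch: turn text into (subject, verb, object) triples, i.e. KG edges.
import spacy
import textacy.extract

# Assumes the small English spaCy model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("Dante wrote the Divine Comedy. Dante admired Virgil.")

for triple in textacy.extract.subject_verb_object_triples(doc):
    # Each SVOTriple, e.g. (Dante, wrote, Divine Comedy), becomes an edge
    # in the extracted knowledge graph.
    print(triple)
```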

### Train an LM from Scratch -> Knowledge Graph

1. Install requirements and clone the repository:
```bash
!pip install git+https://github.com/huggingface/transformers
!pip list | grep -E 'transformers|tokenizers'
!pip install textacy
```
2. Run `wikipedia_train_from_scratch_lm.ipynb`.
3. As shown in the last cell of the notebook, run the KG generation experiments with:
```python
from run_training_kg_experiments import *
run_experiments(tokenizer, model, unmasker, "Roberta3e")
```
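To examine knowledge acquisition over training, the same call can be repeated on successive checkpoints. The loop below is a hypothetical sketch: the checkpoint directory layout and epoch numbers are assumptions, and only the `run_experiments` call mirrors the notebook.

```python
# Hypothetical loop: regenerate the KG after selected training epochs
# to watch knowledge accumulate, as described in the paper.
from transformers import RobertaForMaskedLM, RobertaTokenizerFast, pipeline
from run_training_kg_experiments import run_experiments

for epoch in (1, 3, 7):
    ckpt = f"./checkpoints/epoch-{epoch}"  # assumed checkpoint layout
    tokenizer = RobertaTokenizerFast.from_pretrained(ckpt)
    model = RobertaForMaskedLM.from_pretrained(ckpt)
    unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)
    run_experiments(tokenizer, model, unmasker, f"Roberta{epoch}e")
```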

## Citations

```bibtex
@inproceedings{swamy2021interpreting,
  author = {Swamy, Vinitra and Romanou, Angelika and Jaggi, Martin},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS), 1st Workshop on eXplainable AI Approaches for Debugging and Diagnosis},
  title = {Interpreting Language Models Through Knowledge Graph Extraction},
  year = {2021}
}
```
