Bump dev version, update tutorial
gsarti committed Jan 14, 2024
1 parent ca6e2e8 commit 8d89b11
Showing 6 changed files with 1,102 additions and 1,028 deletions.
4 changes: 2 additions & 2 deletions docs/source/conf.py
Original file line number Diff line number Diff line change
Expand Up @@ -25,9 +25,9 @@
author = "The Inseq Team"

# The short X.Y version
version = "0.5"
version = "0.6"
# The full version, including alpha/beta/rc tags
release = "0.5.0"
release = "0.6.0.dev0"


# Prefix link to point to master, comment this during version release and uncomment below line
Expand Down
8 changes: 3 additions & 5 deletions examples/inseq_tutorial.ipynb
Expand Up @@ -16,9 +16,7 @@
"source": [
"%%capture\n",
"# Run in Colab to install local packages\n",
"%pip install bitsandbytes accelerate\n",
"%pip install git+https://github.com/huggingface/transformers.git\n",
"%pip install git+https://github.com/inseq-team/inseq.git"
"%pip install bitsandbytes accelerate transformers inseq"
]
},
{
Expand All @@ -28,7 +26,7 @@
"source": [
"This tutorial showcases how to use the [inseq](https://github.com/inseq-team/inseq) library for various interpretability analyses, with a focus on advanced use cases such as aggregation and contrastive attribution. The tutorial was adapted from the [LCL'23 tutorial on Interpretability for NLP](https://github.com/gsarti/lcl23-xnlm-lab) with the goal of updating it whenever new functionalities or breaking changes are introduced.\n",
"\n",
"⚠️ **IMPORTANT** ⚠️ : `inseq` is a very new library and under active development. Current results were obtained using the latest development versions on June 30, 2023. If you find any issue, or you are not able to reproduce the results shown here, we'd love if you could open an issue on [the inseq Github repository](https://github.com/inseq-team/inseq) so that we could update the tutorial accordingly! 🙂\n",
"⚠️ **IMPORTANT** ⚠️ : `inseq` is a very new library and under active development. Current results were obtained using the latest inseq release. If you find any issue, or you are not able to reproduce the results shown here, we'd love if you could open an issue on [the inseq Github repository](https://github.com/inseq-team/inseq) so that we could update the tutorial accordingly! 🙂\n",
"\n",
"# Introduction: Feature Attribution for NLP\n",
"\n",
Expand Down Expand Up @@ -86,7 +84,7 @@
"\n",
"[Inseq](https://github.com/inseq-team/inseq) ([Sarti et al., 2023](https://arxiv.org/abs/2302.13942)) is a toolkit based on the [🤗 Transformers](https://github.com/huggingface/transformers) and [Captum](https://captum.ai/docs/introduction) libraries for interpreting language generation models using feature attribution methods. Inseq allows you to analyze the behavior of a language generation model by computing the importance of each input token for each token in the generated output, using the various categories of attribution methods described in the previous section (use `inseq.list_feature_attribution_methods()` to list all available methods). You can refer to the [Inseq paper](https://arxiv.org/abs/2302.13942) for an overview of the tool and some usage examples.\n",
"\n",
"The current version of the library (v0.5.0, December 2023) supports all [`AutoModelForSeq2SeqLM`](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForSeq2SeqLM) (among others, [T5](https://huggingface.co/docs/transformers/model_doc/t5), [Bart](https://huggingface.co/docs/transformers/model_doc/bart) and all >1000 [MarianNMT](https://huggingface.co/docs/transformers/model_doc/marian) MT models) and [AutoModelForCausalLM](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM) (among others, [GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2), [Bloom](https://huggingface.co/docs/transformers/model_doc/bloom) and [LLaMa](https://huggingface.co/docs/transformers/model_doc/llama)), including advanced tools for custom attribution targets and post-processing of attribution matrices.\n",
"Inseq supports all [`AutoModelForSeq2SeqLM`](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForSeq2SeqLM) (among others, [T5](https://huggingface.co/docs/transformers/model_doc/t5), [Bart](https://huggingface.co/docs/transformers/model_doc/bart) and all >1000 [MarianNMT](https://huggingface.co/docs/transformers/model_doc/marian) MT models) and [AutoModelForCausalLM](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM) (among others, [GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2), [Bloom](https://huggingface.co/docs/transformers/model_doc/bloom) and [LLaMa](https://huggingface.co/docs/transformers/model_doc/llama)), including advanced tools for custom attribution targets and post-processing of attribution matrices.\n",
"\n",
"The following code is a \"Hello world\" equivalent in Inseq: in three lines of code, an English-to-Italian machine translation model is loaded alongside an attribution method, attribution is performed, and results are visualized:"
]
Expand Down
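The "Hello world" cell referenced above is collapsed in this diff. A minimal sketch of what it looks like, following the Inseq quickstart API (`inseq.load_model`, `model.attribute`, `out.show`) and assuming an illustrative English-to-Italian MarianNMT checkpoint and the `saliency` attribution method:

```python
import inseq

# Load an English-to-Italian MarianNMT model together with an attribution
# method (here "saliency"; inseq.list_feature_attribution_methods() lists all)
model = inseq.load_model("Helsinki-NLP/opus-mt-en-it", "saliency")

# Generate the translation and compute per-input-token importance scores
# for every generated output token
out = model.attribute("Hello world, here is the inseq library!")

# Visualize the attribution matrix (HTML heatmap in notebooks,
# colored text in a terminal)
out.show()
```

Running this requires downloading the model checkpoint, so it is best executed in the Colab environment the notebook targets.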
