From 07df15c4aac9b3e6d20f3cbea074dbd56702bce8 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Wed, 5 Oct 2022 13:00:03 -0700 Subject: [PATCH] Notebook bug fixes (#5084) (#5085) * Notebook bug fixes Signed-off-by: Virginia Adams * Turned nemo install back on Signed-off-by: Virginia Adams * reverted notebook Signed-off-by: Virginia Adams * Updated one line in entity linking nb Signed-off-by: Virginia Adams Signed-off-by: Virginia Adams Co-authored-by: Eric Harper Signed-off-by: Virginia Adams Co-authored-by: Virginia Adams <78445382+vadam5@users.noreply.github.com> Co-authored-by: Eric Harper --- tutorials/nlp/Entity_Linking_Medical.ipynb | 1262 ++++++++--------- .../nlp/Multitask_Prompt_and_PTuning.ipynb | 14 +- 2 files changed, 639 insertions(+), 637 deletions(-) diff --git a/tutorials/nlp/Entity_Linking_Medical.ipynb b/tutorials/nlp/Entity_Linking_Medical.ipynb index e3e51854194a..22ab775a790d 100644 --- a/tutorials/nlp/Entity_Linking_Medical.ipynb +++ b/tutorials/nlp/Entity_Linking_Medical.ipynb @@ -1,632 +1,632 @@ { - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "\"\"\"\n", - "You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.\n", - "\n", - "Instructions for setting up Colab are as follows:\n", - "1. Open a new Python 3 notebook.\n", - "2. Import this notebook from GitHub (File -> Upload Notebook -> \"GITHUB\" tab -> copy/paste GitHub URL)\n", - "3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select \"GPU\" for hardware accelerator)\n", - "4. Run this cell to set up dependencies.\n", - "\"\"\"\n", - "\n", - "## Install NeMo if using google collab or if its not installed locally\n", - "BRANCH = 'main'\n", - "!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "## Install dependencies\n", - "!pip install wget\n", - "!pip install faiss-gpu" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import faiss\n", - "import torch\n", - "import wget\n", - "import os\n", - "import numpy as np\n", - "import pandas as pd\n", - "\n", - "from omegaconf import OmegaConf\n", - "from pytorch_lightning import Trainer\n", - "from IPython.display import display\n", - "from tqdm import tqdm\n", - "\n", - "from nemo.collections import nlp as nemo_nlp\n", - "from nemo.utils.exp_manager import exp_manager" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Entity Linking" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Task Description\n", - "[Entity linking](https://en.wikipedia.org/wiki/Entity_linking) is the process of connecting concepts mentioned in natural language to their canonical forms stored in a knowledge base. For example, say a knowledge base contained the entity 'ID3452 influenza' and we wanted to process some natural language containing the sentence \"The patient has flu like symptoms\". An entity linking model would match the word 'flu' to the knowledge base entity 'ID3452 influenza', allowing for disambiguation and normalization of concepts referenced in text. Entity linking applications range from helping automate data ingestion to assisting in real time dialogue concept normalization. 
We will be focusing on entity linking in the medical domain for this demo, but the entity linking model, dataset, and training code within NVIDIA NeMo can be applied to other domains like finance and retail.\n", - "\n", - "Within NeMo and this tutorial we use the entity linking approach described in Liu et. al's NAACL 2021 \"[Self-alignment Pre-training for Biomedical Entity Representations](https://arxiv.org/abs/2010.11784v2)\". The main idea behind this approach is to reshape an initial concept embedding space such that synonyms of the same concept are pulled closer together and unrelated concepts are pushed further apart. The concept embeddings from this reshaped space can then be used to build a knowledge base embedding index. This index stores concept IDs mapped to their respective concept embeddings in a format conducive to efficient nearest neighbor search. We can link query concepts to their canonical forms in the knowledge base by performing a nearest neighbor search- matching concept query embeddings to the most similar concepts embeddings in the knowledge base index. \n", - "\n", - "In this tutorial we will be using the [faiss](https://github.com/facebookresearch/faiss) library to build our concept index." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Self Alignment Pretraining\n", - "Self-Alignment pretraining is a second stage pretraining of an existing encoder (called second stage because the encoder model can be further finetuned after this more general pretraining step). The dataset used during training consists of pairs of concept synonyms that map to the same ID. At each training iteration, we only select *hard* examples present in the mini batch to calculate the loss and update the model weights. In this context, a hard example is an example where a concept is closer to an unrelated concept in the mini batch than it is to the synonym concept it is paired with by some margin. I encourage you to take a look at [section 2 of the paper](https://arxiv.org/pdf/2010.11784.pdf) for a more formal and in depth description of how hard examples are selected.\n", - "\n", - "We then use a [metric learning loss](https://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Multi-Similarity_Loss_With_General_Pair_Weighting_for_Deep_Metric_Learning_CVPR_2019_paper.pdf) calculated from the hard examples selected. This loss helps reshape the embedding space. The concept representation space is rearranged to be more suitable for entity matching via embedding cosine similarity. \n", - "\n", - "Now that we have idea of what's going on, let's get started!" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Dataset Preprocessing" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Download data into project directory\n", - "PROJECT_DIR = \".\" #Change if you don't want the current directory to be the project dir\n", - "DATA_DIR = os.path.join(PROJECT_DIR, \"tiny_example_data\")\n", - "\n", - "if not os.path.isdir(os.path.join(DATA_DIR)):\n", - " wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/tiny_example_data.zip',\n", - " os.path.join(PROJECT_DIR, \"tiny_example_data.zip\"))\n", - "\n", - " !unzip {PROJECT_DIR}/tiny_example_data.zip -d {PROJECT_DIR}" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "In this tutorial we will be using a tiny toy dataset to demonstrate how to use NeMo's entity linking model functionality. 
The dataset includes synonyms for 12 medical concepts. Entity phrases with the same ID are synonyms for the same concept. For example, \"*chronic kidney failure*\", \"*gradual loss of kidney function*\", and \"*CKD*\" are all synonyms of concept ID 5. Here's the dataset before preprocessing:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "raw_data = pd.read_csv(os.path.join(DATA_DIR, \"tiny_example_dev_data.csv\"), names=[\"ID\", \"CONCEPT\"], index_col=False)\n", - "print(raw_data)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We've already paired off the concepts for this dataset with the format `ID concept_synonym1 concept_synonym2`. Here are the first ten rows:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "training_data = pd.read_table(os.path.join(DATA_DIR, \"tiny_example_train_pairs.tsv\"), names=[\"ID\", \"CONCEPT_SYN1\", \"CONCEPT_SYN2\"], delimiter='\\t')\n", - "print(training_data.head(10))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Use the [Unified Medical Language System (UMLS)](https://www.nlm.nih.gov/research/umls/index.html) dataset for full medical domain entity linking training. The data contains over 9 million entities and is a table of medical concepts with their corresponding concept IDs (CUI). After [requesting a free license and making a UMLS Terminology Services (UTS) account](https://www.nlm.nih.gov/research/umls/index.html), the [entire UMLS dataset](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html) can be downloaded from the NIH's website. If you've cloned the NeMo repo you can run the data processing script located in `examples/nlp/entity_linking/data/umls_dataset_processing.py` on the full dataset. This script will take in the initial table of UMLS concepts and produce a .tsv file with each row formatted as `CUI\\tconcept_synonym1\\tconcept_synonym2`. Once the UMLS dataset .RRF file is downloaded, the script can be run from the `examples/nlp/entity_linking` directory like so: \n", - "```\n", - "python data/umls_dataset_processing.py\n", - "```" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Model Training" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Second stage pretrain a BERT Base encoder on the self-alignment pretraining task (SAP) for improved entity linking. Using a GPU, the model should take 5 minutes or less to train on this example dataset and training progress will be output below the cell." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "#Download config\n", - "wget.download(\"https://raw.githubusercontent.com/vadam5/NeMo/main/examples/nlp/entity_linking/conf/tiny_example_entity_linking_config.yaml\",\n", - " os.path.join(PROJECT_DIR, \"tiny_example_entity_linking_config.yaml\"))\n", - "\n", - "# Load in config file\n", - "cfg = OmegaConf.load(os.path.join(PROJECT_DIR, \"tiny_example_entity_linking_config.yaml\"))\n", - "\n", - "# Set config file variables\n", - "cfg.project_dir = PROJECT_DIR\n", - "cfg.model.nemo_path = os.path.join(PROJECT_DIR, \"tiny_example_sap_bert_model.nemo\")\n", - "cfg.model.train_ds.data_file = os.path.join(DATA_DIR, \"tiny_example_train_pairs.tsv\")\n", - "cfg.model.validation_ds.data_file = os.path.join(DATA_DIR, \"tiny_example_validation_pairs.tsv\")\n", - "\n", - "# remove distributed training flags\n", - "cfg.trainer.strategy = None\n", - "cfg.trainer.accelerator = None" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Initialize the trainer and model\n", - "trainer = Trainer(**cfg.trainer)\n", - "exp_manager(trainer, cfg.get(\"exp_manager\", None))\n", - "model = nemo_nlp.models.EntityLinkingModel(cfg=cfg.model, trainer=trainer)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Train and save the model\n", - "trainer.fit(model)\n", - "model.save_to(cfg.model.nemo_path)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "You can run the script at `examples/nlp/entity_linking/self_alignment_pretraining.py` to train a model on a larger dataset. Run\n", - "\n", - "```\n", - "python self_alignment_pretraining.py project_dir=.\n", - "```\n", - "from the `examples/nlp/entity_linking` directory." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Model Evaluation\n", - "\n", - "Let's evaluate our freshly trained model and compare its performance with a BERT Base encoder that hasn't undergone self-alignment pretraining. We first need to restore our trained model and load our BERT Base Baseline model." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n", - "\n", - "# Restore second stage pretrained model\n", - "sap_model_cfg = cfg\n", - "sap_model_cfg.index.index_save_name = os.path.join(PROJECT_DIR, \"tiny_example_entity_linking_index\")\n", - "sap_model_cfg.index.index_ds.data_file = os.path.join(DATA_DIR, \"tiny_example_index_data.tsv\")\n", - "sap_model = nemo_nlp.models.EntityLinkingModel.restore_from(sap_model_cfg.model.nemo_path).to(device)\n", - "\n", - "# Load original model\n", - "base_model_cfg = OmegaConf.load(os.path.join(PROJECT_DIR, \"tiny_example_entity_linking_config.yaml\"))\n", - "\n", - "# Set train/val datasets to None to avoid loading datasets associated with training\n", - "base_model_cfg.model.train_ds = None\n", - "base_model_cfg.model.validation_ds = None\n", - "base_model_cfg.index.index_save_name = os.path.join(PROJECT_DIR, \"base_model_index\")\n", - "base_model_cfg.index.index_ds.data_file = os.path.join(DATA_DIR, \"tiny_example_index_data.tsv\")\n", - "base_model = nemo_nlp.models.EntityLinkingModel(base_model_cfg.model).to(device)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We are going evaluate our model on a nearest neighbor task using top 1 and top 5 accuracies as our metric. We will be using a tiny example test knowledge base and test queries. For this evaluation we are going to be comparing every test query with every concept vector in our test set knowledge base. We will rank each item in the knowledge base by its cosine similarity with the test query. We'll then compare the IDs of the predicted most similar test knowledge base concepts with our ground truth query IDs to calculate top 1 and top 5 accuracies. For this metric higher is better." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Helper function to get data embeddings\n", - "def get_embeddings(model, dataloader):\n", - " embeddings, cids = [], []\n", - "\n", - " with torch.no_grad():\n", - " for batch in tqdm(dataloader):\n", - " input_ids, token_type_ids, attention_mask, batch_cids = batch\n", - " batch_embeddings = model.forward(input_ids=input_ids.to(device), \n", - " token_type_ids=token_type_ids.to(device), \n", - " attention_mask=attention_mask.to(device))\n", - "\n", - " # Accumulate index embeddings and their corresponding IDs\n", - " embeddings.extend(batch_embeddings.cpu().detach().numpy())\n", - " cids.extend(batch_cids)\n", - " \n", - " return embeddings, cids" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "def evaluate(model, test_kb, test_queries, ks):\n", - " # Initialize knowledge base and query data loaders\n", - " test_kb_dataloader = model.setup_dataloader(test_kb, is_index_data=True)\n", - " test_query_dataloader = model.setup_dataloader(test_queries, is_index_data=True)\n", - " \n", - " # Get knowledge base and query embeddings\n", - " test_kb_embs, test_kb_cids = get_embeddings(model, test_kb_dataloader)\n", - " test_query_embs, test_query_cids = get_embeddings(model, test_query_dataloader)\n", - "\n", - " # Calculate the cosine distance between each query and knowledge base concept\n", - " score_matrix = np.matmul(np.array(test_query_embs), np.array(test_kb_embs).T)\n", - " accs = {k : 0 for k in ks}\n", - " \n", - " # Compare the knowledge base IDs of the knowledge base entities with \n", - " # the smallest cosine distance from the query \n", - " for query_idx in tqdm(range(len(test_query_cids))):\n", - " query_emb = test_query_embs[query_idx]\n", - " query_cid = test_query_cids[query_idx]\n", - " query_scores = score_matrix[query_idx]\n", - "\n", - " for k in ks:\n", - " topk_idxs = np.argpartition(query_scores, -k)[-k:]\n", - " topk_cids = [test_kb_cids[idx] for idx in topk_idxs]\n", - " \n", - " # If the correct query ID is amoung the top k closest kb IDs\n", - " # the model correctly linked the entity\n", - " match = int(query_cid in topk_cids)\n", - " accs[k] += match\n", - "\n", - " for k in ks:\n", - " accs[k] /= len(test_query_cids)\n", - " \n", - " return accs" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Create configs for our test data\n", - "test_kb = OmegaConf.create({\n", - " \"data_file\": os.path.join(DATA_DIR, \"tiny_example_test_kb.tsv\"),\n", - " \"max_seq_length\": 128,\n", - " \"batch_size\": 10,\n", - " \"shuffle\": False,\n", - "})\n", - "\n", - "test_queries = OmegaConf.create({\n", - " \"data_file\": os.path.join(DATA_DIR, \"tiny_example_test_queries.tsv\"),\n", - " \"max_seq_length\": 128,\n", - " \"batch_size\": 10,\n", - " \"shuffle\": False,\n", - "})\n", - "\n", - "ks = [1, 5]\n", - "\n", - "# Evaluate both models on our test data\n", - "base_accs = evaluate(base_model, test_kb, test_queries, ks)\n", - "base_accs[\"Model\"] = \"BERT Base Baseline\"\n", - "\n", - "sap_accs = evaluate(sap_model, test_kb, test_queries, ks)\n", - "sap_accs[\"Model\"] = \"BERT + SAP\"\n", - "\n", - "print(\"Top 1 and Top 5 Accuracy Comparison:\")\n", - "results_df = pd.DataFrame([base_accs, sap_accs], columns=[\"Model\", 1, 5])\n", - "results_df = results_df.style.set_properties(**{'text-align': 'left', 
}).set_table_styles([dict(selector='th', props=[('text-align', 'left')])])\n", - "display(results_df)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The purpose of this section was to show an example of evaluating your entity linking model. This evaluation set contains very little data, and no serious conclusions should be drawn about model performance. Top 1 accuracy should be between 0.7 and 1.0 for both models and top 5 accuracy should be between 0.8 and 1.0. When evaluating a model trained on a larger dataset, you can use a nearest neighbors index to speed up the evaluation time." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Building an Index" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "To qualitatively observe the improvement we gain from the second stage pretraining, let's build two indices. One will be built with BERT base embeddings before self-alignment pretraining and one will be built with the model we just trained. Our knowledge base in this tutorial will be in the same domain and have some overlapping concepts as the training set. This data file is formatted as `ID\\tconcept`." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The `EntityLinkingDataset` class can load the data used for training the entity linking encoder as well as for building the index if the `is_index_data` flag is set to true. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "def build_index(cfg, model):\n", - " # Setup index dataset loader\n", - " index_dataloader = model.setup_dataloader(cfg.index.index_ds, is_index_data=True)\n", - " \n", - " # Get index dataset embeddings\n", - " embeddings, _ = get_embeddings(model, index_dataloader)\n", - " \n", - " # Train IVFFlat index using faiss\n", - " embeddings = np.array(embeddings)\n", - " quantizer = faiss.IndexFlatL2(cfg.index.dims)\n", - " index = faiss.IndexIVFFlat(quantizer, cfg.index.dims, cfg.index.nlist)\n", - " index = faiss.index_cpu_to_all_gpus(index)\n", - " index.train(embeddings)\n", - " \n", - " # Add concept embeddings to index\n", - " for i in tqdm(range(0, embeddings.shape[0], cfg.index.index_batch_size)):\n", - " index.add(embeddings[i:i+cfg.index.index_batch_size])\n", - "\n", - " # Save index\n", - " faiss.write_index(faiss.index_gpu_to_cpu(index), cfg.index.index_save_name)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": true - }, - "outputs": [], - "source": [ - "build_index(sap_model_cfg, sap_model.to(device))\n", - "build_index(base_model_cfg, base_model.to(device))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Entity Linking via Nearest Neighbor Search" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Now it's time to query our indices! We are going to query both our index built with embeddings from BERT Base, and our index with embeddings built from the SAP BERT model we trained. Our sample query phrases will be \"*high blood sugar*\" and \"*head pain*\". \n", - "\n", - "To query our indices, we first need to get the embedding of each query from the corresponding encoder model. We can then pass these query embeddings into the faiss index which will perform a nearest neighbor search, using cosine distance to compare the query embedding with embeddings present in the index. 
Once we get a list of knowledge base index concept IDs most closely matching our query, all that is left to do is map the IDs to a representative string describing the concept. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "def query_index(cfg, model, index, queries, id2string):\n", - " # Get query embeddings from our entity linking encoder model\n", - " query_embs = get_query_embedding(queries, model).cpu().detach().numpy()\n", - " \n", - " # Use query embedding to find closest concept embedding in knowledge base\n", - " distances, neighbors = index.search(query_embs, cfg.index.top_n)\n", - " \n", - " # Get the canonical strings corresponding to the IDs of the query's nearest neighbors in the kb \n", - " neighbor_concepts = [[id2string[concept_id] for concept_id in query_neighbor] \\\n", - " for query_neighbor in neighbors]\n", - " \n", - " # Display most similar concepts in the knowledge base. \n", - " for query_idx in range(len(queries)):\n", - " print(f\"\\nThe most similar concepts to {queries[query_idx]} are:\")\n", - " for cid, concept, dist in zip(neighbors[query_idx], neighbor_concepts[query_idx], distances[query_idx]):\n", - " print(cid, concept, 1 - dist)\n", - "\n", - " \n", - "def get_query_embedding(queries, model):\n", - " # Tokenize our queries\n", - " model_input = model.tokenizer(queries,\n", - " add_special_tokens = True,\n", - " padding = True,\n", - " truncation = True,\n", - " max_length = 512,\n", - " return_token_type_ids = True,\n", - " return_attention_mask = True)\n", - " \n", - " # Pass tokenized input into model\n", - " query_emb = model.forward(input_ids=torch.LongTensor(model_input[\"input_ids\"]).to(device),\n", - " token_type_ids=torch.LongTensor(model_input[\"token_type_ids\"]).to(device),\n", - " attention_mask=torch.LongTensor(model_input[\"attention_mask\"]).to(device))\n", - " \n", - " return query_emb" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Load indices\n", - "sap_index = faiss.read_index(sap_model_cfg.index.index_save_name)\n", - "base_index = faiss.read_index(base_model_cfg.index.index_save_name)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Map concept IDs to one canonical string\n", - "index_data = open(sap_model_cfg.index.index_ds.data_file, \"r\", encoding='utf-8-sig')\n", - "id2string = {}\n", - "\n", - "for line in index_data:\n", - " cid, concept = line.split(\"\\t\")\n", - " id2string[int(cid) - 1] = concept.strip()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "id2string" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Some sample queries\n", - "queries = [\"high blood sugar\", \"head pain\"]\n", - "\n", - "# Query BERT Base\n", - "print(\"BERT Base output before Self Alignment Pretraining:\")\n", - "query_index(base_model_cfg, base_model, base_index, queries, id2string)\n", - "print(\"\\n\" + \"-\" * 50 + \"\\n\")\n", - "\n", - "# Query SAP BERT\n", - "print(\"SAP BERT output after Self Alignment Pretraining:\")\n", - "query_index(sap_model_cfg, sap_model, sap_index, queries, id2string)\n", - "print(\"\\n\" + \"-\" * 50 + \"\\n\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Even after only training on this tiny amount of data, the qualitative performance 
boost from self-alignment pretraining is visible. The baseline model links \"*high blood sugar*\" to the entity \"*6 diabetes*\" while our SAP BERT model accurately links \"*high blood sugar*\" to \"*Hyperinsulinemia*\". Similarly, \"*head pain*\" and \"*Myocardial infraction*\" are not the same concept, but \"*head pain*\" and \"*Headache*\" are." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "For larger knowledge bases keeping the default embedding size might be too large and cause out of memory issues. You can apply PCA or some other dimensionality reduction method to your data to reduce its memory footprint. Code for creating a text file of all the UMLS entities in the correct format needed to build an index and creating a dictionary mapping concept ids to canonical concept strings can be found here `examples/nlp/entity_linking/data/umls_dataset_processing.py`. \n", - "\n", - "The code for extracting knowledge base concept embeddings, training and applying a PCA transformation to the embeddings, building a faiss index and querying the index from the command line is located at `examples/nlp/entity_linking/build_index.py` and `examples/nlp/entity_linking/query_index.py`. \n", - "\n", - "If you've cloned the NeMo repo, both of these steps can be run as follows on the command line from the `examples/nlp/entity_linking/` directory.\n", - "\n", - "```\n", - "python data/umls_dataset_processing.py --index\n", - "python build_index.py --restore\n", - "python query_index.py --restore\n", - "```\n", - "By default the project directory will be \".\" but can be changed by adding the flag `--project_dir=` after each of the above commands. Intermediate steps of the index building process are saved. In the occurrence of an error, previously completed steps do not need to be rerun. " - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Command Recap" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Here is a recap of the commands and steps to repeat this process on the full UMLS dataset. \n", - "\n", - "1) Download the UMLS dataset file `MRCONSO.RRF` from the NIH website and place it in the `examples/nlp/entity_linking/data` directory.\n", - "\n", - "2) Run the following commands from the `examples/nlp/entity_linking` directory\n", - "```\n", - "python data/umls_dataset_processing.py\n", - "python self_alignment_pretraining.py project_dir=. \n", - "python data/umls_dataset_processing.py --index\n", - "python build_index.py --restore\n", - "python query_index.py --restore\n", - "```\n", - "The model will take ~24hrs to train on two GPUs and ~48hrs to train on one GPU. By default the project directory will be \".\" but can be changed by adding the flag `--project_dir=` after each of the above commands and changing `project_dir=` in the `self_alignment_pretraining.py` command. If you change the project directory, you should also move the `MRCONOSO.RRF` file to a `data` sub directory within the one you've specified. " - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "As mentioned in the introduction, entity linking within NVIDIA NeMo is not limited to the medical domain. The same data processing and training steps can be applied to a variety of domains and use cases. You can edit the datasets used as well as training and loss function hyperparameters within your config file to better suit your domain." 
- ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.8.12" - } - }, - "nbformat": 4, - "nbformat_minor": 4 -} \ No newline at end of file + "cells": [ + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "\"\"\"\n", + "You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.\n", + "\n", + "Instructions for setting up Colab are as follows:\n", + "1. Open a new Python 3 notebook.\n", + "2. Import this notebook from GitHub (File -> Upload Notebook -> \"GITHUB\" tab -> copy/paste GitHub URL)\n", + "3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select \"GPU\" for hardware accelerator)\n", + "4. Run this cell to set up dependencies.\n", + "\"\"\"\n", + "\n", + "## Install NeMo if using google collab or if its not installed locally\n", + "BRANCH = 'r1.12.0'\n", + "!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "## Install dependencies\n", + "!pip install wget\n", + "!pip install faiss-gpu" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import faiss\n", + "import torch\n", + "import wget\n", + "import os\n", + "import numpy as np\n", + "import pandas as pd\n", + "\n", + "from omegaconf import OmegaConf\n", + "from pytorch_lightning import Trainer\n", + "from IPython.display import display\n", + "from tqdm import tqdm\n", + "\n", + "from nemo.collections import nlp as nemo_nlp\n", + "from nemo.utils.exp_manager import exp_manager" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Entity Linking" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Task Description\n", + "[Entity linking](https://en.wikipedia.org/wiki/Entity_linking) is the process of connecting concepts mentioned in natural language to their canonical forms stored in a knowledge base. For example, say a knowledge base contained the entity 'ID3452 influenza' and we wanted to process some natural language containing the sentence \"The patient has flu like symptoms\". An entity linking model would match the word 'flu' to the knowledge base entity 'ID3452 influenza', allowing for disambiguation and normalization of concepts referenced in text. Entity linking applications range from helping automate data ingestion to assisting in real time dialogue concept normalization. We will be focusing on entity linking in the medical domain for this demo, but the entity linking model, dataset, and training code within NVIDIA NeMo can be applied to other domains like finance and retail.\n", + "\n", + "Within NeMo and this tutorial we use the entity linking approach described in Liu et. al's NAACL 2021 \"[Self-alignment Pre-training for Biomedical Entity Representations](https://arxiv.org/abs/2010.11784v2)\". The main idea behind this approach is to reshape an initial concept embedding space such that synonyms of the same concept are pulled closer together and unrelated concepts are pushed further apart. 
The concept embeddings from this reshaped space can then be used to build a knowledge base embedding index. This index stores concept IDs mapped to their respective concept embeddings in a format conducive to efficient nearest neighbor search. We can link query concepts to their canonical forms in the knowledge base by performing a nearest neighbor search, matching concept query embeddings to the most similar concept embeddings in the knowledge base index. \n",
+ "\n",
+ "In this tutorial we will be using the [faiss](https://github.com/facebookresearch/faiss) library to build our concept index."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Self Alignment Pretraining\n",
+ "Self-Alignment pretraining is a second stage pretraining of an existing encoder (called second stage because the encoder model can be further finetuned after this more general pretraining step). The dataset used during training consists of pairs of concept synonyms that map to the same ID. At each training iteration, we only select *hard* examples present in the mini batch to calculate the loss and update the model weights. In this context, a hard example is an example where a concept is closer to an unrelated concept in the mini batch than it is to the synonym concept it is paired with by some margin. I encourage you to take a look at [section 2 of the paper](https://arxiv.org/pdf/2010.11784.pdf) for a more formal and in-depth description of how hard examples are selected.\n",
+ "\n",
+ "We then use a [metric learning loss](https://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Multi-Similarity_Loss_With_General_Pair_Weighting_for_Deep_Metric_Learning_CVPR_2019_paper.pdf) calculated from the hard examples selected. This loss helps reshape the embedding space. The concept representation space is rearranged to be more suitable for entity matching via embedding cosine similarity. \n",
+ "\n",
+ "Now that we have an idea of what's going on, let's get started!"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Dataset Preprocessing"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Download data into project directory\n",
+ "PROJECT_DIR = \".\" #Change if you don't want the current directory to be the project dir\n",
+ "DATA_DIR = os.path.join(PROJECT_DIR, \"tiny_example_data\")\n",
+ "\n",
+ "if not os.path.isdir(os.path.join(DATA_DIR)):\n",
+ "    wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/tiny_example_data.zip',\n",
+ "                  os.path.join(PROJECT_DIR, \"tiny_example_data.zip\"))\n",
+ "\n",
+ "    !unzip {PROJECT_DIR}/tiny_example_data.zip -d {PROJECT_DIR}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this tutorial we will be using a tiny toy dataset to demonstrate how to use NeMo's entity linking model functionality. The dataset includes synonyms for 12 medical concepts. Entity phrases with the same ID are synonyms for the same concept. For example, \"*chronic kidney failure*\", \"*gradual loss of kidney function*\", and \"*CKD*\" are all synonyms of concept ID 5. 
Here's the dataset before preprocessing:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "raw_data = pd.read_csv(os.path.join(DATA_DIR, \"tiny_example_dev_data.csv\"), names=[\"ID\", \"CONCEPT\"], index_col=False)\n", + "print(raw_data)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We've already paired off the concepts for this dataset with the format `ID concept_synonym1 concept_synonym2`. Here are the first ten rows:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "training_data = pd.read_table(os.path.join(DATA_DIR, \"tiny_example_train_pairs.tsv\"), names=[\"ID\", \"CONCEPT_SYN1\", \"CONCEPT_SYN2\"], delimiter='\\t')\n", + "print(training_data.head(10))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Use the [Unified Medical Language System (UMLS)](https://www.nlm.nih.gov/research/umls/index.html) dataset for full medical domain entity linking training. The data contains over 9 million entities and is a table of medical concepts with their corresponding concept IDs (CUI). After [requesting a free license and making a UMLS Terminology Services (UTS) account](https://www.nlm.nih.gov/research/umls/index.html), the [entire UMLS dataset](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html) can be downloaded from the NIH's website. If you've cloned the NeMo repo you can run the data processing script located in `examples/nlp/entity_linking/data/umls_dataset_processing.py` on the full dataset. This script will take in the initial table of UMLS concepts and produce a .tsv file with each row formatted as `CUI\\tconcept_synonym1\\tconcept_synonym2`. Once the UMLS dataset .RRF file is downloaded, the script can be run from the `examples/nlp/entity_linking` directory like so: \n", + "```\n", + "python data/umls_dataset_processing.py\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Model Training" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Second stage pretrain a BERT Base encoder on the self-alignment pretraining task (SAP) for improved entity linking. Using a GPU, the model should take 5 minutes or less to train on this example dataset and training progress will be output below the cell." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#Download config\n", + "wget.download(f\"https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/entity_linking/conf/tiny_example_entity_linking_config.yaml\",\n", + " os.path.join(PROJECT_DIR, \"tiny_example_entity_linking_config.yaml\"))\n", + "\n", + "# Load in config file\n", + "cfg = OmegaConf.load(os.path.join(PROJECT_DIR, \"tiny_example_entity_linking_config.yaml\"))\n", + "\n", + "# Set config file variables\n", + "cfg.project_dir = PROJECT_DIR\n", + "cfg.model.nemo_path = os.path.join(PROJECT_DIR, \"tiny_example_sap_bert_model.nemo\")\n", + "cfg.model.train_ds.data_file = os.path.join(DATA_DIR, \"tiny_example_train_pairs.tsv\")\n", + "cfg.model.validation_ds.data_file = os.path.join(DATA_DIR, \"tiny_example_validation_pairs.tsv\")\n", + "\n", + "# remove distributed training flags\n", + "cfg.trainer.strategy = None\n", + "cfg.trainer.accelerator = None" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Initialize the trainer and model\n", + "trainer = Trainer(**cfg.trainer)\n", + "exp_manager(trainer, cfg.get(\"exp_manager\", None))\n", + "model = nemo_nlp.models.EntityLinkingModel(cfg=cfg.model, trainer=trainer)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Train and save the model\n", + "trainer.fit(model)\n", + "model.save_to(cfg.model.nemo_path)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can run the script at `examples/nlp/entity_linking/self_alignment_pretraining.py` to train a model on a larger dataset. Run\n", + "\n", + "```\n", + "python self_alignment_pretraining.py project_dir=.\n", + "```\n", + "from the `examples/nlp/entity_linking` directory." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Model Evaluation\n", + "\n", + "Let's evaluate our freshly trained model and compare its performance with a BERT Base encoder that hasn't undergone self-alignment pretraining. We first need to restore our trained model and load our BERT Base Baseline model." 
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
+ "\n",
+ "# Restore second stage pretrained model\n",
+ "sap_model_cfg = cfg\n",
+ "sap_model_cfg.index.index_save_name = os.path.join(PROJECT_DIR, \"tiny_example_entity_linking_index\")\n",
+ "sap_model_cfg.index.index_ds.data_file = os.path.join(DATA_DIR, \"tiny_example_index_data.tsv\")\n",
+ "sap_model = nemo_nlp.models.EntityLinkingModel.restore_from(sap_model_cfg.model.nemo_path).to(device)\n",
+ "\n",
+ "# Load original model\n",
+ "base_model_cfg = OmegaConf.load(os.path.join(PROJECT_DIR, \"tiny_example_entity_linking_config.yaml\"))\n",
+ "\n",
+ "# Set train/val datasets to None to avoid loading datasets associated with training\n",
+ "base_model_cfg.model.train_ds = None\n",
+ "base_model_cfg.model.validation_ds = None\n",
+ "base_model_cfg.index.index_save_name = os.path.join(PROJECT_DIR, \"base_model_index\")\n",
+ "base_model_cfg.index.index_ds.data_file = os.path.join(DATA_DIR, \"tiny_example_index_data.tsv\")\n",
+ "base_model = nemo_nlp.models.EntityLinkingModel(base_model_cfg.model).to(device)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We are going to evaluate our model on a nearest neighbor task using top 1 and top 5 accuracies as our metrics. We will be using a tiny example test knowledge base and test queries. For this evaluation we are going to be comparing every test query with every concept vector in our test set knowledge base. We will rank each item in the knowledge base by its cosine similarity with the test query. We'll then compare the IDs of the predicted most similar test knowledge base concepts with our ground truth query IDs to calculate top 1 and top 5 accuracies. For these metrics, higher is better."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Helper function to get data embeddings\n", + "def get_embeddings(model, dataloader):\n", + " embeddings, cids = [], []\n", + "\n", + " with torch.no_grad():\n", + " for batch in tqdm(dataloader):\n", + " input_ids, token_type_ids, attention_mask, batch_cids = batch\n", + " batch_embeddings = model.forward(input_ids=input_ids.to(device), \n", + " token_type_ids=token_type_ids.to(device), \n", + " attention_mask=attention_mask.to(device))\n", + "\n", + " # Accumulate index embeddings and their corresponding IDs\n", + " embeddings.extend(batch_embeddings.cpu().detach().numpy())\n", + " cids.extend(batch_cids)\n", + " \n", + " return embeddings, cids" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def evaluate(model, test_kb, test_queries, ks):\n", + " # Initialize knowledge base and query data loaders\n", + " test_kb_dataloader = model.setup_dataloader(test_kb, is_index_data=True)\n", + " test_query_dataloader = model.setup_dataloader(test_queries, is_index_data=True)\n", + " \n", + " # Get knowledge base and query embeddings\n", + " test_kb_embs, test_kb_cids = get_embeddings(model, test_kb_dataloader)\n", + " test_query_embs, test_query_cids = get_embeddings(model, test_query_dataloader)\n", + "\n", + " # Calculate the cosine distance between each query and knowledge base concept\n", + " score_matrix = np.matmul(np.array(test_query_embs), np.array(test_kb_embs).T)\n", + " accs = {k : 0 for k in ks}\n", + " \n", + " # Compare the knowledge base IDs of the knowledge base entities with \n", + " # the smallest cosine distance from the query \n", + " for query_idx in tqdm(range(len(test_query_cids))):\n", + " query_emb = test_query_embs[query_idx]\n", + " query_cid = test_query_cids[query_idx]\n", + " query_scores = score_matrix[query_idx]\n", + "\n", + " for k in ks:\n", + " topk_idxs = np.argpartition(query_scores, -k)[-k:]\n", + " topk_cids = [test_kb_cids[idx] for idx in topk_idxs]\n", + " \n", + " # If the correct query ID is amoung the top k closest kb IDs\n", + " # the model correctly linked the entity\n", + " match = int(query_cid in topk_cids)\n", + " accs[k] += match\n", + "\n", + " for k in ks:\n", + " accs[k] /= len(test_query_cids)\n", + " \n", + " return accs" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Create configs for our test data\n", + "test_kb = OmegaConf.create({\n", + " \"data_file\": os.path.join(DATA_DIR, \"tiny_example_test_kb.tsv\"),\n", + " \"max_seq_length\": 128,\n", + " \"batch_size\": 10,\n", + " \"shuffle\": False,\n", + "})\n", + "\n", + "test_queries = OmegaConf.create({\n", + " \"data_file\": os.path.join(DATA_DIR, \"tiny_example_test_queries.tsv\"),\n", + " \"max_seq_length\": 128,\n", + " \"batch_size\": 10,\n", + " \"shuffle\": False,\n", + "})\n", + "\n", + "ks = [1, 5]\n", + "\n", + "# Evaluate both models on our test data\n", + "base_accs = evaluate(base_model, test_kb, test_queries, ks)\n", + "base_accs[\"Model\"] = \"BERT Base Baseline\"\n", + "\n", + "sap_accs = evaluate(sap_model, test_kb, test_queries, ks)\n", + "sap_accs[\"Model\"] = \"BERT + SAP\"\n", + "\n", + "print(\"Top 1 and Top 5 Accuracy Comparison:\")\n", + "results_df = pd.DataFrame([base_accs, sap_accs], columns=[\"Model\", 1, 5])\n", + "results_df = results_df.style.set_properties(**{'text-align': 'left', 
}).set_table_styles([dict(selector='th', props=[('text-align', 'left')])])\n", + "display(results_df)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The purpose of this section was to show an example of evaluating your entity linking model. This evaluation set contains very little data, and no serious conclusions should be drawn about model performance. Top 1 accuracy should be between 0.7 and 1.0 for both models and top 5 accuracy should be between 0.8 and 1.0. When evaluating a model trained on a larger dataset, you can use a nearest neighbors index to speed up the evaluation time." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Building an Index" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To qualitatively observe the improvement we gain from the second stage pretraining, let's build two indices. One will be built with BERT base embeddings before self-alignment pretraining and one will be built with the model we just trained. Our knowledge base in this tutorial will be in the same domain and have some overlapping concepts as the training set. This data file is formatted as `ID\\tconcept`." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The `EntityLinkingDataset` class can load the data used for training the entity linking encoder as well as for building the index if the `is_index_data` flag is set to true. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def build_index(cfg, model):\n", + " # Setup index dataset loader\n", + " index_dataloader = model.setup_dataloader(cfg.index.index_ds, is_index_data=True)\n", + " \n", + " # Get index dataset embeddings\n", + " embeddings, _ = get_embeddings(model, index_dataloader)\n", + " \n", + " # Train IVFFlat index using faiss\n", + " embeddings = np.array(embeddings)\n", + " quantizer = faiss.IndexFlatL2(cfg.index.dims)\n", + " index = faiss.IndexIVFFlat(quantizer, cfg.index.dims, cfg.index.nlist)\n", + " index = faiss.index_cpu_to_all_gpus(index)\n", + " index.train(embeddings)\n", + " \n", + " # Add concept embeddings to index\n", + " for i in tqdm(range(0, embeddings.shape[0], cfg.index.index_batch_size)):\n", + " index.add(embeddings[i:i+cfg.index.index_batch_size])\n", + "\n", + " # Save index\n", + " faiss.write_index(faiss.index_gpu_to_cpu(index), cfg.index.index_save_name)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "build_index(sap_model_cfg, sap_model.to(device))\n", + "build_index(base_model_cfg, base_model.to(device))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Entity Linking via Nearest Neighbor Search" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now it's time to query our indices! We are going to query both our index built with embeddings from BERT Base, and our index with embeddings built from the SAP BERT model we trained. Our sample query phrases will be \"*high blood sugar*\" and \"*head pain*\". \n", + "\n", + "To query our indices, we first need to get the embedding of each query from the corresponding encoder model. We can then pass these query embeddings into the faiss index which will perform a nearest neighbor search, using cosine distance to compare the query embedding with embeddings present in the index. 
Once we get a list of knowledge base index concept IDs most closely matching our query, all that is left to do is map the IDs to a representative string describing the concept. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def query_index(cfg, model, index, queries, id2string):\n", + " # Get query embeddings from our entity linking encoder model\n", + " query_embs = get_query_embedding(queries, model).cpu().detach().numpy()\n", + " \n", + " # Use query embedding to find closest concept embedding in knowledge base\n", + " distances, neighbors = index.search(query_embs, cfg.index.top_n)\n", + " \n", + " # Get the canonical strings corresponding to the IDs of the query's nearest neighbors in the kb \n", + " neighbor_concepts = [[id2string[concept_id] for concept_id in query_neighbor] \\\n", + " for query_neighbor in neighbors]\n", + " \n", + " # Display most similar concepts in the knowledge base. \n", + " for query_idx in range(len(queries)):\n", + " print(f\"\\nThe most similar concepts to {queries[query_idx]} are:\")\n", + " for cid, concept, dist in zip(neighbors[query_idx], neighbor_concepts[query_idx], distances[query_idx]):\n", + " print(cid, concept, 1 - dist)\n", + "\n", + " \n", + "def get_query_embedding(queries, model):\n", + " # Tokenize our queries\n", + " model_input = model.tokenizer(queries,\n", + " add_special_tokens = True,\n", + " padding = True,\n", + " truncation = True,\n", + " max_length = 512,\n", + " return_token_type_ids = True,\n", + " return_attention_mask = True)\n", + " \n", + " # Pass tokenized input into model\n", + " query_emb = model.forward(input_ids=torch.LongTensor(model_input[\"input_ids\"]).to(device),\n", + " token_type_ids=torch.LongTensor(model_input[\"token_type_ids\"]).to(device),\n", + " attention_mask=torch.LongTensor(model_input[\"attention_mask\"]).to(device))\n", + " \n", + " return query_emb" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Load indices\n", + "sap_index = faiss.read_index(sap_model_cfg.index.index_save_name)\n", + "base_index = faiss.read_index(base_model_cfg.index.index_save_name)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Map concept IDs to one canonical string\n", + "index_data = open(sap_model_cfg.index.index_ds.data_file, \"r\", encoding='utf-8-sig')\n", + "id2string = {}\n", + "\n", + "for line in index_data:\n", + " cid, concept = line.split(\"\\t\")\n", + " id2string[int(cid) - 1] = concept.strip()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "id2string" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Some sample queries\n", + "queries = [\"high blood sugar\", \"head pain\"]\n", + "\n", + "# Query BERT Base\n", + "print(\"BERT Base output before Self Alignment Pretraining:\")\n", + "query_index(base_model_cfg, base_model, base_index, queries, id2string)\n", + "print(\"\\n\" + \"-\" * 50 + \"\\n\")\n", + "\n", + "# Query SAP BERT\n", + "print(\"SAP BERT output after Self Alignment Pretraining:\")\n", + "query_index(sap_model_cfg, sap_model, sap_index, queries, id2string)\n", + "print(\"\\n\" + \"-\" * 50 + \"\\n\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Even after only training on this tiny amount of data, the qualitative performance 
boost from self-alignment pretraining is visible. The baseline model links \"*high blood sugar*\" to the entity \"*6 diabetes*\" while our SAP BERT model accurately links \"*high blood sugar*\" to \"*Hyperinsulinemia*\". Similarly, \"*head pain*\" and \"*Myocardial infarction*\" are not the same concept, but \"*head pain*\" and \"*Headache*\" are."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For larger knowledge bases, the default embedding size might be too large and cause out-of-memory issues. You can apply PCA or some other dimensionality reduction method to your data to reduce its memory footprint. Code for creating a text file of all the UMLS entities in the correct format needed to build an index, and for creating a dictionary mapping concept IDs to canonical concept strings, can be found in `examples/nlp/entity_linking/data/umls_dataset_processing.py`. \n",
+ "\n",
+ "The code for extracting knowledge base concept embeddings, training and applying a PCA transformation to the embeddings, building a faiss index, and querying the index from the command line is located at `examples/nlp/entity_linking/build_index.py` and `examples/nlp/entity_linking/query_index.py`. \n",
+ "\n",
+ "If you've cloned the NeMo repo, both of these steps can be run as follows on the command line from the `examples/nlp/entity_linking/` directory.\n",
+ "\n",
+ "```\n",
+ "python data/umls_dataset_processing.py --index\n",
+ "python build_index.py --restore\n",
+ "python query_index.py --restore\n",
+ "```\n",
+ "By default the project directory will be \".\" but can be changed by adding the flag `--project_dir=` after each of the above commands. Intermediate steps of the index building process are saved, so if an error occurs, previously completed steps do not need to be rerun. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Command Recap"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Here is a recap of the commands and steps to repeat this process on the full UMLS dataset. \n",
+ "\n",
+ "1) Download the UMLS dataset file `MRCONSO.RRF` from the NIH website and place it in the `examples/nlp/entity_linking/data` directory.\n",
+ "\n",
+ "2) Run the following commands from the `examples/nlp/entity_linking` directory:\n",
+ "```\n",
+ "python data/umls_dataset_processing.py\n",
+ "python self_alignment_pretraining.py project_dir=. \n",
+ "python data/umls_dataset_processing.py --index\n",
+ "python build_index.py --restore\n",
+ "python query_index.py --restore\n",
+ "```\n",
+ "The model will take ~24hrs to train on two GPUs and ~48hrs to train on one GPU. By default the project directory will be \".\" but can be changed by adding the flag `--project_dir=` after each of the above commands and changing `project_dir=` in the `self_alignment_pretraining.py` command. If you change the project directory, you should also move the `MRCONSO.RRF` file to a `data` subdirectory within the one you've specified. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As mentioned in the introduction, entity linking within NVIDIA NeMo is not limited to the medical domain. The same data processing and training steps can be applied to a variety of domains and use cases. You can edit the datasets used as well as training and loss function hyperparameters within your config file to better suit your domain."
+ ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.8.13" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/tutorials/nlp/Multitask_Prompt_and_PTuning.ipynb b/tutorials/nlp/Multitask_Prompt_and_PTuning.ipynb index 3986e0d864d4..b03316bfce02 100644 --- a/tutorials/nlp/Multitask_Prompt_and_PTuning.ipynb +++ b/tutorials/nlp/Multitask_Prompt_and_PTuning.ipynb @@ -88,7 +88,7 @@ "# The Best of Both\n", "A single pretrained GPT model can use both p-tuning and prompt-tuning. While you must decide to use either p-tuning or prompt-tuning for each task you want your model to perform, you can p-tune your model on a set of tasks A, then prompt tune your same model on a different set of tasks B, then finally run inference on tasks from both A and B at the same time. During prompt-tuning or p-tuning, tasks tuned at the same time must use the same number of virtual tokens. During inference, tasks using differing amounts of virtual tokens can be run at the same time.\n", "\n", - "Please see our [docs for more comparisons between prompt and p-tuning](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/prompt_learning.html). \n", + "Please see our [docs for more comparisons between prompt and p-tuning](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/nemo_megatron/prompt_learning.html). \n", "\n", "With all that covered, let's get started!\n" ] @@ -307,8 +307,10 @@ "os.makedirs(SQUAD_DIR, exist_ok=True)\n", "\n", "# Download the SQuAD dataset\n", - "!wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json\n", - "!mv train-v2.0.json {SQUAD_DIR}" + "!wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json\n", + "!wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json\n", + "!mv train-v1.1.json {SQUAD_DIR}\n", + "!mv dev-v1.1.json {SQUAD_DIR}" ] }, { @@ -845,7 +847,7 @@ "os.environ[\"RANK\"] = '0'\n", "os.environ[\"WORLD_SIZE\"] = '1'\n", "\n", - "strategy = NLPDDPStrategy(find_unused_parameters=False,no_ddp_communication_hook=True)\n", + "strategy = NLPDDPStrategy(find_unused_parameters=False, no_ddp_communication_hook=True)\n", "plugins = [TorchElasticEnvironment()]\n", "trainer = pl.Trainer(plugins= plugins, strategy=strategy, **config.trainer)\n", "\n", @@ -1261,7 +1263,7 @@ "source": [ "For squad, remember we only trained our model on ~29% of the training examples (20k instead of ~70k) and for only 1 epoch. Results will improve if the full training set is used and the model is tuned for more training steps.\n", "\n", - "This concludes our tutorial! For command line and script usage demos, [please see our docs](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/prompt_learning.html) " + "This concludes our tutorial! For command line and script usage demos, [please see our docs](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/nemo_megatron/prompt_learning.html) " ] } ], @@ -1281,7 +1283,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.12" + "version": "3.8.13" } }, "nbformat": 4,