Drug Recommendation System based on the Condition of the Patient using Bidirectional Encoder Representations from Transformers
Patients are recommended drugs based on their condition and on reviews of those drugs from other patients. They can also submit their own review of a drug to help future patients. A BERT model fine-tuned for text classification is used for this objective: it identifies whether a reviewer recommends a particular medicine or not.
BERT (Bidirectional Encoder Representations from Transformers) was developed by Google. Unlike a full transformer, it consists only of an encoder network, and it uses attention to learn relationships between words by ingesting the entire sequence of tokens at once. BERT is pre-trained on two NLP tasks, namely Masked Language Modelling (MLM) and Next Sentence Prediction (NSP). In simple terms, MLM helps BERT understand the relationships between words, while NSP helps it understand the relationships between sentences. A pre-trained BERT can then be fine-tuned for a wide range of NLP applications such as classification, question answering, and named entity recognition, to name a few.
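As a rough sketch of how the classifier works, the snippet below loads a BERT encoder with a two-label classification head via Hugging Face `transformers` and maps a review to recommended / not recommended. The model name `bert-base-uncased` and the label mapping are illustrative assumptions; in the actual project you would load your own fine-tuned checkpoint (an off-the-shelf pre-trained encoder has a randomly initialized classification head, so its predictions are meaningful only after fine-tuning).

```python
# Illustrative sketch only: model name and label order are assumptions.
# Swap MODEL for the path of your fine-tuned checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "bert-base-uncased"  # placeholder; use your fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

def classify(review: str) -> str:
    # BERT ingests the whole token sequence at once and attends
    # bidirectionally over it; reviews longer than 512 tokens are truncated.
    inputs = tokenizer(review, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Assumed label mapping: index 1 = recommends, index 0 = does not.
    return "recommended" if logits.argmax(dim=-1).item() == 1 else "not recommended"

print(classify("This medicine relieved my migraines with no side effects."))
```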
Install required libraries using the following command:
$ pip install -r requirements.txt
- `/notebooks`: Notebooks on BERT and MongoDB
- `/assets`: All images and GIFs displayed on this README.md
- `utils.py`: Helper functions for MongoDB and Streamlit
- `model.py`: BERT model
- `run.py`: Streamlit app
Check out the notebook BERT.ipynb to train, evaluate, and test your own model on custom datasets.
- Install MongoDB
- Secure your MongoDB Deployment
- Connect to MongoDB
- Refer to this notebook to create the collection.
Read this tutorial to connect MongoDB to Streamlit.
Use the following command to run the Streamlit app:
$ streamlit run run.py
You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
Network URL: http://192.168.1.4:8501
Check out the GIFs below for demonstrations!