The Toxic Text Classifier is a user-friendly web application built with Streamlit that analyzes and categorizes text through toxicity classification and sentiment analysis. The project uses a fine-tuned BERT model for text classification, leveraging TensorFlow and Hugging Face Transformers for its NLP capabilities.
| Toxic | Severe Toxic | Obscene | Threat | Insult | Identity Hate |
|---|---|---|---|---|---|
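Classification across these six categories is multi-label: a single comment can be both toxic and an insult, so each label is typically scored with an independent sigmoid rather than a shared softmax. Below is a minimal sketch of that post-processing step. The label names follow the table above, but the `predict_labels` helper, the 0.5 threshold, and the example logit values are illustrative assumptions, not taken from this project's code:

```python
import math

# The six labels from the toxicity table above.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def sigmoid(x: float) -> float:
    """Map a raw logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def predict_labels(logits, threshold=0.5):
    """Turn one logit per label into independent probabilities,
    then flag every label whose probability clears the threshold."""
    probs = [sigmoid(z) for z in logits]
    return {label: (p, p >= threshold) for label, p in zip(LABELS, probs)}

# Illustrative logits (not real model output):
results = predict_labels([2.1, -3.0, 0.4, -4.2, 1.3, -2.8])
flagged = [label for label, (p, hit) in results.items() if hit]
# Here "toxic", "obscene", and "insult" clear the 0.5 threshold.
```

Because the labels are scored independently, any subset of the six categories can be active at once, which is why a per-label threshold is used instead of picking a single best class.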
The site includes several fine-tuned models trained for different numbers of epochs and with different weighting schemes. These show my development progress and the use of class weights to improve accuracy:
- Toxicity - 1 Epoch
- Toxicity - 8 Epochs

- Navigate to the web app homepage
Note: It may take a moment for the Streamlit app to load
- Select a demo to view an example of a certain type of toxicity in action
OR
- Type the text you want to analyze for toxicity
- Select a model for either toxicity classification or sentiment analysis
- Hit the submit button to view the output
This Hugging Face Space is best used through my web app. The models can also be downloaded from my Hugging Face profile.