Comparing a PyTorch transformer deep learning model, logistic regression with cross-validated hyper-parameter optimization, and graph-based methods on the Emotion dataset
For this project, the algorithms are applied to the Emotion dataset, a corpus gathered from English tweets. Tweets are written in informal, spoken-style language, so they contain phrases such as 10/10, OOOMG, and WHAT A WONDERFUL DAY that either have no dictionary meaning or mean something different from their dictionary definition. For example, 10/10 here means "perfect", even though literally it is a mathematical operation. A model therefore needs more than plain semantic features to predict the emotion of an unseen tweet. The reference paper uses a graph-based algorithm to extract word embeddings for model training. At the same time, many results published on this dataset report high accuracy, and they all use deep learning. Hence, we decided to implement a non-graph-based algorithm alongside a deep model and test the paper's hypothesis.
In this project, logistic regression with cross-validated hyper-parameter optimization is implemented as the simple machine learning baseline. A deep learning model based on a PyTorch transformer was added for comparison.
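The logistic regression baseline could be sketched as follows. This is a minimal, hypothetical illustration, not the project's actual code: it uses scikit-learn's TfidfVectorizer and LogisticRegressionCV (which cross-validates the regularization strength C), and a handful of toy tweets stand in for the Emotion dataset.

```python
# Hypothetical sketch: TF-IDF features + logistic regression with
# cross-validated regularization strength. The toy tweets below stand
# in for the Emotion dataset; labels are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegressionCV

tweets = [
    "what a wonderful day",
    "i am so happy today",
    "this is terrible news",
    "i feel awful and sad",
]
labels = [1, 1, 0, 0]  # 1 = joy, 0 = sadness (illustrative)

# TF-IDF captures only lexical/semantic surface features,
# which is the limitation discussed in the results below.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(tweets)

# Cs=5 searches 5 values of the inverse regularization strength C;
# cv=2 because the toy set is tiny (the real project would use more folds).
clf = LogisticRegressionCV(Cs=5, cv=2, max_iter=1000)
clf.fit(X, labels)

pred = clf.predict(vectorizer.transform(["such a happy wonderful day"]))
print(pred)
```

On the real dataset, the same pipeline would simply be fit on the full training split, with the cross-validation folds selecting the regularization strength automatically.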
The accuracy of the logistic regression method is 68%.
The accuracy of the deep learning method is 92%.
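The shape of the deep learning baseline can be sketched with a minimal, untrained PyTorch transformer classifier. This is an assumption-laden illustration, not the fine-tuned model behind the 92% result (a real run would fine-tune a pretrained model as in the Hugging Face sequence-classification tutorial referenced below): token embeddings feed a TransformerEncoder, the outputs are mean-pooled, and a linear head scores the six Emotion-dataset classes.

```python
# Hypothetical minimal transformer classifier in PyTorch.
# Architecture only: embeddings -> TransformerEncoder -> mean pool
# -> linear head over the 6 Emotion-dataset classes.
import torch
import torch.nn as nn

class TinyTransformerClassifier(nn.Module):
    def __init__(self, vocab_size=1000, d_model=32, num_classes=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)   # (batch, seq_len, d_model)
        x = self.encoder(x)         # contextualized token features
        x = x.mean(dim=1)           # mean-pool over the sequence
        return self.head(x)         # (batch, num_classes)

model = TinyTransformerClassifier()
batch = torch.randint(0, 1000, (2, 8))  # 2 toy "tweets", 8 token ids each
logits = model(batch)
print(logits.shape)  # torch.Size([2, 6])
```

Unlike the TF-IDF baseline, the encoder learns its features from the token sequence itself, which is one plausible reason the deep model handles expressions like 10/10 better than a dictionary-semantic representation.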
The first method uses semantic features only; its accuracy is low, and the model is not very effective. The paper uses graph-based embeddings, and its best accuracy is 81%. The deep learning model's feature-extraction method is less transparent, but its accuracy is the highest at 92%. This suggests that the model needs richer features than simple semantic ones, while extracting graphs alone may not be enough either. Based on these results, the deep learning model extracted the best features.
References
https://paperswithcode.com/sota/text-classification-on-emotion
https://huggingface.co/docs/transformers/tasks/sequence_classification