Notebooks to fine-tune the `bert-small-amharic`, `bert-mini-amharic`, and `xlm-roberta-base` models on an Amharic text classification dataset using the transformers library.
Updated May 10, 2024 · Jupyter Notebook
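The fine-tuning workflow these notebooks describe can be sketched with the transformers `Trainer` API. This is a minimal sketch, not the repository's actual code: the CSV filename, the `text`/`label` column names, and the hyperparameters are assumptions, and the Amharic BERT checkpoints can be swapped in for `xlm-roberta-base`.

```python
def build_label_maps(labels):
    """Build the id2label/label2id maps a sequence-classification head needs."""
    id2label = {i: lab for i, lab in enumerate(sorted(set(labels)))}
    label2id = {lab: i for i, lab in id2label.items()}
    return id2label, label2id


def main():
    # Imports kept local so the helper above can be used without the libraries.
    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    # "amharic_news.csv" with "text" and "label" columns is an assumed layout.
    ds = load_dataset("csv", data_files="amharic_news.csv")["train"]
    id2label, label2id = build_label_maps(ds["label"])

    tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
    ds = ds.map(lambda b: tok(b["text"], truncation=True), batched=True)
    ds = ds.map(lambda b: {"label": [label2id[l] for l in b["label"]]},
                batched=True)

    model = AutoModelForSequenceClassification.from_pretrained(
        "xlm-roberta-base", num_labels=len(id2label),
        id2label=id2label, label2id=label2id)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=3,
                               per_device_train_batch_size=16),
        train_dataset=ds,
        tokenizer=tok,
    )
    trainer.train()
```

The same skeleton works for any of the listed checkpoints; only the model name and the number of labels change.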
NLP notebooks
This repository contains Jupyter notebooks detailing the experiments from our research paper on Ukrainian news classification. We introduce a framework for building a classification dataset with minimal labeling effort, and compare several pretrained models for the Ukrainian language.
This repository provides code to fine-tune four multilingual language models (mBERT, XLM-RoBERTa, DistilmBERT, and mDeBERTa) on the AraStance dataset (Alhindi et al., 2021). It includes notebooks for training, evaluation, and making predictions with the fine-tuned models.