This repository contains the official release of the "BanglaBERT" model, along with the associated downstream fine-tuning code and datasets, introduced in the paper "BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in Bangla", accepted in Findings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).
This research examines the performance of large language models (GPT-3.5 Turbo and Gemini 1.5 Pro) on Bengali natural language inference, comparing them against state-of-the-art fine-tuned models on the XNLI dataset. It evaluates both zero-shot and few-shot scenarios to assess their efficacy in low-resource settings.
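To make the zero-shot and few-shot setups concrete, the sketch below shows one common way an XNLI-style example can be formatted as a prompt for an instruction-following LLM. The prompt wording, the example sentences, and the helper names are illustrative assumptions, not the exact templates used in the study.

```python
# Hypothetical prompt construction for NLI evaluation of an LLM.
# The instruction text and label names below are assumptions, not the
# paper's actual templates; XNLI uses these three gold labels.
LABELS = ["entailment", "neutral", "contradiction"]

def build_zero_shot_prompt(premise: str, hypothesis: str) -> str:
    """Format a single premise/hypothesis pair as a zero-shot prompt."""
    return (
        "Determine the relationship between the premise and the hypothesis.\n"
        f"Answer with one of: {', '.join(LABELS)}.\n\n"
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        "Answer:"
    )

def build_few_shot_prompt(demos, premise: str, hypothesis: str) -> str:
    """Prepend labeled demonstrations before the query example (few-shot)."""
    shots = "\n\n".join(
        build_zero_shot_prompt(p, h) + " " + label for p, h, label in demos
    )
    return shots + "\n\n" + build_zero_shot_prompt(premise, hypothesis)
```

In a zero-shot run the model sees only the query prompt; in a few-shot run, `demos` supplies a handful of labeled `(premise, hypothesis, label)` triples drawn from the training split, and the model's completion is matched against the label set.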