LightSeq: A High Performance Library for Sequence Processing and Generation
Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo
Rust-native, ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2, ...)
Self-contained Machine Learning and Natural Language Processing library in Go
Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
Multilingual/multi-domain datasets, models, and a Python library for question generation.
Cybertron: the home planet of the Transformers in Go
MinT: Minimal Transformer Library and Tutorials
Build and train state-of-the-art natural language processing models using BERT
Code for the AAAI 2022 paper "Open Vocabulary Electroencephalography-To-Text Decoding and Zero-shot Sentiment Classification"
Code associated with the paper "Data Augmentation using Pre-trained Transformer Models"; a sketch of the label-conditioned generation idea appears after this list.
Calculate perplexity on a text with pre-trained language models. Supports MLM (e.g., DeBERTa), recurrent LM (e.g., GPT-3), and encoder-decoder LM (e.g., Flan-T5); a minimal perplexity sketch appears after this list.
BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese (INTERSPEECH 2022)
Abstractive and extractive text summarization using Transformers; a BART summarization sketch appears after this list.
NAACL 2021 - Progressive Generation of Long Text
Code for our paper "JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs" (ACL 2021 Findings)
Official implementation of the paper "IteraTeR: Understanding Iterative Revision from Human-Written Text" (ACL 2022)
Automated categorization: a neural-network solution for categorizing bank transaction descriptions, reducing manual effort while preserving privacy.
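
The data-augmentation entry above generates synthetic labeled examples with a pre-trained language model. Below is a minimal sketch of the label-conditioned generation idea using Hugging Face transformers with GPT-2; the prompt format and model choice are illustrative assumptions, and the paper's fine-tuning step is omitted.

```python
# Illustrative sketch of label-conditioned augmentation: prepend a class
# label to a seed text and let a causal LM continue it, yielding a
# synthetic training example. The paper fine-tunes the LM on labeled
# data first; that step and the exact prompt format are omitted here.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "label: positive | text: The battery life on this laptop"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```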
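The perplexity entry above scores text with pre-trained language models. Here is a minimal sketch of the underlying computation for a causal ("recurrent") LM, using GPT-2 via Hugging Face transformers; this illustrates the technique, not that repository's actual API.

```python
# Minimal sketch: perplexity of a text under a causal LM is the
# exponential of the mean next-token cross-entropy.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "Sequence models assign probabilities to text."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy loss
    # over the shifted next-token predictions.
    loss = model(**inputs, labels=inputs["input_ids"]).loss
print(f"perplexity: {torch.exp(loss).item():.2f}")
```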
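The summarization entry above covers abstractive summarization with Transformers. Since this is the bart topic page, here is a minimal sketch with a public BART checkpoint through the transformers pipeline API; the checkpoint (facebook/bart-large-cnn) is a common choice assumed here, not necessarily the model that repository uses.

```python
# Minimal sketch: abstractive summarization with a pre-trained BART
# checkpoint via the Hugging Face pipeline API.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
article = (
    "BART is a denoising sequence-to-sequence model pre-trained by "
    "corrupting text and learning to reconstruct it. Fine-tuned on news "
    "data, it produces fluent abstractive summaries of long articles."
)
result = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```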