Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities

Hiring

We are hiring at all levels (including FTE researchers and interns)! If you are interested in working with us on NLP and large-scale pre-trained models, please send your resume to fuwei@microsoft.com.

AI Fundamentals

Foundation of Extremely Deep/Large Models

Transformers at Scale = DeepNet + X-MoE

DeepNet: scaling Transformers to 1,000 Layers and beyond

X-MoE: scalable & finetunable sparse Mixture-of-Experts (MoE)

Foundation (aka Pre-trained) Models

General-purpose Foundation Model

MetaLM: Language Models are General-Purpose Interfaces

The Big Convergence - Large-scale self-supervised pre-training across tasks (predictive and generative), languages (100+ languages), and modalities (language, image, audio, layout/format + language, vision + language, audio + language, etc.)

Language & Multilingual

UniLM: unified pre-training for language understanding and generation

InfoXLM/XLM-E: multilingual/cross-lingual pre-trained models for 100+ languages

DeltaLM/mT6: encoder-decoder pre-training for language generation and translation for 100+ languages

MiniLM: small and fast pre-trained models for language understanding and generation

AdaLM: domain, language, and task adaptation of pre-trained models

EdgeLM (NEW): small pre-trained models on edge/client devices

SimLM (NEW): similarity matching with language model pre-training

Vision

BEiT: generative self-supervised pre-training for vision / BERT Pre-Training of Image Transformers

DiT (NEW): self-supervised pre-training for Document Image Transformers

Speech

WavLM: speech pre-training for full stack tasks

Multimodal (X + Language)

LayoutLM/LayoutLMv2/LayoutLMv3: multimodal (text + layout/format + image) pre-training for Document AI (e.g. scanned documents, PDF, etc.)

LayoutXLM: multimodal (text + layout/format + image) pre-training for multilingual document understanding

MarkupLM: markup language model pre-training for visually-rich document understanding

UniSpeech: unified pre-training for self-supervised learning and supervised learning for ASR

UniSpeech-SAT: universal speech representation learning with speaker-aware pre-training

SpeechT5: encoder-decoder pre-training for spoken language processing

VLMo: Unified vision-language pre-training

VL-BEiT (NEW): Generative Vision-Language Pre-training - evolution of BEiT to multimodal

Toolkits

s2s-ft: sequence-to-sequence fine-tuning toolkit

Aggressive Decoding (NEW): lossless and efficient sequence-to-sequence decoding algorithm

Applications

TrOCR: transformer-based OCR w/ pre-trained models

LayoutReader: pre-training of text and layout for reading order detection

XLM-T: multilingual NMT w/ pretrained cross-lingual encoders

News

  • July, 2022: SimLM - Large-scale self-supervised pre-training for similarity matching
  • June, 2022: DiT and LayoutLMv3 were accepted by ACM Multimedia 2022
  • June, 2022: MetaLM - Language models are general-purpose interfaces to foundation models (language/multilingual, vision, speech, and multimodal)
  • June, 2022: VL-BEiT - bidirectional multimodal Transformer learned from scratch with one unified pretraining task, one shared backbone, and one-stage training, supporting both vision and vision-language tasks.
  • [Model Release] June, 2022: LayoutLMv3 Chinese - Chinese version of LayoutLMv3
  • [Code Release] May, 2022: Aggressive Decoding - Lossless Speedup for Seq2seq Generation
  • April, 2022: Transformers at Scale = DeepNet + X-MoE
  • [Model Release] April, 2022: LayoutLMv3 - Pre-training for Document AI with Unified Text and Image Masking
  • [Model Release] March, 2022: EdgeFormer - Parameter-efficient Transformer for On-device Seq2seq Generation
  • [Model Release] March, 2022: DiT - Self-supervised Document Image Transformer. Demos: Document Layout Analysis, Document Image Classification
  • January, 2022: BEiT was accepted by ICLR 2022 as Oral presentation (54 out of 3391).
  • [Model Release] December 16th, 2021: TrOCR small models for handwritten and printed texts, with 3x inference speedup.
  • November 24th, 2021: VLMo as the new SOTA on the VQA Challenge
  • November, 2021: Multilingual translation at scale: 10000 language pairs and beyond
  • [Model Release] November, 2021: MarkupLM - Pre-training for text and markup language (e.g. HTML/XML)
  • [Model Release] November, 2021: VLMo - Unified vision-language pre-training w/ BEiT
  • October, 2021: WavLM Large achieves state-of-the-art performance on the SUPERB benchmark
  • [Model Release] October, 2021: WavLM - Large-scale self-supervised pre-trained models for speech.
  • [Model Release] October 2021: TrOCR is on HuggingFace
  • September 28th, 2021: T-ULRv5 (aka XLM-E/InfoXLM) as the SOTA on the XTREME leaderboard. // Blog
  • [Model Release] September, 2021: LayoutLM-cased are on HuggingFace
  • [Model Release] September, 2021: TrOCR - Transformer-based OCR w/ pre-trained BEiT and RoBERTa models.
  • August 2021: LayoutLMv2 and LayoutXLM are on HuggingFace
  • [Model Release] August, 2021: LayoutReader - Built with LayoutLM to improve general reading order detection.
  • [Model Release] August, 2021: DeltaLM - Encoder-decoder pre-training for language generation and translation.
  • August 2021: BEiT is on HuggingFace
  • [Model Release] July, 2021: BEiT - Towards BERT moment for CV
  • [Model Release] June, 2021: LayoutLMv2, LayoutXLM, MiniLMv2, and AdaLM.
  • May, 2021: LayoutLMv2, InfoXLMv2, MiniLMv2, UniLMv3, and AdaLM were accepted by ACL 2021.
  • April, 2021: LayoutXLM is coming by extending the LayoutLM into multilingual support! A multilingual form understanding benchmark XFUND is also introduced, which includes forms with human labeled key-value pairs in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese).
  • March, 2021: InfoXLM was accepted by NAACL 2021.
  • December 29th, 2020: LayoutLMv2 is coming with the new SOTA on a wide variety of document AI tasks, including the DocVQA and SROIE leaderboards.
  • October 8th, 2020: T-ULRv2 (aka InfoXLM) as the SOTA on the XTREME leaderboard. // Blog
  • September, 2020: MiniLM was accepted by NeurIPS 2020.
  • July 16, 2020: InfoXLM (Multilingual UniLM) arXiv
  • June, 2020: UniLMv2 was accepted by ICML 2020; LayoutLM was accepted by KDD 2020.
  • April 5, 2020: Multilingual MiniLM released!
  • September, 2019: UniLMv1 was accepted by NeurIPS 2019.

Release

***** New May, 2022: Aggressive Decoding release *****

  • Aggressive Decoding (May 20, 2022): Aggressive Decoding, a novel decoding paradigm for lossless speedup of seq2seq generation. Unlike previous efforts (e.g., non-autoregressive decoding) that speed up seq2seq generation at the cost of quality, Aggressive Decoding aims to yield generation identical to (or better than) autoregressive decoding with a significant speedup: for seq2seq tasks characterized by highly similar inputs and outputs (e.g., Grammatical Error Correction and Text Simplification), Input-guided Aggressive Decoding can deliver a 7x-9x speedup for the popular 6-layer Transformer on GPU with results identical to greedy decoding; for other general seq2seq tasks (e.g., Machine Translation and Abstractive Summarization), Generalized Aggressive Decoding can deliver a 3x-5x speedup with identical or even better quality. "Lossless Acceleration for Seq2seq Generation with Aggressive Decoding"
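
To make the idea concrete, below is a minimal, repository-independent sketch of the Input-guided Aggressive Decoding loop: draft the input as the output, verify the whole draft with a single parallel decoder pass, keep the longest prefix the model agrees with plus the model's own token at the first mismatch, and repeat. The `greedy_predict` interface and the toy "model" are assumptions made for illustration, not this repository's API.

```python
def aggressive_decode(src_tokens, greedy_predict, eos="<eos>", max_len=128):
    """Input-guided Aggressive Decoding sketch: draft = copy of the source,
    verify in one pass, keep the longest verified prefix, resume from the
    first mismatch."""
    output = []
    while len(output) < max_len and (not output or output[-1] != eos):
        # Draft: assume the rest of the output simply copies the rest of the source.
        draft = output + src_tokens[len(output):]
        # One parallel verification pass: preds[i] is the model's greedy choice for
        # position i given draft[:i]; preds is expected to have len(draft) + 1 entries.
        preds = greedy_predict(draft)
        i = len(output)
        # Accept drafted tokens until the model first disagrees.
        while i < len(draft) and preds[i] == draft[i]:
            output.append(draft[i])
            if draft[i] == eos:
                return output
            i += 1
        # At the first mismatch (or past the end of the draft), keep the model's token.
        output.append(preds[i])
    return output


# Toy usage: a "model" whose greedy output copies the source but fixes "teh" -> "the".
def toy_predict(draft, src=("this", "is", "teh", "test", "<eos>")):
    target = ["the" if tok == "teh" else tok for tok in src]
    return [target[i] if i < len(target) else "<eos>" for i in range(len(draft) + 1)]


print(aggressive_decode(["this", "is", "teh", "test", "<eos>"], toy_predict))
# -> ['this', 'is', 'the', 'test', '<eos>']
```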

***** New April, 2022: LayoutLMv3 release *****

  • LayoutLM 3.0 (April 19, 2022): LayoutLMv3, a multimodal pre-trained Transformer for Document AI with unified text and image masking. It is additionally pre-trained with a word-patch alignment objective to learn cross-modal alignment by predicting whether the corresponding image patch of a text word is masked. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model for both text-centric and image-centric Document AI tasks. Experimental results show that LayoutLMv3 achieves state-of-the-art performance not only in text-centric tasks, including form understanding, receipt understanding, and document visual question answering, but also in image-centric tasks such as document image classification and document layout analysis. "LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking ACM MM 2022"
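
For readers who want to try the released checkpoints, the sketch below shows one way to run LayoutLMv3 for a token-classification task (e.g., form understanding) through the Hugging Face transformers library. The checkpoint name, label count, and the example words/boxes are illustrative assumptions, not part of this repository.

```python
# Minimal sketch: LayoutLMv3 for token classification via Hugging Face transformers.
from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification
from PIL import Image

processor = LayoutLMv3Processor.from_pretrained(
    "microsoft/layoutlmv3-base", apply_ocr=False  # we supply our own words/boxes
)
# num_labels=7 is an arbitrary example; the classification head is randomly
# initialized and meant to be fine-tuned on a labeled dataset (e.g., FUNSD).
model = LayoutLMv3ForTokenClassification.from_pretrained(
    "microsoft/layoutlmv3-base", num_labels=7
)

image = Image.open("form.png").convert("RGB")        # placeholder document image
words = ["Invoice", "Number:", "12345"]              # OCR words (assumed)
boxes = [[10, 10, 90, 30], [100, 10, 180, 30], [190, 10, 250, 30]]  # boxes on a 0-1000 scale

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
outputs = model(**encoding)
predictions = outputs.logits.argmax(-1)              # per-token label ids
```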

***** March, 2022: EdgeFormer release *****

  • EdgeFormer (March 18, 2022): EdgeFormer, the first publicly available pretrained parameter-efficient Transformer for on-device seq2seq generation. EdgeFormer has only 11 million parameters, taking up less than 15MB of disk space after int8 quantization and compression, and it can process a sentence of 20-30 tokens with acceptable latency on two mid-to-high-end CPU cores and a memory footprint of less than 50MB. The pretrained EdgeFormer can be fine-tuned on English seq2seq tasks with promising results -- significantly better than the strong parameter-efficient Transformer baseline (pretrained Universal Transformer) and the fully-parameterized Transformer-base model without pretraining -- which we believe can largely facilitate on-device seq2seq generation in practice. "EdgeFormer: A Parameter-Efficient Transformer for On-Device Seq2seq Generation"
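
EdgeFormer's own code is not reproduced here, but the int8 quantization step mentioned above can be illustrated with PyTorch dynamic quantization applied to a small stand-in seq2seq Transformer; the module sizes below are arbitrary and only meant to show how Linear weights shrink to 8-bit storage for inference.

```python
# Hedged illustration of int8 dynamic quantization on a stand-in seq2seq model
# (not EdgeFormer's actual architecture or quantization pipeline).
import torch
import torch.nn as nn

model = nn.Transformer(d_model=256, nhead=4,
                       num_encoder_layers=6, num_decoder_layers=2)

# Replace Linear layers with dynamically quantized versions: weights are stored
# in int8, roughly quartering the disk/memory footprint of those layers.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

fp32_mb = sum(p.numel() for p in model.parameters()) * 4 / 1e6
print(f"fp32 parameters: ~{fp32_mb:.1f} MB before quantization")
torch.save(quantized.state_dict(), "seq2seq_int8.pt")  # serialized int8 weights
```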

***** March, 2022: DiT release *****

  • DiT (March 4, 2022): DiT, a self-supervised pre-trained Document Image Transformer model trained on large-scale unlabeled text images for Document AI tasks, which is essential since no supervised counterpart exists due to the lack of human-labeled document images. We leverage DiT as the backbone network in a variety of vision-based Document AI tasks, including document image classification, document layout analysis, table detection, and text detection for OCR. Experimental results show that the self-supervised pre-trained DiT model achieves new state-of-the-art results on these downstream tasks, e.g., document image classification (91.11 → 92.69), document layout analysis (91.0 → 94.9), table detection (94.23 → 96.55), and text detection for OCR (93.07 → 94.29). "DiT: Self-supervised Pre-training for Document Image Transformer ACM MM 2022"
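
The sketch below shows one plausible way to run a fine-tuned DiT checkpoint for document image classification through recent versions of Hugging Face transformers; the checkpoint identifier and the example file name are assumptions for illustration.

```python
# Minimal sketch: DiT-based document image classification (RVL-CDIP labels).
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image

processor = AutoImageProcessor.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")
model = AutoModelForImageClassification.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")

image = Image.open("scanned_page.png").convert("RGB")   # placeholder scan
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # e.g. "invoice", "letter", ...
```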

***** October, 2021: WavLM release *****

  • WavLM (October 27, 2021): WavLM, a new pre-trained speech model for full-stack downstream speech tasks. WavLM integrates a gated relative position embedding structure and an utterance mixing method to model spoken content while preserving speaker identity. WavLM is trained on 94k hours of public audio data, more than any other released checkpoint for English speech modeling. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks. "WavLM: Large-Scale Self-Supervised Pre-training for Full Stack Speech Processing"
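
As a quick illustration, the sketch below extracts WavLM features through Hugging Face transformers, e.g., as a front-end for a downstream speech task; the checkpoint name and the random waveform are stand-ins for real inputs.

```python
# Minimal sketch: WavLM feature extraction via Hugging Face transformers.
import torch
from transformers import AutoFeatureExtractor, WavLMModel

extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-large")
model = WavLMModel.from_pretrained("microsoft/wavlm-large")

waveform = torch.randn(16000)  # one second of 16 kHz audio as a placeholder
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden)
print(hidden_states.shape)
```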

***** October, 2021: MarkupLM release *****

  • MarkupLM (October 19, 2021): MarkupLM, a simple yet effective pre-training approach for text and markup language. Built on the Transformer architecture, MarkupLM integrates different input embeddings, including text embeddings, position embeddings, and XPath embeddings. Furthermore, we also propose new pre-training objectives specially designed for understanding markup language. We evaluate the pre-trained MarkupLM model on the WebSRC and SWDE datasets. Experiments show that MarkupLM significantly outperforms several SOTA baselines on these tasks. "MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding ACL 2022"
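
A minimal sketch of encoding an HTML page with a MarkupLM checkpoint via Hugging Face transformers follows; the processor derives the XPath features described above from the raw markup. The checkpoint name and HTML snippet are illustrative assumptions, and HTML parsing relies on BeautifulSoup being installed.

```python
# Minimal sketch: encoding HTML with MarkupLM via Hugging Face transformers.
from transformers import MarkupLMProcessor, MarkupLMModel

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
model = MarkupLMModel.from_pretrained("microsoft/markuplm-base")

html = "<html><body><h1>Product</h1><p>Price: $19.99</p></body></html>"
encoding = processor(html, return_tensors="pt")   # tokens plus XPath tag/subscript ids
outputs = model(**encoding)
print(outputs.last_hidden_state.shape)            # contextual node/token representations
```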

***** September, 2021: TrOCR release *****

  • TrOCR (September 22, 2021): Transformer-based OCR with pre-trained models, which leverages the Transformer architecture for both image understanding and BPE-level text generation. The TrOCR model is simple but effective (convolution-free), and can be pre-trained with large-scale synthetic data and fine-tuned on human-labeled datasets. "TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models"
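
Since TrOCR checkpoints are also published on Hugging Face (see News above), the sketch below shows a minimal inference example; the image path is a placeholder and the handwritten-base checkpoint is used for illustration.

```python
# Minimal sketch: TrOCR inference on a single cropped text-line image.
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

image = Image.open("text_line.png").convert("RGB")       # placeholder text-line crop
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
```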

***** August, 2021: LayoutReader release *****

***** August, 2021: DeltaLM release *****

***** July, 2021: BEiT release *****

***** June, 2021: LayoutXLM | AdaLM | MiniLMv2 release *****

***** May, 2021: LayoutLMv2 | LayoutXLM release *****

  • LayoutLM 2.0 (December 29, 2020): multimodal pre-training for visually-rich document understanding that leverages text, layout, and image information in a single framework. It sets new SOTA on a wide range of document understanding tasks, including FUNSD (0.7895 → 0.8420), CORD (0.9493 → 0.9601), SROIE (0.9524 → 0.9781), Kleister-NDA (0.834 → 0.852), RVL-CDIP (0.9443 → 0.9564), and DocVQA (0.7295 → 0.8672). "LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding ACL 2021"

***** February, 2020: UniLM v2 | MiniLM v1 | LayoutLM v1 | s2s-ft v1 release *****

***** October 1st, 2019: UniLM v1 release *****

License

This project is licensed under the license found in the LICENSE file in the root directory of this source tree. Portions of the source code are based on the transformers project.

Microsoft Open Source Code of Conduct

Contact Information

For help or issues using the pre-trained models, please submit a GitHub issue.

For other communications, please contact Furu Wei (fuwei@microsoft.com).
