Add Llama 3.2 Vision Model Support in NeMo 2.0 (#10763)
* add initial code for llama vlm

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* some restructure

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* add mock data placeholder

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Fix some importing

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* add language component for vlm llama

* update code

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* now match num of params

* update language part and fix vision part

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* minor fix

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* model can now init

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* minor update for llama32 text config

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* make checkpoint loading work

* missing import

* match vision part tensor shapes with configs

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* solve some fwd issues and mismatch issues

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* add vision import

* fixes

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* update importer to convert both text and image weights

* importer typos and reduce clutter

* fix import qkv

* some fixes for LLM

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Add embedding

* some updates

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* enable loading only text or only vision

* add example script

* TP fix

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* update

* upload examples

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* update generate

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* update to newer version

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* upload for sharing

* update to new pyt ckpt

* xattn_caches matches (except small differences due to TE RMSNorm)

* cleanup

* embeddings match

* match precision of weights

* update sharded state dict

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* change xattn layer num to 3 7 11 etc

* upload llama generation

* minor fix

* fix dummy layer input format

* fix vision qkv order

* fix sharded state dict

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix vision precision

* fix rope

* match cross attn layer

* remove nrep

* Remove cross attention in ImageTransformerLayer and fix _gate_ffn

* PP draft

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Fix intermediate tensor

* temp save for pp2 is working

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix pp issues

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* merge

* update mcore parallelism initialization

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* small update to pretrain script

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* update mcore parallelism initialization

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* added energon dataloader for neva training (#10451)

* added energon dataloader for neva training

* Apply isort and black reformatting

Signed-off-by: yashaswikarnati <yashaswikarnati@users.noreply.github.com>

* specify global batch size to support grad accumulation

* adding neva pretrain example

* Apply isort and black reformatting

Signed-off-by: yashaswikarnati <yashaswikarnati@users.noreply.github.com>

* change pretrain example to handle new ckpt reloading

* fixed code quality warnings and unused imports

Signed-off-by: ykarnati <ykarnati@nvidia.com>

* minor changes for PR comments

* Apply isort and black reformatting

Signed-off-by: yashaswikarnati <yashaswikarnati@users.noreply.github.com>

* refactor conversation template config

* Apply isort and black reformatting

Signed-off-by: yashaswikarnati <yashaswikarnati@users.noreply.github.com>

* remove optional import

---------

Signed-off-by: yashaswikarnati <yashaswikarnati@users.noreply.github.com>
Signed-off-by: ykarnati <ykarnati@nvidia.com>
Co-authored-by: yashaswikarnati <yashaswikarnati@users.noreply.github.com>
(cherry picked from commit 7354740)
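The "specify global batch size to support grad accumulation" change above follows the usual Megatron-style batching arithmetic, where the global batch is split into micro-batches accumulated across data-parallel ranks. A rough sketch of that relationship (function name and checks are hypothetical, not the NeMo API):

```python
# Hypothetical sketch of Megatron-style batch-size bookkeeping; not NeMo's code.
def gradient_accumulation_steps(global_batch_size: int,
                                micro_batch_size: int,
                                data_parallel_size: int) -> int:
    """Number of micro-batches each rank accumulates before one optimizer step."""
    per_step = micro_batch_size * data_parallel_size
    if global_batch_size % per_step != 0:
        raise ValueError("global batch size must be divisible by micro_batch * dp_size")
    return global_batch_size // per_step

# e.g. a global batch of 128 with micro-batch 4 on 8 data-parallel ranks
print(gradient_accumulation_steps(128, 4, 8))  # -> 4
```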

* llama energon dataloader

* have tokenizer for base task encoder class

* Update megatron_init.py

Signed-off-by: Yu Yao <54727607+yaoyu-33@users.noreply.github.com>

* Add simple inference

* evian3 update

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* add encoder parallel default config

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* add encoder parallel default config

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* clean up

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* add aspect ratio in model

* support energon dataloader

* some pp update

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fixes

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix kv merging

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix get_key_value_tensors

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* rename files

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* update to HF style position embedding

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix energon dataloader and support batching

* update forward args

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* clean up and move to aspect_ratio_ids

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* rename back to language.py

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix loss function

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* update and fix energon

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Add hf import

* Fix type

* Change config

* update energon pretrain

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* clean up

* clean up

* reformat

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* update inference files for new code

* update to instruct

* update to instruct

* update few names

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* update generation

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix importer embedding.weight

* few fixes

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* add hf script

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix kv import

* remove interleaved

* fixes and updates

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* lora fixes

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* some code clean ups

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* update training scripts

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* refactors

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* add LoRA finetuning

* fixes and nemo update

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix importer registering issue by adding 11B and 90B configs

* update `decoder_seq_len`

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* science vqa script

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* clean up script name

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix ckpt save serialization issue

* fix predefined config classes

* add num_chunks in input

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix format

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* update finetuning scripts for PEFT

* add 11b recipe (need #10645 to test)

* fix mask generation

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* minor fix code style

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* Support no image inference

* add llama svqa eval

* fix masking

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* fix generation

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* add 90b recipe and revise 11b recipe

* Apply isort and black reformatting

Signed-off-by: cuichenx <cuichenx@users.noreply.github.com>

* clean up typing

* add option to disable vision padding

* Apply isort and black reformatting

Signed-off-by: cuichenx <cuichenx@users.noreply.github.com>

* base model finetuning (does not work yet)

* Apply isort and black reformatting

Signed-off-by: cuichenx <cuichenx@users.noreply.github.com>

* fixed default conversation template config for MLLama

* Update svqa

* add multinode

* bot happy

* Apply isort and black reformatting

Signed-off-by: cuichenx <cuichenx@users.noreply.github.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* Apply isort and black reformatting

Signed-off-by: artbataev <artbataev@users.noreply.github.com>

* Perf improvements. Mainly from XAttn mask calculation (#10901)

* Perf improvements. Mainly from XAttn mask calculation

* Apply isort and black reformatting

Signed-off-by: parthmannan <parthmannan@users.noreply.github.com>

---------

Signed-off-by: parthmannan <parthmannan@users.noreply.github.com>
Co-authored-by: parthmannan <parthmannan@users.noreply.github.com>

* fix existing issues

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix scripts

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* fix lora

* few fixes for non image support

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* update masking gen

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* update lazy dataset

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix data sampler and loading issue

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Add vlm generation

* Apply isort and black reformatting

Signed-off-by: meatybobby <meatybobby@users.noreply.github.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* generation update

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* update lazy dataset

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Fix _strategy_lib.py

Signed-off-by: Yu Yao <54727607+yaoyu-33@users.noreply.github.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* fix warning

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* hide vlm examples

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Revert "Add vlm generation"

This reverts commit 4711c75

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Fix VisionEncoder multi-batch bug

* update mcore parallelism initialization

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* Update megatron_init.py

Signed-off-by: Yu Yao <54727607+yaoyu-33@users.noreply.github.com>

* add encoder parallel default config

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Fix _strategy_lib.py

Signed-off-by: Yu Yao <54727607+yaoyu-33@users.noreply.github.com>

* llm.generate fixes (#10983)

* fix context path, disable optimizer init, add tp

Signed-off-by: HuiyingLi <willwin.lee@gmail.com>

* format

Signed-off-by: HuiyingLi <willwin.lee@gmail.com>

* address comments, require user to provide trainer

Signed-off-by: HuiyingLi <willwin.lee@gmail.com>

* minor fix

Signed-off-by: HuiyingLi <willwin.lee@gmail.com>

* minor fixes

Signed-off-by: HuiyingLi <willwin.lee@gmail.com>

---------

Signed-off-by: HuiyingLi <willwin.lee@gmail.com>

* use __dict__ in check (#11012)

* check is_hf_model in leaf module

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: akoumpa <akoumpa@users.noreply.github.com>

* disable getattr alternative path

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* fix

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* undo;

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

---------

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: akoumpa <akoumpa@users.noreply.github.com>
Co-authored-by: akoumpa <akoumpa@users.noreply.github.com>

* LoRA support for HF::AutoModelForCausalLM (#10982)

* add LinearAdapter

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* add hf lora example

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* remove unused imports

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* fix

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* fix

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* subclass mixin

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* remove stale imports

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* undo

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* fix scale

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* regex selector for peft

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* move lora

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* fmt

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* hf_auto_model_for_causal_lm finetune recipe

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: akoumpa <akoumpa@users.noreply.github.com>

---------

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: akoumpa <akoumpa@users.noreply.github.com>
Co-authored-by: akoumpa <akoumpa@users.noreply.github.com>
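For context on the `LinearAdapter` added above: LoRA augments a frozen linear layer with a low-rank trainable delta, scaled by `alpha / rank`, with the up-projection initialized to zero so the adapter starts as a no-op. A toy pure-Python sketch of the idea (NeMo's actual `LinearAdapter` differs in API and implementation):

```python
# Toy LoRA-style linear adapter; illustrative only, not NeMo's LinearAdapter.
import random

def matvec(m, v):
    # Plain matrix-vector product over nested lists.
    return [sum(row[i] * v[i] for i in range(len(v))) for row in m]

class LoraLinear:
    def __init__(self, weight, rank=2, alpha=4.0):
        self.weight = weight                      # frozen base weight (out x in)
        out_dim, in_dim = len(weight), len(weight[0])
        # A is trainable and initialized small; B starts at zero, so the
        # adapter contributes nothing before training.
        self.a = [[random.gauss(0, 0.01) for _ in range(in_dim)] for _ in range(rank)]
        self.b = [[0.0] * rank for _ in range(out_dim)]
        self.scale = alpha / rank

    def __call__(self, x):
        base = matvec(self.weight, x)
        low = matvec(self.a, x)           # project input down to rank-r space
        delta = matvec(self.b, low)       # project back up to the output space
        return [bs + self.scale * d for bs, d in zip(base, delta)]

w = [[1.0, 0.0], [0.0, 1.0]]
layer = LoraLinear(w)
print(layer([3.0, 4.0]))  # B == 0, so the output equals the base projection
```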

* Change default for always_save_context to True (#11014)

Signed-off-by: Abhishree <abhishreetm@gmail.com>
Co-authored-by: Pablo Garay <pagaray@nvidia.com>

* Add a build option to load_context (#10713)

* Add a build option to load_context

Signed-off-by: Marc Romeijn <mromeijn@nvidia.com>
Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* Adding test

Signed-off-by: Marc Romeijn <mromeijn@nvidia.com>
Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* Trying to fix failing CPU test

Signed-off-by: Marc Romeijn <mromeijn@nvidia.com>
Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* cherry-pick fix

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

---------

Signed-off-by: Marc Romeijn <mromeijn@nvidia.com>
Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Co-authored-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* Fix pip install (#11026)

* Move AutoTokenizer inline

Signed-off-by: Marc Romeyn <mromeijn@nvidia.com>

* Move einops to common requirements

Signed-off-by: Marc Romeyn <mromeijn@nvidia.com>

* Move AutoTokenizer import to top-level again in fine_tuning

Signed-off-by: Marc Romeyn <mromeijn@nvidia.com>

* Move megatron init inside nemo.lightning

Signed-off-by: Marc Romeyn <mromeijn@nvidia.com>

* Make megatron_lazy_init_context work when transformer-engine is not installed

Signed-off-by: Marc Romeyn <mromeijn@nvidia.com>

* Only import get_nmt_tokenizer when needed

Signed-off-by: Marc Romeyn <mromeijn@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: marcromeyn <marcromeyn@users.noreply.github.com>

---------

Signed-off-by: Marc Romeyn <mromeijn@nvidia.com>
Signed-off-by: marcromeyn <marcromeyn@users.noreply.github.com>
Co-authored-by: marcromeyn <marcromeyn@users.noreply.github.com>

* [WIP] Add docs for NEST SSL (#10804)

* add docs

Signed-off-by: stevehuang52 <heh@nvidia.com>

* update doc and fix missing param

Signed-off-by: stevehuang52 <heh@nvidia.com>

---------

Signed-off-by: stevehuang52 <heh@nvidia.com>

* Change dist ckpt defaults (#10913)

* Enable ckpt features by default (async ckpt), ckpt every 15mins and reduce preemption time to 1min

Signed-off-by: Shriya Palsamudram <spalsamudram@nvidia.com>

* fix ssm tests

Signed-off-by: Shriya Palsamudram <spalsamudram@nvidia.com>

* Make note that ckpt_async_save is disabled for SSMs

Signed-off-by: Shriya Palsamudram <spalsamudram@nvidia.com>

* Enable async ckpt for SSMs with fix

Signed-off-by: Shriya Palsamudram <spalsamudram@nvidia.com>

* Disable async ckpt in the peft test as it is a known bug, add note.

Signed-off-by: Shriya Palsamudram <spalsamudram@nvidia.com>

* Fix failing unit tests

Signed-off-by: Shriya Palsamudram <spalsamudram@nvidia.com>

* Ashors/peft async ckpt (#11010)

* [WIP] prototype for supporting async checkpointing with peft

Signed-off-by: ashors1 <ashors@nvidia.com>
Signed-off-by: Shriya Palsamudram <spalsamudram@nvidia.com>

* Enable async ckpt for the peft test

Signed-off-by: Shriya Palsamudram <spalsamudram@nvidia.com>

* Fix peft setup test

Signed-off-by: Shriya Palsamudram <spalsamudram@nvidia.com>

---------

Signed-off-by: Shriya Palsamudram <spalsamudram@nvidia.com>
Signed-off-by: ashors1 <ashors@nvidia.com>
Co-authored-by: ataghibakhsh <ataghibakhsh@nvidia.com>

* Akoumparouli/mixtral recipe fix r2.0.0 (#10994)

* Mixtral TP8 EP1

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: akoumpa <akoumpa@users.noreply.github.com>

---------

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: akoumpa <akoumpa@users.noreply.github.com>
Co-authored-by: akoumpa <akoumpa@users.noreply.github.com>

* Fix _strategy_lib tests (#11033)

* fix world size and don't mock

Signed-off-by: Maanu Grover <maanug@nvidia.com>

* cleanup global state

Signed-off-by: Maanu Grover <maanug@nvidia.com>

* check app state instead

Signed-off-by: Maanu Grover <maanug@nvidia.com>

* fix syntax nemo logger test

Signed-off-by: Maanu Grover <maanug@nvidia.com>

---------

Signed-off-by: Maanu Grover <maanug@nvidia.com>

* Update `BaseMegatronSampler` for compatibility with PTL's `_BatchProgress` (#11016)

* Revert "[NeMo-UX] Use custom `BatchProgress` class which does not restore states (#10383)"

This reverts commit b5798de.

* make megatron sampler return the total number of batches in the dataset

Signed-off-by: ashors1 <ashors@nvidia.com>

---------

Signed-off-by: ashors1 <ashors@nvidia.com>
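The sampler change above ("make megatron sampler return the total number of batches in the dataset") boils down to reporting the dataset length in global batches; a hedged sketch of that arithmetic (names hypothetical, not NeMo's `BaseMegatronSampler`):

```python
# Illustrative length computation for a Megatron-style sampler; not NeMo's code.
import math

def total_batches(num_samples: int, global_batch_size: int,
                  drop_last: bool = True) -> int:
    """Total batches the sampler would report for the whole dataset."""
    if drop_last:
        return num_samples // global_batch_size
    return math.ceil(num_samples / global_batch_size)

print(total_batches(1000, 128))                   # -> 7
print(total_batches(1000, 128, drop_last=False))  # -> 8
```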

* PTQ example for NeMo 2.0 (#10642)

* initial commit

Signed-off-by: Piotr Kaminski <pikaminski@nvidia.com>

* create Quantizer for NeMo 2.0

Signed-off-by: Piotr Kaminski <pikaminski@nvidia.com>

* refactor

Signed-off-by: Piotr Kaminski <pikaminski@nvidia.com>

* Call quantize on an unwrapped mcore model

Signed-off-by: Piotr Kaminski <pikaminski@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Laplasjan107 <Laplasjan107@users.noreply.github.com>

* Add tests, adjust unwrapping

Signed-off-by: Piotr Kaminski <pikaminski@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Laplasjan107 <Laplasjan107@users.noreply.github.com>

* fix export

Signed-off-by: Piotr Kaminski <pikaminski@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Laplasjan107 <Laplasjan107@users.noreply.github.com>

* Apply isort and black reformatting

Signed-off-by: artbataev <artbataev@users.noreply.github.com>

* Fix output_path argument for HF import

Signed-off-by: Piotr Kamiński <67481570+Laplasjan107@users.noreply.github.com>

* fix fabric ckpt loading

Signed-off-by: Piotr Kaminski <pikaminski@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Laplasjan107 <Laplasjan107@users.noreply.github.com>

* code review suggestions

Signed-off-by: Piotr Kaminski <pikaminski@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Laplasjan107 <Laplasjan107@users.noreply.github.com>

* remove unused import

Signed-off-by: Piotr Kaminski <pikaminski@nvidia.com>

* use cnn dataset in github ci

Signed-off-by: Piotr Kaminski <pikaminski@nvidia.com>

* applied code review

Signed-off-by: Piotr Kaminski <pikaminski@nvidia.com>

* code review changes

Signed-off-by: Piotr Kaminski <pikaminski@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Laplasjan107 <Laplasjan107@users.noreply.github.com>

* simplify interface for data iterator

Signed-off-by: Piotr Kaminski <pikaminski@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Laplasjan107 <Laplasjan107@users.noreply.github.com>

* (partial) PP fix

Signed-off-by: Piotr Kaminski <pikaminski@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Laplasjan107 <Laplasjan107@users.noreply.github.com>

---------

Signed-off-by: Piotr Kaminski <pikaminski@nvidia.com>
Signed-off-by: Laplasjan107 <Laplasjan107@users.noreply.github.com>
Signed-off-by: Piotr Kamiński <67481570+Laplasjan107@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: Piotr Kaminski <pikaminski@nvidia.com>
Co-authored-by: Laplasjan107 <Laplasjan107@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>

* TDT compute timestamps option and Extra Whitespace handling for SPE (#10875)

* add token duration

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* revert rnnt change

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* add remove_extra_whitespaces arg to spe tokenizer

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* add token duration retrieval

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* add ignore_extra_whitespace to spe

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* add compute_timestamp support for tdt

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* fix config field name

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* add refinement for tdt timestamps

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* add segments timestamp support and  refinement for ctc

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* modify tests for ctc decoding timestamps

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* add rnnt timestamp tests

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* updated doc

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* fix in test

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: monica-sekoyan <monica-sekoyan@users.noreply.github.com>

* fix of unicode char

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* fix rnnt_decoding test

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* workaround for test tokenizer

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: monica-sekoyan <monica-sekoyan@users.noreply.github.com>

* modify segments formation

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* modify segments for ctc

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* fix in ctc refinement

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: monica-sekoyan <monica-sekoyan@users.noreply.github.com>

* minor changes

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* reverse offset change

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: monica-sekoyan <monica-sekoyan@users.noreply.github.com>

* warning mode=once

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: monica-sekoyan <monica-sekoyan@users.noreply.github.com>

* make ignore_extrawhitespaces false

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* minor changes

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* adjust changes to the tests

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* modify prompt_formatter tests

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: monica-sekoyan <monica-sekoyan@users.noreply.github.com>

---------

Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>
Signed-off-by: monica-sekoyan <monica-sekoyan@users.noreply.github.com>
Co-authored-by: monica-sekoyan <monica-sekoyan@users.noreply.github.com>

* Basic online dynamic FP8 quantization with vLLM (#10904)

* Basic online dynamic quantization with vLLM

Signed-off-by: Jan Lasek <janek.lasek@gmail.com>

* Apply isort and black reformatting

Signed-off-by: janekl <janekl@users.noreply.github.com>

* vllm 0.6.3 updates

Signed-off-by: Jan Lasek <janek.lasek@gmail.com>

* Pass quantization param in deploy_vllm_triton.py script

Signed-off-by: Jan Lasek <janek.lasek@gmail.com>

---------

Signed-off-by: Jan Lasek <janek.lasek@gmail.com>
Signed-off-by: janekl <janekl@users.noreply.github.com>
Co-authored-by: janekl <janekl@users.noreply.github.com>

* ci: Improve VM maintenance (#10758)

* ci: Improve VM maintenance

Signed-off-by: Oliver Koenig <okoenig@nvidia.com>

* rename stuff

Signed-off-by: Oliver Koenig <okoenig@nvidia.com>

* title

Signed-off-by: Oliver Koenig <okoenig@nvidia.com>

* use team

Signed-off-by: Oliver Koenig <okoenig@nvidia.com>

* run on failure too

Signed-off-by: Oliver Koenig <okoenig@nvidia.com>

* fix

Signed-off-by: Oliver Koenig <okoenig@nvidia.com>

* yrdy

Signed-off-by: Oliver Koenig <okoenig@nvidia.com>

* f

Signed-off-by: Oliver Koenig <okoenig@nvidia.com>

* test

Signed-off-by: Oliver Koenig <okoenig@nvidia.com>

* fix

Signed-off-by: Oliver Koenig <okoenig@nvidia.com>

* f

Signed-off-by: Oliver Koenig <okoenig@nvidia.com>

* f

Signed-off-by: Oliver Koenig <okoenig@nvidia.com>

* f

Signed-off-by: Oliver Koenig <okoenig@nvidia.com>

---------

Signed-off-by: Oliver Koenig <okoenig@nvidia.com>

* Add comment for vision transpose

* update megatron_init.py inside lightning

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* rename llama to mllama folder name

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* update to attention bias

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* update dropout to 0

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix attention bias

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* remove disable_vision_padding since we now have a fix

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* Update init for mllama

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* Address comments

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* fix copyright title

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix code scan

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* update vision code

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* revert attention bias changes until latest MLM code got merged

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix warning

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Turn off system message check, as it's "" now

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Rollback megatron_parallel.py

Signed-off-by: Yu Yao <54727607+yaoyu-33@users.noreply.github.com>

---------

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>
Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>
Signed-off-by: Yu Yao <54727607+yaoyu-33@users.noreply.github.com>
Signed-off-by: cuichenx <cuichenx@users.noreply.github.com>
Signed-off-by: Chen Cui <chcui@nvidia.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Signed-off-by: parthmannan <parthmannan@users.noreply.github.com>
Signed-off-by: meatybobby <meatybobby@users.noreply.github.com>
Signed-off-by: HuiyingLi <willwin.lee@gmail.com>
Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: akoumpa <akoumpa@users.noreply.github.com>
Signed-off-by: Abhishree <abhishreetm@gmail.com>
Signed-off-by: Marc Romeijn <mromeijn@nvidia.com>
Signed-off-by: Marc Romeyn <mromeijn@nvidia.com>
Signed-off-by: marcromeyn <marcromeyn@users.noreply.github.com>
Signed-off-by: stevehuang52 <heh@nvidia.com>
Signed-off-by: Shriya Palsamudram <spalsamudram@nvidia.com>
Signed-off-by: ashors1 <ashors@nvidia.com>
Signed-off-by: Maanu Grover <maanug@nvidia.com>
Signed-off-by: Piotr Kaminski <pikaminski@nvidia.com>
Signed-off-by: Laplasjan107 <Laplasjan107@users.noreply.github.com>
Signed-off-by: Piotr Kamiński <67481570+Laplasjan107@users.noreply.github.com>
Signed-off-by: monica-sekoyan <msekoyan@nvidia.com>
Signed-off-by: monica-sekoyan <monica-sekoyan@users.noreply.github.com>
Signed-off-by: Jan Lasek <janek.lasek@gmail.com>
Signed-off-by: janekl <janekl@users.noreply.github.com>
Signed-off-by: Oliver Koenig <okoenig@nvidia.com>
Co-authored-by: Ao Tang <aot@nvidia.com>
Co-authored-by: Chen Cui <chcui@nvidia.com>
Co-authored-by: Bobby Chen <bobchen@nvidia.com>
Co-authored-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>
Co-authored-by: Yashaswi Karnati <144376261+yashaswikarnati@users.noreply.github.com>
Co-authored-by: ykarnati <ykarnati@nvidia.com>
Co-authored-by: cuichenx <cuichenx@users.noreply.github.com>
Co-authored-by: Yashaswi Karnati <ykarnati@login-eos02.eos.clusters.nvidia.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: Parth Mannan <38387286+parthmannan@users.noreply.github.com>
Co-authored-by: parthmannan <parthmannan@users.noreply.github.com>
Co-authored-by: meatybobby <meatybobby@users.noreply.github.com>
Co-authored-by: Huiying <willwin.lee@gmail.com>
Co-authored-by: Alexandros Koumparoulis <153118171+akoumpa@users.noreply.github.com>
Co-authored-by: akoumpa <akoumpa@users.noreply.github.com>
Co-authored-by: Abhishree Thittenamane <47577437+athitten@users.noreply.github.com>
Co-authored-by: Pablo Garay <pagaray@nvidia.com>
Co-authored-by: Marc Romeyn <mromeijn@nvidia.com>
Co-authored-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Co-authored-by: marcromeyn <marcromeyn@users.noreply.github.com>
Co-authored-by: He Huang (Steve) <105218074+stevehuang52@users.noreply.github.com>
Co-authored-by: Shriya Rishab <69161273+ShriyaPalsamudram@users.noreply.github.com>
Co-authored-by: ataghibakhsh <ataghibakhsh@nvidia.com>
Co-authored-by: Maanu Grover <109391026+maanug-nv@users.noreply.github.com>
Co-authored-by: Anna Shors <71393111+ashors1@users.noreply.github.com>
Co-authored-by: Piotr Kamiński <67481570+Laplasjan107@users.noreply.github.com>
Co-authored-by: Piotr Kaminski <pikaminski@nvidia.com>
Co-authored-by: Laplasjan107 <Laplasjan107@users.noreply.github.com>
Co-authored-by: monica-sekoyan <166123533+monica-sekoyan@users.noreply.github.com>
Co-authored-by: monica-sekoyan <monica-sekoyan@users.noreply.github.com>
Co-authored-by: Jan Lasek <janek.lasek@gmail.com>
Co-authored-by: janekl <janekl@users.noreply.github.com>
Co-authored-by: oliver könig <okoenig@nvidia.com>
1 parent 124aa06 commit 4afa427
Showing 31 changed files with 3,986 additions and 52 deletions.
14 changes: 12 additions & 2 deletions nemo/collections/multimodal/data/energon/base.py
@@ -11,8 +11,9 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from typing import TYPE_CHECKING, Any, Dict, Literal, Optional
+from copy import deepcopy
+from typing import Any, Dict, Literal, Optional
 
 import fiddle as fdl
 import pytorch_lightning as pl
@@ -66,6 +67,7 @@ def __init__(
         pin_memory: bool = True,
         multimodal_sample_config: Optional[MultiModalSampleConfig] = MultiModalSampleConfig(),
         task_encoder: Optional[MultiModalTaskEncoder] = None,
+        decoder_seq_length: Optional[int] = None,
     ) -> None:
         """
         Initialize the SimpleMultiModalDataModule.
@@ -87,6 +89,7 @@ def __init__(
         self.tokenizer = tokenizer
         self.image_processor = image_processor
         self.seq_length = seq_length
+        self.decoder_seq_length = decoder_seq_length
         self.micro_batch_size = micro_batch_size
         self.global_batch_size = global_batch_size
         self.num_workers = num_workers
@@ -99,13 +102,18 @@ def __init__(
)
self.init_global_step = 0
self.data_sampler = SequentialMegatronSampler(
seq_len=self.seq_length, micro_batch_size=self.micro_batch_size, global_batch_size=self.global_batch_size
seq_len=self.seq_length,
decoder_seq_len=self.decoder_seq_length,
micro_batch_size=self.micro_batch_size,
global_batch_size=self.global_batch_size,
)
self.train_dataloader_object = None
self.val_dataloader_object = None

def io_init(self, **kwargs) -> fdl.Config[Self]:
# (pleasefixme) image_processor and task_encoder are problematic with Fiddle so we skip serializing them for now
cfg_kwargs = {k: deepcopy(v) for k, v in kwargs.items() if k not in ['image_processor', 'task_encoder']}

for val in cfg_kwargs.values():
if not serialization.find_node_traverser(type(val)):
track_io(type(val))
@@ -323,6 +331,7 @@ def __init__(
micro_batch_size: int = 4,
global_batch_size: int = 8,
init_consumed_samples: int = 0,
decoder_seq_len: Optional[int] = None,
init_global_step=0,
):
"""
@@ -336,6 +345,7 @@
"""
super().__init__(
seq_len=seq_len,
decoder_seq_len=decoder_seq_len,
micro_batch_size=micro_batch_size,
global_batch_size=global_batch_size,
init_consumed_samples=init_consumed_samples,
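The hunks above thread a new optional `decoder_seq_length` through `SimpleMultiModalDataModule` into `SequentialMegatronSampler`. The pattern — a required encoder `seq_len` plus an optional decoder length that the data module simply forwards to its sampler — can be sketched in isolation (the class names below are illustrative stand-ins, not NeMo's actual classes):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SamplerConfig:
    # Mirrors the sampler signature in the diff: encoder length required,
    # decoder length optional and defaulting to None.
    seq_len: int
    decoder_seq_len: Optional[int] = None
    micro_batch_size: int = 4
    global_batch_size: int = 8


@dataclass
class DataModuleSketch:
    seq_length: int
    decoder_seq_length: Optional[int] = None

    def make_sampler(self) -> SamplerConfig:
        # The data module forwards both lengths to the sampler it builds,
        # as SimpleMultiModalDataModule now does for SequentialMegatronSampler.
        return SamplerConfig(
            seq_len=self.seq_length,
            decoder_seq_len=self.decoder_seq_length,
        )


dm = DataModuleSketch(seq_length=8192, decoder_seq_length=512)
print(dm.make_sampler().decoder_seq_len)  # -> 512
```

Keeping the decoder length optional means existing encoder-only callers need no changes; only encoder-decoder setups pass the extra argument.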
8 changes: 1 addition & 7 deletions nemo/collections/multimodal/data/energon/config.py
@@ -15,7 +15,7 @@
from dataclasses import dataclass, field
from typing import List
import torch
from nemo.collections.multimodal.data.energon.conversation import BaseConversationTemplateConfig
from nemo.collections.multimodal.data.energon.conversation import LLaVATemplateConfig


@dataclass
@@ -56,12 +56,6 @@ class ImageTextRawBatch:
loss_mask: torch.Tensor = field(default_factory=lambda: torch.empty(0, dtype=torch.float))


class LLaVATemplateConfig(BaseConversationTemplateConfig):
"""LLava specific template configuration which extends the base config"""

pass


@dataclass
class MultiModalSampleConfig:
image_token: ImageToken = field(default_factory=ImageToken)
20 changes: 20 additions & 0 deletions nemo/collections/multimodal/data/energon/conversation.py
@@ -19,6 +19,15 @@
class BaseConversationTemplateConfig:
"""Conversation template config related parameters"""

system: Optional[str] = "".format() # fmt: off
roles: List[str] = field(default_factory=lambda: ['user', 'assistant'])
stop_string: Optional[str] = None
chat_template = None


class LLaVATemplateConfig(BaseConversationTemplateConfig):
"""LLava specific template configuration which extends the base config"""

system: Optional[str] = (
"A chat between a curious user and artificial assistant agent. The assistant gives helpful, detailed and polite answers to user's questions.".format()
) # fmt: off
@@ -36,3 +45,14 @@ class BaseConversationTemplateConfig:
{%- endif %}
{%- endfor -%}
"""


class MLlamaTemplateConfig(BaseConversationTemplateConfig):
"""MLlama specific template configuration which extends the base config"""

system: Optional[str] = None
roles: List[str] = field(default_factory=lambda: ['user', 'assistant'])
stop_string: str = None
chat_template = """
'{{- bos_token }}\n{%- if custom_tools is defined %}\n {%- set tools = custom_tools %}\n{%- endif %}\n{%- if not tools_in_user_message is defined %}\n {%- set tools_in_user_message = true %}\n{%- endif %}\n{%- if not date_string is defined %}\n {%- if strftime_now is defined %}\n {%- set date_string = strftime_now("%d %b %Y") %}\n {%- else %}\n {%- set date_string = "26 Jul 2024" %}\n {%- endif %}\n{%- endif %}\n{%- if not tools is defined %}\n {%- set tools = none %}\n{%- endif %}\n\n{#- This block extracts the system message, so we can slot it into the right place. #}\n{%- if messages[0][\'role\'] == \'system\' %}\n {%- set system_message = messages[0][\'content\']|trim %}\n {%- set messages = messages[1:] %}\n{%- else %}\n {%- set system_message = "" %}\n{%- endif %}\n\n{#- Find out if there are any images #}\n{% set image_ns = namespace(has_images=false) %} \n{%- for message in messages %}\n {%- for content in message[\'content\'] %}\n {%- if content[\'type\'] == \'image\' %}\n {%- set image_ns.has_images = true %}\n {%- endif %}\n {%- endfor %}\n{%- endfor %}\n\n{#- Error out if there are images and system message #}\n{%- if image_ns.has_images and not system_message == "" %}\n {{- raise_exception("Prompting with images is incompatible with system messages.") }}\n{%- endif %}\n\n{#- System message if there are no images #}\n{%- if not image_ns.has_images %}\n {{- "<|start_header_id|>system<|end_header_id|>\\n\\n" }}\n {%- if tools is not none %}\n {{- "Environment: ipython\\n" }}\n {%- endif %}\n {{- "Cutting Knowledge Date: December 2023\\n" }}\n {{- "Today Date: " + date_string + "\\n\\n" }}\n {%- if tools is not none and not tools_in_user_message %}\n {{- "You have access to the following functions. To call a function, please respond with JSON for a function call." 
}}\n {{- \'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.\' }}\n {{- "Do not use variables.\\n\\n" }}\n {%- for t in tools %}\n {{- t | tojson(indent=4) }}\n {{- "\\n\\n" }}\n {%- endfor %}\n {%- endif %}\n {{- system_message }}\n {{- "<|eot_id|>" }}\n{%- endif %}\n\n{#- Custom tools are passed in a user message with some extra guidance #}\n{%- if tools_in_user_message and not tools is none %}\n {#- Extract the first user message so we can plug it in here #}\n {%- if messages | length != 0 %}\n {%- set first_user_message = messages[0][\'content\']|trim %}\n {%- set messages = messages[1:] %}\n {%- else %}\n {{- raise_exception("Cannot put tools in the first user message when there\'s no first user message!") }}\n{%- endif %}\n {{- \'<|start_header_id|>user<|end_header_id|>\\n\\n\' -}}\n {{- "Given the following functions, please respond with a JSON for a function call " }}\n {{- "with its proper arguments that best answers the given prompt.\\n\\n" }}\n {{- \'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.\' }}\n {{- "Do not use variables.\\n\\n" }}\n {%- for t in tools %}\n {{- t | tojson(indent=4) }}\n {{- "\\n\\n" }}\n {%- endfor %}\n {{- first_user_message + "<|eot_id|>"}}\n{%- endif %}\n\n{%- for message in messages %}\n {%- if not (message.role == \'ipython\' or message.role == \'tool\' or \'tool_calls\' in message) %}\n {{- \'<|start_header_id|>\' + message[\'role\'] + \'<|end_header_id|>\\n\\n\' }}\n {%- if message[\'content\'] is string %}\n {{- message[\'content\'] }}\n {%- else %}\n {%- for content in message[\'content\'] %}\n {%- if content[\'type\'] == \'image\' %}\n {{- \'<|image|>\' }}\n {%- elif content[\'type\'] == \'text\' %}\n {{- content[\'text\'] }}\n {%- endif %}\n {%- endfor %}\n {%- endif %}\n {{- \'<|eot_id|>\' }}\n {%- elif \'tool_calls\' in message %}\n {%- if not message.tool_calls|length == 1 %}\n {{- raise_exception("This 
model only supports single tool-calls at once!") }}\n {%- endif %}\n {%- set tool_call = message.tool_calls[0].function %}\n {{- \'<|start_header_id|>assistant<|end_header_id|>\\n\\n\' -}}\n {{- \'{"name": "\' + tool_call.name + \'", \' }}\n {{- \'"parameters": \' }}\n {{- tool_call.arguments | tojson }}\n {{- "}" }}\n {{- "<|eot_id|>" }}\n {%- elif message.role == "tool" or message.role == "ipython" %}\n {{- "<|start_header_id|>ipython<|end_header_id|>\\n\\n" }}\n {%- if message.content is mapping or message.content is iterable %}\n {{- message.content | tojson }}\n {%- else %}\n {{- message.content }}\n {%- endif %}\n {{- "<|eot_id|>" }}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- \'<|start_header_id|>assistant<|end_header_id|>\\n\\n\' }}\n{%- endif %}\n'
"""
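The `chat_template` above is a Jinja template targeting Llama 3.2's header-token layout. Its essential output shape — each turn wrapped in `<|start_header_id|>role<|end_header_id|>` and terminated by `<|eot_id|>`, with `<|image|>` standing in for image content — can be approximated in plain Python (a simplified sketch of the text/image path only; the system-message rules and tool-call branches of the real template are omitted):

```python
def render_mllama_turns(messages, add_generation_prompt=True):
    """Approximate the turn layout the Jinja chat template above produces.

    Covers only the simple text/image message case; tool calls, the
    image-vs-system-message check, and date handling are left out.
    """
    out = ["<|begin_of_text|>"]
    for msg in messages:
        out.append(f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n")
        content = msg["content"]
        if isinstance(content, str):
            out.append(content)
        else:
            # Structured content: a list of {"type": "image"|"text", ...} parts.
            for part in content:
                if part["type"] == "image":
                    out.append("<|image|>")
                elif part["type"] == "text":
                    out.append(part["text"])
        out.append("<|eot_id|>")
    if add_generation_prompt:
        # Open an assistant turn for the model to complete.
        out.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(out)


msgs = [{"role": "user",
         "content": [{"type": "image"}, {"type": "text", "text": "Describe this."}]}]
print(render_mllama_turns(msgs))
```

In practice the real template is applied via the tokenizer's chat-templating machinery rather than hand-rolled; the sketch only makes the token layout visible.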
2 changes: 1 addition & 1 deletion nemo/collections/multimodal/data/energon/task_encoder.py
@@ -62,7 +62,7 @@ def __init__(self, tokenizer, image_processor, multimodal_sample_config):
image_processor (ImageProcessor): The image processor used for preprocessing images across different sample types.
multimodal_sample_config (MultiModalSampleConfig): Configuration object for multimodal samples, including tokens and placeholders.
"""

self.tokenizer = tokenizer
self.encoders: Dict[str, SampleEncoder] = {
VQASample.__name__: VQASampleEncoder(
tokenizer=tokenizer,
51 changes: 45 additions & 6 deletions nemo/collections/vlm/__init__.py
@@ -1,3 +1,30 @@
# Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from nemo.collections.vlm.mllama.data import MLlamaLazyDataModule, MLlamaMockDataModule
from nemo.collections.vlm.mllama.model.base import (
CrossAttentionTextConfig,
CrossAttentionVisionConfig,
MLlamaModel,
MLlamaModelConfig,
)
from nemo.collections.vlm.mllama.model.mllama import (
MLlamaConfig11B,
MLlamaConfig11BInstruct,
MLlamaConfig90B,
MLlamaConfig90BInstruct,
)
from nemo.collections.vlm.neva.data import (
DataConfig,
ImageDataConfig,
@@ -6,24 +33,26 @@
MockDataModule,
MultiModalToken,
NevaLazyDataModule,
NevaMockDataModule,
VideoDataConfig,
VideoToken,
)
from nemo.collections.vlm.neva.model import (
from nemo.collections.vlm.neva.model.base import (
CLIPViTConfig,
HFCLIPVisionConfig,
Llava1_5Config7B,
Llava1_5Config13B,
LlavaConfig,
LlavaModel,
MultimodalProjectorConfig,
NevaConfig,
NevaModel,
)
from nemo.collections.vlm.neva.model.llava import Llava1_5Config7B, Llava1_5Config13B, LlavaConfig, LlavaModel
from nemo.collections.vlm.peft import LoRA
from nemo.collections.vlm.recipes import *

__all__ = [
"MockDataModule",
"NevaMockDataModule",
"NevaLazyDataModule",
"MLlamaMockDataModule",
"MLlamaLazyDataModule",
"DataConfig",
"ImageDataConfig",
"VideoDataConfig",
@@ -39,5 +68,15 @@
"Llava1_5Config7B",
"Llava1_5Config13B",
"LlavaModel",
"MLlamaModel",
"MLlamaModelConfig",
"CrossAttentionTextConfig",
"CrossAttentionVisionConfig",
"MLlamaConfig11B",
"MLlamaConfig11BInstruct",
"MLlamaConfig90B",
"MLlamaConfig90BInstruct",
"mllama_11b",
"mllama_90b",
"LlavaNextTaskEncoder",
]
17 changes: 17 additions & 0 deletions nemo/collections/vlm/mllama/__init__.py
@@ -0,0 +1,17 @@
# Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from transformers import PreTrainedTokenizerFast
from nemo.lightning.io import track_io

track_io(PreTrainedTokenizerFast)
21 changes: 21 additions & 0 deletions nemo/collections/vlm/mllama/data/__init__.py
@@ -0,0 +1,21 @@
# Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from nemo.collections.vlm.mllama.data.lazy import MLlamaLazyDataModule
from nemo.collections.vlm.mllama.data.mock import MockDataModule as MLlamaMockDataModule

__all__ = [
"MLlamaMockDataModule",
"MLlamaLazyDataModule",
]
