[FEA] Support packing to a max input sequence length with cudf-subword tokenizer #6089
Labels:
- feature request — New feature or request
- libcudf — Affects libcudf (C++/CUDA) code.
- Python — Affects Python cuDF API.
- strings — strings issues (C++ and Python)
Is your feature request related to a problem? Please describe.
Currently, if a tokenized string is shorter than `max_length`, the output is padded with 0s. So if max(tokenized string lengths) < `max_length`, we incur a performance penalty, as the compute time for Transformer models is often proportional to the input sequence length. HuggingFace's tokenizer defaults to padding to the max input sequence length if `max_length` and `pad_to_max_length` are not provided. We should try to follow that; this is especially beneficial for the streaming cases that feature #5868 will help. See the example below:
Padding to max sequence length (Proposed Default Behaviour)
Padding to max_length (Current Default Behavior)
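As a rough illustration of the two behaviours, here is a minimal sketch using plain numpy and hypothetical token ids (not actual cudf-subword output):

```python
import numpy as np

# Hypothetical tokenized sequences (token ids per string); in practice these
# would come from the cudf subword tokenizer.
token_ids = [[101, 7592, 102], [101, 7592, 2088, 999, 102]]

max_length = 10                                # user-requested max_length
longest = max(len(ids) for ids in token_ids)   # max input sequence length (5 here)

def pad(seqs, length):
    """Right-pad each sequence with 0s to `length`."""
    out = np.zeros((len(seqs), length), dtype=np.int64)
    for row, ids in enumerate(seqs):
        out[row, : len(ids)] = ids
    return out

print(pad(token_ids, max_length))  # current default: every row padded to width 10
print(pad(token_ids, longest))     # proposed default: rows padded only to width 5
```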
Related Implications:
a. We might have to switch from returning one-dimensional cupy arrays to 2-dimensional arrays for token-ids and attention masks. Most workflows already reshape to 2-D, so this should not carry performance penalties (see the sketch below).
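A minimal sketch of what returning 2-dimensional outputs amounts to, assuming the current outputs are flat 1-D cupy arrays of length `nrows * max_length` (an assumption for illustration, not taken from the libcudf docs):

```python
import cupy as cp

nrows, seq_len = 4, 8  # hypothetical batch of 4 strings padded to length 8

# Simulate today's flat 1-D outputs with random data.
flat_tokens = cp.random.randint(0, 30000, size=nrows * seq_len, dtype=cp.int32)
flat_masks = cp.ones(nrows * seq_len, dtype=cp.int32)

# Returning 2-D arrays is just the reshaped view most workflows already create,
# so the change itself should add no extra cost.
token_ids_2d = flat_tokens.reshape(nrows, seq_len)
attention_mask_2d = flat_masks.reshape(nrows, seq_len)
```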
Describe alternatives you've considered
Currently, a user can do the tokenization twice before the `to_dlpack` call (I do this for gpu-bdb q27 HF). As most of the time is spent doing `to_dlpack`, this workaround should not have big performance implications.

CC: @raykallen, @randerzander, @davidwendt