I see that the multimodal models in the examples all use TensorRT directly to deploy their vision encoders. Why not use TensorRT-LLM? Are there known issues or challenges with integrating Context FMHA into visual encoders?
Yes, you can try using TensorRT-LLM for the vision encoders. We have a Bert example and a DiT example, and the community has also contributed an SDXL model. I don't think it would be hard to develop a ViT model.
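For context, a ViT encoder is essentially the same attention + MLP stack as the existing Bert example, just without a causal mask. Below is a minimal sketch of such a block in plain PyTorch, only to show the structure one would rebuild from TensorRT-LLM's layer primitives; it is not the TensorRT-LLM API, and all class/parameter names here are illustrative.

```python
# Hedged sketch: a minimal pre-norm ViT encoder block in plain PyTorch.
# This is NOT the TensorRT-LLM API; to deploy with TensorRT-LLM you would
# express the same structure with tensorrt_llm layers, as in the Bert/DiT examples.
import torch
import torch.nn as nn


class ViTBlock(nn.Module):
    def __init__(self, hidden_size: int = 768, num_heads: int = 12, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(hidden_size)
        # Bidirectional self-attention over image patch tokens (no causal mask).
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(hidden_size)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, mlp_ratio * hidden_size),
            nn.GELU(),
            nn.Linear(mlp_ratio * hidden_size, hidden_size),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pre-norm residual attention, as in a standard ViT encoder.
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        # Pre-norm residual MLP.
        return x + self.mlp(self.norm2(x))


if __name__ == "__main__":
    # e.g. 196 patch tokens of width 768 (ViT-B/16-like shapes).
    tokens = torch.randn(1, 196, 768)
    print(ViTBlock()(tokens).shape)  # torch.Size([1, 196, 768])
```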