
[Enhancement] Add default multi-modal process function in ml-commons #2364

Closed
zane-neo opened this issue Apr 26, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

@zane-neo (Collaborator)

Is your feature request related to a problem?
Multi-modal search is already supported in neural-search and ml-commons, but in ml-commons users currently have to write painless scripts as pre/post-process functions for the inputs and outputs of multi-modal requests, which is inconvenient.

What solution would you like?
We can add default pre/post-process functions for the multi-modal case so that users can configure their connectors with a simple function-name string instead of a script.
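
As a rough sketch of what this could look like in a connector blueprint (the built-in function names, the Bedrock endpoint, and the model shown here are only illustrative; the actual names are whatever the implementation finally ships), the connector action would reference a default function by name instead of carrying a hand-written painless script:

    "actions": [
      {
        "action_type": "predict",
        "method": "POST",
        "url": "https://bedrock-runtime.us-east-1.amazonaws.com/model/amazon.titan-embed-image-v1/invoke",
        "request_body": "{ \"inputText\": \"${parameters.inputText}\", \"inputImage\": \"${parameters.inputImage}\" }",
        "pre_process_function": "connector.pre_process.bedrock.multimodal_embedding",
        "post_process_function": "connector.post_process.bedrock.embedding"
      }
    ]

Today, by contrast, the pre_process_function and post_process_function fields would each hold a multi-line painless script that builds the model request and parses the response by hand.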

What alternatives have you considered?
A clear and concise description of any alternative solutions or features you've considered.

Do you have any additional context?
Add any other context or screenshots about the feature request here.

@dhrubo-os (Collaborator)

@zane-neo are you planning to release this enhancement in 2.15? What's the plan?

@zane-neo (Collaborator, Author)

> @zane-neo are you planning to release this enhancement in 2.15? What's the plan?

This will be released in 2.16.

github-project-automation (bot) moved this from On-deck to Done in ml-commons projects on Jul 30, 2024