
Question: Does the ai-rag-chat-evaluator now support non-English QA pair generation using the new azure-ai-evaluation package? #114

Closed
EMjetrot opened this issue Dec 12, 2024 · 2 comments

@EMjetrot

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

I'm very interested in being able to generate QA pairs in non-English languages. I can see that a previous question on this topic (#35) led to a documentation change stating that only English QA pairs can be generated, due to limitations in the azure-ai-generative package.


It seems that the azure-ai-generative package has been replaced by the new azure-ai-evaluation package, which supports Spanish, Italian, French, Japanese, Portuguese, Simplified Chinese, and German.

(See Azure/azure-sdk-for-python#34099 (comment).)

Expected/desired behavior

Does the current code support the new azure-ai-evaluation multi-language QA generator/simulator, so that only the documentation needs updating, or is a rewrite of the code required to use the new azure-ai-evaluation package?

Mention any other details that might be useful

Thank you for this great repo! :)



@pamelafox
Contributor

Hi @EMjetrot, you're using this repo alongside azure-search-openai-demo, correct?

If so, please see the branch I'm currently working on:
https://github.com/Azure-Samples/azure-search-openai-demo/compare/main...pamelafox:azure-search-openai-demo:evals?expand=1

That uses the Simulator approach for generating QA data in the form expected by this repo.
I am going to continue working on that branch to show the full evaluation workflow.

I think I will move ground-truth generation out of this repo, as I've found it's very specific to how each RAG application has been set up; this repo will continue to contain the CLI tools and custom evaluation metrics.

@pamelafox pamelafox self-assigned this Dec 12, 2024
@EMjetrot
Author

Hi @pamelafox - Yes, I'm using this repo alongside azure-search-openai-demo, and I'm very happy to hear that you are already working on evaluation in a branch on that repo. I'll close the issue here and keep an eye on that branch instead.

Thank you for all the great work! :)
