
Make LLM mention extraction more robust #5230

Closed
reckart opened this issue Jan 15, 2025 · 0 comments
reckart (Member) commented Jan 15, 2025

Is your feature request related to a problem? Please describe.
Mention extraction is currently a bit flaky.

Describe the solution you'd like
Support more response formats and/or provide better guidance on how the response should be structured.
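Robustness here usually means tolerating several response shapes instead of requiring one strict format. Below is a minimal, hypothetical sketch (not INCEpTION's actual implementation; the function name and accepted shapes are assumptions) of a parser that accepts a JSON object with a `mentions` key, a bare JSON array, or plain newline-separated text, optionally wrapped in a Markdown code fence:

```python
import json


def parse_mentions(response: str) -> list[str]:
    """Extract mention strings from an LLM response, tolerating
    several common response formats (hypothetical helper)."""
    text = response.strip()
    # Strip a Markdown code fence if the model wrapped its output in one.
    if text.startswith("```"):
        lines = text.splitlines()
        text = "\n".join(lines[1:-1]) if len(lines) > 2 else ""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        # Fall back to plain text: one mention per non-empty line,
        # tolerating bullet markers.
        return [line.strip("-* \t") for line in text.splitlines() if line.strip()]
    if isinstance(data, dict):
        data = data.get("mentions", [])
    if isinstance(data, list):
        # Accept both bare strings and {"mention": ...} objects.
        return [m if isinstance(m, str) else str(m.get("mention", "")) for m in data]
    return []
```

The same idea works in reverse, too: the prompt can show the model the one preferred shape while the parser quietly accepts the others.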

@reckart reckart added this to the 36.0 milestone Jan 15, 2025
@reckart reckart self-assigned this Jan 15, 2025
@reckart reckart added this to Kanban Jan 15, 2025
@github-project-automation github-project-automation bot moved this to 🔖 To do in Kanban Jan 15, 2025
@reckart reckart changed the title Make mention extraction more robust Make LLM mention extraction more robust Jan 15, 2025
reckart added a commit that referenced this issue Jan 15, 2025
- Add support for another response format
reckart added a commit that referenced this issue Jan 15, 2025: #5230 - Make LLM mention extraction more robust (…-mention-extraction-more-robust)
reckart added a commit that referenced this issue Feb 2, 2025
- Use structured output from the LLM when available
- Use chat-based APIs when talking to LLM
- Simplify the LLM presets and the interactive LLM sidebar
- Remove option to configure the response format for the LLM (text vs. JSON) and instead let the extraction mode determine that
- Introduce paragraph-level prompt context
- Normalize newlines in prompt context
- URL for ChatGPT recommenders should no longer include the "v1" - we add that internally
- Switch default ChatGPT model to o1-mini
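The move to structured output over a chat-based API can be sketched as a request payload. This is a hypothetical illustration following the common OpenAI-style chat-completions shape; the function name, prompt wording, and default model are assumptions, not the recommender's exact configuration:

```python
def build_chat_request(document_text: str, model: str = "o1-mini") -> dict:
    """Build a chat-style request asking the LLM for mentions as JSON
    (hypothetical payload; field names follow the widely used
    OpenAI-style chat-completions shape)."""
    return {
        "model": model,
        # Chat-based API: a list of role-tagged messages instead of a bare prompt.
        "messages": [
            {
                "role": "system",
                "content": "Extract entity mentions. Respond with a JSON object "
                           'of the form {"mentions": ["..."]}.',
            },
            {"role": "user", "content": document_text},
        ],
        # Ask for structured (JSON) output when the backend supports it.
        "response_format": {"type": "json_object"},
    }
```

Letting the extraction mode determine the response format, as the commit describes, would mean only modes that parse structured mentions attach `response_format`, while free-text modes omit it.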
reckart added a commit that referenced this issue Feb 2, 2025: #5230 - Make LLM mention extraction more robust (…-mention-extraction-more-robust)
@reckart reckart closed this as completed Feb 2, 2025
@github-project-automation github-project-automation bot moved this from 🔖 To do to 🍹 Done in Kanban Feb 2, 2025