QB Reader AI #10

Closed
geoffrey-wu opened this issue Jun 29, 2022 · 5 comments
Labels
enhancement New feature or request project Long-term project to work on wontfix This will not be worked on

Comments

@geoffrey-wu
Member

Create an AI opponent that the player can play against and that would buzz in against the player

Potential resources to check out:

Or, the AI could just buzz within a certain word range (the range would change based on difficulty)

@geoffrey-wu geoffrey-wu added the enhancement New feature or request label Jun 29, 2022
@geoffrey-wu geoffrey-wu added the wontfix This will not be worked on label Jul 13, 2022
@geoffrey-wu geoffrey-wu added the project Long-term project to work on label Feb 6, 2023
@derikk

derikk commented Dec 11, 2023

I've tested LLMs for this purpose with good results; here is Zephyr 7B (running locally) buzzing on a Science Bowl question. You could feed the question to a model a few words at a time and see where it buzzes in. Each buzz attempt would cost only a hundredth of a cent on gpt-3.5-turbo.
[Screenshot: Zephyr 7B buzzing in on a Science Bowl question]
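
For the curious, a minimal sketch of that feed-a-few-words-at-a-time loop, assuming the OpenAI Python client and gpt-3.5-turbo; the prompt, step size, and the PASS convention are illustrative assumptions rather than a tested setup:

```python
# Rough sketch of incremental buzzing, assuming the OpenAI Python client
# (pip install openai). Model, prompt, and step size are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are playing quizbowl. Given a partial tossup, reply with your best "
    "guess at the answer if you are confident, or exactly PASS if not."
)

def find_buzz_point(question: str, step: int = 5) -> tuple[int, str] | None:
    """Reveal the question `step` words at a time and return the first
    (word_index, guess) where the model commits to an answer."""
    words = question.split()
    for end in range(step, len(words) + 1, step):
        partial = " ".join(words[:end])
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": partial},
            ],
            max_tokens=20,
            temperature=0,
        )
        guess = response.choices[0].message.content.strip()
        if guess != "PASS":
            return end, guess  # the model "buzzed" after `end` words
    return None  # never buzzed
```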

@geoffrey-wu
Member Author

Some other options could be: buzzing at the average buzzpoint of a tossup across all recorded stats, or buzzing based on stats uploaded from a mirror

@alopezlago
Collaborator

If you use an LLM, you'll want to dumb it down or handicap it, since it'll be much better than most players and won't be enjoyable to play against. There are some techniques you can use to help (for example, give the player a 10-15 word handicap).

The cheapest option is likely a probabilistic model that randomly buzzes in and has a certain chance to convert. This also lets you do things like make the simulated player stronger in certain categories, etc.
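
A minimal sketch of such a probabilistic model; the skill table, distribution, and numbers are made-up placeholders, not tuned values:

```python
# Illustrative probabilistic opponent with per-category skill levels.
import random

# Hypothetical skill table: (mean buzz point as a fraction of the question,
# probability of converting once buzzed). Values are placeholders.
SKILLS = {
    "Literature": (0.55, 0.80),
    "Science": (0.70, 0.60),
    "History": (0.60, 0.70),
}

def simulate_buzz(category: str, question_length: int) -> tuple[int, bool] | None:
    """Return (buzz word index, converted?) or None if the bot never buzzes."""
    mean_frac, p_convert = SKILLS.get(category, (0.75, 0.5))
    # Sample the buzz point around the category mean; clamp to the question.
    frac = random.gauss(mean_frac, 0.1)
    if frac > 1.0:
        return None  # the clue never clicked; the bot sits this one out
    buzz_word = max(1, int(frac * question_length))
    converted = random.random() < p_convert
    return buzz_word, converted
```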

@geoffrey-wu geoffrey-wu pinned this issue Dec 31, 2023
@SamarthP5

From a fellow quizbowler: another option could be expanding the LLM's context window and training it on contextual clues. This could be branched off into a learning experience, i.e., giving the context of these clues upon a successful answer, so the human player can learn them. As difficulty increases, the context gets more obscure. You could also combine this with the probabilistic model proposed above. One helpful paper that I found a while ago, about MemGPT, does exactly this: https://arxiv.org/abs/2310.08560.

@Captain-Quack
Contributor

It could also adjust its abilities based on the performance of the player.
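
For instance, a rough sketch of that kind of adjustment on top of the probabilistic model above; the step size and bounds are arbitrary assumptions:

```python
# Illustrative adaptive handicap: the bot buzzes later when the player is
# losing buzzer races and earlier when the player keeps winning them.
class AdaptiveBot:
    def __init__(self, mean_frac: float = 0.6):
        self.mean_frac = mean_frac  # fraction of the question read before buzzing

    def record_result(self, player_won_buzz: bool) -> None:
        # Nudge difficulty after each tossup.
        step = 0.02
        if player_won_buzz:
            self.mean_frac = max(0.3, self.mean_frac - step)   # buzz earlier
        else:
            self.mean_frac = min(0.95, self.mean_frac + step)  # buzz later
```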

@JeliHacker JeliHacker mentioned this issue Feb 20, 2024
@geoffrey-wu geoffrey-wu unpinned this issue Nov 1, 2024