The project aims to enhance the capabilities of Large Language Models by enabling them to select and compose the right set of tools in response to a given query or task. The challenge was presented as the DevRev problem statement at Inter IIT Tech Meet 12.0.
Given a language model L with access to a set of tools T and a user query Q, the task is to answer Q using the existing tools: output the subset of tools needed to answer the query, the arguments each tool should be called with, and how the tool calls should be composed to produce the answer.
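As a rough illustration (the tool names and argument schema below are assumptions, not the actual DevRev tool set), a query and the corresponding tool composition might look like this:

```python
# Hypothetical illustration of the task; tool names and schema are assumed.
query = "Summarise the high-priority tickets assigned to me."

# Expected output: an ordered composition of tool calls, where later calls
# may consume the results of earlier ones.
tool_composition = [
    {"tool": "who_am_i", "arguments": {}},
    {"tool": "search_tickets",
     "arguments": {"assignee": "<output of call 0>", "priority": "high"}},
    {"tool": "summarize_objects",
     "arguments": {"objects": "<output of call 1>"}},
]
```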
The Report in the repository contains the details of the approaches used for the project. The repository contents are as follows:
- Code for the two-step prompting technique (sketched below)
- Final code for generating the dataset used for fine-tuning
- Training dataset
- Testing dataset
- Code for the RAG model (sketched below)
- General code for fine-tuning models
- Code specific to GPT-3.5 fine-tuning (sketched below)
- Code for the evaluation metric (sketched below)
- Final pipeline combining all of the above
- Code for inference on the test data and real-world usage
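The two-step prompting approach could look roughly like the following minimal sketch, where the first call selects the relevant tools and the second fills in arguments and composition order. The prompt wording, helper functions, and model name are assumptions rather than the repository's exact implementation:

```python
# Minimal sketch of a two-step prompting flow (prompts, model name and helper
# structure are assumptions, not the repository's exact implementation).
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def two_step(query: str, tool_descriptions: str) -> str:
    # Step 1: narrow the full tool list down to the relevant subset.
    selected = ask(
        f"Tools:\n{tool_descriptions}\n\nQuery: {query}\n"
        "List only the names of the tools needed to answer the query."
    )
    # Step 2: generate arguments and the order of composition for that subset.
    return ask(
        f"Selected tools:\n{selected}\n\nQuery: {query}\n"
        "Return a JSON list of tool calls with their arguments, in execution order."
    )
```

Splitting the task this way keeps the second prompt focused on a small tool subset instead of the full catalogue, which tends to reduce spurious tool calls.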
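For the RAG model, one plausible reading is that tool descriptions are embedded and the top-k tools most similar to the query are retrieved before prompting. The embedding model and value of k below are assumptions:

```python
# Sketch of retrieval-augmented tool selection: embed the tool descriptions,
# then retrieve the k tools closest to the query before prompting the LLM.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def retrieve_tools(query: str, tool_docs: dict[str, str], k: int = 5) -> list[str]:
    names = list(tool_docs)
    doc_emb = encoder.encode([tool_docs[n] for n in names], normalize_embeddings=True)
    q_emb = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_emb @ q_emb                 # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]
    return [names[i] for i in top]
```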
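Fine-tuning GPT-3.5 typically uses the OpenAI chat-format JSONL dataset together with the fine-tuning job API; the example below is a sketch with assumed file names and prompts, not the repository's exact script:

```python
# Sketch of preparing one training example and launching a fine-tuning job
# (file names, prompts and the example content are assumptions).
import json
from openai import OpenAI

client = OpenAI()

example = {
    "messages": [
        {"role": "system", "content": "Given the tool list, answer with the tool calls needed."},
        {"role": "user", "content": "Summarise my open tickets."},
        {"role": "assistant",
         "content": '[{"tool": "search_tickets", "arguments": {"priority": "high"}}]'},
    ]
}
with open("train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")

training_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-3.5-turbo")
```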
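The exact metric is described in the Report; as a generic stand-in, an F1 score between the predicted and ground-truth tool names could be computed as follows (an assumed example, not necessarily the metric implemented here):

```python
# Assumed example metric: F1 over predicted vs. ground-truth tool names.
def tool_f1(predicted: list[str], ground_truth: list[str]) -> float:
    pred, gold = set(predicted), set(ground_truth)
    if not pred or not gold:
        return float(pred == gold)          # 1.0 only if both are empty
    precision = len(pred & gold) / len(pred)
    recall = len(pred & gold) / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```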