
[ICLR 2025] ELICIT: LLM Augmentation Via External In-context Capability


LINs-lab/ELICIT


If our project helps you, please give us a star ⭐ and cite our paper!

Overview

We propose ELICIT, which improves language model performance by:

  1. Building a Capability Library: A collection of task-specific capabilities from in-domain datasets.
  2. Dynamic Capability Elicitation: Using a trained retriever to dynamically select relevant capabilities for an arbitrary query (a toy sketch of the pipeline follows this list).
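
The sketch below illustrates the two-stage idea with toy data only; the actual library format, retriever, and function names in this repository differ (see the scripts in the following sections).

    import numpy as np

    # Toy "Capability Library": one capability vector per in-domain task.
    # In ELICIT these would come from 16-shot in-context examples;
    # here they are random placeholders.
    library = {
        "arc_challenge": np.random.randn(4096),
        "boolq": np.random.randn(4096),
    }

    def embed(query):
        # Stand-in for the trained retriever's query encoder.
        rng = np.random.default_rng(abs(hash(query)) % (2 ** 32))
        return rng.standard_normal(4096)

    def elicit(query, top_k=1):
        # Dynamic Capability Elicitation: score each stored capability
        # against the query and keep the most relevant ones.
        q = embed(query)
        cosine = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        scores = {task: cosine(q, vec) for task, vec in library.items()}
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

    print(elicit("Which planet has the most moons?"))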

Experiment Code

You can find our results here.

Building the Capability Library

The Capability Library is constructed using validation sets from in-domain tasks with 16-shot examples. Follow these steps:

  1. Prepare datasets (a batching sketch follows these steps):

    python process_data.py --task <task_name>

    Example:

    python process_data.py --task arc_challenge
  2. Collect libraries for different models:

    ./scripts/collect_tv.sh
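
To prepare several tasks at once, a small wrapper like the one below can invoke process_data.py repeatedly. The task names here are only examples; substitute the in-domain tasks supported by the script.

    import subprocess

    # Example task names only; replace with the tasks you actually use.
    tasks = ["arc_challenge", "boolq", "hellaswag"]
    for task in tasks:
        subprocess.run(["python", "process_data.py", "--task", task], check=True)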

Dynamic Capability Elicitation

Train the Retriever

We provide a balanced dataset of 10,000 samples to train the retriever:

python train_retriever.py --output_model prompt_classifier_model.pth
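
For intuition, the toy example below trains a small binary prompt classifier on dummy balanced data and saves it to a .pth file. The real features, architecture, and training loop live in train_retriever.py and may differ.

    import torch
    import torch.nn as nn

    # Dummy balanced data: feature vectors with 0/1 relevance labels.
    x = torch.randn(10_000, 768)
    y = torch.randint(0, 2, (10_000,)).float()

    # Tiny stand-in classifier (not the repository's actual model).
    model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for epoch in range(3):
        optimizer.zero_grad()
        loss = loss_fn(model(x).squeeze(-1), y)
        loss.backward()
        optimizer.step()

    torch.save(model.state_dict(), "prompt_classifier_model.pth")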

Evaluate ELICIT

Once the retriever is trained, you can evaluate ELICIT using the collected library:

./scripts/eval_elicit.sh

To analyze results:

  1. Update the evaluation directory in analysis_results.py (illustrated in the sketch after these steps).
  2. Run the script:
    python analysis_results.py
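
The snippet below shows what pointing the analysis at your evaluation output could look like; the actual variable name and result format expected by analysis_results.py may differ.

    import json
    from pathlib import Path

    # Hypothetical evaluation directory; replace with the directory
    # written by scripts/eval_elicit.sh.
    EVAL_DIR = Path("results/elicit_eval")

    for result_file in sorted(EVAL_DIR.glob("*.json")):
        with result_file.open() as f:
            print(result_file.name, json.load(f))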

Citation

If you find this project helpful, please consider citing our work:

@article{wang2024elicit,
  title={ELICIT: LLM Augmentation via External In-Context Capability},
  author={Wang, Futing and Yan, Jianhao and Zhang, Yue and Lin, Tao},
  journal={arXiv preprint arXiv:2410.09343},
  year={2024}
}
