Add a new task HaeRae-Bench (EleutherAI#1445)
* haerae_reimplementation

* edited README and added few_shot settings

* edited readme

* newlines at end of each file

* Modifying the README file

* applied pre-commit
h-albert-lee authored and wx-zhang committed Mar 13, 2024
1 parent 5e5b910 commit bd6fb40
Showing 7 changed files with 81 additions and 0 deletions.
49 changes: 49 additions & 0 deletions lm_eval/tasks/haerae/README.md
@@ -0,0 +1,49 @@
# HAE-RAE BENCH

### Paper

Title: `HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models`

Abstract: `Large Language Models (LLMs) trained on massive corpora demonstrate impressive capabilities in a wide range of tasks. While there are ongoing efforts to adapt these models to languages beyond English, the attention given to their evaluation methodologies remains limited. Current multilingual benchmarks often rely on back translations or re-implementations of English tests, limiting their capacity to capture unique cultural and linguistic nuances. To bridge this gap for the Korean language, we introduce HAE-RAE Bench, a dataset curated to challenge models lacking Korean cultural and contextual depth. The dataset encompasses six downstream tasks across four domains: vocabulary, history, general knowledge, and reading comprehension. Contrary to traditional evaluation suites focused on token or sequence classification and specific mathematical or logical reasoning, HAE-RAE Bench emphasizes a model's aptitude for recalling Korean-specific knowledge and cultural contexts. Comparative analysis with prior Korean benchmarks indicates that the HAE-RAE Bench presents a greater challenge to non-native models, disrupting the transfer of abilities and knowledge learned from English.`

Homepage: https://huggingface.co/datasets/HAERAE-HUB/HAE_RAE_BENCH
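
The dataset can also be inspected directly from the Hugging Face Hub. A minimal sketch; the configuration and split names come from the YAML configs in this commit, and the `query`/`answer` field names are taken from the prompt templates below:

```python
from datasets import load_dataset

# Load one HAE-RAE Bench subset; the config names mirror the per-task YAML files
# in this commit (general_knowledge, history, loan_words, rare_words,
# standard_nomenclature), and only a "test" split is used.
ds = load_dataset("HAERAE-HUB/HAE_RAE_BENCH", "general_knowledge", split="test")
print(ds[0]["query"])   # question text rendered by doc_to_text
print(ds[0]["answer"])  # gold choice label, e.g. "(A)"
```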

### Citation

@misc{son2023haerae,
    title={HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models},
    author={Guijin Son and Hanwool Lee and Suwan Kim and Huiseo Kim and Jaecheol Lee and Je Won Yeom and Jihyu Jung and Jung Woo Kim and Songseong Kim},
    year={2023},
    eprint={2309.02706},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

### Groups and Tasks

#### Groups

* `haerae`: Five of the six tasks from the HAE-RAE Bench paper. 'Reading Comprehension' is excluded from this implementation due to copyright issues and will be included in a future haerae update. For the remaining tasks, some of the data may be replaced or expanded with the release of HAE-RAE Bench v1.1; please keep this in mind when using the benchmark.

#### Tasks

The following tasks evaluate individual subjects in the HAE-RAE dataset (a usage sketch follows the list):

- `haerae_standard_nomenclature`
- `haerae_loan_word`
- `haerae_rare_word`
- `haerae_general_knowledge`
- `haerae_history`
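
As a rough usage sketch (not part of this PR), the `haerae` group or any single task above can be run through the harness's Python API; the model below is only a placeholder:

```python
import lm_eval

# Evaluate the whole `haerae` group with a Hugging Face model; swap in any
# individual task name (e.g. "haerae_history") or a different pretrained model.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=gpt2",  # placeholder model
    tasks=["haerae"],
    num_fewshot=5,
)
print(results["results"])
```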

### Checklist

For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?


If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
17 changes: 17 additions & 0 deletions lm_eval/tasks/haerae/_default_haerae_yaml
@@ -0,0 +1,17 @@
group: haerae
dataset_path: HAERAE-HUB/HAE_RAE_BENCH
test_split: test
fewshot_split: test
output_type: multiple_choice
doc_to_text: "{{query}}"
doc_to_choice: ["(A)", "(B)", "(C)", "(D)", "(E)"]
doc_to_target: "{{answer}}"
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
  - metric: acc_norm
    aggregation: mean
    higher_is_better: true
metadata:
  version: 1.0
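
For illustration, here is roughly how these Jinja templates map onto a document; the example row below is hypothetical, not taken from the dataset:

```python
from jinja2 import Template

# Hypothetical HAE-RAE row: `query` holds the question plus the lettered options
# ("다음 중 옳은 것은?" = "Which of the following is correct?"), and `answer` holds
# the gold letter, matching doc_to_text / doc_to_target above.
doc = {
    "query": "다음 중 옳은 것은?\n(A) ...\n(B) ...\n(C) ...\n(D) ...\n(E) ...",
    "answer": "(C)",
}

prompt = Template("{{query}}").render(**doc)    # doc_to_text
target = Template("{{answer}}").render(**doc)   # doc_to_target
choices = ["(A)", "(B)", "(C)", "(D)", "(E)"]   # doc_to_choice
# With output_type: multiple_choice, the harness scores the log-likelihood of each
# choice string after the prompt and marks the example correct if the gold choice wins.
assert target in choices
```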
3 changes: 3 additions & 0 deletions lm_eval/tasks/haerae/haerae_gk.yaml
@@ -0,0 +1,3 @@
"dataset_name": "general_knowledge"
"include": "_default_haerae_yaml"
"task": "haerae_general_knowledge"
3 changes: 3 additions & 0 deletions lm_eval/tasks/haerae/haerae_hi.yaml
@@ -0,0 +1,3 @@
"dataset_name": "history"
"include": "_default_haerae_yaml"
"task": "haerae_history"
3 changes: 3 additions & 0 deletions lm_eval/tasks/haerae/haerae_lw.yaml
@@ -0,0 +1,3 @@
"dataset_name": "loan_words"
"include": "_default_haerae_yaml"
"task": "haerae_loan_word"
3 changes: 3 additions & 0 deletions lm_eval/tasks/haerae/haerae_rw.yaml
@@ -0,0 +1,3 @@
"dataset_name": "rare_words"
"include": "_default_haerae_yaml"
"task": "haerae_rare_word"
3 changes: 3 additions & 0 deletions lm_eval/tasks/haerae/haerae_sn.yaml
@@ -0,0 +1,3 @@
"dataset_name": "standard_nomenclature"
"include": "_default_haerae_yaml"
"task": "haerae_standard_nomenclature"
