|
# HELMET: How to Evaluate Long-context Language Models Effectively and Thoroughly |
|
|
|
[[Paper](https://arxiv.org/abs/2410.02694)][[Code](https://github.com/princeton-nlp/HELMET)] |
|
|
|
HELMET is a comprehensive benchmark for long-context language models covering seven diverse categories of tasks.

The datasets are application-centric and are designed to evaluate models across different input lengths and levels of complexity.
|
Please check out the paper for more details, and the code repository for instructions on processing the data and running the evaluations.
|