Abstract
We describe the second edition of the LongEval CLEF 2024 shared task. This lab evaluates the temporal persistence of Information Retrieval (IR) systems and text classifiers. Task 1 requires IR systems to run on corpora acquired at several timestamps and evaluates the drop in system quality (NDCG) across these timestamps. Task 2 tackles binary sentiment classification at different points in time and evaluates the performance drop over different temporal gaps. Overall, 37 teams registered for Task 1 and 25 for Task 2. Ultimately, 14 and 4 teams participated in Task 1 and Task 2, respectively.
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 2267-2289 |
| Number of pages | 23 |
| Journal | CEUR Workshop Proceedings |
| Volume | 3740 |
| State | Published - 2024 |
| Event | 25th Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2024), Grenoble, France, 9 Sep 2024 - 12 Sep 2024 |
Keywords
- Evaluation
- Information Retrieval
- Temporal Generalisability
- Temporal Persistence
- Text Classification