- Extract title, abstract, and citations
- Parse the introduction with GROBID (see the sketch below)
- Identify citation contexts
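This extraction stage leans on GROBID's REST service for PDF parsing. Below is a minimal sketch of that step, assuming a local GROBID server on its default port (8070); the TEI parsing shown is illustrative, not the paper's exact implementation.

```python
# Minimal sketch of Stage 1 extraction against a local GROBID server
# (default port 8070). Illustrative only, not the authors' code.
import requests
import xml.etree.ElementTree as ET

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

def extract_header(pdf_path: str) -> dict:
    """Send a PDF to GROBID and pull the title/abstract out of the TEI XML."""
    with open(pdf_path, "rb") as f:
        resp = requests.post(
            "http://localhost:8070/api/processHeaderDocument",
            files={"input": f},
            timeout=60,
        )
    resp.raise_for_status()
    root = ET.fromstring(resp.text)
    title = root.findtext(".//tei:titleStmt/tei:title", namespaces=TEI_NS)
    abstract = " ".join(
        p.text or "" for p in root.findall(".//tei:abstract//tei:p", TEI_NS)
    )
    return {"title": title, "abstract": abstract.strip()}
```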
Novelty assessment is a central yet understudied aspect of peer review, particularly in high-volume fields like NLP where reviewer capacity is increasingly strained. We present a structured approach for automated novelty evaluation that models expert reviewer behavior through three stages: content extraction from submissions, retrieval and synthesis of related work, and structured comparison for evidence-based assessment. Our method is informed by a large-scale analysis of human-written novelty reviews and captures key patterns such as independent claim verification and contextual reasoning. Evaluated on 182 ICLR 2025 submissions with human-annotated reviewer novelty assessments, the approach achieves 86.5% alignment with human reasoning and 75.3% agreement on novelty conclusions, substantially outperforming existing LLM-based baselines. The method produces detailed, literature-aware analyses and improves consistency over ad hoc reviewer judgments. These results highlight the potential of structured LLM-assisted approaches to support more rigorous and transparent peer review without displacing human expertise.
Side-by-side comparison of novelty assessments on actual ICLR 2025 submissions
Key Insight: Our system consistently aligns better with human expert assessments, correctly identifying incremental contributions and citing specific prior work, while baselines often overstate novelty or miss critical context.
We built a three-stage pipeline that mimics expert reviewer behavior (a minimal sketch of the orchestration follows the list):
- Stage 1: Content extraction from the submission (title, abstract, introduction, citation contexts)
- Stage 2: Retrieval and synthesis of related work
- Stage 3: Structured comparison for evidence-based novelty assessment
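One way the three stages could be wired together, as a minimal sketch; the dataclass and function names (`extract_content`, `retrieve_related_work`, `assess_novelty`) are hypothetical stand-ins for the components described above, not the authors' released code.

```python
# Hypothetical orchestration of the three-stage pipeline described above.
from dataclasses import dataclass

@dataclass
class Submission:
    title: str
    abstract: str
    citation_contexts: list[str]

def extract_content(pdf_path: str) -> Submission:
    # Stage 1: GROBID-based extraction (see the earlier sketch).
    raise NotImplementedError

def retrieve_related_work(sub: Submission) -> list[str]:
    # Stage 2: query a scholarly search index with the extracted
    # title/abstract and synthesize the retrieved abstracts.
    raise NotImplementedError

def assess_novelty(sub: Submission, related: list[str]) -> str:
    # Stage 3: prompt an LLM to compare the submission's claims
    # against retrieved prior work and produce an evidence-based verdict.
    raise NotImplementedError

def run_pipeline(pdf_path: str) -> str:
    sub = extract_content(pdf_path)
    related = retrieve_related_work(sub)
    return assess_novelty(sub, related)
```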
Evaluated on 182 ICLR 2025 submissions:
- 86.5% alignment with human reasoning
- 75.3% agreement on novelty conclusions
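For concreteness, a toy sketch of how an agreement rate like the ones above can be computed, assuming one verdict label per submission; the labels below are invented for illustration, not the released annotations.

```python
# Agreement rate: fraction of submissions where the system's verdict
# matches the human reviewer's. Toy labels for illustration only.
def agreement_rate(system: list[str], human: list[str]) -> float:
    assert len(system) == len(human)
    matches = sum(s == h for s, h in zip(system, human))
    return matches / len(human)

system = ["novel", "incremental", "incremental", "novel"]
human  = ["novel", "incremental", "novel", "novel"]
print(f"{agreement_rate(system, human):.1%}")  # -> 75.0%
```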
Analyzed 182 human reviews to understand how experts reason about novelty:
- which patterns recur in expert critiques (e.g., independent claim verification)
- how reviewers draw on contextual knowledge of related work
Our pipeline incorporates these patterns (a sketch of the comparison step follows):
- independent verification of the submission's claims against retrieved prior work
- contextual, literature-aware reasoning in the final assessment
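A hedged sketch of what the claim-verification step could look like as a structured LLM comparison; the prompt wording and the `call_llm` helper are assumptions for illustration, not the paper's actual prompts or API.

```python
# Hypothetical structured-comparison step: each extracted claim is
# checked against retrieved related work via an LLM prompt.
COMPARISON_PROMPT = """\
Claim from the submission:
{claim}

Abstracts of retrieved prior work:
{related}

Does prior work already cover this claim? Answer "covered",
"partially covered", or "not covered", then cite the specific
prior work that supports your answer.
"""

def verify_claim(claim: str, related: list[str], call_llm) -> str:
    # `call_llm` is any callable wrapping a chat-completion client.
    prompt = COMPARISON_PROMPT.format(
        claim=claim, related="\n\n".join(related)
    )
    return call_llm(prompt)
```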
Success factors:
- detailed, literature-aware analyses that cite specific prior work
- improved consistency over ad hoc reviewer judgments
Join us in building more rigorous, evidence-based scholarly critique
@misc{afzal2025notnovelenoughenriching,
  title={Beyond "Not Novel Enough": Enriching Scholarly Critique with LLM-Assisted Feedback},
  author={Osama Mohammed Afzal and Preslav Nakov and Tom Hope and Iryna Gurevych},
  year={2025},
  eprint={2508.10795},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2508.10795},
}