Overview
Join us at the "From Rules to Language Models: Comparative Performance Evaluation" workshop, held alongside RANLP 2025 in Bulgaria. This event explores the evolving landscape of Natural Language Processing (NLP), comparing rule-based, knowledge-based, and modern deep learning approaches, including large language models (LLMs).
Highlights include:
- Insights from invited speaker Tharindu Ranasinghe on ML for NLP and social good
- Comparative analyses of methodologies across tasks like text classification and anaphora resolution
- Discussions on emerging trends, such as neurosymbolic AI and retrieval-augmented generation
Workshop Description & Topics
Workshop Description
Deep Learning (DL) and Large Language Models (LLMs) have significantly advanced many Natural Language Processing (NLP) tasks (Zhao et al., 2025). However, these models still have limitations, particularly in complex linguistic scenarios such as multiword expressions or long-context reasoning (Miletić & Walde, 2024; Cheng et al., 2024). Moreover, critical concerns remain about scalability, interpretability, and robustness to adversarial inputs (Barone et al., 2017; Anjum & Krestel, 2024).
This workshop responds to renewed interest in revisiting rule-based and knowledge-based methods, which often offer high precision, better explainability, and domain adaptability: features essential in applications such as grammar checking, legal document analysis, and medical NLP. Despite the dominance of end-to-end neural models, recent comparative evaluations show that symbolic approaches still perform competitively, or even better, on specific tasks, especially when training data is scarce or interpretability is critical (Mitkov et al., 2024; Vastl et al., 2024).
In addition to exploring these traditional methodologies, the workshop will examine emerging trends that offer alternatives or complements to LLMs. These include retrieval-augmented generation (RAG) techniques (Su et al., 2025; Wu et al., 2025), neurosymbolic AI that integrates neural and symbolic reasoning (Sheth et al., 2023; Bhuyan et al., 2024), and few-shot or zero-shot learning approaches (Zeng & Xiao, 2024). General-purpose foundation models (Church & Alonso, 2024) and knowledge-graph–powered systems (Chen et al., 2025; Vidal et al., 2025) also represent promising directions.
This workshop aims to encourage a dialogue between advocates of symbolic and statistical models and to critically evaluate how both approaches can complement each other in building robust, efficient, and explainable NLP systems.
Topics
This workshop invites research papers comparing rule-based and knowledge-based methods with modern neural approaches across various NLP applications, highlighting emerging trends and challenges in contemporary NLP. We expect to cover the following topics:
- The role of rule-based and knowledge-based approaches in modern NLP. Despite the dominance of ML and DL, hybrid models, rule-based approaches, and knowledge-based approaches remain relevant in areas requiring high precision and explainability. Traditional techniques are still valuable for specific domains, especially when data is scarce, interpretability is required, or robustness to adversarial inputs is essential.
- Comparative analysis of rule-based, machine learning, deep learning, and large language model approaches for different NLP tasks. Evaluating different methodologies across various NLP tasks is crucial for understanding their strengths, limitations, and best applications. We also seek to explore rule-based techniques for modern NLP applications, since rules and heuristics remain crucial for preprocessing, domain-specific NLP, and hybrid systems.
- Emerging trends in NLP research beyond deep learning and large language models. General-purpose models, neurosymbolic AI, retrieval-augmented generation (RAG), and few-shot learning are some of the methods advancing NLP.
- Limitations and performance bottlenecks in the scalability and accuracy of deep learning models. Large-scale models require vast training data and extensive compute power, and they struggle with long-context reasoning. Overfitting during fine-tuning and lack of interpretability remain key challenges.
Submissions
We invite submissions that compare rule-based, knowledge-based, and modern neural approaches to NLP tasks. Research exploring hybrid models or evaluating LLMs against traditional methods is particularly welcome.
Submission Guidelines
Submissions must follow the RANLP 2025 submission guidelines, using ACL-style templates (LaTeX or MS Word).
- Long papers: Up to 8 pages (excluding references)
- Short papers: Up to 4 pages (excluding references)
- Publication: Accepted papers will be included in the workshop proceedings, which will be part of the ACL Anthology.
Important Dates
- Workshop paper submission deadline: 🆕 NEW DATE: 15 July 2025 (extended from 6 July 2025)
- Workshop paper acceptance notification: 31 July 2025
- Camera-ready papers due: 20 August 2025
- Workshop camera-ready proceedings ready: 8 September 2025
- Workshop dates: 11, 12, or 13 September 2025
Schedule
The detailed schedule will be announced closer to the workshop date.
Organizing Committee

Alicia Picazo-Izquierdo
University of Alicante, Spain

Ernesto Luis Estevanell-Valladares
University of Alicante, Spain

Ruslan Mitkov
Lancaster University, UK

Rafael Muñoz Guillena
University of Alicante, Spain

Raúl García Cerdá
University of Alicante, Spain
Programme Committee
Paul Greaney
Department of Computing, Atlantic Technological University, Letterkenny, Co. Donegal, Ireland

Constantin Orăsan
Centre for Translation Studies, University of Surrey, UK

Robiert Sepulveda-Torres
Department of Languages & Information Systems, University of Alicante, Spain

Sandra Kuebler
Department of Linguistics, Indiana University, USA

Preslav Nakov
Department of Natural Language Processing, Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE

Pablo Gervás
Department of Software Engineering and Artificial Intelligence, Complutense University of Madrid, Spain

Antonio Toral
Department of Languages & Information Systems, University of Alicante, Spain

Aleksei Dorkin
Institute of Computer Science, University of Tartu, Estonia

Sina Ahmadi
Department of Computational Linguistics, University of Zurich, Switzerland

Heili Orav
Lecturer in Natural Language Processing, University of Tartu, Estonia
Additional Programme Committee members will be announced soon. Stay tuned for updates!