With the growing complexity of modern software systems, software engineers need to cope with information overload along the whole development lifecycle, spanning from requirement elicitation to the development of the actual system. In addition, fast-evolving technologies and frameworks emerge daily. Therefore, non-expert users may struggle to express requirements properly or to select the third-party software libraries needed to implement a specific functionality. This burden mainly affects the software design and construction phases, which account for more than 50% of the effort spent by software engineers over the life of a project. Current software projects also need to be easily scalable in order to reduce such maintenance costs.

To address these challenges, intelligent software assistants have been proposed to ease the burden of choice by providing a set of automated capabilities that help developers in several tasks, e.g., debugging, testing, navigating Q&A forums, and extracting information from open-source repositories. After an inference phase, the system provides a set of valuable items, namely recommendations, tailored to the current task. While traditional systems rely on curated knowledge bases as the primary foundation for their recommendation processes, the advent of cutting-edge AI models, i.e., Large Language Models (LLMs) such as those of the GPT family, is dramatically changing how these systems are designed, developed, and evaluated. IDEs like Visual Studio and Eclipse are already being extended with LLM-based assistants, e.g., Copilot or Caret.

In this respect, a key point is to ensure a set of qualitative aspects that go beyond the accuracy of those assistants. Concretely, the provided items must be free from any kind of bias, preserve the user's privacy, adhere to software licenses, and, overall, contribute to building reliable and trustworthy software projects. This objective has recently been recognized by the European Commission, which proposed the AI Act, a dedicated regulation for AI-intensive systems that provides a wide range of requirements, methodologies, and metrics focused on ensuring the mentioned qualitative aspects. Thus, there is a need to assess the outcomes of intelligent software assistants by adhering to the rigorous protocols and methodologies provided by the empirical software engineering field. To this end, we propose a special issue of Automated Software Engineering that focuses on evaluating qualitative aspects of intelligent software assistants, with the aim of attracting researchers to this field and creating a community to share and discuss new ideas and start collaborations.
Topics of interest include, but are not limited to, the following:
• Reuse of AI-based tools, techniques, and methodologies in developing intelligent software assistants.
• Foundational theories for automated software assistants to understand the underlying principles that can drive the development of more robust and generalizable recommendation systems in software engineering, with a focus on their evaluation.
• Evaluating quality aspects of software assistants, e.g., explainability, transparency, and fairness, ensuring that software assistants produce reliable results.
• New methods, tools, and frameworks to support development tasks, e.g., code-related tasks, automated classification of software artifacts, or code generation leveraging generative AI models.
• Designing prompt engineering techniques for intelligent software assistants based on large language models that ensure quality aspects.
• Data-driven approaches for software assistants: Leveraging large-scale data from open-source software (OSS) repositories, Q&A forums, and issue trackers to enhance the effectiveness of software assistants.
• Integration with human-in-the-loop systems: Balancing automated recommendations with human expertise to improve decision-making in complex SE scenarios.
• Adoption of advanced generative AI models, including LLMs and pre-trained models (PTMs), for software assistance, with particular emphasis on their effects on quality.
• Empirical studies and controlled experiments to assess qualitative aspects of intelligent systems.
• Evolution of software systems and long-term recommendations, e.g., how software assistants can cope with the evolving nature of software systems and provide recommendations that consider long-term system maintainability and evolution.
• Cross-disciplinary applications of software assistants: Studying how techniques from other domains, e.g., human-computer interaction, natural language processing, and social network analysis, can enhance their effectiveness and usability.
• Surveys and experience reports on software assistants that support software engineering tasks, in both academic and industry use cases.
Workshop Information
The 1st edition of the workshop on Evaluation of Qualitative Aspects of Intelligent Software Assistants (EQUISA - https://conf.researchr.org/home/ease-2025/equisa-2025) will be held in Istanbul, Turkey, on June 17th, 2025, co-located with the 29th International Conference on Evaluation and Assessment in Software Engineering (EASE - https://conf.researchr.org/home/ease-2025). The primary goal of EQUISA 2025 is to provide a dedicated forum for exploring and discussing the qualitative dimensions of intelligent software assistants, encompassing their design, development, and deployment in real-world applications.

EQUISA solicits two categories of contributions: full research papers (up to 10 pages) and ongoing research papers (up to 5 pages). Full research papers can describe empirical research (i.e., quantitative, qualitative, and mixed research) on intelligent software systems. We also welcome replication studies and negative-results papers if they can support advice or lessons learned. Ongoing research papers should aim at communicating new ideas in the context of developing intelligent software assistants for which the authors want to obtain early feedback from the workshop community, especially on the evaluation and assessment strategies. Such papers must describe the idea and the proposed evaluation and assessment strategy, possibly (but not necessarily) with some preliminary results.

We initially expected to receive at least 10 submissions in total, with each submission reviewed by at least three workshop program committee members. This year, the workshop received 4 submissions, and three of them were accepted as full papers after a single-blind review process. The authors of these papers will thus be invited to submit an extended version of their manuscripts.
Deadlines
Submission opens: July 1, 2025;
Submission deadline: December 31, 2025;
Date first review round completed: March 15, 2026;
Date revised manuscripts due: July 31, 2026;
Date completion of the review and revision process (final notification): November 30, 2026.
How to Submit
All submitted papers will undergo a rigorous peer-review process and should adhere to the general principles of the Automated Software Engineering journal, prepared according to the Guide for Authors (https://ause-journal.github.io/cfp.html). The authors of the papers accepted for the 1st International Workshop on Evaluation of Qualitative Aspects of Intelligent Software Assistants (EQUISA) will be invited to substantially extend and submit their work to the special issue. The workshop is in its first edition and is co-located with the 29th International Conference on Evaluation and Assessment in Software Engineering (EASE 2025). Submitted papers must be original, must not have been previously published, and must not be under consideration for publication elsewhere. If a paper has already been presented at a conference, it should be extended with at least 30% new material before being submitted to this special issue. Authors must provide any previously published material relevant to their submission and describe the additions.