Empirical Software Engineering - Call for Papers: When Software Security Meets Large Language Models: Opportunities and Challenges
Guest Editors
Vincent Sheng Wen (swen@swin.edu.au), Swinburne University of Technology
Chunyang Chen (chun-yang.chen@tum.de), Technical University of Munich
James Xi Zheng (james.zheng@mq.edu.au), Macquarie University
Aims and Scope
Software plays an indispensable role in our daily lives, underpinning critical systems in sectors such as healthcare, finance, and transportation. However, as software systems become more complex, the risks associated with their vulnerabilities also escalate. Cybercrime, fueled by security flaws in software, poses a significant global threat, with damages projected to reach $10.5 trillion annually by 2025, up sharply from $3 trillion in 2015. Over the years, researchers have developed a range of techniques to ensure software quality and security, spanning from requirement extraction and fault-tolerant design to bug detection and program repair. Among these, software testing and analysis remain central to identifying defects that could lead to serious security breaches.
While traditional deep learning has made notable strides in improving techniques such as fuzzing, bug detection, and program repair, it still faces inherent limitations, particularly in understanding complex code and generating high-quality training data. Large Language Models (LLMs) represent a groundbreaking opportunity to overcome these limitations. By leveraging their advanced natural language processing capabilities, LLMs (e.g., the GPT and Gemini series) hold the potential to reduce manual labor in software security processes and to enhance the accuracy of bug detection, program repair, and vulnerability assessment. This special issue seeks to explore the intersection of software security and LLMs, focusing on how LLMs can mitigate existing challenges and unlock new possibilities for securing modern software systems.
Hence, we are motivated to organize this special issue focused on the opportunities and challenges that arise when software security intersects with LLMs. The aim of this special issue is to promote novel, transformative, and multidisciplinary approaches that enhance the efficiency and effectiveness of current software security solutions. Additionally, this issue seeks to build a research community dedicated to advancing knowledge and education at the crossroads of cybersecurity, privacy, and LLMs, with a strong emphasis on translating these insights into practical applications for software security.
Topics
This Special Issue of Empirical Software Engineering will promote research on, and reflect the most recent advances in, software security, with emphasis on the following topics, although submissions are certainly not limited to them:
- LLM-Assisted Vulnerability Detection and Mitigation
- LLMs in Automated Software Testing
- Program Repair via LLMs
- LLMs for Secure Code Generation
- LLM-Augmented Static and Dynamic Code Analysis
- LLMs for Security Policy Generation and Compliance
- Zero-Shot and Few-Shot Learning for Software Security
- LLMs for Malware Detection and Analysis
- Data Privacy in LLM-Assisted Software Security
- LLMs in Code Auditing and Compliance Verification
- LLMs for Secure DevOps (DevSecOps)
- Phishing and Social Engineering Detection Using LLMs
- Security Awareness and Training with LLMs
- LLMs for Risk Assessment in Software Security
- Adversarial Attacks on LLMs in Software Security
- Human Factors When Using LLMs in Software Security
- Usable Security of LLM Applications
Manuscript Submission Information
Submitted papers should present original, unpublished work relevant to one of the topics of the Special Issue. All submitted papers will be evaluated by at least two independent reviewers on the basis of relevance, significance of contribution, technical quality, scholarship, and quality of presentation. It is the policy of the journal that no submission, or substantially overlapping submission, be published or be under review at another journal or conference at any time during the review process.
Authors are responsible for understanding and adhering to the submission guidelines. Papers are expected to make a substantial scientific contribution, e.g., in the form of new algorithms, experiments, or qualitative/quantitative comparisons. For extended versions of previously published conference papers, neither verbatim transfer of large parts of the conference paper nor reproduction of already published figures will be tolerated.
Important Dates
- Paper submission deadline: August 15th, 2025
- All reviews back and first-round notification: September 15th, 2025
- Revised submission deadline: October 15th, 2025
- All reviews back and final notification: November 15th, 2025