About the workshop
As Artificial Intelligence (AI) continues to drive innovation, Large Language Models (LLMs) are transforming the landscape of Software Engineering (SE). These models are not only enhancing automated code generation but are also pivotal in addressing complex software development challenges, such as bug detection and fixing, vulnerability detection and remediation, and automated testing. While these solutions have shown impressive results in the SE field, their application raises reliability and security concerns.
Addressing the reliability and security of AI-based solutions for SE requires careful attention to data quality, model performance, monitoring, privacy measures, security practices, and ongoing maintenance.
The workshop aims to bring together researchers and industrial practitioners from both the AI and SE communities to collaborate, share experiences, provide directions for future research, and encourage the adoption of reliable and secure AI solutions for software engineering-specific challenges. We encourage submissions targeting interdisciplinary research, in particular on the topics of interest listed below.
The workshop is partially supported by the FLEGREA project of the MUR PRIN 2022 program. For more information about the workshop, visit the webpage at: https://resaise.github.io/2024/