In this epoch of technological evolution, the volume of data concerning problems in different scientific areas is increasing steeply, requiring hundreds of petabytes (PB) of storage (Big Data). Specifically, these data are large in both individual file size and number of files in many fields, such as astronomy, cosmology, biology, and meteorology. Maintaining an adequate performance trend toward pre-Exascale systems requires a specific co-design of hardware and software, exploiting High-Performance Computing (HPC) techniques. On the hardware side, increasingly heterogeneous architectures are required, with multiple nodes and accelerators connected to each node through high-bandwidth links. On the software side, applications have to be written in programming languages that allow portability among diverse architectures without losing performance, while minimizing the time required by the programmer to adapt the application. Other important aspects are maintaining the numerical stability of a problem's solution as the system size and the number of computational resources increase, and the requirement of a "green" solution, that is, the ability to build infrastructures and applications that compute operations on Big Data volumes without excessively increasing energy consumption.
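As a minimal illustration of the portability requirement, one common approach is to write numerical kernels against an array API shared by CPU and GPU backends. The sketch below (the `standardize` kernel and the data are hypothetical examples, not code from any project mentioned in this volume) runs unchanged on NumPy (CPU) and, when a CUDA stack is available, on CuPy (GPU):

```python
import numpy as np

def standardize(xp, data):
    """Standardize an array with whichever backend module `xp` provides.

    `xp` is either `numpy` (CPU) or `cupy` (GPU): both expose the same
    array API, so the kernel itself is portable across architectures.
    """
    return (data - xp.mean(data)) / xp.std(data)

# CPU execution with NumPy.
cpu_out = standardize(np, np.random.rand(1_000_000))

# GPU execution with CuPy, if the library and a CUDA device are available.
try:
    import cupy as cp
    gpu_out = standardize(cp, cp.random.rand(1_000_000))
except ImportError:
    gpu_out = None  # fall back gracefully on machines without a GPU stack
```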
Each participant must submit a title, an abstract, and a proceedings paper, in either short (5 to 9 pages) or regular (at least 10 pages) format (approximately 2,500 characters per page, i.e., 380-400 words per page).
Each regular paper will undergo peer review by two members of the program committee, who are selected experts in the workshop's topics, ensuring a fair and impartial reviewing process. Submitted papers must present original work relevant to the topics of the workshop. Submissions must not be published or under review elsewhere at any stage of the review process. All submissions will be selected based on relevance, significance of contribution, technical soundness, scholarly quality, and clarity of presentation. Invited papers will not undergo peer review.
Valentina
Cesare
valentina.cesare@inaf.it
Valentina Cesare is a fixed-term technologist at INAF - IRA (starting date 15/05/2025), where she is about to begin work on the GPU porting of scientific applications related to the NGCroce project. From 01/12/2020 to 14/05/2025, she worked at INAF - OACT, first as a fellowship student, then as a research associate, and finally as a fixed-term technologist, on a project concerning the GPU porting of scientific applications related to the Gaia space mission, within the framework of the ICSC - National Center for Research in HPC, Big Data, and Quantum Computing (PNRR - Future Computing initiative). A future involvement in the Euclid Consortium is planned. She received her Ph.D. in Physics and Astrophysics in March 2021 from the Physics Department of the University of Turin, with a thesis focused on galaxy dynamics in the framework of the modified gravity theory Refracted Gravity.
Alberto
Vecchiato
vecchiat@oato.inaf.it
Alberto Vecchiato works in software development as the person responsible for the AVU-GSR pipeline within the Gaia mission at the INAF - Astrophysical Observatory of Torino, where he has held a permanent position since 2007. His work mainly concerns astrometry, the physics of gravitation, and tests of theories of gravity. Since 2012, he has also developed an interest in archaeoastronomy and the history of astronomy. He received his Master's degree in Physics in 1996 and his PhD in Physics in 2001 from the University of Padova. A future involvement in the Euclid Consortium is planned.
Gianluca
Mittone
gianluca.mittone@unito.it
Gianluca Mittone is a postdoctoral researcher in computer science at the University of Turin, and his research focuses on the convergence between High-Performance Computing (HPC) and Artificial Intelligence (AI) techniques. In less than 5 years of research activity, he has authored 16 scientific publications and reached an H-index of 9 (source: Google Scholar). His works mainly concern the use of AI in medicine and Federated Learning (FL). Specifically, he is currently investigating the deployment of cross-HPC FL workloads through workflow-based approaches, and the use of FL as a tool to allow AI-based computation to scale efficiently for HPC benchmarking purposes. He is currently co-principal investigator in a joint research effort between the University of Turin and Telecom Italia (TIM) to develop an FL-as-a-Service platform for the "TIM Edge & Cloud Continuum" IPCEI European Project. His achievements have earned him an HPC-Europa3 scholarship and a EuroPar foundation studentship, together with the 'Best PhD Symposium Award' at the 2023 edition of that conference. His PRAISE Score, an AI-based diagnostic tool, has been recognized as an "officially recommended diagnostic software" by the European Society of Cardiology in its 2023 guidelines.
Bruno
Casella
bruno.casella@unito.it
Bruno Casella is a research associate at the Computer Science Department of the University of Turin. He received his Ph.D. in Modeling and Data Science in June 2025 from the same department, funded by the Leonardo Company. He graduated in Computer Engineering in 2020 with a thesis on the performance, in different scenarios, of AlphaZero, an artificial intelligence method based on reinforcement learning that is able to defeat the human world champion at chess. He also received a Master's Degree in Data Science for Management in 2021 with a thesis on Federated Transfer Learning.
Ensuring data security in current and future ICT systems requires coordinated efforts across cryptographic research, software engineering, and institutional support. This workshop presents integrated strategies that address emerging threats, such as those posed by quantum computing, and practical challenges in secure software deployment. Experimental results on post-quantum TLS and digital signatures show the performance trade-offs of adopting quantum-resistant algorithms in real-world settings. These findings complement efforts to improve container security in CI/CD pipelines through automated threat analysis and enforcement mechanisms. The workshop also highlights the role of public funding and regional initiatives, such as those led by CERICT, in enabling collaborative research and innovation. It further draws attention to challenges at the network edge, where distributed, resource-constrained systems expand the attack surface and demand tailored security for edge-cloud environments. Together, the contributions provide a structured view of how secure architectures can be designed and deployed in scalable, future-ready environments.
The purpose of this workshop is to connect researchers, practitioners, and institutional actors working on secure digital infrastructures with a focus on post-quantum cryptography, software supply chain security, and edge-cloud systems. The workshop provides a space where academic results, experimental validations, and technology transfer models can be shared and discussed. Experts from research institutions and industry will present their ongoing work on post-quantum communication protocols, container security mechanisms, and proactive defense strategies for distributed systems. Representatives from public initiatives will outline how funding programs can support collaborative development and enable the integration of these technologies in production environments. The workshop is intended to support dialogue across sectors and to promote practical adoption of secure and scalable ICT solutions.
Securing modern digital infrastructures requires a multidisciplinary approach that connects cryptographic innovation, secure software engineering, and institutional support. As technology evolves, from the emergence of quantum computing to the widespread use of microservice architectures, security strategies must adapt at both the algorithmic and system levels. At the same time, public and regional initiatives play a critical role in sustaining applied research and enabling real-world deployment of advanced solutions. This presentation brings together key efforts that reflect this intersection: the implementation of post-quantum secure communication protocols, the integration of security mechanisms into containerized software pipelines, and the role of regional funding initiatives in enabling scalable and collaborative innovation. Together, they outline a framework for designing secure and resilient digital ecosystems, capable of withstanding emerging threats and supporting sustainable development.
Namirial has performed tests in different areas impacted by the quantum threat: we have investigated how a quantum-safe version of TLS differs from the current version, also taking into account a "hybrid" scenario. Moreover, we have investigated the performance differences between new quantum-safe signature algorithms, such as Dilithium and Falcon, and the well-known and widely adopted RSA-based signatures.
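As a hedged illustration of how such a signature comparison can be set up, the sketch below times Dilithium and Falcon via the liboqs-python bindings (assuming liboqs is installed; algorithm identifiers depend on the installed liboqs version, with newer releases exposing ML-DSA names instead of Dilithium) against RSA signatures from the `cryptography` package. It is a minimal micro-benchmark for orientation, not Namirial's actual test harness:

```python
import time

import oqs  # liboqs-python bindings (assumed installed)
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

MESSAGE = b"benchmark payload" * 64
RUNS = 100

def bench_pq(alg):
    """Time sign/verify for a post-quantum signature scheme from liboqs."""
    with oqs.Signature(alg) as signer, oqs.Signature(alg) as verifier:
        public_key = signer.generate_keypair()
        t0 = time.perf_counter()
        for _ in range(RUNS):
            signature = signer.sign(MESSAGE)
        t1 = time.perf_counter()
        for _ in range(RUNS):
            assert verifier.verify(MESSAGE, signature, public_key)
        t2 = time.perf_counter()
    print(f"{alg:12s} sign {1e3 * (t1 - t0) / RUNS:.3f} ms"
          f"  verify {1e3 * (t2 - t1) / RUNS:.3f} ms")

def bench_rsa(bits=3072):
    """Time sign/verify for classical RSA with PKCS#1 v1.5 and SHA-256."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
    t0 = time.perf_counter()
    for _ in range(RUNS):
        signature = key.sign(MESSAGE, padding.PKCS1v15(), hashes.SHA256())
    t1 = time.perf_counter()
    for _ in range(RUNS):
        key.public_key().verify(signature, MESSAGE,
                                padding.PKCS1v15(), hashes.SHA256())
    t2 = time.perf_counter()
    print(f"RSA-{bits}     sign {1e3 * (t1 - t0) / RUNS:.3f} ms"
          f"  verify {1e3 * (t2 - t1) / RUNS:.3f} ms")

for alg in ("Dilithium3", "Falcon-512"):
    bench_pq(alg)
bench_rsa()
```

Typical runs of such a comparison expose the trade-off the abstract refers to: the post-quantum schemes differ from RSA not only in raw sign/verify times but also in key and signature sizes, which matter for TLS handshake payloads.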
Public and regional funding plays a key role in supporting research, innovation, and technology transfer. The Competence Center on ICT of the Campania Region (CERICT) has been actively involved in leveraging national and regional funding programs to promote initiatives in the field of Information and Communication Technologies. This presentation highlights CERICT’s approach to identifying and exploiting available funding tools to support collaborative research and industrial innovation. Through targeted calls and structured partnerships, the center has contributed to the development of projects involving universities, research institutions, and local enterprises. Several notable initiatives will be presented, with a focus on their objectives, involved stakeholders, and achieved results. Particular attention will be given to how funding mechanisms have been used not only to support technical development, but also to strengthen regional cooperation and the growth of a sustainable innovation ecosystem.
Modern applications increasingly rely on containers to support the microservices development model. Containers simplify deployment and integration, especially when used in CI/CD pipelines, where developers focus mainly on automating delivery workflows. However, security tasks are often neglected in this process, raising the risk of introducing vulnerabilities into the application, the platform, or the underlying framework. SecCo-OC addresses this gap by designing a container security architecture that can be integrated directly into the CI/CD workflow. The goal is to automate the identification and mitigation of security threats at both development time (static and dynamic analysis) and runtime (through enforcement mechanisms), ensuring that containers are securely built and maintained before being deployed. The solution also focuses on extending the capabilities of container technology. It explores virtualization techniques, controlled access to specific hardware resources, and the embedding of security services and policies inside the container. These elements contribute to a container model that balances security with functionality and performance. To enable adoption in different deployment scenarios, including edge and pervasive computing, the SecCo-OC architecture is built to support scalability, flexibility, and reliability. It leverages cloud and edge infrastructures to extend DevOps practices to security enforcement in distributed environments.
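As a sketch of what the development-time part of such automation can look like, the script below gates a CI/CD stage on a container image scan. Trivy is used here purely as a stand-in scanner, and the image name is a placeholder; the abstract does not specify SecCo-OC's actual tooling:

```python
import subprocess
import sys

def scan_image(image: str) -> int:
    """Scan a container image and return the scanner's exit code.

    Trivy's `--exit-code 1` makes HIGH/CRITICAL findings produce a
    non-zero status, so the CI/CD job fails before deployment.
    """
    result = subprocess.run(
        [
            "trivy", "image",
            "--severity", "HIGH,CRITICAL",
            "--exit-code", "1",
            image,
        ],
        check=False,
    )
    return result.returncode

if __name__ == "__main__":
    # Hypothetical image reference; in a pipeline this would be the
    # freshly built artifact for the current commit.
    image = sys.argv[1] if len(sys.argv) > 1 else "registry.example.com/app:latest"
    sys.exit(scan_image(image))  # non-zero exit blocks the pipeline stage
```

Wiring such a check into the pipeline's build stage implements the "securely built before being deployed" requirement for the static-analysis side; runtime enforcement, as the abstract notes, needs complementary mechanisms inside or around the container.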
The DEFEDGE project aims to define a set of techniques for the development of secure and resilient edge-cloud systems and for their assessment based on a threat-driven approach. The main idea is to leverage the results of a guided threat modeling process to derive both the security controls and mechanisms to enforce in order to mitigate the identified threats, and the security tests to perform to verify the effectiveness of the controls in place. In particular, security control selection and enforcement will follow Moving Target Defense principles, according to which the attack surface of a system is continually and proactively changed to reduce the probability of a successful attack. Security testing will exploit existing threat intelligence and attack pattern knowledge bases to derive a set of general-purpose attack procedures that can be suitably customized to test a target system. For the generation and customization of attack procedures, the project will also explore machine learning techniques to infer new attack patterns and scenarios, in order to improve overall testing effectiveness.
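To make the Moving Target Defense idea concrete, here is a minimal, self-contained sketch (an illustrative example, not DEFEDGE code) of one classic MTD mechanism: periodically re-binding a service to a pseudo-randomly chosen port, so that information an attacker gathers by scanning becomes stale before it can be exploited:

```python
import random
import socket

PORT_POOL = range(20000, 21000)   # candidate ports the service may hop to
ROTATION_PERIOD = 30              # seconds between attack-surface changes

def serve_once(sock: socket.socket) -> None:
    """Accept at most one connection on the current port, then rotate."""
    sock.settimeout(ROTATION_PERIOD)
    try:
        conn, _addr = sock.accept()
        conn.sendall(b"hello\n")
        conn.close()
    except socket.timeout:
        pass  # no client this cycle; rotate anyway

def run_mtd_service() -> None:
    """Continually move the listening port, shrinking the attack window."""
    while True:
        port = random.choice(PORT_POOL)
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(("0.0.0.0", port))
            sock.listen()
            # In a real deployment the new endpoint would be announced to
            # legitimate clients through an authenticated channel.
            print(f"service now listening on port {port}")
            serve_once(sock)

if __name__ == "__main__":
    run_mtd_service()
```

Real MTD deployments rotate richer properties than a port number (addresses, software variants, credentials), but the principle is the same: the defender changes the system faster than the attacker's reconnaissance can keep up.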