Michelangelo Ceci, Università degli Studi di Bari "Aldo Moro"
Nicola Bena, Università degli Studi di Milano
Claudia Diamantini, Università Politecnica delle Marche
Lorenzo Colombi, Università degli Studi di Ferrara
Chiara Boldrini, CNR, IIT
Matteo Zignani, Università degli Studi di Milano
Antonio Maratea, Università degli Studi di Napoli "Parthenope"
Roberto Esposito, Università degli Studi di Torino
Domenico Beneventano, Università degli Studi di Modena e Reggio Emilia
Valerio Bellandi, Università degli Studi di Milano
Nicola Bena, Università degli Studi di Milano
Alessia Antelmi and Massimo Cafaro, Università degli Studi di Torino and Università del Salento
In many real-world systems, the relationships between entities are not static but evolve over time and across multiple dimensions. Examples include interactions in social media platforms, financial transactions, communication networks, and decentralized blockchain infrastructures. These systems are best modeled as temporal networks, where nodes and edges are associated with time-varying properties, and often as heterogeneous networks, where multiple types of nodes and relations coexist. Modeling these dynamics is crucial for uncovering the underlying mechanisms that govern the evolution of complex systems.
This tutorial is structured in three interconnected parts, each balancing theoretical foundations with practical implementation. It presents a progression from fundamental modeling of temporal dynamics to interpretable pattern mining, and finally to advanced machine learning techniques for prediction tasks.
This section introduces the foundations of temporal networks and their practical analysis using the Raphtory framework, a scalable and expressive library for analyzing evolving graphs. Participants will learn how to model systems where edges appear, disappear, or change attributes over time, and how to compute metrics that reflect the temporal evolution of network structure.
Topics include:
The hands-on session guides participants through end-to-end workflows, including data ingestion, time slicing, and metric computation, using open datasets.
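For orientation, here is a minimal sketch of such a workflow in Python. It assumes a recent Raphtory release; method names (e.g., `count_edges`, `window`) may differ slightly across versions.

```python
# Minimal temporal-graph workflow sketch with Raphtory (API assumed from recent releases).
from raphtory import Graph

g = Graph()

# Ingestion: each edge carries an explicit timestamp, so the full history is kept.
interactions = [(1, "alice", "bob"), (3, "bob", "carol"), (7, "alice", "carol")]
for t, src, dst in interactions:
    g.add_edge(t, src, dst)

# Time slicing: restrict the view to the half-open window [0, 5).
early = g.window(0, 5)

# Metric computation on the full graph vs. the windowed view.
print(g.count_edges(), early.count_edges())  # 3 vs. 2
print(g.node("alice").degree())              # degree over the whole history
```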
Many existing approaches to modeling network dynamics rely on simplistic assumptions — often attributing structural changes to a single parameter or rule. However, empirical systems exhibit heterogeneous and multifaceted behaviors, making such models insufficient.
Graph Evolution Rules (GERs) offer a novel and interpretable framework for discovering multiple, overlapping mechanisms that govern temporal graph changes without assuming a priori models.
This section presents GERs as a formalism to:
Participants will use the Geranio library to extract GERs from real datasets, validate them against null models, and visualize the impact of different rules. Use cases will include communication platforms, dynamic communities in social networks, and behavioral patterns in blockchain transactions.
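To fix ideas before the hands-on part, the following is a purely conceptual Python illustration of what a graph evolution rule carries as a data structure; the class and field names are hypothetical and do not reflect Geranio's actual API.

```python
# Hypothetical illustration of a Graph Evolution Rule (GER): whenever the "body"
# pattern occurs at time t, the "head" pattern tends to appear by some later time.
# Support and confidence quantify how often this happens in the data.
from dataclasses import dataclass

@dataclass
class EvolutionRule:
    body: set[tuple[str, str]]   # subgraph pattern observed first (edges by node role)
    head: set[tuple[str, str]]   # edges added in the follow-up snapshot
    support: int                 # number of body matches followed by the head
    confidence: float            # P(head appears | body matched)

# Example: "two nodes with a common neighbour tend to connect" (triadic closure).
triadic_closure = EvolutionRule(
    body={("u", "w"), ("v", "w")},
    head={("u", "v")},
    support=5421,
    confidence=0.18,
)
print(triadic_closure)
```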
The final part of the tutorial focuses on predictive modeling of temporal graphs using machine learning, particularly deep learning methods such as Graph Neural Networks (GNNs). The tutorial addresses the unique challenges posed by temporal and heterogeneous data, including variable-length sequences, evolving topologies, and the need to generalize across time. Key topics include:
Participants will engage in a practical session using PyTorch Geometric (PyG), implementing link prediction models on annotated temporal datasets. Emphasis will be placed on reproducibility, interpretability, and evaluation of predictive performance under realistic temporal constraints.
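As a flavour of the practical session, below is a self-contained link prediction sketch in PyTorch Geometric. Temporal constraints are reduced here to a time-based split (train on earlier edges, test on later ones), which is one simple way to respect them, not necessarily the exact setup of the session.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.utils import negative_sampling

# Toy temporal edge list: (src, dst, timestamp). Train on t < 5, evaluate on t >= 5.
edges = torch.tensor([[0, 1, 1], [1, 2, 2], [2, 3, 3], [0, 2, 4], [3, 4, 6], [1, 4, 7]])
train_ei = edges[edges[:, 2] < 5][:, :2].t()   # shape [2, num_train_edges]
test_ei = edges[edges[:, 2] >= 5][:, :2].t()
num_nodes = 5
x = torch.eye(num_nodes)                       # trivial one-hot node features

class Encoder(torch.nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, hid_dim)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

model = Encoder(num_nodes, 16)
opt = torch.optim.Adam(model.parameters(), lr=0.01)

def score(z, ei):                              # dot-product decoder
    return (z[ei[0]] * z[ei[1]]).sum(dim=-1)

for epoch in range(100):
    model.train(); opt.zero_grad()
    z = model(x, train_ei)
    neg = negative_sampling(train_ei, num_nodes=num_nodes)
    logits = torch.cat([score(z, train_ei), score(z, neg)])
    labels = torch.cat([torch.ones(train_ei.size(1)), torch.zeros(neg.size(1))])
    loss = F.binary_cross_entropy_with_logits(logits, labels)
    loss.backward(); opt.step()

# Evaluate on held-out future edges: message passing uses only past (training) edges,
# so the model never peeks at future structure.
model.eval()
with torch.no_grad():
    z = model(x, train_ei)
    print(torch.sigmoid(score(z, test_ei)))    # predicted link probabilities
```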
This comprehensive tutorial provides participants with a modern and integrated perspective on the challenges and opportunities in analyzing temporal and heterogeneous networks. By connecting graph theory, pattern mining, and deep learning, it enables attendees to effectively model complex systems and to extract actionable insights from their dynamic structure.
Alessia Galdeman. She received a degree in Computer Science for Digital Communication in 2018 and a master's degree in Computer Science in 2021. She is currently a postdoc at the IT University of Copenhagen in the NEtwoRks, Data, and Society (NERDS) research group, working on the COllective COordination through Online Social media (COCOONS) project supervised by Prof. Luca Maria Aiello. She is a member of the Executive Committee of NetPlace. Her current research interests include temporal networks, subgraph mining, and Web3 platforms.
Manuel Dileo. He received a master's degree in Computer Science in 2022. He is currently a senior Ph.D. student at the Computer Science Department of the University of Milan, where he also tutors machine learning courses. He has published work on machine learning for temporal networks, link prediction in online social networks, and temporal knowledge graphs. He has also contributed to PyTorch Geometric, the main library for developing machine learning architectures on graphs.
Matteo Zignani. Matteo Zignani is an associate professor in the Department of Computer Science at the University of Milan, where he is a member of the Connets Lab and teaches machine learning and network analysis. His research lies within data and network science, with a focus on data mining and machine learning applications to different networked systems, from blockchain and online social networks to human mobility and communication networks.
Sabrina Gaito. Sabrina Gaito is a full professor in the Department of Computer Science at the University of Milan, where she leads the Connets Lab and teaches machine learning, network science, and social media mining. Her research spans network science and machine learning, with applications to social, economic, and blockchain-based networks. She regularly serves on the program committees of major international conferences and on the editorial boards of prominent journals.
Urban mobility planning is evolving towards a more sustainable, health-conscious, and data-driven approach. This hands-on tutorial will guide participants through a practical methodology to integrate heterogeneous open data sources (GTFS, OpenStreetMap, air quality APIs) into actionable insights for personalized active mobility routing. Attendees will learn to model multi-layer transport networks, design adaptable routing algorithms, and evaluate trade-offs between safety, pollution exposure, and quality of experience specifically for pedestrians and cyclists.
Live Coding Demo: Build a proof-of-concept routing system using Python (NetworkX, OSMnx) through a pre-configured Jupyter notebook (see the sketch after this list).
Participant Interaction: Modify costs to define new objectives (e.g., 'Avoid schools during pickup hours') or integrate new data layers.
Open discussion on scalability, integration with municipal services, and future research directions.
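The live-coding portion could look roughly like the sketch below, which builds a walkable network with OSMnx and routes with a custom edge cost. The pollution penalty is a hypothetical stand-in for a real air-quality layer, and "Modena, Italy" is only an example query.

```python
import osmnx as ox

# Build a pedestrian network from OpenStreetMap (example place; any works).
G = ox.graph_from_place("Modena, Italy", network_type="walk")

# Custom cost: walking length inflated by a per-edge penalty. In the tutorial the
# penalty would come from a real data layer (air quality, schools at pickup time);
# here a hypothetical constant function stands in for that lookup.
def pollution_penalty(u, v):
    return 1.0  # replace with a lookup into an air-quality grid; 1.0 = neutral

for u, v, k, data in G.edges(keys=True, data=True):
    data["cost"] = data["length"] * pollution_penalty(u, v)

# Route between two coordinates using the custom objective.
orig = ox.distance.nearest_nodes(G, X=10.925, Y=44.647)
dest = ox.distance.nearest_nodes(G, X=10.940, Y=44.640)
route = ox.shortest_path(G, orig, dest, weight="cost")
print(f"Route with {len(route)} nodes")
```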
Laura Po, PhD. Laura Po is a Web and Data Scientist based in Modena, Italy. Since 2020, she has served as an Associate Professor at the 'Enzo Ferrari' Engineering Department of the University of Modena and Reggio Emilia (UNIMORE), and is a senior researcher in the Big Data Research Group (http://www.dbgroup.unimo.it/). Her research focuses on information representation and management, with particular interest in Big Data analytics, urban and environmental data integration, Web and Information Systems, Knowledge Engineering, AI and Machine Learning, Digital Twins, and Graph-based models. She has also worked extensively on Word Sense Disambiguation, Data Streams, Linked Open Data, and Smart Cities. She earned a Ph.D. in Computer Engineering and Science from UNIMORE in 2009 and holds a cum laude MSc in Computer Engineering (2005). She obtained her professional engineering license in 2006. She is co-founder and scientific committee member of DataRiver srl, a research spin-off offering semantic-based data integration solutions, recognized as a Qualified Research Organization by the Emilia-Romagna High Technology Network since 2009. She is a member of the Board of the International Doctoral School in ICT at UNIMORE (since 2011), the Interdepartmental Research Center on AI (AIRI) (since 2021), and the DHMoRe Center for Digital Humanities (since 2020). She has been a visiting professor at Université Claude Bernard Lyon 1 and CPE Lyon (France, 2023), and at Universidad de Zaragoza (Spain, 2022). She has authored over 70 publications (Scopus), with an H-index of 17 and over 800 citations. Her work appears in high-impact journals such as Environmental Modelling & Software, Ecological Informatics, Knowledge-Based Systems, and Future Generation Computer Systems. She currently leads or contributes to several national and European projects: AIQS, a national project on AI-based sensor data correction for green urban routing (ECOSISTER, 2024-25); Lively Ageing, on active ageing and predictive monitoring (Ministry of Health, 2022-27); ECOSISTER (2022-26), supporting carbon-neutral urban mobility planning in Emilia-Romagna (NextGeneration EU). Previously, she was Project Leader of the European project TRAFAIR - Understanding Traffic Flows to Improve Air Quality (2018-2021), and Task Leader in Re-Search Alps (2017-2019).
Federica Rollo, PhD. Federica Rollo is an Assistant Professor at the University of Modena and Reggio Emilia (Italy), specializing in AI for smart cities, NLP, and knowledge graphs. She holds a PhD in Computer Engineering (2021) and has worked on national and European projects like TRAFAIR (traffic/air quality analytics), ECOSISTER (smart mobility), and AIQS (air quality integration in mobility solutions). Her research focuses on event extraction from text, sensor data analysis, Italian-language NLP, urban analytics, graph-based road network modeling, and multi-objective routing. She is the author of 28 publications (Scopus H-index: 11) and has supervised multiple MSc/PhD theses. She was a visiting researcher at the University of Galway (Ireland, 2021) and the Universidade da Coruña (Spain, 2025).
In its European Strategy for Data, the European Commission emphasized that citizens, businesses, and public authorities can be empowered to make better decisions through the effective use of data. To unlock the full potential of the vast and growing volume of data generated daily across Europe, it is essential to build trust in the ways data are collected, accessed, and used.
In this context, the development of a robust European Data Spaces ecosystem is seen as a key opportunity to ensure better access to data and its responsible use. A Data Space is a structured framework that facilitates secure and trustworthy data sharing within a defined data ecosystem. It provides clear rules and technical means for participants to exchange, trade, and collaborate on data assets in a manner that complies with relevant legal and regulatory requirements, while guaranteeing fair treatment for all stakeholders (Source: Data Spaces Support Centre (DSSC) website).
The establishment of Common European Data Spaces across key sectors enables the seamless sharing and reuse of data both within and across domains. This contributes to the emergence of a competitive and dynamic European data market, underpinned by the EU's strong legal foundations in personal data protection, intellectual property rights, safety, and cybersecurity. In turn, this facilitates greater demand for data-driven products and services across the continent.
By promoting standardized technologies and legal frameworks, Data Spaces aim to overcome barriers to data sharing among organizations and foster the creation of innovative, value-added services. Through a focus on interoperability and data sovereignty, Data Spaces play a pivotal role in advancing collaboration and innovation across the European data economy.
The aim of this tutorial is to provide an overview of the current landscape of Data Spaces initiatives and to introduce selected open-source platforms that support their practical deployment.
By the end of the tutorial, participants will be able to:
Antonella Longo. Antonella Longo is an Associate Professor at the Department of Engineering for Innovation at the University of Salento, Italy, where she earned her Ph.D. in Information Engineering. She is the scientific coordinator of DataLab, the university lab that promotes research and innovation related to big data management and governance in the AI era. Her research interests focus on data management, big data analytics, digital twins, and data spaces, with applications in smart cities, Industry 4.0, and digital transformation. She has been involved in numerous national and European research projects addressing data-driven innovation and the design of interoperable, secure digital ecosystems. Prof. Longo is also active in promoting data governance strategies that ensure data quality, interoperability, and sovereignty across distributed platforms. She regularly publishes in international journals and conferences and contributes to the academic community as a reviewer and member of scientific committees. Prof. Antonella Longo is also one of the founders of Women in Big Data – Italy, the Italian chapter of the international association that promotes the use of data to reduce the gender gap.
Angelo Martella. Ph.D., Eng. Angelo Martella is a junior researcher at the DataLab of the Department of Engineering for Innovation, University of Salento. He also serves as an academic lecturer on Big Data Management and Data Engineering topics at the University of Salento. His research focuses on Digital Twins and Data Spaces. His areas of expertise span all phases of the design, development, and deployment of digital twins for smart cities, with a particular emphasis on the use of data spaces as critical enablers of data ecosystems—managing data flows in distributed edge-cloud environments using machine learning and AI techniques. He is a FIWARE expert and is currently involved in several research projects within the National Center for High Performance Computing, Big Data, and Quantum Computing.
Cristian Martella. Eng. Cristian Martella received his M.Sc. in Computer Engineering from the University of Salento, where he is currently pursuing a Ph.D. in Engineering of Complex Systems. He is affiliated with the DataLab of the University of Salento. His research focuses on data spaces for digital twins, particularly in domains such as smart energy and smart cities. He explores federated learning architectures for data preparation across the edge-cloud continuum, aiming to enhance interoperability among heterogeneous distributed systems while ensuring data protection and sovereignty. By leveraging advanced machine learning techniques and digital twin technology, he seeks to enable seamless data exchange and collaboration in urban environments, ultimately fostering innovation in resource management and decision-making. Dedicated to promoting sustainable and resilient smart city infrastructures and smart grids, he is committed to developing solutions that empower stakeholders while safeguarding their data rights.
The tutorial aims to explore the activities of the Intesa Sanpaolo Anti-Financial Crime Digital Hub.
Silvia Ronchiadin, Intesa Sanpaolo Innovation Center - AFC Digital Hub
The tutorial is structured as follows:
The tutorial aims to showcase recent contributions in the field of data analysis, where statistical modeling and machine learning intersect to address real-world challenges. Emphasis will be placed on the interpretability and practical implications of the proposed techniques, particularly in the analysis of high-dimensional, complex, and multisource datasets. Participants will gain insights into how advanced statistical tools and AI-driven models can enhance understanding and decision-making across domains such as healthcare, finance, and social dynamics.
The tutorial is organized into three 15-minute presentations, followed by a 15-minute discussion.
In this work we present a methodology involving Natural Language Processing (NLP) and Artificial Intelligence models that automatically assigns standardized medical codes to clinical reports, in both English and Italian. The objective is to provide a useful pipeline that leverages external knowledge, namely the standardized medical annotations of the ICD (International Classification of Diseases) from the World Health Organization. In doing so, the overall accuracy and efficiency of medical billing, reporting, and data analysis are significantly improved. Moreover, such pipelines are essential for modern healthcare systems aiming to reduce manual workload and errors.
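As a minimal illustration of the code-assignment step such a pipeline might contain (not the authors' actual system), one can rank ICD descriptions against a clinical report by lexical similarity; the three-entry ICD dictionary below is a toy.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy ICD dictionary: code -> official description (illustrative subset only).
icd = {
    "J45": "Asthma",
    "I10": "Essential (primary) hypertension",
    "E11": "Type 2 diabetes mellitus",
}

report = "Patient with poorly controlled type 2 diabetes mellitus."

# Vectorize the code descriptions together with the report, then rank codes
# by cosine similarity to the report.
vec = TfidfVectorizer()
matrix = vec.fit_transform(list(icd.values()) + [report])
sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for code, s in sorted(zip(icd, sims), key=lambda p: -p[1]):
    print(code, round(float(s), 3))
```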
Speaker: Irene Siragusa (with Marco Speciale, Roberto Pirrone, Antonella Plaia). Irene Siragusa is a postdoctoral fellow at the Department of Engineering of the University of Palermo. Her Ph.D. in Information and Communication Technologies focused on NLP techniques for Information Retrieval, involving generative Large Language Models and architectures based on the Retrieval-Augmented Generation approach in domain-specific applications such as the biomedical one.
In this contribution, we propose to quantify global instability by designing and implementing a comprehensive framework of indicators that systematically track and capture unrest-related events across countries and over time. The primary objective is to distill the complexity, intensity, and social impact of such events into a set of clear, up-to-date, and interpretable measures. These indicators are intended to serve as practical tools for policymakers, analysts, and stakeholders, helping them anticipate potential crises, formulate effective preventive strategies, and make more informed decisions in response to evolving geopolitical challenges. To build this framework, we develop three composite indicators, each combining a variety of data sources to ensure robustness and breadth. Specifically, we integrate large-scale, time-sensitive data from the ACLED (Armed Conflict Location and Event Data) project—known for its detailed records of conflict and political violence—with a dedicated dataset curated by our research team. This custom dataset documents a wide spectrum of unrest events, including wars, armed conflicts, civil wars, violent demonstrations, and peaceful protests, covering the period from 2010 to the present. Through this integrated approach, we aim to provide a nuanced and dynamic view of global instability that can support both academic research and real-world decision-making. A dashboard developed by the authors will be presented to the participants.
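A schematic of how such a composite indicator can be assembled; the components, numbers, and weights below are invented for illustration and are not those of the authors' framework.

```python
import pandas as pd

# Toy country-level components of an instability indicator (invented numbers).
df = pd.DataFrame(
    {"conflict_events": [120, 15, 60], "protests": [30, 80, 10], "fatalities": [400, 2, 90]},
    index=["A", "B", "C"],
)

# Min-max normalize each component to [0, 1] so their scales are comparable.
norm = (df - df.min()) / (df.max() - df.min())

# Weighted aggregation into a single composite score (illustrative weights).
weights = {"conflict_events": 0.4, "protests": 0.2, "fatalities": 0.4}
composite = sum(norm[c] * w for c, w in weights.items())
print(composite.sort_values(ascending=False))
```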
Speakers: Marco Tagliapietra and Luca Macis (with Elena Siletti and Paola Pisano). Marco Tagliapietra and Luca Macis are Ph.D. students in Economics at the Department of Economics and Statistics 'Cognetti de Martiis', University of Turin, and co-founders of Deeplomacy, a startup platform for geopolitical risk forecasting through AI and predictive analytics. They collaborate with the HighESt laboratory at the same department on applied research in machine learning and early warning systems. In particular, Marco has been involved in international research competitions and interdisciplinary projects of high scientific impact, with a focus on innovation in data integration and AI for peacebuilding. Luca is an expert in artificial intelligence applied to geopolitical analysis and has authored several scientific publications in the field. Moreover, he collaborates with academic and institutional partners on research projects that combine theoretical insights with applied methodologies, contributing to high-impact solutions for early warning and international security.
High-frequency trading (HFT) represents a rapidly evolving area in financial markets, where decision-making and execution occur at high speed. However, exploiting high-frequency data (HFD) requires innovative methods to handle inherent complexities, including noise and nonlinear dynamics. Deep learning (DL) models have emerged in HFT, outperforming traditional statistical methods in tasks such as forecasting and classification. In this study, we focus on convolutional neural networks (CNNs), particularly 2D CNNs, to evaluate different preprocessing and feature engineering approaches, with the goal of identifying which approach leads to better accuracy. We compare the results with each other and with more traditional models, such as Logistic Regression and Random Forest, thanks to the innovative S.A.F.E. methodology, which allows us to compare even very different models. The use of 2D CNNs involves transforming datasets into image representations, enabling an alternative pattern recognition technique that uncovers relationships and structures not readily apparent in raw datasets.
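To make the image-transformation idea concrete, here is a generic PyTorch sketch in which sliding windows over a multivariate feature series become one-channel 2D "images" (time × features) fed to a small CNN. This illustrates the general technique only, not the authors' exact preprocessing or the S.A.F.E. methodology.

```python
import torch
import torch.nn as nn

# Fake high-frequency data: 1000 ticks, 8 engineered features per tick.
series = torch.randn(1000, 8)

# Turn the series into 2D windows of 32 ticks: shape [N, 1, 32, 8].
win = 32
windows = torch.stack([series[i : i + win] for i in range(len(series) - win)])
images = windows.unsqueeze(1)

# Binary labels, e.g. "mid-price up over the next tick" (random placeholders here).
labels = torch.randint(0, 2, (images.size(0),)).float()

class TickCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # -> [16, 16, 4]
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> [32]
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

# One optimization step over the whole toy batch.
model = TickCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.BCEWithLogitsLoss()(model(images), labels)
loss.backward(); opt.step()
print(float(loss))
```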
Speaker: Jacopo Chiapparino (with Paola Cerchiello). Jacopo Chiapparino is an economist and data scientist with expertise in Python, R, machine learning, and financial econometrics, currently pursuing a Ph.D. in Artificial Intelligence applied to finance. He is experienced in AI engineering, risk management, and quantitative modeling, with a strong academic background and ongoing research in high-frequency trading and pattern recognition, and is recognized for innovative contributions, including publications and awards in the financial and technical analysis fields.
In the current context, in which Artificial Intelligence pervades increasingly critical applications, data quality emerges as an enabling factor for developing reliable, fair, and transparent systems. This panel addresses the Data-centric AI paradigm, which shifts attention from model optimization to data engineering, exploring the importance of dataset curation, annotation, and continuous monitoring. Through academic and industrial contributions, the panel will discuss approaches, challenges, and tools to guarantee data consistency, representativeness, and robustness, positioning quality as a lever for truly responsible AI. Particular attention will be devoted to methodological implications, applications in real-world contexts, and prospects for collaboration between research and industry.
Introduction: Donato Malerba
Moderator: Claudia Diamantini
Panelists: Riccardo Torlone, Angela Bonifati, Antonella Poggi, Silvia Ronchiadin, Davide Dalle Carbonare
In this epoch of technological evolution, data concerning problems in different scientific areas are increasing steeply in volume, requiring hundreds of petabytes of storage (Big Data). These data are extensive in both single-file size and number of files in many fields, such as astronomy, cosmology, biology, and meteorology. Maintaining a proper performance trend towards pre-Exascale systems requires a specific co-design between hardware and software, exploiting High-Performance Computing (HPC) techniques. On the hardware side, increasingly heterogeneous architectures are required, with multiple nodes and accelerators linked to the single node by high-bandwidth bridges. On the software side, applications have to be written in programming languages that allow portability among diverse architectures without losing performance, while minimizing the time required by the programmer to adapt the application. Other important aspects are maintaining the numerical stability of a problem solution as the system size and the number of computational resources increase, and the requirement of a "green" solution, that is, the ability to build infrastructures and applications that compute operations on Big Data volumes without excessively increasing energy consumption.
Each participant must send a title, an abstract, and a proceedings paper, in either short (5 to 9 pages) or regular (at least 10 pages) format (~2500 characters per page, i.e., 380-400 words per page).
Each regular paper will undergo peer review by two members of the program committee, who are selected experts in the workshop's topics, ensuring a fair and impartial reviewing process. Submitted papers must present original work relevant to the topics of the workshop. Submissions must not be published, nor under review elsewhere, at any stage of the review process. All submissions will be selected based on relevance, significance of contribution, technical soundness, scholarly quality, and clarity of presentation. Invited papers will not undergo peer review.
Valentina Cesare (valentina.cesare@inaf.it)
Valentina Cesare is a fixed-term technologist at INAF - IRA (starting date 15/05/2025), where she is about to begin work on the GPU porting of scientific applications related to the NGCroce project. From 01/12/2020 to 14/05/2025, she worked at INAF - OACT, first as a fellowship student, then as a research associate, and finally as a fixed-term technologist, within a project on the GPU porting of scientific applications related to the Gaia space mission, in the framework of the ICSC - National Center for Research in HPC, Big Data, and Quantum Computing (PNRR - Future Computing initiative). A future involvement in the Euclid Consortium is planned. She received her Ph.D. in Physics and Astrophysics in March 2021 from the Physics Department of the University of Turin, with a thesis focused on galaxy dynamics in the framework of the modified gravity theory Refracted Gravity.
Alberto Vecchiato (vecchiat@oato.inaf.it)
Alberto Vecchiato works on software development as the person responsible for the AVU-GSR pipeline within the Gaia mission at the INAF Astrophysical Observatory of Torino, where he has held a permanent position since 2007. More generally, he works in the fields of astrometry, the physics of gravitation, and tests of gravitational theories. Since 2012, he has also developed an interest in archaeoastronomy and the history of astronomy. He obtained his master's degree in Physics in 1996 and his Ph.D. in Physics in 2001 at the University of Padova. A future involvement in the Euclid Consortium is planned.
Gianluca Mittone (gianluca.mittone@unito.it)
Gianluca Mittone is a postdoctoral researcher in computer science at the University of Turin; his research focuses on the convergence between High-Performance Computing (HPC) and Artificial Intelligence (AI) techniques. In less than 5 years of research activity, he has achieved 16 scientific publications and an H-index of 9 (source: Google Scholar). His work mainly concerns the use of AI in medicine and Federated Learning (FL). Specifically, he is currently investigating the deployment of cross-HPC FL workloads through workflow-based approaches, and the use of FL as a tool to let AI-based computation scale efficiently for HPC benchmarking purposes. He is currently co-principal investigator in a joint research effort between the University of Turin and Telecom Italia (TIM) to develop an FL-as-a-Service platform for the "TIM Edge & Cloud Continuum" IPCEI European Project. His achievements have earned him an HPC-Europa3 scholarship and a Euro-Par foundation studentship, together with the Best PhD Symposium Award at the 2023 edition of the Euro-Par conference. His PRAISE Score, an AI-based diagnostic tool, was officially recommended as diagnostic software by the European Society of Cardiology in its 2023 guidelines.
Bruno Casella (bruno.casella@unito.it)
Bruno Casella is a research associate at the Computer Science Department of the University of Turin. He received his Ph.D. in Modeling and Data Science from the same department in June 2025, funded by Leonardo Company. He graduated in Computer Engineering in 2020 with a thesis evaluating, in different scenarios, the performance of AlphaZero, a reinforcement-learning-based artificial intelligence for the game of chess capable of beating the human world champion. He also received a Master's Degree in Data Science for Management in 2021 with a thesis on Federated Transfer Learning.
Ensuring data security in current and future ICT systems requires coordinated efforts across cryptographic research, software engineering, and institutional support. This workshop presents integrated strategies that address emerging threats, such as those posed by quantum computing, and practical challenges in secure software deployment. Experimental results on post-quantum TLS and digital signatures show the performance trade-offs of adopting quantum-resistant algorithms in real-world settings. These findings complement efforts to improve container security in CI/CD pipelines through automated threat analysis and enforcement mechanisms. The workshop also highlights the role of public funding and regional initiatives—such as those led by CERICT—in enabling collaborative research and innovation. It further draws attention to challenges at the network edge—where distributed, resource-constrained systems expand the attack surface and demand tailored security for edge-cloud environments. Together, the contributions provide a structured view of how secure architectures can be designed and deployed in scalable, future-ready environments.
The purpose of this workshop is to connect researchers, practitioners, and institutional actors working on secure digital infrastructures with a focus on post-quantum cryptography, software supply chain security, and edge-cloud systems. The workshop provides a space where academic results, experimental validations, and technology transfer models can be shared and discussed. Experts from research institutions and industry will present their ongoing work on post-quantum communication protocols, container security mechanisms, and proactive defense strategies for distributed systems. Representatives from public initiatives will outline how funding programs can support collaborative development and enable the integration of these technologies in production environments. The workshop is intended to support dialogue across sectors and to promote practical adoption of secure and scalable ICT solutions.
Securing modern digital infrastructures requires a multidisciplinary approach that connects cryptographic innovation, secure software engineering, and institutional support. As technology evolves—from the emergence of quantum computing to the widespread use of microservice architectures—security strategies must adapt at both the algorithmic and system levels. At the same time, public and regional initiatives play a critical role in sustaining applied research and enabling real-world deployment of advanced solutions. This presentation brings together different key efforts that reflect this intersection: the implementation of post-quantum secure communication protocols, the integration of security mechanisms into containerized software pipelines, and the role of regional funding initiatives in enabling scalable and collaborative innovation. Together, they outline a framework for designing secure and resilient digital ecosystems, capable of withstanding emerging threats and supporting sustainable development.
Namirial has performed tests in different areas impacted by the quantum threat: we have investigated how a quantum-safe version of TLS differs from the current version, also taking into account a "hybrid" scenario. Moreover, we have investigated the performance differences between new quantum-safe signature algorithms, such as Dilithium and Falcon, and the well-known and widely adopted RSA-based signatures.
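For reference, a hedged sketch of this kind of measurement using the liboqs-python bindings. It assumes the `oqs` package is installed and that the mechanism name below is available; newer liboqs releases expose Dilithium under ML-DSA names.

```python
import time
import oqs  # liboqs-python bindings

message = b"document to be signed"
alg = "Dilithium2"  # may appear as "ML-DSA-44" in newer liboqs releases

with oqs.Signature(alg) as signer, oqs.Signature(alg) as verifier:
    public_key = signer.generate_keypair()

    t0 = time.perf_counter()
    signature = signer.sign(message)
    t1 = time.perf_counter()
    ok = verifier.verify(message, signature, public_key)
    t2 = time.perf_counter()

    # Signature size and sign/verify timings are the quantities typically
    # compared against RSA-based signatures.
    print(f"valid={ok} sig_bytes={len(signature)} "
          f"sign={1e3 * (t1 - t0):.2f}ms verify={1e3 * (t2 - t1):.2f}ms")
```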
Public and regional funding plays a key role in supporting research, innovation, and technology transfer. The Competence Center on ICT of the Campania Region (CERICT) has been actively involved in leveraging national and regional funding programs to promote initiatives in the field of Information and Communication Technologies. This presentation highlights CERICT’s approach to identifying and exploiting available funding tools to support collaborative research and industrial innovation. Through targeted calls and structured partnerships, the center has contributed to the development of projects involving universities, research institutions, and local enterprises. Several notable initiatives will be presented, with a focus on their objectives, involved stakeholders, and achieved results. Particular attention will be given to how funding mechanisms have been used not only to support technical development, but also to strengthen regional cooperation and the growth of a sustainable innovation ecosystem.
Modern applications increasingly rely on containers to support the microservices development model. Containers simplify deployment and integration, especially when used in CI/CD pipelines, where developers focus mainly on automating delivery workflows. However, security tasks are often neglected in this process, raising the risk of introducing vulnerabilities into the application, the platform, or the underlying framework. SecCo-OC addresses this gap by designing a container security architecture that can be integrated directly into the CI/CD workflow. The goal is to automate the identification and mitigation of security threats at both development time (static and dynamic analysis) and runtime (through enforcement mechanisms), ensuring that containers are securely built and maintained before being deployed. The solution also focuses on extending the capabilities of container technology. It explores virtualization techniques, controlled access to specific hardware resources, and the embedding of security services and policies inside the container. These elements contribute to a container model that balances security with functionality and performance. To enable adoption in different deployment scenarios, including edge and pervasive computing, the SecCo-OC architecture is built to support scalability, flexibility, and reliability. It leverages cloud and edge infrastructures to extend DevOps practices to security enforcement in distributed environments.
The DEFEDGE project aims to define a set of techniques for the development of secure and resilient edge-cloud systems and for their assessment based on a threat-driven approach. The main idea is to leverage the results of a guided threat modeling process to derive both the security controls and mechanisms to enforce as a mitigation for these threats and the security tests to perform in order to verify the effectiveness of controls in place. In particular, security controls selection and enforcement will follow Moving Target Defense principles, according to which the attack surface of a system is continually and proactively changed to reduce attack success probability. Security testing will exploit existing threat intelligence and attack patterns knowledge bases to derive a set of general-purpose attack procedures that can be suitably customized to test a target system. For the generation of attack procedures and their customization, the project will also explore machine learning techniques to infer new attack patterns and scenarios, in order to improve overall testing effectiveness.
Angela Bonifati is a Distinguished Professor of Computer Science at Lyon 1 University and at the CNRS Liris research lab, where she leads the Database Group. She has also been an Adjunct Professor at the University of Waterloo in Canada since 2020 and a Senior member of the French University Institute (IUF) since 2023. Her current research interests span several aspects of data management, including graph databases, knowledge graphs, data integration, and their applications to data science and artificial intelligence. She has co-authored more than 200 publications in top venues of the data management field, including five Best Paper Awards, two books, and an invited paper in ACM Sigmod Record 2018. She is a recipient of an ERC Advanced Grant 2024 dedicated to leading researchers in Europe. She is the youngest recipient of the prestigious IEEE TCDE Impact Award 2023 and a co-recipient of an ACM Research Highlights Award 2023. She is the General Chair of VLDB 2026 and was the Program Chair of IEEE ICDE 2025, ACM Sigmod 2022, and EDBT 2020. She is currently an Associate Editor for the Proceedings of VLDB Vol. 19 and for IEEE TKDE and ACM TODS. She is the Chair of the ACM Sigmod Executive Committee (2025-2029) and was the President of the EDBT Executive Board and Association (2020-2024). She is a member of the IEEE Technical Committee on Data Engineering (2024-2029) and a member of the PVLDB Board of Trustees (2024-2029).
Graphs are powerful abstractions for modeling relationships across data and enabling complex data science tasks. In this talk, I will highlight powerful declarative graph operations and delineate their counterparts in causal inference, to support data-driven, personalized decision-making across several scientific domains. I will present our work to align causal analysis with property graphs—the foundation of modern graph databases—by rethinking graph models to incorporate hypernodes, structural equations, and causality-aware query semantics. By unifying graph databases with causal reasoning, causal tasks such as interventions and counterfactuals are mapped to property graph manipulation and transformation, combining expressiveness with computational efficiency.
Dr. Horst D. Simon is an internationally recognized expert in high-performance computing and computational science, with over four decades of experience in parallel algorithms and large-scale numerical methods. After completing his Ph.D. in Mathematics at UC Berkeley, he held leadership roles across academia (Stony Brook, UC Berkeley), industry (Boeing, SGI), and national research labs (NASA Ames, Lawrence Berkeley Lab). At Berkeley Lab, he directed NERSC and served as Deputy Lab Director, earning two prestigious Gordon Bell Prizes and co-editing the twice-yearly TOP500 list of supercomputers. Since 2023, Dr. Simon has been the founding Director of ADIA Lab in Abu Dhabi, spearheading cutting-edge research in computational and data science. He continues to advance scalable algorithms that address complex scientific and societal challenges.
This presentation begins with a brief introduction to ADIA Lab, an independent research institute based in Abu Dhabi. ADIA Lab's mission is to advance fundamental and applied research in computational and data science, with a focus on addressing complex real-world challenges across domains such as climate, finance, and health. By fostering interdisciplinary collaborations and developing scalable algorithms and models, the lab seeks to drive innovation at the intersection of theory and practice. As an example of the projects underway at ADIA Lab, we then present our research on climate networks, structured around three complementary paradigms: (a) networks of data, where connections between geographic nodes are derived from statistical relationships, commonly referred to as Tsonis networks; (b) climate data over networks, where climate variables are defined on fixed topologies such as river basins or atmospheric grids—termed geophysical networks; and (c) networks for data, which leverages machine learning and statistical models grounded in network theory to analyze and interpret climate information. Special emphasis is placed on the first two frameworks. We examine how these network types are constructed, the insights they offer for understanding climate variability and model output, and their implications for climate governance. Finally, we discuss how integrating these perspectives can inform more robust analytical tools and policy strategies in the face of climate change.
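For the first paradigm (networks of data), the following compact sketch shows how a Tsonis-style correlation network can be built, with synthetic series standing in for a real gridded climate product.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Synthetic stand-in for a climate field: 50 grid points, 500 monthly anomalies.
n_points, n_steps = 50, 500
series = rng.standard_normal((n_points, n_steps))

# Tsonis-style construction: link grid points whose series are strongly correlated.
corr = np.corrcoef(series)
threshold = 0.5
adj = (np.abs(corr) > threshold) & ~np.eye(n_points, dtype=bool)

G = nx.from_numpy_array(adj.astype(int))
# In the real-data case, degree fields like this one are used to locate
# "supernodes" tied to large-scale patterns such as teleconnections.
print(sorted(dict(G.degree()).items(), key=lambda kv: -kv[1])[:5])
```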
Professor Emilio Porcu is a distinguished expert in spatial-temporal statistics and data science. He earned his Ph.D. in statistics in 2005 and became a full professor by 2012 while holding Chair positions at Newcastle University and Trinity College Dublin. Since August 2020, he has served as Professor of Statistics & Data Science at Khalifa University in Abu Dhabi, and he continues as an adjunct professor at Trinity College Dublin. Dr. Porcu leads a research group specializing in space-time covariance modeling, producing nearly 180 papers and advancing theory and application in areas such as Gaussian random fields on non-Euclidean domains, covariance functions on spheres and networks, and scalable kernel-based methods. He is also a Research Fellow at ADIA Lab, contributing to innovative data science initiatives in climate, complex systems, and computational statistics.