Timeseries XAI in Cybersecurity and Industry


Title Timeseries XAI in Cybersecurity and Industry
Summary Timeseries data analysis with XAI in Cybersecurity and Industry
Keywords Cybersecurity, XAI, timeseries, industry
TimeFrame Spring 2026
References https://www.sciencedirect.com/science/article/pii/S1566253523001148

https://www.nature.com/articles/s41597-025-04603-x

Prerequisites
Author
Supervisor Grzegorz J. Nalepa, Prayag Tawari, Aurora Esteban Toscano
Level Master
Status Open


Time series data is ubiquitous, from industrial monitoring systems and energy networks to cybersecurity systems and user activity traces. Understanding temporal patterns is crucial for detecting anomalies, anticipating failures, and supporting human decision-making. Yet the increasing complexity of time series models makes them difficult to interpret and trust. Industrial and cybersecurity systems have clearly become an important area of AI applications in recent years: from an engineering perspective, they produce data volumes that can only be analyzed with AI methods.

Explainable Artificial Intelligence (XAI) aims to make models more transparent by uncovering the "why" behind their predictions. While explainability methods are well studied for tabular and image data, time series explanations remain a significant open challenge. Temporal dependencies, non-stationarity, and concept drift make it difficult to represent and communicate model reasoning to domain experts.
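To make the challenge concrete, one widely used model-agnostic baseline is occlusion: mask a segment of the series, re-score it, and treat the change in the model's output as that segment's relevance. The following is a minimal sketch in Python; the scoring function is a toy stand-in for a trained model, not a method from the references above.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

def model_score(x):
    # Toy stand-in for a trained detector: responds to high-frequency energy.
    return np.abs(np.diff(x)).mean()

def occlusion_saliency(x, score_fn, window=10):
    # Replace each window with its local mean and record the score drop:
    # a simple, model-agnostic relevance estimate per time segment.
    base = score_fn(x)
    saliency = np.zeros(len(x))
    for start in range(0, len(x) - window + 1, window):
        occluded = x.copy()
        occluded[start:start + window] = occluded[start:start + window].mean()
        saliency[start:start + window] = base - score_fn(occluded)
    return saliency

x = np.sin(np.linspace(0, 4 * np.pi, 120))
x[60:70] += rng.normal(0.0, 1.0, 10)  # inject a noisy burst
saliency = occlusion_saliency(x, model_score)
print("most relevant segment starts at t =", int(np.argmax(saliency)))
</syntaxhighlight>

The same loop works with any scoring function, which is why occlusion-style methods are a common first baseline before domain-specific explanations are developed.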

This project will explore explainable learning and reasoning for time series data, with several possible research directions depending on the student's interests and available datasets:

- Characterising domain-specific dynamics: analysing how time series from different domains (e.g., industrial processes vs. cybersecurity traffic) differ in variability, drifts, or anomaly structure.
- Representation learning for interpretability: studying prototypes, motifs, or symbolic rules that capture meaningful temporal patterns.
- Counterfactual explanations: developing or adapting methods (e.g., genetic algorithms, motif transformations, gradient perturbations) to generate realistic "what-if" scenarios for time series (a toy sketch of this direction follows the list).
- Explainable anomaly detection: integrating interpretability into models that identify abnormal or critical events over time.
- Concept drift and model evolution: explaining how and why model behavior changes as time series distributions shift.
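As a toy illustration of the counterfactual direction above, the sketch below searches for a small perturbation that makes an "anomalous" series look normal again. The anomaly scorer (a max z-score judged against the classic 3-sigma rule) is a hypothetical stand-in for a trained model, and the greedy random search stands in for the genetic or gradient-based methods a real thesis would study.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def anomaly_score(x):
    # Hypothetical stand-in for a trained model: max absolute z-score.
    z = (x - x.mean()) / (x.std() + 1e-8)
    return np.abs(z).max()

def counterfactual(x, score_fn, target=3.0, steps=1000, sigma=0.05):
    # Greedy random search: accept small perturbations that lower the
    # anomaly score, stopping once the series falls under the 3-sigma rule.
    best = x.copy()
    for _ in range(steps):
        candidate = best + rng.normal(0.0, sigma, size=x.shape)
        if score_fn(candidate) < score_fn(best):
            best = candidate
        if score_fn(best) < target:
            break
    return best

x = np.sin(np.linspace(0, 6 * np.pi, 200))
x[120] += 4.0  # inject a spike the scorer flags as anomalous

cf = counterfactual(x, anomaly_score)
print("original score:       %.2f" % anomaly_score(x))
print("counterfactual score: %.2f" % anomaly_score(cf))
print("L2 change needed:     %.2f" % np.linalg.norm(cf - x))
</syntaxhighlight>

The difference between the original and the counterfactual series localises what would have to change for the model's verdict to flip, which is exactly the "what-if" information a domain expert can act on.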

The work will be done in connection with the KEEPER project, using data from industrial partners such as Volvo, HMS, and Toyota [1]. The project may also use public datasets such as the Numenta Anomaly Benchmark or the UCR/UEA archive.
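For quick prototyping, the public benchmarks above can be pulled programmatically. A minimal sketch, assuming the sktime package and its load_UCR_UEA_dataset helper are available; the choice of the GunPoint dataset is only an example:

<syntaxhighlight lang="python">
# Assumes sktime is installed (pip install sktime); load_UCR_UEA_dataset
# downloads the named dataset from the UCR/UEA time series archive.
from sktime.datasets import load_UCR_UEA_dataset

# GunPoint is a small univariate dataset, convenient for first experiments.
X, y = load_UCR_UEA_dataset(name="GunPoint", return_X_y=True)
print(X.shape, sorted(set(y)))
</syntaxhighlight>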

It is encouraged that the thesis results in scientific publications, possibly developed in collaboration with external stakeholders.