<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://mw.hh.se/caisr/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Aurora</id>
	<title>ISLAB/CAISR - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://mw.hh.se/caisr/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Aurora"/>
	<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Special:Contributions/Aurora"/>
	<updated>2026-04-04T03:54:32Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.13</generator>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Timeseries_XAI_in_Cybersecurity_and_Industry&amp;diff=5621</id>
		<title>Timeseries XAI in Cybersecurity and Industry</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Timeseries_XAI_in_Cybersecurity_and_Industry&amp;diff=5621"/>
		<updated>2025-10-24T13:30:08Z</updated>

		<summary type="html">&lt;p&gt;Aurora: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Timeseries data analysis with XAI in Cybersecurity and Industry&lt;br /&gt;
|Keywords=Cybersecurity, XAI, timeseries, industry&lt;br /&gt;
|TimeFrame=Spring 2026&lt;br /&gt;
|References=https://www.sciencedirect.com/science/article/pii/S1566253523001148&lt;br /&gt;
https://www.nature.com/articles/s41597-025-04603-x&lt;br /&gt;
|Supervisor=Grzegorz J. Nalepa, Prayag Tawari, Aurora Esteban Toscano&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Time series data is ubiquitous — from industrial monitoring systems and energy networks to cybersecurity systems and user activity traces. Understanding temporal patterns is crucial for detecting anomalies, anticipating failures, and supporting human decision-making. Yet, the increasing complexity of time series models makes them difficult to interpret and trust.&lt;br /&gt;
Industrial and cybersecurity systems have clearly become an important area of AI applications in recent years. From an engineering perspective, they produce large amounts of data that can only be analyzed with AI methods.&lt;br /&gt;
&lt;br /&gt;
Explainable Artificial Intelligence (XAI) aims to make models more transparent by uncovering the why behind their predictions. While explainability methods are well-studied for tabular and image data, time series explanations remain a significant open challenge. Temporal dependencies, non-stationarity, and concept drift make it difficult to represent and communicate model reasoning to domain experts.&lt;br /&gt;
&lt;br /&gt;
This project will explore explainable learning and reasoning for time series data, with several possible research directions depending on the student’s interests and available datasets:&lt;br /&gt;
- Characterising domain-specific dynamics: analysing how time series from different domains (e.g., industrial processes vs. cybersecurity traffic) differ in variability, drifts, or anomaly structure.&lt;br /&gt;
- Representation learning for interpretability: studying prototypes, motifs, or symbolic rules that capture meaningful temporal patterns.&lt;br /&gt;
- Counterfactual explanations: developing or adapting methods (e.g., genetic algorithms, motif transformations, gradient perturbations) to generate realistic “what-if” scenarios for time series.&lt;br /&gt;
- Explainable anomaly detection: integrating interpretability into models that identify abnormal or critical events over time.&lt;br /&gt;
- Concept drift and model evolution: explaining how and why model behavior changes as time series distributions shift.&lt;br /&gt;
&lt;br /&gt;
The work will be done in connection with the KEEPER project [https://www.hh.se/english/research/our-research/research-at-the-school-of-information-technology/technology-area-aware-intelligent-systems/research-projects-within-aware-intelligent-systems/keeper---knowledge-creation-for-efficient-and-predictable-industrial-operations-.html], using data from our industrial partners such as Volvo, HMS, and Toyota. The project may also use public data such as the Numenta Anomaly Benchmark or the UCR/UEA archive.&lt;br /&gt;
&lt;br /&gt;
It is encouraged that this thesis result in scientific publications, possibly developed in collaboration with external stakeholders.&lt;/div&gt;</summary>
		<author><name>Aurora</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Aurora_Esteban_Toscano&amp;diff=5584</id>
		<title>Aurora Esteban Toscano</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Aurora_Esteban_Toscano&amp;diff=5584"/>
		<updated>2025-10-13T10:51:44Z</updated>

		<summary type="html">&lt;p&gt;Aurora: Created page with &amp;quot;Hi, this is Aurora! Postdoc researcher in Computer Science at Halmstad University. I work in machine learning and temporal data.  * e-mail: aurora.esteban.toscano@hh.se  * Aff...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Hi, this is Aurora! Postdoc researcher in Computer Science at Halmstad University. I work in machine learning and temporal data.&lt;br /&gt;
&lt;br /&gt;
* e-mail: aurora.esteban.toscano@hh.se&lt;br /&gt;
&lt;br /&gt;
* Affiliation: CAISR, ISDD, ITE&lt;br /&gt;
&lt;br /&gt;
* Projects: KEEPER&lt;br /&gt;
&lt;br /&gt;
* Publications: https://scholar.google.es/citations?user=vTO55WcAAAAJ&amp;amp;hl=en&lt;/div&gt;</summary>
		<author><name>Aurora</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=User:Aurora&amp;diff=5583</id>
		<title>User:Aurora</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=User:Aurora&amp;diff=5583"/>
		<updated>2025-10-13T08:29:17Z</updated>

		<summary type="html">&lt;p&gt;Aurora: Aurora Esteban Toscano&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Hi, this is Aurora! Postdoc researcher in Computer Science at Halmstad University. I work in machine learning and temporal data.&lt;br /&gt;
&lt;br /&gt;
* e-mail: aurora.esteban.toscano@hh.se&lt;br /&gt;
&lt;br /&gt;
* Affiliation: CAISR, ISDD, ITE&lt;br /&gt;
&lt;br /&gt;
* Projects: KEEPER&lt;br /&gt;
&lt;br /&gt;
* Publications: https://scholar.google.es/citations?user=vTO55WcAAAAJ&amp;amp;hl=en&lt;/div&gt;</summary>
		<author><name>Aurora</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Lightweight_foundation_model_for_time_series_classification&amp;diff=5582</id>
		<title>Lightweight foundation model for time series classification</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Lightweight_foundation_model_for_time_series_classification&amp;diff=5582"/>
		<updated>2025-10-13T08:25:56Z</updated>

		<summary type="html">&lt;p&gt;Aurora: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Lightweight foundation model for time series classification&lt;br /&gt;
|TimeFrame=Fall 2025&lt;br /&gt;
|Supervisor=Aurora Esteban Toscano, Slawomir Nowaczyk&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Time series data is everywhere — from industrial sensors and financial markets to healthcare monitoring and environmental systems. Classifying time series patterns is key to many applications, such as detecting equipment failures, predicting stock movements, or diagnosing medical conditions. However, training accurate models for time series classification (TSC) often requires large, labeled datasets, which are expensive and time-consuming to obtain.&lt;br /&gt;
&lt;br /&gt;
Foundation models have transformed fields like natural language processing and computer vision by enabling powerful generalization from large-scale pretraining. In time series, similar models have started to emerge, mainly for forecasting tasks. Such models can be adapted efficiently to new datasets and domains — a huge advantage when data is scarce or labeling is difficult.&lt;br /&gt;
&lt;br /&gt;
Existing time series foundation models primarily target forecasting (e.g., TimeGPT, Chronos), whereas classification-focused foundation models remain underexplored. Recent efforts like &amp;quot;Moment&amp;quot; demonstrate the potential of pretraining on large, diverse time series collections, but most current approaches rely on transformer architectures, which are computationally heavy and memory-intensive.&lt;br /&gt;
&lt;br /&gt;
This project proposes to develop a fast, lightweight foundation model for time series classification, following principles introduced by efficient architectures such as &amp;quot;Tiny Time Mixers&amp;quot;. The objective is to explore whether compact, mixer-style models can capture rich temporal representations while maintaining strong generalization and computational efficiency.&lt;br /&gt;
&lt;br /&gt;
The work will be done in connection with the KEEPER project, using data from our industrial partners such as Volvo, HMS, and Toyota.&lt;br /&gt;
https://www.hh.se/english/research/our-research/research-at-the-school-of-information-technology/technology-area-aware-intelligent-systems/research-projects-within-aware-intelligent-systems/keeper---knowledge-creation-for-efficient-and-predictable-industrial-operations-.html&lt;/div&gt;</summary>
		<author><name>Aurora</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Deep_Decision_Forest&amp;diff=5581</id>
		<title>Deep Decision Forest</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Deep_Decision_Forest&amp;diff=5581"/>
		<updated>2025-10-13T08:24:31Z</updated>

		<summary type="html">&lt;p&gt;Aurora: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Designing a deep model that uses decision trees instead of artificial neurons&lt;br /&gt;
|Keywords=deep decision forest, explainable AI&lt;br /&gt;
|TimeFrame=Fall 2025&lt;br /&gt;
|Supervisor=Sławomir Nowaczyk, Aurora Esteban Toscano&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
The success of Deep Learning is largely attributed to the ability to create/extract &amp;quot;hierarchical features&amp;quot; from the data. This is achieved using artificial neurons, or perceptrons, together with the backpropagation algorithm.&lt;br /&gt;
&lt;br /&gt;
The price, however, is the very large size of such models, which translates into high computational costs and a &amp;quot;black-box&amp;quot; nature, i.e., a lack of explainability.&lt;br /&gt;
&lt;br /&gt;
This project aims to explore ways to train a deep model using a chain of decision trees, like layers in a neural network. It promises to significantly reduce model complexity and increase interpretability.&lt;br /&gt;
&lt;br /&gt;
It&amp;#039;s a continuation of a project done in 2024...&lt;/div&gt;</summary>
		<author><name>Aurora</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=User:Aurora&amp;diff=5580</id>
		<title>User:Aurora</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=User:Aurora&amp;diff=5580"/>
		<updated>2025-10-13T08:21:19Z</updated>

		<summary type="html">&lt;p&gt;Aurora: Aurora Esteban Toscano&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Hi, this is Aurora! Postdoc researcher in Computer Science at Halmstad University. I work in machine learning and temporal data.&lt;br /&gt;
&lt;br /&gt;
* Affiliation: CAISR, ISDD, ITE&lt;br /&gt;
&lt;br /&gt;
* Projects: KEEPER&lt;br /&gt;
&lt;br /&gt;
* Publications: https://scholar.google.es/citations?user=vTO55WcAAAAJ&amp;amp;hl=en&lt;/div&gt;</summary>
		<author><name>Aurora</name></author>
	</entry>
</feed>