<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://mw.hh.se/caisr/index.php?action=history&amp;feed=atom&amp;title=Explainable_AI_by_Training_Introspection</id>
	<title>Explainable AI by Training Introspection - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://mw.hh.se/caisr/index.php?action=history&amp;feed=atom&amp;title=Explainable_AI_by_Training_Introspection"/>
	<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Explainable_AI_by_Training_Introspection&amp;action=history"/>
	<updated>2026-04-04T15:28:34Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.35.13</generator>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Explainable_AI_by_Training_Introspection&amp;diff=5115&amp;oldid=prev</id>
		<title>Jens: Created page with &quot;{{StudentProjectTemplate |Summary=Research and development of novel XAI methods based on training process information |Keywords=XAI, Neural Networks |Supervisor=Jens Lundströ...&quot;</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Explainable_AI_by_Training_Introspection&amp;diff=5115&amp;oldid=prev"/>
		<updated>2022-10-03T08:41:40Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;{{StudentProjectTemplate |Summary=Research and development of novel XAI methods based on training process information |Keywords=XAI, Neural Networks |Supervisor=Jens Lundströ...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Research and development of novel XAI methods based on training process information&lt;br /&gt;
|Keywords=XAI, Neural Networks&lt;br /&gt;
|Supervisor=Jens Lundström, Peyman Mashhadi, Amira Soliman, Atiye Sadat Hashemi&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
As machine learning has become increasingly successful in commercial applications over the last decades, the demand for model explainability and interpretability has also grown. In many cases, for a decision support system to be credible and useful, its predictions need to be accompanied by explanations. This need has sparked enormous activity in the field of Explainable AI (XAI), both in industry and in AI/ML research, over the past several years. Current XAI methods focus on utilizing the end result of the training process, i.e. the final trained model. In this master thesis we explore the hypothesis that explanations can be revealed by examining the full trajectory of the model training process. The thesis will explore different data modalities, model types, and explainability aspects.&lt;/div&gt;</summary>
		<author><name>Jens</name></author>
	</entry>
</feed>