Explainable Decision Forest

Latest revision as of 21:18, 24 August 2024

Title: Explainable Decision Forest
Summary: Designing an explainable decision forest classifier for fault detection
Keywords: decision forest, explainable AI, fault detection
TimeFrame: Autumn 2024
References:
Prerequisites:
Author:
Supervisor: Hamid Sarmadi, Sepideh Pashami, Sławomir Nowaczyk
Level:
Status: Open


An algorithm to train separate "explainable" decision trees for detecting different types of faults has already been developed. We would like to extend this algorithm into an ensemble (decision forest) method in which the decision trees are aware of each other.

You can read more about the original algorithm at the following link: https://hhse-my.sharepoint.com/:b:/g/personal/hamid_sarmadi_hh_se/EWlwNDHNrnNMqBQ93dNdu9kBS61tWwF56a-rI7A-kPEpRA?e=EpEFQn
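
The original algorithm is described only in the linked report, so the sketch below is merely one possible illustration of what "trees that are aware of each other" could mean: a cascade in which each fault type gets its own shallow one-vs-rest tree, and trees trained later receive the earlier trees' predictions as additional input features. The class name AwareDecisionForest, the feature-augmentation scheme, and the use of scikit-learn are illustrative assumptions, not the method from the report.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.tree import DecisionTreeClassifier


class AwareDecisionForest:
    """Hypothetical cascade of shallow, per-fault decision trees.

    Each tree is trained one-vs-rest for a single fault type; trees trained
    later see the earlier trees' predictions as extra input features, which
    is one simple way to make the trees "aware of each other".
    """

    def __init__(self, fault_types, max_depth=3):
        self.fault_types = fault_types   # e.g. ["bearing", "valve", "sensor"]
        self.max_depth = max_depth       # shallow trees stay interpretable
        self.trees = {}

    def fit(self, X, y):
        X = np.asarray(X)
        extra = np.empty((X.shape[0], 0))  # predictions of earlier trees
        for fault in self.fault_types:
            target = (np.asarray(y) == fault).astype(int)  # one-vs-rest label
            features = np.hstack([X, extra])
            tree = DecisionTreeClassifier(max_depth=self.max_depth)
            tree.fit(features, target)
            self.trees[fault] = tree
            # Append this tree's decisions so later trees can condition on them.
            extra = np.hstack([extra, tree.predict(features)[:, None]])
        return self

    def predict(self, X):
        X = np.asarray(X)
        extra = np.empty((X.shape[0], 0))
        detections = {}
        for fault in self.fault_types:  # same order as in fit()
            features = np.hstack([X, extra])
            pred = self.trees[fault].predict(features)
            detections[fault] = pred    # 1 = this fault detected
            extra = np.hstack([extra, pred[:, None]])
        return detections
</syntaxhighlight>

In this sketch each individual tree remains a small, inspectable model, while the cascade lets the explanation of one fault refer explicitly to the detections made for other faults.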