Deep Decision Forest

Revision as of 20:40, 25 September 2025

Title: Deep Decision Forest
Summary: Designing a deep model that uses decision trees instead of artificial neurons
Keywords: deep decision forest, explainable AI
TimeFrame: Fall 2025
References:
Prerequisites:
Author:
Supervisor: Sławomir Nowaczyk & TBD
Level: Master
Status: Open


The success of Deep Learning is largely attributed to its ability to create/extract "hierarchical features" from the data. This is achieved using artificial neurons, or perceptrons, trained with the backpropagation algorithm.

The price, however, is the very large size of the model, which translates into high computational costs and a "black-box" nature, i.e., a lack of explainability.

This project aims to explore ways to train a deep model using a chain of decision trees, stacked like layers in a neural network. Such an approach promises to significantly reduce model complexity and increase interpretability.
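To make the idea concrete, below is a minimal sketch (not the project's prescribed method) of one possible "chain of decision trees": each layer is a small random forest whose class-probability outputs are appended to the original features and passed to the next layer, playing the role of the hierarchical features produced by hidden layers in a neural network. The function names, hyperparameters, and the greedy stopping rule are illustrative assumptions only, and the sketch assumes Python with scikit-learn.

<pre>
# Sketch of a layer-wise "deep" model built from decision-tree ensembles.
# Each layer's class probabilities augment the input of the next layer.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def fit_tree_cascade(X_train, y_train, X_val, y_val, max_layers=5):
    """Greedily stack tree-ensemble layers while validation accuracy improves."""
    layers, best_acc = [], 0.0
    aug_train, aug_val = X_train, X_val
    for _ in range(max_layers):
        layer = RandomForestClassifier(n_estimators=50, max_depth=4, random_state=0)
        layer.fit(aug_train, y_train)
        acc = accuracy_score(y_val, layer.predict(aug_val))
        if layers and acc <= best_acc:  # stop when a new layer no longer helps
            break
        layers.append(layer)
        best_acc = acc
        # Augment the next layer's input with this layer's class probabilities,
        # mimicking the "hierarchical features" of hidden layers.
        aug_train = np.hstack([X_train, layer.predict_proba(aug_train)])
        aug_val = np.hstack([X_val, layer.predict_proba(aug_val)])
    return layers


def predict_tree_cascade(layers, X):
    """Run the input through the cascade; the last layer makes the final decision."""
    aug = X
    for i, layer in enumerate(layers):
        if i == len(layers) - 1:
            return layer.predict(aug)
        aug = np.hstack([X, layer.predict_proba(aug)])


if __name__ == "__main__":
    # Synthetic data, only to show that the cascade trains end to end.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
    cascade = fit_tree_cascade(X_tr, y_tr, X_va, y_va)
    acc = accuracy_score(y_va, predict_tree_cascade(cascade, X_va))
    print(f"layers: {len(cascade)}, validation accuracy: {acc:.3f}")
</pre>

Note that this simplified version augments each layer with probabilities computed on the layer's own training data, which can be optimistic; cascade-forest methods in the literature (e.g., gcForest) typically use cross-validated class vectors for the augmentation instead. Exploring such choices is part of the project.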

It is a continuation of a project done in 2024...