Publications:Stacked Ensemble of Recurrent Neural Networks for Predicting Turbocharger Remaining Useful Life

From ISLAB/CAISR




Title: Stacked Ensemble of Recurrent Neural Networks for Predicting Turbocharger Remaining Useful Life
Author: Peyman Sheikholharam Mashhadi, Sławomir Nowaczyk, Sepideh Pashami
Year: 2019
PublicationType: Journal Paper
Journal: Applied Sciences
DOI: http://dx.doi.org/10.3390/app10010069
Diva url: http://hh.diva-portal.org/smash/record.jsf?searchId=1&pid=diva2:1381884
Abstract:

Predictive Maintenance (PM) is a proactive maintenance strategy that tries to minimize a system’s downtime by predicting failures before they happen. It uses data from sensors to measure the component’s state of health and make forecasts about its future degradation. However, existing PM methods typically focus on individual measurements, while it is natural to assume that a history of measurements carries more information than a single one. This paper aims at incorporating such information into PM models. In practice, especially in the automotive domain, diagnostic models have low performance due to a large amount of noise in the data and limited sensing capability. To address this issue, this paper proposes to use a specific type of ensemble learning known as a Stacked Ensemble. The idea is to aggregate predictions of multiple models—consisting of Long Short-Term Memory (LSTM) and Convolutional-LSTM—via a meta model, in order to boost performance. A Stacked Ensemble model performs well when its base models are as diverse as possible. To this end, each such model is trained using a specific combination of the following three aspects: feature subsets, past dependency horizon, and model architectures. Experimental results demonstrate the benefits of the proposed approach on a case study of heavy-duty truck turbochargers.
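The core idea of stacking described in the abstract can be sketched in a few lines. This is a minimal illustration only, using synthetic data and a linear least-squares meta model in place of the paper's LSTM and Conv-LSTM base learners; all names and numbers below are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical setup: three base models (standing in for the paper's diverse
# LSTM / Conv-LSTM variants) each estimate remaining useful life (RUL) for
# 100 units. Here the base predictions are simulated as noisy versions of a
# synthetic ground truth (an assumption for this sketch).
rng = np.random.default_rng(0)
true_rul = rng.uniform(0.0, 500.0, size=100)
base_preds = np.stack(
    [true_rul + rng.normal(0.0, noise, size=100) for noise in (20.0, 40.0, 60.0)],
    axis=1,  # shape (100, 3): one column of predictions per base model
)

# Meta model: combine the base predictions with weights fitted by least
# squares -- the "stacking" step that aggregates the ensemble's outputs.
weights, *_ = np.linalg.lstsq(base_preds, true_rul, rcond=None)
ensemble_pred = base_preds @ weights

def rmse(pred):
    return float(np.sqrt(np.mean((pred - true_rul) ** 2)))

# On the fitting data, the least-squares combination can do no worse than
# any single base model, since each base model is a feasible weight vector.
best_base_rmse = min(rmse(base_preds[:, i]) for i in range(base_preds.shape[1]))
ensemble_rmse = rmse(ensemble_pred)
```

In the paper the diversity of base models comes from varying feature subsets, past dependency horizons, and architectures; here diversity is only imitated by different noise levels.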