Deep Cascade of Extra Trees


Abstract

Deep neural networks have recently become popular because of their success in domains such as image and speech recognition, which has led many to wonder whether other learners could benefit from deep, layered architectures. In this paper, we propose the Deep Cascade of Extra Trees (DCET) model. Representation learning in deep neural networks relies largely on layer-by-layer processing of raw features. Inspired by this, DCET uses a deep cascade-of-forests structure, in which each level receives the best feature information produced by the cascade of forests at the preceding level. Experiments show that DCET's performance is quite robust to hyper-parameter settings; in most cases, even across datasets from different domains, it achieves excellent performance with the same default settings.
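
The cascade structure described in the abstract can be illustrated with a short sketch. The following Python example uses scikit-learn's ExtraTreesClassifier as the forest learner and grows levels greedily: each level's class-probability outputs are concatenated with the raw features and passed to the next level, and growth stops when validation accuracy no longer improves. This is an approximation in the spirit of deep-forest cascades; the paper's specific "best feature information" selection step and its exact hyper-parameters are not reproduced here.

# Minimal sketch of a cascade of extra-trees levels, in the spirit of DCET.
# Assumption: each level appends its forests' class-probability outputs to the
# raw features before passing them on; the paper's exact feature-selection
# step between levels is not reproduced.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def fit_cascade(X_train, y_train, X_val, y_val, n_levels=5, n_forests=2):
    """Grow cascade levels while validation accuracy keeps improving."""
    levels, best_acc = [], 0.0
    aug_train, aug_val = X_train, X_val
    for _ in range(n_levels):
        forests = [
            ExtraTreesClassifier(n_estimators=200, random_state=seed).fit(aug_train, y_train)
            for seed in range(n_forests)
        ]
        # Class-probability vectors act as this level's learned representation.
        # (A k-fold scheme would give less optimistic training-set probabilities;
        # in-sample predictions are used here to keep the sketch short.)
        train_probs = np.hstack([f.predict_proba(aug_train) for f in forests])
        val_probs = np.hstack([f.predict_proba(aug_val) for f in forests])
        # Average the forests' probabilities to score this level on validation data.
        mean_val = val_probs.reshape(len(X_val), n_forests, -1).mean(axis=1)
        acc = accuracy_score(y_val, np.argmax(mean_val, axis=1))
        if acc <= best_acc:  # stop once another level no longer helps
            break
        best_acc, levels = acc, levels + [forests]
        aug_train = np.hstack([X_train, train_probs])  # raw features + level output
        aug_val = np.hstack([X_val, val_probs])
    return levels, best_acc


X, y = load_breast_cancer(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
cascade, acc = fit_cascade(X_tr, y_tr, X_va, y_va)
print(f"levels grown: {len(cascade)}, validation accuracy: {acc:.3f}")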

Cite

APA: Berrouachedi, A., Jaziri, R., & Bernard, G. (2019). Deep Cascade of Extra Trees. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11607 LNAI, pp. 117–129). Springer Verlag. https://doi.org/10.1007/978-3-030-26142-9_11
