Review of deep learning: concepts, CNN architectures, challenges, applications, future directions

Abstract

In the last few years, the deep learning (DL) computing paradigm has been deemed the Gold Standard in the machine learning (ML) community. Moreover, it has gradually become the most widely used computational approach in the field of ML, achieving outstanding results on several complex cognitive tasks and matching or even beating human performance. One of the benefits of DL is its ability to learn from massive amounts of data. The DL field has grown rapidly in the last few years and has been used extensively to successfully address a wide range of traditional applications. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art in DL, each of them has tackled only one aspect of the field, which leads to an overall lack of knowledge about it. Therefore, in this contribution, we propose a more holistic approach in order to provide a more suitable starting point from which to develop a full understanding of DL. Specifically, this review attempts to provide a more comprehensive survey of the most important aspects of DL, including the enhancements recently added to the field. In particular, this paper outlines the importance of DL and presents the types of DL techniques and networks. It then presents convolutional neural networks (CNNs), the most utilized DL network type, and describes the development of CNN architectures together with their main features, starting with the AlexNet network and closing with the High-Resolution network (HR.Net). Finally, we present the challenges and suggested solutions to help researchers understand the existing research gaps. This is followed by a list of the major DL applications. Computational tools including FPGAs, GPUs, and CPUs are summarized along with a description of their influence on DL. The paper ends with the evolution matrix, benchmark datasets, and a summary and conclusion.
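To make the CNN family surveyed in the review more concrete, the sketch below builds a deliberately tiny convolutional classifier from stacked convolution, ReLU, and pooling layers feeding a linear output layer. The layer sizes, the 32x32 RGB input shape, and the class count are illustrative assumptions only; they are not taken from the paper or from any specific architecture such as AlexNet or HR.Net.

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A minimal CNN sketch: two conv/ReLU/pool blocks feeding a linear classifier."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn 16 local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                # (N, 32, 8, 8)
        return self.classifier(x.flatten(1))

# Quick shape check on a batch of hypothetical 32x32 RGB images.
logits = TinyCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])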


Citation (APA)

Alzubaidi, L., Zhang, J., Humaidi, A. J., Al-Dujaili, A., Duan, Y., Al-Shamma, O., … Farhan, L. (2021). Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. Journal of Big Data, 8(1). https://doi.org/10.1186/s40537-021-00444-8

Readers' Seniority

PhD / Post grad / Masters / Doc: 1319 (63%)
Lecturer / Post doc: 345 (17%)
Researcher: 285 (14%)
Professor / Associate Prof.: 140 (7%)

Readers' Discipline

Computer Science: 981 (50%)
Engineering: 789 (41%)
Biochemistry, Genetics and Molecular Bi...: 89 (5%)
Agricultural and Biological Sciences: 84 (4%)

Article Metrics

Mentions
News Mentions: 7
References: 1

Social Media
Shares, Likes & Comments: 20
