Research on the Theory and Application of Deep Interactive Learning


Abstract

Knowledge distillation (KD), in which a small network (the student) is trained to mimic a larger one (the teacher) with high accuracy, has been widely used in many fields. However, the interaction between teacher and student remains weak. This study finds that most existing methods, such as Deep Mutual Learning (DML), construct the loss function only from the soft output distributions of the networks; few pay attention to sharing the hidden-layer features as well. As an improvement on DML, this work proposes a new online distillation method, Deep Interactive Learning (DIL), which interacts more deeply than DML: each model exposes not only its output-layer features but also its hidden-layer features, and transfers them to the other models to obtain the corresponding softer distributions or features for distillation. Extensive experiments on several data sets show that the method improves accuracy by almost 3% on CIFAR and 2% on ImageNet, demonstrating its effectiveness.
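The abstract only sketches the objective, but one plausible reading is a DML-style mutual distillation loss augmented with a hidden-feature exchange term. The PyTorch sketch below illustrates that reading; the function name dil_losses, the MSE feature-matching term, and the hyperparameters T, alpha, and beta are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch of per-model losses for one DIL-style training step
# with two peer networks, assuming each model exposes both its logits
# and a hidden feature map of matching shape. Hypothetical formulation.
import torch
import torch.nn.functional as F

def dil_losses(logits_a, logits_b, feat_a, feat_b, labels,
               T=3.0, alpha=1.0, beta=1.0):
    # Supervised cross-entropy against the ground-truth labels.
    ce_a = F.cross_entropy(logits_a, labels)
    ce_b = F.cross_entropy(logits_b, labels)

    # Peer-to-peer distillation on temperature-softened output
    # distributions, as in Deep Mutual Learning.
    kl_a = F.kl_div(F.log_softmax(logits_a / T, dim=1),
                    F.softmax(logits_b.detach() / T, dim=1),
                    reduction="batchmean") * T * T
    kl_b = F.kl_div(F.log_softmax(logits_b / T, dim=1),
                    F.softmax(logits_a.detach() / T, dim=1),
                    reduction="batchmean") * T * T

    # Hidden-feature exchange: each model also regresses toward the
    # peer's intermediate features -- one plausible reading of
    # "transfer the features to other models ... for distillation".
    feat_a_loss = F.mse_loss(feat_a, feat_b.detach())
    feat_b_loss = F.mse_loss(feat_b, feat_a.detach())

    loss_a = ce_a + alpha * kl_a + beta * feat_a_loss
    loss_b = ce_b + alpha * kl_b + beta * feat_b_loss
    return loss_a, loss_b

# Example with random tensors (shapes are illustrative only).
logits_a, logits_b = torch.randn(8, 10), torch.randn(8, 10)
feat_a, feat_b = torch.randn(8, 64), torch.randn(8, 64)
labels = torch.randint(0, 10, (8,))
loss_a, loss_b = dil_losses(logits_a, logits_b, feat_a, feat_b, labels)
```

Each peer would then backpropagate its own loss and both models are updated jointly, so the "soft" supervision signal evolves along with the peers rather than coming from a fixed pre-trained teacher.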

References

ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
Deep Mutual Learning
Quantized Convolutional Neural Networks for Mobile Devices

Citation (APA)

Wang, Z., & Guo, F. (2021). Research on the Theory and Application of Deep Interactive Learning. In Journal of Physics: Conference Series (Vol. 1982). IOP Publishing Ltd. https://doi.org/10.1088/1742-6596/1982/1/012085
