Neural networks require large datasets to learn features, which demands substantial computational power; even with high-end Graphical Processing Units (GPUs), some use cases require days of continuous training. This can be addressed either by making hardware-specific changes to the code or by replacing the algorithm used. In this paper we propose a novel loss function called Nth Absolute Root Mean Error (NARME), which speeds up training for a class of supervised learning problems, specifically regression (predicting the value of a continuous variable). In this function, we take the nth root of the absolute difference between the predicted and the actual output, then average over all data points in the dataset, effectively starting gradient descent from a much lower loss value than other commonly used regression losses. NARME places the loss after the first iteration at a point closer to the minimum (the goal of the training process), and then requires only a small number of descent steps in the n-dimensional parameter space to reach the global minimum. NARME has been found to work with commonly used neural network architectures and is shown to work exceptionally well with Neural Arithmetic Logic Unit (NALU) type networks. Using this loss function in regression problems speeds up training by roughly a factor of ten in most cases, owing to the small number of steps it requires to reach the minimum.
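As a rough illustration of the definition above, the sketch below implements NARME exactly as the abstract describes it: the nth root of the absolute prediction error, averaged over the dataset. The function name, the choice of n, and the NumPy formulation are our own assumptions for illustration; the paper treats n as a hyperparameter of the loss.

```python
import numpy as np

def narme(y_true, y_pred, n=5):
    """Nth Absolute Root Mean Error (NARME), per the abstract:
    mean over all data points of |y - y_hat|^(1/n).

    n=5 is an illustrative choice, not a value from the paper.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # nth root of the absolute error, averaged over the dataset
    return np.mean(np.abs(y_true - y_pred) ** (1.0 / n))

# Example: the nth root compresses large errors toward 1, so the
# initial loss value is far smaller than, e.g., mean squared error.
y_true = np.array([10.0, 20.0, 30.0])
y_pred = np.array([12.0, 18.0, 35.0])
print(narme(y_true, y_pred))            # NARME: ~1.23
print(np.mean((y_true - y_pred) ** 2))  # MSE for comparison: 11.0
```

The comparison with MSE shows the effect the abstract claims: the loss surface starts at a much lower value, which is the intuition behind the reduced number of descent steps.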
Choudhury, S. D., Pandey, S., Mehrotra, K., Raj, C., & Rajeev, S. (2019). Reducing regression time in neural network using Nth absolute root mean error. International Journal of Innovative Technology and Exploring Engineering, 9(1), 4981–4985. https://doi.org/10.35940/ijitee.J9626.119119