The performance of a multi-layer feed-forward neural network (MLP) depends on how it is optimized. Optimizing an MLP, including its structure, is a tedious task, as there are no explicit rules for deciding the number of layers and the number of neurons in each layer. Further, if the error function is multi-modal, the conventional gradient descent rule may yield only locally optimal solutions, which can result in poorer network performance. In this paper, a novel approach is adopted in which a recently developed meta-heuristic optimization technique, the Grey Wolf Optimizer (GWO), is used to optimize the weights of the MLP network. Meta-heuristic algorithms are known to be very efficient at finding globally optimal solutions to highly non-linear optimization problems. In this work, the MLP is optimized by varying the number of hidden neurons layer by layer, and the best performance is obtained using the GWO algorithm. The resulting optimal MLP structure is 13-6-1, where 13 is the number of neurons in the input layer, 6 is the number of neurons in the hidden layer, and 1 is the number of neurons in the output layer. A single hidden layer is found to give better results than multiple hidden layers. The performance of the optimized GWO-MLP network is investigated on three different datasets, namely the UCI Cleveland benchmark dataset, the UCI Statlog benchmark dataset, and a local dataset from Ruby Hall Clinic. On comparison, the proposed approach is found to be superior to previously reported works in terms of accuracy and mean squared error (MSE).
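To make the approach concrete, below is a minimal sketch of training a 13-6-1 MLP with GWO, assuming the standard GWO position-update equations (alpha, beta, and delta wolves guiding the pack, with the coefficient a decreasing linearly from 2 to 0) and sigmoid activations. The helper names (gwo, mlp_forward, fitness) are illustrative, and the random placeholder data merely stands in for the 13-feature heart-disease records used in the paper; this is not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(X, w, n_in=13, n_hid=6):
    """Forward pass of a 13-6-1 network; w is a flat weight vector."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid];                             i += n_hid
    W2 = w[i:i + n_hid].reshape(n_hid, 1);           i += n_hid
    b2 = w[i:i + 1]
    h = sigmoid(X @ W1 + b1)          # hidden layer, sigmoid activation
    return sigmoid(h @ W2 + b2)       # single sigmoid output neuron

def fitness(w, X, y):
    """Training MSE: the objective GWO minimizes."""
    return np.mean((mlp_forward(X, w).ravel() - y) ** 2)

def gwo(obj, dim, n_wolves=30, n_iter=200, lb=-1.0, ub=1.0, seed=0):
    """Standard Grey Wolf Optimizer over a continuous search space."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(n_iter):
        scores = np.array([obj(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(scores)[:3]]
        a = 2.0 * (1.0 - t / n_iter)  # coefficient a decreases linearly 2 -> 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                D = np.abs(C * leader - wolves[i])   # distance to a leader
                new_pos += leader - A * D            # step toward (or past) it
            wolves[i] = np.clip(new_pos / 3.0, lb, ub)
    scores = np.array([obj(w) for w in wolves])
    return wolves[np.argmin(scores)], scores.min()

# Placeholder data: 13 random features standing in for the heart-disease
# inputs; the real Cleveland/Statlog records would be substituted here.
rng = np.random.default_rng(42)
X = rng.standard_normal((100, 13))
y = (X[:, 0] + X[:, 1] > 0.0).astype(float)

dim = 13 * 6 + 6 + 6 * 1 + 1  # 91 weights and biases for a 13-6-1 network
best_w, best_mse = gwo(lambda w: fitness(w, X, y), dim)
print(f"best training MSE: {best_mse:.4f}")
```

Note that the 13-6-1 architecture fixes the search-space dimension: 13 × 6 + 6 + 6 × 1 + 1 = 91 parameters, so each wolf is a point in a 91-dimensional space and the fitness being minimized is the training MSE, exactly the role gradient descent would otherwise play.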