Minimizing cluster errors in LP-based nonlinear classification


Abstract

Recent work has focused on techniques for constructing a learning machine able to classify, to any given accuracy, all members of two mutually exclusive classes. Good numerical results have been reported; however, some concerns remain about prediction ability on large databases. This paper introduces clustering, which decreases the number of variables in the linear programming models that must be solved at each iteration. Preliminary results show better prediction accuracy while preserving the good characteristics of the previous classification scheme: a piecewise (non)linear surface that discriminates individuals of two classes with an a priori classification accuracy is built, and at each iteration a new piece of the surface is obtained by solving a linear programming (LP) model. The technique proposed in this work reduces the number of LP variables by linking one error variable to each cluster, rather than one error variable to each individual in the population. Preliminary numerical results are reported on real datasets from the Irvine repository of machine learning databases. © 2014 Springer International Publishing Switzerland.
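The abstract's central idea — replacing one LP slack variable per individual with one per cluster — can be sketched as a linear program. The following is a minimal illustration, not the authors' actual formulation: it finds a single separating hyperplane (one "piece" of the surface) with `scipy.optimize.linprog`, where cluster assignments are assumed given and each cluster k shares a single error variable e_k. The function name and the toy data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def cluster_lp_separator(X, y, clusters):
    """Fit a hyperplane w.x + b with one error variable per cluster.

    X: (n, d) points; y: labels in {+1, -1}; clusters: (n,) ids in 0..K-1.
    Minimises sum_k e_k subject to
        y_i * (w . x_i + b) >= 1 - e_{clusters[i]},   e_k >= 0,
    so the LP has d + 1 + K variables instead of d + 1 + n.
    """
    n, d = X.shape
    K = int(clusters.max()) + 1
    nvar = d + 1 + K                       # variables: w (d), b (1), e (K)
    c = np.zeros(nvar)
    c[d + 1:] = 1.0                        # objective: minimise sum of e_k
    # Rewrite each margin constraint as -y_i*(w.x_i + b) - e_k <= -1.
    A_ub = np.zeros((n, nvar))
    b_ub = -np.ones(n)
    for i in range(n):
        A_ub[i, :d] = -y[i] * X[i]
        A_ub[i, d] = -y[i]
        A_ub[i, d + 1 + clusters[i]] = -1.0
    bounds = [(None, None)] * (d + 1) + [(0, None)] * K  # w, b free; e_k >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    w, b, e = res.x[:d], res.x[d], res.x[d + 1:]
    return w, b, e

# Toy usage: two separable classes, two clusters per class.
X = np.array([[2.0, 2.0], [2.5, 2.0], [3.0, 3.0],
              [-2.0, -2.0], [-2.5, -2.0], [-3.0, -3.0]])
y = np.array([1, 1, 1, -1, -1, -1])
clusters = np.array([0, 0, 1, 2, 2, 3])
w, b, e = cluster_lp_separator(X, y, clusters)
```

Because the data above are linearly separable, the optimal cluster errors e_k are zero; individuals whose cluster misbehaves would instead share a single positive e_k, which is what keeps the LP small when n is large.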

Citation (APA)

Manzanilla-Salazar, O. G., Espinal-Kohler, J., & García-Palomares, U. M. (2014). Minimizing cluster errors in LP-based nonlinear classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8556 LNAI, pp. 163–174). Springer Verlag. https://doi.org/10.1007/978-3-319-08979-9_13
