Learning to predict the leave-one-out error of kernel based classifiers


Abstract

We propose an algorithm to predict the leave-one-out (LOO) error of kernel based classifiers. To achieve this goal with computational efficiency, we cast the LOO error approximation task as a classification problem: we learn to classify whether a given training sample, if left out of the data set, would be misclassified. For this learning task, simple data dependent features are proposed, inspired by geometrical intuition. Our approach allows us to reliably select a good model, as demonstrated in simulations on Support Vector and Linear Programming Machines. Comparisons to existing learning-theoretical bounds, e.g. the span bound, are given for various model selection scenarios.
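The abstract does not specify the features or the meta-classifier, so the following is only a rough illustration of the idea, not the paper's method. It uses a toy Parzen-window kernel classifier (an assumption chosen because its LOO decision values have a closed form: leaving out sample j subtracts y_j k(x_j, x_j) from f(x_j)), and shows how a simple geometric feature, the signed margin y_j f(x_j), identifies exactly those training points that a leave-one-out run would misclassify:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class Gaussian data (illustrative only, not from the paper).
X = np.vstack([rng.normal(-1.0, 1.0, (30, 2)), rng.normal(1.0, 1.0, (30, 2))])
y = np.hstack([-np.ones(30), np.ones(30)])

def rbf(A, B, gamma=0.5):
    """RBF kernel matrix between row-sample matrices A and B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

K = rbf(X, X)
f = K @ y                 # Parzen-window decision values f(x_j) on training points
margin = y * f            # geometric feature: signed margin y_j f(x_j)

# Exact LOO label: with sample j removed, the decision value at x_j becomes
# f(x_j) - y_j k(x_j, x_j), so the left-out sample is misclassified iff
# y_j f(x_j) - k(x_j, x_j) <= 0.
loo_wrong = (margin - np.diag(K)) <= 0

# "Predicted" LOO misclassification from the margin feature alone; for this
# toy classifier the margin rule is exact, whereas for SVMs one would train
# a classifier on such features instead of refitting n times.
pred_wrong = margin <= np.diag(K)

print(f"true LOO error: {loo_wrong.mean():.3f}, "
      f"predicted LOO error: {pred_wrong.mean():.3f}")
```

For the Parzen classifier the margin rule recovers the LOO labels without any refitting; the point of the paper's approach is to achieve a comparable prediction for classifiers (such as SVMs) where no such closed form exists, by learning the map from features to LOO labels.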

Citation (APA)

Tsuda, K., Rätsch, G., Mika, S., & Müller, K.-R. (2001). Learning to predict the leave-one-out error of kernel based classifiers. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2130, pp. 331–338). Springer Verlag. https://doi.org/10.1007/3-540-44668-0_47
