We propose an algorithm to predict the leave-one-out (LOO) error for kernel-based classifiers. To achieve this goal with computational efficiency, we cast the LOO error approximation task into a classification problem: we learn to classify whether or not a given training sample, if left out of the data set, would be misclassified. For this learning task, simple data-dependent features are proposed, inspired by geometrical intuition. Our approach makes it possible to reliably select a good model, as demonstrated in simulations on Support Vector and Linear Programming Machines. Comparisons to existing learning-theoretical bounds, e.g. the span bound, are given for various model selection scenarios.
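The core idea can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: a kernel perceptron stands in for the kernel classifier, the functional margin y_i f(x_i) stands in for the paper's geometrically motivated features, and a fixed margin threshold stands in for the learned meta-classifier. Exact LOO retraining is shown only to compare against the cheap prediction.

```python
import math

def rbf(a, b, gamma=1.0):
    """RBF kernel on scalar inputs."""
    return math.exp(-gamma * (a - b) ** 2)

def train_kernel_perceptron(X, y, epochs=20):
    """Simple kernel perceptron; a stand-in for an SVM-style learner."""
    alpha = [0.0] * len(X)
    for _ in range(epochs):
        for i in range(len(X)):
            f = sum(alpha[j] * y[j] * rbf(X[j], X[i]) for j in range(len(X)))
            if y[i] * f <= 0:       # misclassified -> update
                alpha[i] += 1.0
    return alpha

def decision(alpha, X, y, x):
    return sum(alpha[j] * y[j] * rbf(X[j], x) for j in range(len(X)))

# Toy 1-D data: two clusters, with one positive point in the negative region.
X = [-2.0, -1.5, -1.0, -0.5, 0.2, 0.5, 1.0, 1.5, 2.0, -0.2]
y = [-1, -1, -1, -1, 1, 1, 1, 1, 1, 1]

alpha = train_kernel_perceptron(X, y)

# Feature inspired by the geometric intuition (our choice of feature,
# not necessarily the paper's): the functional margin y_i * f(x_i).
margins = [y[i] * decision(alpha, X, y, X[i]) for i in range(len(X))]

# Hypothetical meta-rule: predict "would be misclassified if left out"
# whenever the margin falls below a threshold. In the paper this rule
# is itself learned from data rather than fixed.
THRESH = 1.0
predicted_loo_error = sum(m < THRESH for m in margins) / len(X)

# Exact LOO error by retraining n times -- cheap here, expensive in general,
# which is what the predictive approach is meant to avoid.
errors = 0
for i in range(len(X)):
    Xi, yi = X[:i] + X[i+1:], y[:i] + y[i+1:]
    a = train_kernel_perceptron(Xi, yi)
    if y[i] * decision(a, Xi, yi, X[i]) <= 0:
        errors += 1
true_loo_error = errors / len(X)
```

Because the prediction needs only one training run plus per-sample features, it scales far better than the n retrainings of exact LOO; model selection then amounts to picking the kernel or regularization setting with the smallest predicted LOO error.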
Tsuda, K., Rätsch, G., Mika, S., & Müller, K.-R. (2001). Learning to predict the leave-one-out error of kernel based classifiers. In Lecture Notes in Computer Science (Vol. 2130, pp. 331–338). Springer Verlag. https://doi.org/10.1007/3-540-44668-0_47