Supervised learning algorithms restrict the training of classification models to the classes of interest. Other related classes are typically neglected in this process and are not involved in the final decision rule. Nevertheless, analysing these foreign samples and their labels might provide additional information on the classes of interest: common patterns revealed in foreign classification tasks can point to structures that are also suitable for the original classes. This principle is used in the field of transfer learning. In this work, we investigate the use of foreign classes in the feature selection process of binary classifiers. While the final classification model is trained according to the traditional supervised learning scheme, its feature signature is designed to separate a pair of foreign classes. We systematically analyse these classifiers in $10 \times 10$ cross-validation experiments on microarray datasets with multiple diagnostic classes. For each evaluated classification model, we observed foreign feature combinations that outperformed at least 90% of the feature sets designed for the original diagnostic classes on at least 88.9% of all datasets.
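The scheme described in the abstract can be sketched as follows: rank features on a *foreign* class pair, then train the final model on the *original* binary task restricted to that feature signature. This is a minimal illustration, not the paper's exact pipeline; the t-statistic ranking, the nearest-centroid classifier, and the synthetic data are all assumptions chosen for a self-contained example.

```python
import numpy as np

def select_foreign_features(X_foreign, y_foreign, k):
    """Rank features by absolute Welch t-statistic computed on a pair of
    foreign classes and return the indices of the top-k features
    (illustrative selection criterion, not the one from the paper)."""
    a = X_foreign[y_foreign == 0]
    b = X_foreign[y_foreign == 1]
    se = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b) + 1e-12)
    t = (a.mean(0) - b.mean(0)) / se
    return np.argsort(-np.abs(t))[:k]

def train_centroid_classifier(X, y):
    """Minimal nearest-centroid classifier trained on the original classes."""
    c0, c1 = X[y == 0].mean(0), X[y == 1].mean(0)
    def predict(Z):
        d0 = ((Z - c0) ** 2).sum(1)
        d1 = ((Z - c1) ** 2).sum(1)
        return (d1 < d0).astype(int)
    return predict

# Toy demo: features 0..4 carry signal in both the foreign and original task,
# mimicking a shared structure that foreign feature selection can exploit.
rng = np.random.default_rng(0)
n, p, k = 40, 100, 5
X_f = rng.normal(size=(n, p)); y_f = np.repeat([0, 1], n // 2)
X_f[y_f == 1, :5] += 2.0
X_o = rng.normal(size=(n, p)); y_o = np.repeat([0, 1], n // 2)
X_o[y_o == 1, :5] += 2.0

idx = select_foreign_features(X_f, y_f, k)        # signature from foreign pair
clf = train_centroid_classifier(X_o[:, idx], y_o)  # model from original pair
acc = (clf(X_o[:, idx]) == y_o).mean()
```

The key point is the split of roles: the foreign pair only determines *which* features enter the model, while all model parameters are fitted on the original diagnostic classes, matching the traditional supervised scheme.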
Citation
Lausser, L., Szekely, R., Kessler, V., Schwenker, F., & Kestler, H. A. (2018). Selecting features from foreign classes. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11081 LNAI, pp. 66–77). Springer Verlag. https://doi.org/10.1007/978-3-319-99978-4_5