In recent years, automatic food image understanding has become an important research challenge for society, owing to the serious impact that food intake has on human life. Food recognition engines can help monitor a patient's diet and food intake habits. Nevertheless, distinguishing among different classes of food is not the first question for assisted dietary monitoring systems. Before asking what class of food is depicted in an image, a computer vision system should be able to distinguish between food and non-food images. In this work, we consider a one-class classification method to distinguish food from non-food images. The UNICT-FD889 dataset is used for training, whereas two other datasets of food and non-food images downloaded from Flickr are used to test the method. Building on previous work, we use a Bag-of-Words representation, considering different feature spaces to build the codebook. To allow the community to work on the considered problem, the datasets used in our experiments are made publicly available.
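The pipeline described in the abstract (a Bag-of-Words representation of images, followed by a one-class classifier trained on food images only) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the local descriptors are random stand-ins, the codebook size and the choice of a one-class SVM are assumptions, and the paper's actual feature spaces and classifier may differ.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-in for local descriptor extraction (e.g. SIFT-like vectors);
# hypothetical values, the paper's actual features may differ.
def extract_descriptors(n_patches=50, dim=32):
    return rng.normal(size=(n_patches, dim))

train_images = [extract_descriptors() for _ in range(20)]

# 1. Build the visual codebook by clustering descriptors pooled
#    from the training (food-only) images.
codebook = MiniBatchKMeans(n_clusters=16, n_init=3, random_state=0)
codebook.fit(np.vstack(train_images))

# 2. Represent each image as a normalized histogram of visual-word counts.
def bow_histogram(descriptors, n_words=16):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()

X_train = np.array([bow_histogram(d) for d in train_images])

# 3. Fit a one-class classifier on food images only: it learns the
#    support of the "food" distribution without non-food examples.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
clf.fit(X_train)

# 4. At test time, +1 means inlier (food), -1 means outlier (non-food).
test_hist = bow_histogram(extract_descriptors())
label = clf.predict(test_hist.reshape(1, -1))[0]
```

The key property of the one-class setting is that only food images are needed at training time; non-food images appear only when testing, which matches the training/testing split described above.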
CITATION STYLE
Farinella, G. M., Allegra, D., Stanco, F., & Battiato, S. (2015). On the exploitation of one class classification to distinguish food Vs non-food images. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9281, pp. 375–383). Springer Verlag. https://doi.org/10.1007/978-3-319-23222-5_46