By bringing together the most prominent European institutions and archives in the field of Classical Latin and Greek epigraphy, the EAGLE project has collected the vast majority of the surviving Greco-Latin inscriptions into a single, readily searchable database. Text-based search engines are typically used to retrieve information about ancient inscriptions (or about other artifacts). These systems require users to formulate a text query containing information such as the place where the object was found or where it is currently located. Conversely, visual search systems can provide information to users (such as tourists and scholars) in a more intuitive and immediate way, using just an image as the query. In this article, we compare several approaches to visually recognizing ancient inscriptions. Our experiments, conducted on 17,155 photos related to 14,560 inscriptions, show that Bag-of-Words (BoW) and VLAD are outperformed by both Fisher Vector (FV) and Convolutional Neural Network (CNN) features. More interestingly, combining FV and CNN features into a single image representation achieves very high effectiveness, correctly recognizing the query inscription in more than 90% of the cases. Our results suggest that combinations of FV and CNN features can also be exploited to effectively perform visual retrieval of other types of cultural heritage objects, such as landmarks and monuments.
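The abstract mentions combining FV and CNN features into a single image representation for recognition. The snippet below is a minimal sketch of one plausible way to do this, assuming a simple L2-normalize-and-concatenate fusion followed by cosine nearest-neighbor search; the concrete fusion scheme, feature dimensionalities, and distance measure used in the paper may differ.

```python
import numpy as np

# Hypothetical sketch: fuse a Fisher Vector (FV) and a CNN feature into one
# descriptor by L2-normalizing each part and concatenating them, then
# recognize a query inscription via cosine nearest-neighbor search.
# This is an illustrative assumption, not the paper's exact pipeline.

def l2_normalize(v, eps=1e-12):
    return v / (np.linalg.norm(v) + eps)

def fuse(fv, cnn):
    """Concatenate L2-normalized FV and CNN descriptors into one unit vector."""
    return l2_normalize(np.concatenate([l2_normalize(fv), l2_normalize(cnn)]))

def recognize(query_desc, database_descs, database_labels):
    """Return the label of the database descriptor closest to the query."""
    sims = database_descs @ query_desc  # cosine similarity, since vectors are unit-length
    return database_labels[int(np.argmax(sims))]

# Example with random placeholders for the FV and CNN dimensions
# (real FVs would come from a GMM-based encoder, CNN features from a
# pre-trained network).
rng = np.random.default_rng(0)
db = np.stack([fuse(rng.normal(size=4096), rng.normal(size=2048)) for _ in range(5)])
labels = [f"inscription_{i}" for i in range(5)]
query = fuse(rng.normal(size=4096), rng.normal(size=2048))
print(recognize(query, db, labels))
```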
Amato, G., Falchi, F., & Vadicamo, L. (2016). Visual recognition of ancient inscriptions using Convolutional Neural Network and Fisher Vector. Journal on Computing and Cultural Heritage, 9(4). https://doi.org/10.1145/2964911