Vector-Level and Bit-Level Feature Adjusted Factorization Machine for Sparse Prediction

Abstract

Factorization Machines (FMs) are a family of effective models for sparse data prediction that capture the interactions among users, items, and auxiliary information. However, the feature representations in most state-of-the-art FMs are fixed, which limits prediction performance because the same feature may carry different predictive power under different input instances. In this paper, we propose a novel Feature-adjusted Factorization Machine (FaFM) model that adaptively adjusts the feature vector representations at both the vector level and the bit level. Specifically, we adopt a fully connected layer to adaptively learn the weight of the vector-level feature adjustment, and we design a user-item specific gate to refine each vector at the bit level and to filter out noise caused by over-adaptation to the input instance. Extensive experiments on two real-world datasets demonstrate the effectiveness of FaFM. Empirical results indicate that FaFM significantly outperforms the traditional FM, with a 10.89% relative improvement in terms of Root Mean Square Error (RMSE), and consistently exceeds four state-of-the-art deep learning based models.
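The two-stage adjustment described above can be illustrated with a minimal numpy sketch. All parameter names, shapes, and the softmax/sigmoid choices here are assumptions for illustration; in the actual model these parameters are learned end to end, and the paper should be consulted for the exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_feat, k = 6, 4                        # active features per instance, embedding size
V = rng.normal(size=(n_feat, k))        # fixed FM feature embeddings (hypothetical)

# Vector-level adjustment: a fully connected layer scores each feature
# vector; a softmax turns the scores into per-instance weights (assumed form).
w_fc = rng.normal(size=(k,))
scores = V @ w_fc
alpha = np.exp(scores) / np.exp(scores).sum()
V_vec = alpha[:, None] * V * n_feat     # reweighted feature vectors

# Bit-level adjustment: a user-item specific sigmoid gate refines each
# dimension, filtering noise from over-adaptation (assumed gate input u + i).
u = rng.normal(size=(k,))               # user embedding (hypothetical)
i = rng.normal(size=(k,))               # item embedding (hypothetical)
gate = 1.0 / (1.0 + np.exp(-(u + i)))   # element-wise values in (0, 1)
V_adj = gate[None, :] * V_vec

# Standard FM second-order interaction on the adjusted vectors:
# 0.5 * (||sum_j v_j||^2 - sum_j ||v_j||^2)
s = V_adj.sum(axis=0)
interaction = 0.5 * (s @ s - (V_adj * V_adj).sum())
print(float(interaction))
```

The final interaction term is the usual FM pairwise score, so the adjustment stages plug in without changing the FM's linear-time interaction trick.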

Citation (APA)

Wu, Y., Zhao, P., Liu, Y., Sheng, V. S., Fang, J., & Zhuang, F. (2020). Vector-Level and Bit-Level Feature Adjusted Factorization Machine for Sparse Prediction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12112 LNCS, pp. 386–402). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-59410-7_27
