Disentangled Sticky Hierarchical Dirichlet Process Hidden Markov Model


Abstract

The Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) has been used widely as a natural Bayesian nonparametric extension of the classical Hidden Markov Model for learning from sequential and time-series data. A sticky extension of the HDP-HMM has been proposed to strengthen the self-persistence probability in the HDP-HMM. However, the sticky HDP-HMM entangles the strength of the self-persistence prior and transition prior together, limiting its expressiveness. Here, we propose a more general model: the disentangled sticky HDP-HMM (DS-HDP-HMM). We develop novel Gibbs sampling algorithms for efficient inference in this model. We show that the disentangled sticky HDP-HMM outperforms the sticky HDP-HMM and HDP-HMM on both synthetic and real data, and apply the new approach to analyze neural data and segment behavioral video data.
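To illustrate the entanglement the abstract refers to: in the sticky HDP-HMM of Fox et al., each state's transition distribution is drawn from a single Dirichlet process whose concentration mixes the transition strength \(\alpha\) with the stickiness \(\kappa\). A rough sketch of the contrast, with the disentangled form written as a plausible reading of the construction described in this paper (exact parameterization follows the paper itself):

```latex
% Sticky HDP-HMM (Fox et al.): one draw couples stickiness and transitions.
% A single kappa is shared across states and tied to alpha.
\pi_j \sim \mathrm{DP}\!\left(\alpha + \kappa,\;
        \frac{\alpha \beta + \kappa \,\delta_j}{\alpha + \kappa}\right)

% Disentangled sticky HDP-HMM (sketch): self-persistence gets its own
% per-state prior, separate from the transition prior.
\kappa_j \sim \mathrm{Beta}(\rho_1, \rho_2), \qquad
\bar{\pi}_j \sim \mathrm{DP}(\alpha, \beta), \qquad
\pi_j = \kappa_j \,\delta_j + (1 - \kappa_j)\,\bar{\pi}_j
```

Here \(\beta\) is the shared base measure over states and \(\delta_j\) a point mass at state \(j\); separating \(\kappa_j\) from \(\alpha\) lets the self-persistence and transition priors be tuned independently, which is the added expressiveness the abstract claims.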

Citation (APA)

Zhou, D., Gao, Y., & Paninski, L. (2021). Disentangled Sticky Hierarchical Dirichlet Process Hidden Markov Model. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12457 LNAI, pp. 612–627). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-67658-2_35
