Abstract
Shannon's information theory is a widely used tool in many applications, including statistical inference, natural language processing, thermal physics, pattern recognition, and neuroscience. However, effectively calculating the mutual information (MI) in these applications remains a major challenge. In this paper, we propose effective asymptotic bounds and approximations for evaluating MI in the context of neural population coding, especially in the case of high-dimensional inputs. Building on these results, we present an unsupervised framework for learning representations, e.g., complete, overcomplete, or undercomplete bases, from input datasets. Our experiments on MNIST digit images and natural image patches showed robustness and efficiency in extracting salient features compared to existing methods.
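The paper's asymptotic bounds are not reproduced here, but as a minimal illustration of the quantity being approximated, the sketch below estimates MI between a stimulus and a noisy linear "population response" under a joint-Gaussian assumption, using the closed form I(X; Y) = ½ log(det Σ_X · det Σ_Y / det Σ_XY). The function name `gaussian_mi` and the toy response model are illustrative assumptions, not the authors' estimator.

```python
# Minimal sketch (assumed setup, not the paper's method): MI under a
# joint-Gaussian assumption, I(X;Y) = 0.5*(logdet Sx + logdet Sy - logdet Sxy).
import numpy as np

def gaussian_mi(x, y):
    """Estimate I(X; Y) in nats from samples, assuming joint Gaussianity.

    x: (n_samples, dx) array, y: (n_samples, dy) array.
    """
    xy = np.hstack([x, y])
    cov_x = np.atleast_2d(np.cov(x, rowvar=False))
    cov_y = np.atleast_2d(np.cov(y, rowvar=False))
    cov_xy = np.cov(xy, rowvar=False)
    # slogdet is numerically safer than log(det(...)) in high dimensions.
    _, logdet_x = np.linalg.slogdet(cov_x)
    _, logdet_y = np.linalg.slogdet(cov_y)
    _, logdet_xy = np.linalg.slogdet(cov_xy)
    return 0.5 * (logdet_x + logdet_y - logdet_xy)

# Toy check: a hypothetical linear-Gaussian response y = xW + noise.
rng = np.random.default_rng(0)
x = rng.standard_normal((10000, 3))                 # stimulus
W = rng.standard_normal((3, 5))                     # mixing weights
y = x @ W + 0.5 * rng.standard_normal((10000, 5))   # noisy population response
print(f"Estimated I(X; Y) ~= {gaussian_mi(x, y):.3f} nats")
```

Sample-based estimators like this scale poorly as dimensionality grows, which is the regime the paper's asymptotic bounds target.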
| Original language | English (US) |
| --- | --- |
| Title of host publication | SS-17-01 |
| Subtitle of host publication | Artificial Intelligence for the Social Good; SS-17-02: Computational Construction Grammar and Natural Language Understanding; SS-17-03: Computational Context: Why It's Important, What It Means, and Can It Be Computed?; SS-17-04: Designing the User Experience of Machine Learning Systems; SS-17-05: Interactive Multisensory Object Perception for Embodied Agents; SS-17-06: Learning from Observation of Humans; SS-17-07: Science of Intelligence: Computational Principles of Natural and Artificial Intelligence; SS-17-08: Wellbeing AI: From Machine Learning to Subjectivity Oriented Computing |
| Publisher | AI Access Foundation |
| Pages | 575-579 |
| Number of pages | 5 |
| Volume | SS-17-01 - SS-17-08 |
| ISBN (Electronic) | 9781577357797 |
| State | Published - Jan 1 2017 |
| Event | 2017 AAAI Spring Symposium - Stanford, United States. Duration: Mar 27 2017 → Mar 29 2017 |
Other
| Other | 2017 AAAI Spring Symposium |
| --- | --- |
| Country/Territory | United States |
| City | Stanford |
| Period | 3/27/17 → 3/29/17 |
ASJC Scopus subject areas
- Artificial Intelligence