Title
Nonlinear information bottleneck
Authors
Artemy Kolchinsky, Brendan D Tracey, David H Wolpert
Publication date
2019/11/30
Journal
Entropy
Volume
21
Issue
12
Pages
1181
Publisher
MDPI
Description
Information bottleneck (IB) is a technique for extracting the information in one random variable X that is relevant for predicting another random variable Y. IB works by encoding X in a compressed “bottleneck” random variable M from which Y can be accurately decoded. However, finding the optimal bottleneck variable involves a difficult optimization problem, which until recently had been considered for only two limited cases: discrete X and Y with small state spaces, and continuous X and Y with a Gaussian joint distribution (in which case the optimal encoding and decoding maps are linear). We propose a method for performing IB on arbitrarily distributed discrete and/or continuous X and Y, while allowing for nonlinear encoding and decoding maps. Our approach relies on a novel non-parametric upper bound for mutual information. We describe how to implement our method using neural networks. We then show that it achieves better performance than the recently proposed “variational IB” method on several real-world datasets.
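In the standard IB formulation, the optimization the abstract refers to minimizes I(X;M) - β I(M;Y) over the encoding map p(m|x), trading compression of X against prediction of Y. As a minimal, hypothetical sketch of the kind of non-parametric upper bound on mutual information the abstract mentions (not the paper's verbatim construction), the following PyTorch snippet bounds I(X;M) under the assumption of a Gaussian encoder M = f(X) + ε with ε ~ N(0, σ²I); the function name mi_upper_bound and the pairwise-KL mixture bound are illustrative assumptions.

import math
import torch

def mi_upper_bound(mu: torch.Tensor, sigma: float) -> torch.Tensor:
    """Upper-bound I(X; M) from a batch of encoder means mu (shape N x d).

    Treats the marginal of M as an N-component Gaussian mixture and bounds
    its entropy via pairwise KL divergences; for equal spherical covariances
    these reduce to scaled squared distances between the component means.
    """
    n = mu.shape[0]
    sq_dists = torch.cdist(mu, mu) ** 2        # N x N matrix of ||mu_i - mu_j||^2
    kl = sq_dists / (2.0 * sigma ** 2)         # KL(N(mu_i, s^2 I) || N(mu_j, s^2 I))
    # I(X;M) <= log N - (1/N) sum_i logsumexp_j(-KL_ij)
    return math.log(n) - torch.logsumexp(-kl, dim=1).mean()

# Example use (hypothetical encoder and decoder names):
#   mu = encoder(x_batch)                      # shape (N, d)
#   bound = mi_upper_bound(mu, sigma=1.0)

In a training loop, such a bound would typically be paired with a cross-entropy term that lower-bounds I(M;Y), with the weighted difference minimized by gradient descent.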
Total citations
2018: 16, 2019: 20, 2020: 23, 2021: 33, 2022: 33, 2023: 30, 2024: 20
Scholar articles
Nonlinear information bottleneck - A Kolchinsky, BD Tracey, DH Wolpert - Entropy, 2019