An Alphabet-Size Bound for the Information Bottleneck Function
Publication: Contribution to book/anthology/report › Conference article in proceedings › Research › peer-reviewed
Standard
An Alphabet-Size Bound for the Information Bottleneck Function. / Hirche, Christoph; Winter, Andreas.
2020 IEEE International Symposium on Information Theory, ISIT 2020 - Proceedings. IEEE, 2020. pp. 2383-2388, article 9174416.
RIS
TY - GEN
T1 - An Alphabet-Size Bound for the Information Bottleneck Function
AU - Hirche, Christoph
AU - Winter, Andreas
PY - 2020
Y1 - 2020
N2 - The information bottleneck function gives a measure of optimal preservation of correlation between some random variable X and some side information Y while compressing X into a new random variable W with bounded remaining correlation to X. As such, the information bottleneck has found many natural applications in machine learning, coding and video compression. The main objective in calculating the information bottleneck is to find the optimal representation W. This could in principle be arbitrarily complicated, but fortunately it is known that the cardinality of W can be restricted to |\mathcal{W}| \leq |\mathcal{X}| + 1, which makes the calculation possible for finite |\mathcal{X}|. Now, for many practical applications, e.g. in machine learning, X represents a potentially very large data space, while Y is from a comparably small set of labels. This raises the question of whether the known cardinality bound can be improved in such situations. We show that the information bottleneck function can always be approximated up to an error \delta(\varepsilon, |\mathcal{Y}|) with a cardinality |\mathcal{W}| \leq f(\varepsilon, |\mathcal{Y}|), for explicitly given functions \delta and f of an approximation parameter \varepsilon > 0 and the cardinality of \mathcal{Y}. Finally, we generalize the known cardinality bounds to the case where some of the random variables represent quantum information.
UR - http://www.scopus.com/inward/record.url?scp=85090401824&partnerID=8YFLogxK
U2 - 10.1109/ISIT44484.2020.9174416
DO - 10.1109/ISIT44484.2020.9174416
M3 - Article in proceedings
AN - SCOPUS:85090401824
SP - 2383
EP - 2388
BT - 2020 IEEE International Symposium on Information Theory, ISIT 2020 - Proceedings
PB - IEEE
T2 - 2020 IEEE International Symposium on Information Theory, ISIT 2020
Y2 - 21 July 2020 through 26 July 2020
ER -
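For context, the information bottleneck function discussed in the abstract has the following standard form; this is a sketch based on the usual definition, where the symbol t for the compression threshold is an assumed notation, not taken from the paper:

\mathrm{IB}(t) = \max \{ I(Y;W) : Y - X - W \text{ is a Markov chain},\; I(X;W) \leq t \},

with the maximization running over conditional distributions p_{W|X}. The cardinality bound |\mathcal{W}| \leq |\mathcal{X}| + 1 cited in the abstract guarantees that this maximum is attained with W ranging over a finite alphabet, which is what makes the optimization computationally tractable for finite |\mathcal{X}|.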
ID: 256725258