Of the five plenary talks at this year’s ISIT, two were about concentration of measure: Katalin Marton’s Shannon lecture on “Distance-divergence inequalities” and Gabor Lugosi’s talk on “Concentration inequalities and the entropy method” the next morning. Since the topic of measure concentration is dear to my heart, I thought I would write down a few unifying themes.
When we speak of concentration of measure, we mean the following. Let $X$ be a random variable taking values in some metric space $(\mathcal{X}, d)$. Then we say that $X$ (or, more correctly, the distribution of $X$) has the concentration property if, for any set $A \subseteq \mathcal{X}$ such that $\mathbb{P}(X \in A) \ge 1/2$, we have

$$\mathbb{P}\big( d(X, A) \ge r \big) \le e^{-c r^2}, \qquad \forall r > 0 \qquad (1)$$

for some constant $c > 0$.
Here, $d(x, A)$ is the distance from the point $x$ to the set $A$:

$$d(x, A) \triangleq \inf_{y \in A} d(x, y).$$
Another way to express (1) is as follows: for any set $A \subseteq \mathcal{X}$ and any $r > 0$, define the $r$-blowup of $A$ by

$$A_r \triangleq \{ x \in \mathcal{X} : d(x, A) < r \}.$$
Then $X$ has the concentration property if

$$\mathbb{P}(X \in A_r) \ge 1 - e^{-c r^2}, \qquad \forall r > 0.$$
In other words, $X$ has the concentration property if any set containing $X$ with not too small a probability can be “blown up” to contain $X$ with near-certainty.
Here are two classic examples of concentration:
- Gaussian distribution in Euclidean space. Let $\mathcal{X} = \mathbb{R}^n$ and take $d(x, y) = \| x - y \|$ — the usual Euclidean distance. Let $X$ be a standard $n$-dimensional Gaussian random variable, i.e., $X \sim N(0, I_n)$, where $I_n$ is the $n \times n$ identity matrix. Then for any $A \subseteq \mathbb{R}^n$ with $\mathbb{P}(X \in A) \ge 1/2$ we have

$$\mathbb{P}(X \in A_r) \ge 1 - e^{-r^2/2}, \qquad \forall r > 0.$$
- Uniform distribution in Hamming space. Let $\mathcal{X} = \{0, 1\}^n$ be the Hamming cube