The unreasonable effectiveness of additive combinatorics in computer science
As I have written several times on these pages, techniques from additive combinatorics seem to be very well suited to attacking problems in computer science, and a good number of applications have already been found. For example, "sum-product theorems," originally developed in a combinatorial approach to the Kakeya problem, have been extremely valuable in recent constructions of randomness extractors. The central theorem of additive combinatorics, Szemerédi's theorem, now has four quite different proofs: one based on graph theory and Ramsey theory, one based on analytical methods, one based on ergodic theory, and one based on hypergraph theory. The first proof introduced the Szemerédi regularity lemma, which is a fixture of algorithmic work on property testing. The analytical proof of Gowers introduced the notion of Gowers uniformity, which has so far found applications in PCP constructions, communication complexity, and pseudorandomness. There is also work in progress on complexity-theoretic applications of some of the ergodic-theoretic techniques.
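For readers who have not seen it, here is one standard way of stating Szemerédi's theorem: for every $k \geq 3$ and every $\delta > 0$ there is an $N_0(k,\delta)$ such that, for all $N \geq N_0$, every subset $A \subseteq \{1,\ldots,N\}$ with $|A| \geq \delta N$ contains an arithmetic progression of length $k$. As for Gowers uniformity, in the simplest case it measures the pseudorandomness of a bounded function $f : \mathbb{Z}_N \to [-1,1]$ by averaging over "two-dimensional cubes,"

\[ \| f \|_{U^2}^4 \ = \ \mathop{\mathbb{E}}_{x,h_1,h_2 \in \mathbb{Z}_N} \ f(x)\, f(x+h_1)\, f(x+h_2)\, f(x+h_1+h_2) , \]

and the higher $U^k$ norms generalize this to averages over $k$-dimensional cubes.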
Why is it the case that techniques developed to study the presence of arithmetic progressions in certain sets are so useful in studying such unrelated notions as sub-linear time algorithms, PCP systems, pseudorandom generators, and multi-party protocols? This remains, in part, a mystery. A unifying theme in the recent advances in additive combinatorics is the notion that every large combinatorial object can be "decomposed" into a "pseudorandom" part and a "small-description" part, and that many questions we might be interested in are easy to answer, at least approximately, on pseudorandom objects and on small-description objects. Since computer scientists almost always deal with worst-case scenarios, and are typically comfortable with approximations, it is reasonable that we can take advantage of techniques that reduce the analysis of arbitrary worst cases to the analysis of much simpler ones.
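Schematically, and glossing over the precise norms and complexity measures (which vary from application to application; the notation below is only for illustration), such decomposition results say that an arbitrary bounded function $f : \mathbb{Z}_N \to [0,1]$ can be written as

\[ f \ = \ f_{\mathrm{str}} + f_{\mathrm{psd}} , \]

where $f_{\mathrm{str}}$ is "structured" (for example, constant on the blocks of a partition of bounded complexity, as in the regularity lemma, or a combination of a bounded number of characters) and $f_{\mathrm{psd}}$ is "pseudorandom" in the sense that $\| f_{\mathrm{psd}} \|_{U^k} \leq \epsilon$ for a suitable uniformity norm. Counting lemmas then guarantee that the number of patterns such as arithmetic progressions weighted by $f$ is within an error controlled by $\epsilon$ of the count in $f_{\mathrm{str}}$ alone, so the analysis of an arbitrary object reduces to the analysis of a structured one.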
Whatever the reason for their effectiveness, it is worthwhile for any theoretical computer scientist to learn more about this fascinating area of math. One of the tutorials in FOCS 2007 will be on additive combinatorics, with a celebrity speaker. More modestly, following Random-Approx 2007 in Princeton, there will be a course on additive combinatorics for (and by) computer scientists. (If you want to go, you have to register by August 1 and reserve the hotel by this weekend.)