Tuesday, November 07, 2006

On being second

Of the several things I learned from Jon Kleinberg's talk at FOCS, a simple fact about the shape of the "influence of friends" curve has really stuck in my mind.

The question is the relation between the number of friends or associates who do something, or believe something, and the likelihood that we will join them. Jon showed the curves for two quite unrelated examples: the number of LiveJournal friends who belong to a "community" versus the likelihood of joining the community, and the number of former coauthors who have published in a conference versus the likelihood of publishing for the first time in that conference. In both cases, after accounting for statistical noise, the curves look roughly monotone (the more friends/coauthors, the likelier to join the community / publish in the conference) and show a "diminishing returns" correlation: each extra friend/coauthor adds less than the previous one to the likelihood. There is, however, an exception: the second friend/coauthor is more influential than the first. The "diminishing returns" start from the third on. This makes intuitive sense: we may discard a rumor heard once, but pay more attention when we hear it again, or dismiss the hobby of one friend as eccentric but start seeing it as more reasonable when a second friend joins, and so on.
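To make the shape of such a curve concrete, here is a minimal sketch with invented numbers (not the data from the talk): a hypothetical joining-probability curve in which the marginal gain from the second friend exceeds that from the first, and returns diminish from the third friend on.

    # Hypothetical joining probabilities p[k] for k = 0, 1, 2, ... friends already
    # in the community; the numbers are invented for illustration, not taken from
    # the talk.
    p = [0.00, 0.02, 0.06, 0.09, 0.11, 0.12]

    # Marginal gain in probability contributed by the k-th friend (k = 1, 2, ...).
    gains = [round(p[k] - p[k - 1], 2) for k in range(1, len(p))]
    print(gains)  # [0.02, 0.04, 0.03, 0.02, 0.01]

    # The curve is monotone ...
    assert all(g > 0 for g in gains)
    # ... the second friend is more influential than the first ...
    assert gains[1] > gains[0]
    # ... and returns diminish from the third friend onward.
    assert all(gains[k] >= gains[k + 1] for k in range(1, len(gains) - 1))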

I have been wondering whether something similar (but harder to quantify) is true for research problems and research papers. Being the first to introduce a new problem or direction is an accomplishment that we are all, justly, accustomed to praising. I am not talking here about being the first to introduce NP-completeness, or zero-knowledge proofs, but about the run-of-the-mill innovative papers that we see every year, each bringing something fresh to the theory community. Even when a new definition or question is "in the air," it can be difficult to write the first paper about it, because our community typically does not accept a paper made only of definitions and descriptions of open problems. One needs technical results, and so we see many seminal papers that are forever remembered for their definitions, but are burdened by technical results that are there just so that the paper could make it past the referees. It is thanks to the imaginative people who overcome such difficulties that our field does not get stuck in a rut.

But, in a way, after the first paper on a problem, the barrier to entry is even higher. The community may still be skeptical about the importance of the problem, and the second paper on the subject has no claim to innovation: it must be evaluated solely on its technical content. After a second paper appears, however, the problem becomes an "area," and it becomes considerably more appealing. The most famous example is probably NP-completeness, which really took off after Karp's paper.

But there are plenty of small contemporary examples. For example, Boaz Barak introduced in Random'02 a beautiful type of argument to prove hierarchy theorems for slightly non-uniform versions of probabilistic polynomial time. The question was not revisited for the next two years, until Fortnow and Santhanam proved in Focs'04 that the slight non-uniformity could be reduced to one bit of advice. The following year and a half saw a flurry of activity that led to five sets of results, which coalesced into three papers, the last of which, by van Melkebeek and Pervyshev, is more or less the final word, given current techniques, on hierarchy theorems with small advice.

It really pleases me to write in praise of second papers, in part because I have been involved in a few second papers myself. In 1991, Feigenbaum and Fortnow proved (in the Complexity Conference, at the time called the Structure in Complexity Theory conference) that a certain approach (namely, the use of non-adaptive random self-reductions) cannot work in showing that BPP$\neq$NP implies the existence of hard-on-average problems in NP. This was part of a larger research program aimed at understanding random self-reducibility, but it was the only paper to specifically address the question of whether average-case intractability in NP could be proved assuming only worst-case intractability. This fundamental question seems to have been neglected for more than a decade. In the final exam for the complexity class I taught in Fall'02, I added as an extra credit question the problem of proving an incomparable version of the Feigenbaum-Fortnow result (considering a more general notion of reduction, but restricted to making only one query), and Andrej Bogdanov was the only one to turn in a complete answer. We started working from there, and presented in Focs'03 a proper generalization of Feigenbaum-Fortnow. After that, the question seems to have become popular again. Akavia, Goldreich, Goldwasser and Moshkowitch showed in Stoc'06 how to prove stronger results for the related question of basing one-way functions on BPP$\neq$NP, and Gutfreund, Shaltiel, and Ta-Shma and, more recently, Gutfreund and Ta-Shma have shown that some weak average-case complexity conclusion for NP can be based on worst-case complexity assumptions.

I also wrote the second paper on hardness amplification within NP, following up on Ryan O'Donnell's brilliant generalization of Yao's XOR Lemma; the second paper (with Mossel and Shpilka) on the question of constructing pseudorandom generators in NC0, a question introduced by Cryan and Bro Miltersen, and more or less resolved by a breakthrough of Applebaum, Ishai, and Kushilevitz; and the second paper on approximation algorithms for unique games, an issue that was quickly resolved by the work of Gupta and Talwar, Charikar, Makarychev and Makarychev, and Chlamtac, Makarychev and Makarychev.

So, unimaginative theoreticians of the world, unite and pursue problems that have been studied only once. You have nothing to lose but your time.

4 Comments:

  1. Anonymous
    11/08/2006 02:08:00 AM

    It's Moshkovitz.

  2. Anonymous
    11/10/2006 11:20:00 AM

    Nice post. Can you name some problems/directions where only the first paper has appeared and which are waiting for the eventual second paper?

  3. Anonymous
    11/10/2006 06:32:00 PM

    Nice post. Can you name some problems/directions where only the first paper has appeared and which are waiting for the eventual second paper?

    Ahh, so now you see there is an art to being second as well...

  4. Anonymous
    11/17/2006 04:00:00 PM

    Yeah, a list of a few "problems that have been studied only once" would be nice.

