### Call it, already

Why did the liberal media call Florida for Bush in 2000 on a margin of less than .01%, while some news outlets are still calling Virginia too close to call after all votes are counted and there is a margin of more than .3%?

At long last, we can surrender to the terrorists and destroy the fabric of society. Why does America hate America so much?

Of the several things I learned from Jon Kleinberg's talk at FOCS, a simple fact about the shape of the "influence of friends" curve has really stuck in my mind.

The question is the relation between the number of friends or associates who do something, or believe something, and the likelihood that we will join them. Jon showed the curves for two quite unrelated examples: the number of LiveJournal friends who belong to a "community" versus the likelihood of joining the community, and the number of former coauthors who have published in a conference versus the likelihood of publishing for the first time in that conference. In both cases, after accounting for statistical noise, the curves look roughly monotone (the more friends/coauthors, the likelier to join the community / publish in the conference) and show a "diminishing returns" correlation: each extra friend/coauthor adds less than the previous one to the likelihood. There is, however, an exception: the *second* friend/coauthor is more influential than the first. The "diminishing returns" start from the third on. This makes intuitive sense: we may discard a rumor heard once, but pay more attention when we hear it again, or dismiss the hobby of one friend as eccentric but start seeing it as more reasonable when a second friend joins, and so on.
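For concreteness, the shape of such a curve can be sketched with made-up numbers (none of these probabilities come from Jon's data; they are purely illustrative):

```python
# A hypothetical "influence curve" with the shape described above: p[k] is
# the probability of joining a community when k friends already belong to
# it. The numbers are invented purely for illustration.
p = [0.00, 0.02, 0.07, 0.10, 0.12, 0.13]

# Marginal influence of the k-th friend: how much that friend adds to the
# probability, compared to having k-1 friends in the community.
gains = [p[k] - p[k - 1] for k in range(1, len(p))]

# The second friend contributes the most; diminishing returns set in from
# the third friend onward.
most_influential = 1 + max(range(len(gains)), key=gains.__getitem__)
print(most_influential)  # prints 2
```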

I have been wondering if something similar (but harder to quantify) is true for research problems and research papers. Being the first to introduce a new problem/direction is an accomplishment that we all justly praise. I am not talking here about being the first to introduce NP-completeness, or zero-knowledge proofs, but the run-of-the-mill innovative paper that we see every year that brings something fresh to the theory community. Even when a new definition or question is "in the air," it can be difficult to write the first paper about it, because our community typically does not accept a paper made only of definitions and descriptions of open problems. One needs technical results, and so we see many seminal papers that are forever remembered for their definitions, and that are burdened by technical results that are there just so that the paper could make it past the referees. It is thanks to the imaginative people who make it past such difficulties that our field does not get stuck in a rut.

But, in a way, after the first paper on a problem, the barrier to entry is even higher. The community may still be skeptical about the importance of the problem, and the second paper on the subject has no claim of innovation: it must be evaluated solely on its technical content. After a second paper appears, however, the problem becomes an "area," and it becomes considerably more appealing. The most famous example is probably NP-completeness, which really took off after Karp's paper.

But there are plenty of small contemporary examples. For example, Boaz Barak introduced in Random'02 a beautiful type of argument to prove hierarchy theorems for slightly non-uniform versions of probabilistic polynomial time. The question was not revisited for the next two years, until Fortnow and Santhanam proved in Focs'04 that the "slight non-uniformity" could be reduced to one bit of advice. The following year and a half saw a flurry of activity that led to five sets of results that coalesced into three papers, the last of which, by van Melkebeek and Pervyshev, is more or less the final word on hierarchy theorems with small advice given current techniques.

It really pleases me to write in praise of second papers, in part because I have been involved in a few second papers myself. In 1991, Feigenbaum and Fortnow proved (in the Complexity Conference, at the time called the *Structure in Complexity Theory* conference) that a certain approach (namely, the use of non-adaptive random self-reductions) cannot work in showing that BPP$\neq$NP implies the existence of hard-on-average problems in NP. This was part of a larger research program aimed at understanding random self-reducibility, but it was the only paper to specifically address the question of whether average-case intractability in NP could be proved assuming only worst-case intractability. This fundamental question seems to have been neglected for more than a decade. In the final exam for the complexity class I taught in Fall'02, I added as an extra-credit question the problem of proving an incomparable version of the Feigenbaum-Fortnow result (considering a more general notion of reduction, but restricted to making only one query), and Andrej Bogdanov was the only one to turn in a complete answer. We started working from there, and presented in Focs'03 a proper generalization of Feigenbaum-Fortnow. After that, the question seems to have become popular again. Akavia, Goldreich, Goldwasser and Moshkovitz showed in Stoc'06 how to prove stronger results for the related question of basing one-way functions on BPP$\neq$NP, and Gutfreund, Shaltiel, and Ta-Shma and, more recently, Gutfreund and Ta-Shma have shown that some weak average-case complexity conclusions for NP can be based on worst-case complexity assumptions.

I also wrote the second paper on hardness amplification within NP, following up on Ryan O'Donnell's brilliant generalization of Yao's XOR Lemma; the second paper (with Mossel and Shpilka) on the question of constructing pseudorandom generators in NC$^0$, a question introduced by Cryan and Bro Miltersen, and more or less resolved by a breakthrough of Applebaum, Ishai, and Kushilevitz; and the second paper on approximation algorithms for unique games, an issue that was quickly resolved by the work of Gupta and Talwar, Charikar, Makarychev and Makarychev, and Chlamtac, Makarychev and Makarychev.

So, unimaginative theoreticians of the world, unite and pursue problems that have been studied only once. You have nothing to lose but your time.

Why, in this heat, are women wearing those huge, formless boots that look like moon boots covered in suede? And doesn't it feel odd when their friends are wearing flip-flops?

Why was the invitation to see an advance screening of *Borat* three weeks ago only valid for people ages 17 to 34? The opinions of some 35-year-olds can be priceless, however demographically undesirable they may be.

And what possesses people to *cruise* in a Lamborghini or a Ferrari? Toyota pickup trucks are for cruising. Italian race cars are for racing. If you can spend \$200,000 on a car, you can most definitely afford a speeding ticket. And you certainly do not want to drive at 20 mph in the left lane (in a 35 mph zone), even if there is a car with good-looking women in the right lane. Not when there are cars behind you that are trying to go somewhere.

Finally, not that it has anything to do with LA, but you must have heard the story of Ted Haggard, the influential Evangelical pastor who allegedly used crystal meth (an illegal drug) and had a three-year relationship with a 49-year-old male prostitute. In an interview (see below), he admits buying meth and meeting with the guy. But: he threw away the meth, *each time*, after buying it, without consuming it, and he only had *massages* from the guy. And how did he meet this guy, who advertised on the internet as an escort? Via a referral from a hotel in Denver. What was he doing in Denver? He goes to stay in hotels in Denver to write his books. Then again, the insurgency in Iraq is in its last throes, so this is not a completely unlikely scenario.

What strikes me in this story is the *49-year-old male prostitute*. I don't mean to sound ageist, but, seriously, a 35-year-old is too old to comment on a movie, but a 49-year-old can be paid for sex?

Other people are struck by the fact that Haggard campaigned for a constitutional amendment to ban same-sex marriage. One is reminded of Congressman Mark Foley, the former chairman of the House Caucus on Missing and Exploited Children, who was discovered to be quite the exploiter of minors himself.

One may think of such people as hypocrites, unless, that is, one is a reader of *National Review*, where David Frum analyzes the matter with rare moral clarity. You have to read the piece for yourselves to believe it but, in short, Frum is saying that it is more moral to live a lie and cheat on your wife than to live the life of an out gay man. The point is, if you really *have to* be gay, at least have the decency to be miserable about it and, if possible, ruin someone else's life.

Which somehow reminds me that the federal government is going to fund an initiative to promote abstinence among adults ages 19 to 29. *Twenty-nine*! I assume they are recruiting male engineers as motivational speakers and role models.
