The NSF reviewing process

The process

The typical NSF panel I've seen is composed of about 1/4-1/3 people who have worked in the area at some point but may no longer be active or publishing, about 1/4-1/3 people who are in related fields, and the rest actively working in at least the broad area. You probably have about a 30% chance of having someone who really knows your specific area read your proposal. You'll probably get one person who at least keeps up with the research in your area, even if they don't work in it themselves, and one person who knows very little about your area.

Each proposal is (usually) read by 3 people. Sometimes it will be read by more, for various reasons.

Each reviewer is supposed to read the proposals before the meeting and enter comments. A typical review load is 5-15 proposals, and, while some reviewers try to read them all the night before (!), they are usually read over a month or so.

The reviewers typically don't see each other's comments until the day of the meeting.

On the day of the meeting, each proposal is discussed in turn, and the proposals are categorised as Really Good (will be funded), Good (we like them, but they won't be funded unless money falls out of the sky), and Reject. (This varies a bit - usually the proposals are roughly ordered as they're discussed.) If your proposal ended up in the Really Good category (and typically only about 1/10 do), then you will be funded, barring any unusual circumstances. These proposals come back with straight Excellents and the occasional Very Good. Of the proposals in the Good category, the top one or two might be funded.

The program chair listens in on the discussion, but basically doesn't contribute. He/she has the final say on what gets funded. I think they generally follow the advice of the panel, but as I've never checked to see if the proposals we ranked highly were funded, I can't tell you if this is indeed the case. I have heard of at least one occasion where the program chair killed an excellently ranked proposal because "it wasn't in an area I was interested in" (he was retiring that year...). But (hopefully!) this sort of thing is rare.

Note: Some small proposals (especially from under-represented schools/states) may get funded even though they weren't in the top 1/10 (but they'd better be ranked well). This is because there are other pots of money to draw from in such cases. And (I think) a small proposal might just slip in to fill in the chinks in the budget.

How this affects your proposal review

How the composition of the panel affects your ranking

If someone on the panel works in your area, this can be either a Good thing or a Bad thing. But it will definitely affect your ranking, because the remaining panelists will defer to the person who "knows what they're talking about".

On the plus side: If you really do have a good idea, but you haven't presented it as clearly as you might have, or haven't argued why this is an important research topic, a knowledgeable reviewer can help.

On the down side: If the reviewer doesn't think much of your idea, they can kill it. The reviewer's argument will usually be either "I/someone tried this x years ago and it failed" or a set of technical objections. To combat this, I suggest the following:

People outside of your area will primarily judge your proposal on the first two pages, on the outreach section, and on how well the proposal is written (does it flow, are the non-technical arguments convincing?). I have seen a proposal that was very well written but somewhat slim on technical merit get funded because the three reviewers were not knowledgeable in the area. And, in general, a well-written proposal will get higher marks from less-knowledgeable people simply because that's the only thing they have to judge the proposal on.

Of course, this means that if your proposal is dense and lacks compelling, easily understood arguments, it will get lower marks. If you're working in an area where the problems are not obvious (e.g., if you work with robots, you know it is stunningly difficult to get a robot to roll down a corridor, but the average computer science researcher doesn't think this is a hard problem because they walk down corridors all the time, so how hard can it be...), then you're going to have to spend some time convincing the reviewer that this is actually a problem and that the current solutions are not good enough.

Why proposals aren't funded

The following are the most common reasons for rejection I've seen. They come in pairs, more or less, with a good proposal striking a balance between the two.