Have you ever found yourself in the position of asking, on your own behalf or on behalf of others, how many or precisely which people it would be useful to kill in order to secure a benefit for yourself or your cause? And just how to do it? No? Others have. Their answers have ranged from Cain’s original “Abel, with my bare hands” to Hitler’s “all the Jews, mainly by gas,” and the widespread Hutu view in the Rwanda of 1994, “the Tutsis, with machetes.” The question burns today for the government of Sudan and in the Congo.

Humanity will never be able to solve the problem of Cain, of fratricidal rage born of jealousy or some equivalent passion, nor of the more calculating retail impulse to profit in some way from doing someone in. Thus, for individuals, we maintain a system of laws, police forces, courts, prisons, mental hospitals, and, for extreme cases, the apparatus of the death penalty to punish those whom an impulse or cold calculation has led to murder—thereby deterring (so we hope) at least some others from embarking on a similar course of action. But we understand that our system is no solution to the problem of murder.

It is not obvious, however, or should not be, that because the human condition gives us no prospect of ridding the world of murder, we must be similarly pessimistic about our ability to rid the world of murder on the scale of populations. Mass atrocities, up to the point of genocide, are not simply collective acts of individual murder. Though genocides are not uniform in character, they are all political. Genocide constitutes the most extreme possible terms for settling differences: a stronger party’s decision to annihilate or extirpate the weaker. Genocide is organized. It entails a project, which in turn requires leaders with a purpose in mind and their acquisition of the means of death, including followers to do the dirty work.

We simply do not have to put up with this. By “we,” let me be clear. I do not mean “humanity,” although I would welcome the collective conclusion of mankind that genocide is unacceptable. I do not mean the “international community,” although a decision on the part of all national governments to refrain from engaging in mass atrocities at home or abroad would be most welcome, as would a collective intention to stop and punish leaders or would-be leaders seeking to deviate from the norm. What I really mean by “we” is “we who are strong enough to stop the murderous bastards before they can get away with it.”

This “we” is an inclusive group; everyone with a will and a way is welcome. But its purpose must go far beyond declaratory well-wishing. It is not a bad thing but a grossly insufficient thing to join in choruses of “never again,” the familiar refrain after something really bad has happened—say, 6 million dead Jews, 2 million dead Cambodians, or 800,000 dead Tutsis. No, we must act to stop the malefactors.

And by “we,” in the last analysis, I mean the United States.


We have the privilege to live at a time of unprecedented prosperity, and we know how to generate more of it. Anybody who thinks the present financial crisis has changed these fundamental facts is engaged in the time-honored human propensity for self-dramatization. Our prosperity is accompanied by a likewise unprecedented confluence of power and moral sensibility—or at least it seems to be. With regard to atrocities on a mass scale, we have the means at our disposal to stop what we and all right-thinking people know is wrong. It comes down to the choice of whether to act or not.

If we are unable to muster the political will to prevent or halt genocide and mass atrocities, the long-term consequences are truly chilling to contemplate. This is of course especially true with regard to future victims: the terror of being rounded up and held at gunpoint, especially in the final few seconds, as the shooting starts; of feeling the first slash of a swinging machete, knowing that more are coming. But it is also true for us. Future generations more committed to the principles we espouse but fail to act on may look back with disdain or disgust on our failure. Or, more horrifying still, future generations will conclude that all moral reasoning in political matters is sentimental superstructure that should be jettisoned in the interest of clarity about the first and only true principle of politics: the strong take care of themselves and the weak are on their own.

The progress of politics and civilization itself is nothing other than the long, difficult, incomplete struggle to overcome the original political principle of self-regard by instilling in the strong an empathetic regard for others. The first successes came in the mists of prehistory in the form of small groups ceasing to fight among themselves—clan, tribe, city. With the spread, in territory and clout, of rights-regarding nation-states in recent centuries, it became possible to imagine cooperative efforts among such states to extend a principle of regard for others across international boundaries, indeed globally. In 1999, the NATO alliance—led, of course, by the United States—went to war against Serbia to stop ethnic cleansing and atrocities in Kosovo, averting a potential genocide in close proximity to NATO territory. But in 2004, after the U.S. Secretary of State, Colin Powell, declared that atrocities in the Darfur region of Sudan amounted to genocide, the response of the United States and others was uncertain and halting at best. Hundreds of thousands of lives were lost and millions fled their homes for refugee and displaced-persons camps. There they remain.

So, in recent memory, “we” have acted effectively, showing that we can, and “we” have failed to act effectively, revealing a gap between our professed moral sense and what we are prepared to do to vindicate it. The test of progress for this generation is whether we will be able to extend the principle of regard for others by acting when necessary to prevent or halt genocide.


Words are not enough; however, words matter. All things considered, when it comes to the importance of preventing genocide and mass atrocities, we talk a pretty good game. First there are American words. It is (or should be) a point of pride for believer and atheist alike that our founding national document, the Declaration of Independence, affirms that people are endowed by their Creator with, first of all, a right to life. The right to live can be especially difficult to vindicate. There is no one to whom a drowning man can appeal; it is not wrong for the water to drown him. But it surely is wrong if governments, wholly the creations of people, deny or violate this basic right. The Declaration sets forth the correct aspiration. True, certain historical conduct—the treatment of Native Americans in particular—miserably fails to measure up to the stated aspiration. But should we therefore abandon the aspiration? Of course not. We discredit only ourselves when we fail to live up to our ideals. The ideals themselves are not discredited.

Then there are words inspired by America’s founding that, in their drafting, sought to extend those ideals to the rest of the world, words in the United Nations Charter and the Universal Declaration of Human Rights. These documents affirm the rights of the individual against states or other actors that violate those rights. But the affirmation is more theoretical than actual, since the UN Charter also embraces a doctrine of sovereign right according to which states may not interfere in the internal affairs of others.

This aspect of the Charter gives states so inclined a ready cloak behind which to repress their people—including by commission of mass atrocities. This is what I mean when I say that words matter but are not enough. The UN’s universalist human-rights creed is honored far more in the breach than in the observance. At the same time, the UN Security Council is also charged to act in the interest of peace and security, which can create an opening in response to extreme situations in which large numbers of lives are at risk.

In 1946, with the dimensions of the horror of the Holocaust still unfolding, the UN General Assembly passed a resolution declaring genocide a crime under international law. Genocide “shocks the conscience of mankind,” the resolution memorably declared. This effort to “internationalize” the crime of genocide might have been the world body’s finest hour. The ensuing Genocide Convention of 1948 provides for “the prevention and punishment of the crime of genocide” whether “committed in time of peace or time of war” and elaborates a definition, which includes “acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group.”

The Convention isn’t self-executing, in that it doesn’t compel its signatories to take any particular action if the terms of the treaty are violated. But it does provide an international legal and, more important, moral framework for preventive action in response to the risk of genocide.

Breakthrough though it was, one unintended consequence of the Genocide Convention has been a serious problem. The definition of genocide is good as far as it goes, and the prevention mandate seems to allow latitude for timely action against would-be perpetrators. But whether “genocide” as defined in the treaty is actually occurring or about to occur is a complicated question both epistemologically and legally. For if you act to prevent genocide and succeed, there is no genocide—and so you cannot prove you have prevented one. Moreover, those you act against can claim you have violated their sovereign rights, and the argument will carry weight.

If, on the other hand, there is a legal finding of genocide, then it is too late for prevention. All that is left is mitigation. Moreover, if “genocide” is the trigger for action, then the bar is rather high: Atrocities short of genocide may somehow end up as tolerable, or at least tolerated. In 2005, a year after Colin Powell announced the U.S. finding of a genocide in Darfur, a UN special inquiry issued a report saying that while criminal atrocities had taken place in Sudan for which perpetrators needed to be held accountable, it lacked the basis for a conclusion that those crimes amounted to genocide. The bloodstained rulers in Khartoum were delighted to characterize the report as a vindication.

A further attempt to “internationalize” the Declaration’s “right to life” came in 2005, when the World Summit at the United Nations embraced in its “Outcome Document” the principle of the “responsibility to protect.” The doctrine of “responsibility to protect,” known colloquially as “R2P,” holds that a state has an obligation to protect those living on its territory from atrocities (specified in the Outcome Document as “genocide, war crimes, ethnic cleansing, and crimes against humanity”). If a state is unable or unwilling to fulfill this requirement, the protection function falls to the international community, which can take measures up to and including the use of force in order to protect populations. With sovereign right comes sovereign responsibility. The principle of noninterference gives way in circumstances of mass atrocities.

I had a small role in the adoption of R2P. Congress (principally in the person of Frank Wolf, a Republican member of the House of Representatives from Virginia) chartered a bipartisan task force on UN reform run by the U.S. Institute of Peace and co-chaired by former House Speaker Newt Gingrich and former Senate Majority Leader George Mitchell. I ran the Task Force’s expert group on human rights. Not without difficulty, we were able to include in the June 2005 consensus report a strong endorsement of the “responsibility to protect.” This was the first major bipartisan statement on behalf of R2P, which before had mainly been the province of liberal internationalists and human-rights groups on the Left.

The Task Force recommendation in turn influenced the Bush State Department to back the concept at the World Summit. In the absence of the Gingrich-Mitchell recommendation, the State Department’s traditional institutional wariness as well as ideological conservative skepticism would likely have led to U.S. opposition, which would have doomed the project.

As for the objections, the main concern has been (and remains) that the United States, by embracing R2P, will subject itself to the whims of the “international community” on whether and when to intervene in fulfillment of the protection function. Thus Steven Groves of the Heritage Foundation has expressed alarm that “the United States would cede control—any control—of its armed forces to the caprice of the world community without the consent of the American people.” In the extreme case, in this view, the U.S. might incur a legal obligation to go to war whether it wants to or not. The latter concern is so far down a trail of speculation piled on intemperate inference on top of worst-case hypothesizing that it hardly bears consideration. In its less extreme form, this is the question of how much the U.S. should engage with others to find common ends or interests and pursue them jointly.

Power is power, and the United States has more of it than any other state. But international political support is of value, and the U.S. does benefit from seeking it in fora that others regard as legitimate. We will never give the UN Security Council the last word. Other countries don’t like that, but then a Kosovo comes along, Russia blocks Security Council action, and people of good will realize that the price of calling off war because the Security Council hasn’t authorized it will be several hundred thousand dead Kosovars.
In other words, one should try one’s best at the UN for the simple reason that one might succeed. But failure at the UN does not end the discussion, as the U.S. determination in the months leading up to the war in Iraq demonstrated, and certainly should not when a genocide is brewing.

A more practical concern is that R2P would simply be used against Israel. This is true, but no more of R2P than of everything else, alas. Given bad will, any principle can be distorted almost into its opposite in the application. Vladimir Putin’s Russia cleverly cited the responsibility to protect as a reason for its invasion of Georgia in 2008—it was just acting to protect Russians in the breakaway Georgian regions of Abkhazia and South Ossetia, don’t you see! It fell to the Swedish foreign ministry to inform the Russians that the “responsibility to protect” here was Georgia’s, since it was on Georgian territory that the supposed offense against Russian ethnics was taking place—and that in case Georgia failed, the responsibility would fall to the “international community.”

All of these documents, from the Declaration to the UN Charter to the R2P language in the Outcome Document, are subject to the criticism that, again, they are mere words on paper. Whom have these words actually protected? The answer is that these words are tools of moral suasion. The principles they espouse represent some of our best conclusions about how the world should be and what we should do in pursuit of such a world. They are, of course, works in progress and remain subject to refinement. But we can’t say we haven’t really thought about genocide and mass atrocities, whether they matter to us or what we should do when confronted with them. By now, we know.


Institutions cannot respond effectively to the threat of genocide and mass atrocities in the absence of political will on the part of their members. Nevertheless, institutions can be more or less adroit, responsive, and effective. Here, we have a long way to go, though a range of promising steps has been taken.

Let me offer two snapshots of the problem and the response. The first comes from 2005, during work on the Gingrich-Mitchell report. The second comes from work I did last year on the Genocide Prevention Task Force, which issued a report in December 2008 with recommendations to the U.S. government on forestalling the threat of atrocities. The institutional change over the course of three years has been staggering.

In 2005, all was confusion, and the Darfur situation in particular was a frustrating daisy chain of inaction: Everybody who was potentially in a position to do something useful—from the Secretary General’s office at the United Nations to the UN Security Council to the European Union to NATO to the African Union mission on the ground in Darfur to the United States government itself—was full of explanations about why somebody else had to do something first.

In 2004, the African Union (AU) deployed a small number of troops to Sudan to protect outside monitors of a cease-fire agreement. They were able to do little to contain the depredations Sudanese government forces were inflicting on Darfur in conjunction with the Janjaweed militia, irregular forces of nomadic Arabic-speaking tribes at odds with the sedentary population of Western Sudan. As was well known to everyone involved in early 2005, the AU force was too small and woefully underequipped and unprepared. To be even minimally effective, the African Union needed a package of assistance that would include communications and intelligence assets, lift, planning and headquarters help, and training. Where to get it?

Well, maybe a military alliance with serious capabilities along those lines, like NATO. Or maybe NATO acting in conjunction with the European Union, which was already providing the main funding for the AU mission. Or maybe the European Union itself, if it could get its act together on its desire for a “common foreign and security policy.” Or maybe just the United States, leading a coalition of the willing or even acting on its own, if necessary.

It turned out that in the previous year, in the summer of 2004, the NATO military command under General James Jones (now Barack Obama’s national-security adviser) had begun a “prudent planning” exercise on Darfur—essentially, an inquiry into what might be done to help out the AU. It was undertaken without the authorization of the North Atlantic Council, NATO’s political decision-making body.

That exercise was interrupted when several allies, notably France, objected to NATO assigning itself a role in Africa. Some saw in the objection an effort to protect the EU’s turf. The planning didn’t cease, but it moved out of NATO auspices to the U.S. European Command, our military’s headquarters on the continent. As matters stood, there was no prospect of a NATO mission—but it seemed to us that matters need not have stood there.

We knew that UN Secretary General Kofi Annan had given a couple of speeches urging NATO to assist the African Union’s Darfur efforts. One ambassador at NATO told us he thought this represented an opening. The Europeans who were reluctant to involve NATO would not change their minds based merely on a speech by Annan, but if the Secretary General actually sent a formal letter to NATO asking for alliance help, that might change the debate. It would be one thing to say NATO shouldn’t insert itself into Africa, quite another to decline a UN request for help.

Does this sound ridiculous? Hundreds of thousands of lives potentially at stake over whether the contents of a speech are transferred to a letter? It does, and this is an indication of just how ill-equipped the “international community” as a whole was to deal with an emergency on the scale of Darfur.

Skeptics at the European Union’s headquarters in Brussels, meanwhile, informed us that the African Union would be reluctant to accept assistance from the West’s military alliance, since doing so would smack of neo-imperialism and colonialism. A better avenue would be through the European Union, according to the European Union—not that the EU actually had a plan.

So why not have Annan send a letter? We asked that question at a meeting with senior UN officials on the top floor of the organization’s building in New York. The answer was that Annan’s representatives had sounded out NATO and determined that there was simply no support for the alliance’s involvement in Africa. Annan couldn’t possibly ask for help only to be rebuked, explained Mark Malloch Brown, Annan’s top adviser (now an intimate of British Prime Minister Gordon Brown).

I found myself, to my surprise, shouting at Malloch Brown from the staff seats in the second row: Their information was simply wrong, there was substantial will at NATO to do just that. What was needed was a letter—Annan had already given speeches saying the same thing, all he needed to do was send a letter, just a letter. My importuning, though impolitic, got Malloch Brown’s attention and drew an invitation for follow-up on the matter. On the train on the way back to Washington, we drafted an e-mail explaining the situation as we had found it, why everything was so horribly stuck, and how it might at last get unstuck.

We quickly heard through intermediaries that though Annan was favorably disposed to the idea of a formal request, he didn’t think he had the authority to write such a letter—he didn’t want to get out too far in front of the Security Council on a matter that was subject to difficult ongoing negotiations there.

So now what? Another avenue to change the debate in NATO would be a letter directly from the African Union asking for assistance in Darfur—notwithstanding the patronizing assurance we had received that the African Union could not conceivably accept the neocolonialist assistance of monstrous Americans who had invaded poor Iraq.

Success. For what Annan could not write personally, he evidently could get written. After days of back-channel exchanges with Annan’s office, a letter arrived at NATO headquarters on April 26, 2005, from African Union chairman Alpha Konare specifically requesting NATO’s help in Darfur. Hard upon it, NATO’s North Atlantic Council—the same body that had insisted on an end to the previous year’s “prudent planning” exercise on how the genocide might be interrupted—formally approved the assistance.

I make no claim about the efficacy or adequacy of that NATO assistance. The best one can say about it is that things could have been worse. More than a million people in displaced-person and refugee camps are better than more than a million dead. The presence of peacekeepers, though woefully inadequate, seems nevertheless to have had some deterrent effect on the monstrous Janjaweed militia and the government.

The chief fact we found as we tried to manage the rules of the international system in 2005 was a high level of dysfunctionality. Nobody really knew what was on the minds of the key players in the African Union. The United Nations Secretary General didn’t know what was possible at NATO. NATO itself was uncertain about getting involved in Africa. Some Europeans seemed more interested in protecting their African turf than in action that might help those at risk. Meanwhile, the only organization that seemed genuinely interested in taking action, the African Union, was hobbled by a grievous lack of resources and capacity, and didn’t know how or whom to ask for help.

So what do you need to deal with a situation like Darfur? You need soldiers, and they had better be well trained and well led, otherwise you can end up (as the UN unfortunately has on more than one occasion) with peacekeepers who also dabble as sexual predators on the populations they are supposed to be protecting. You need equipment, like armored personnel carriers, and better still, helicopters. You need a mandate that enables your soldiers to take effective action, so they’re actually able to protect the locals in danger (not just to protect, as was notoriously the case in Darfur, the cease-fire monitors). Above all, you need the political will to take action.

And you really need to have figured out how to put together all of the above before a crisis spirals out of control. That means you’ve got to do the tedious work of getting people, governments, and institutions to think about what they need and plan in advance on how to get it. It means a hundred different letters and memorandums of understanding. The machinery of international politics was not developed to address problems such as Darfur. If we want to address them, and we must, then we have to retool and refine what we’ve got. To that end, the Gingrich-Mitchell report included a number of recommendations on things like “capacity-building,” an unlovely bit of foreign policy jargon, but one that nonetheless captures the imperative to close the gap between what you have and what you need.


Fast-forward a few years: Making the fact-finding rounds again, this time with the Genocide Prevention Task Force, I was astounded to see that all of the things we recommended in Gingrich-Mitchell were starting to happen. I don’t say these changes occurred because Gingrich-Mitchell recommended them. But we had clearly been onto something in terms of identifying the gaps and roadblocks in the international system.

Far from resisting American or European assistance on neocolonial or any other grounds, the African Union and other organizations on the continent welcome help. They are increasingly finding the political will to confront the continent’s malefactors. They have been working to develop “early warning” systems. They have the troops, but they need training and equipment before they will be fully prepared to act swiftly in response to trouble, and that’s where the developed world can be useful.

NATO, meanwhile, is in the process of figuring out how to do more in partnership with others and is favorably disposed to helping out with peace building and peacekeeping missions conducted under UN or other auspices. A deputy secretary general at NATO now has the responsibility to serve as the focal point for engagement with other organizations and institutions. A document outlining how NATO will work with the UN has been approved. And there is now a NATO liaison officer to the African Union.

The emphasis on Africa is obvious, but mass atrocities are not, of course, a problem unique to Africa. The charter of the Association of Southeast Asian Nations now includes, for the first time, a provision on human rights. The UN Secretary General now has a special adviser on the “responsibility to protect” as well as a special adviser on the prevention of genocide. These offices are small, and they necessarily view their subjects from a UN perspective, which is too limiting for U.S. policymakers. But again, the more constructive the UN can be, the better.

One could go on. The point is that governments and international and regional organizations have begun to treat the problem of preventing genocide and mass atrocities with the seriousness the subject demands. On the home front, the Genocide Prevention Task Force offered a large dose of specific guidance on internal government reform that holds out the promise of more effective and timely policymaking. This is no place for a discussion of the specifics of the interagency process and military planning procedures. Suffice it to say that better internal organization is within reach.

The missing institutional piece on the international scene now, it seems to me, flows from the absence of coordination and mutual awareness among the various parties that are now taking the issue seriously. The Task Force recommended that the U.S. government undertake a “major diplomatic initiative” whose purpose would be to put together a formal network linking all the parties that engage on the issue—governments, non-governmental organizations, and regional and international institutions. The idea would be to share information and strategize responses to emerging threats.

The report does not quite say so, but it would be prudent to have someplace to go where people with a record of taking the issue seriously and with genuine moral authority gather, in the all-too-likely event that the UN Security Council finds itself paralyzed once again in the face of mass atrocities. Such a network would have no legal authority, but it might well have moral authority of the sort that contributes to the generation of political will.

In the end, unsurprisingly, effective action may come down to U.S. power and will. Those of us who see an imperative for action in these cases should welcome encouragement to that end from wherever it may come. And realistically, it would most likely be due only to very poor diplomacy if the United States found itself without supporters and allies in preventing or stopping genocide.


The response to Darfur has to be judged a failure. But it has perhaps been a constructive failure that has galvanized people to think about how to make the system more nimble in response to gathering dangers. Those with a profound distaste for “nonconsensual military intervention”—that would be an “invasion” to the plain speakers among us—should be all the more concerned about timely action to identify the gathering danger of mass atrocities and nip the problem in the bud. Those with a will to argue for whatever is necessary to halt a slide into mass slaughter must realize that they will be most effective in galvanizing a response if they amass a chorus of the like-minded to speak as one on the moral imperative.

But we cannot assure ourselves that our best planning will always enable us to act early, nor can we count on having a phalanx of the like-minded alongside us. In the extreme case, halting or failing to halt genocide has come down to whether the political will exists within the United States to act. We will not be spared from such decisions in the future. If we are serious, we have to be willing to take upon ourselves the burden of providing the leadership, the arms, the troops, and the resources, and of bearing the casualties, the reversals of fortune, and the inevitable complaints and second-guessing.

Because the would-be genocidaires are out there, thinking about it: whom to kill; how many; how to do it. Whether they can get away with it.