Empathy in the electronic age
In which I drone on endlessly
The story of deterrence is the story of irreversible personal consequence.

At some point in the story of civilisation, we broke our empathy box. These days, there are too many people, spread too far, for us to be able to care about any single one of them terribly much. We tend to reserve our compassion for people closer to home: our family, our friends, our tribe. And, of course, we ourselves.

Technology has connected us to an ever-increasing net of people, but it has done so at an ever-increasing remove. This has several consequences — many of them, to some extent, self-evident — where empathy is concerned. Firstly, as I discussed several months ago, it has fostered the rise of empathy-by-retweet, where westerners "participate" in geopolitics by expressing their support (or displeasure) at a distance. Secondly, we are even now witnessing a counterrevolution against the inexorably open social graph towards a more tightly self-curated network, where you choose to share not with your entire Facebook friends' list but with a preselected circle.

Thirdly, though, there is a growing awareness, on the part of some people, of the "dehumanising" impact of technology on social interaction. Partly this is purely self-serving; people, wounded by the slings and arrows of outrageous 4chan, sniff that the aggressors would never have dared speak their minds "to the face" of their tormentee.

One way of saying this is that we forget, or fail to consider, that there is an "actual human on the other side of the screen," as though people on IRC somehow think that they have been thrown into the midst of a group of particularly profane chatbots. This has become embedded in Internet lore: people are assholes not because they are assholes, but because technology permits them to forget that real people are impacted by the things they say and do.

I don't entirely buy this, particularly not in the touchy-feely form of the narrative advanced by xkcd and Google+ — that we could make things more "civil" by compelling people to be reminded of the humanity of others. I think we're pretty well aware that we're dealing with human beings, and I think it's telling that the "we do this because we forget they're real people" story is dragged out to explain why people are aggressive online and not, say, why they are promiscuous there also. Or why they are more comfortable with their self-identity. "It's easier to be a lesbian to words than to people" sounds silly, trite and dismissive because, well, it's silly, trite and dismissive.

I suspect that the real reason technology is dehumanising is in the effect it has of reducing or removing personal consequences. This is the role of "anonymity" in John Gabriel's "theory," as well as the role of distance: we say angry, hurtful things not because we have forgotten our target is a human being, but because there's nothing they can do about it.

If this sounds familiar, it's because this is similar to the logic employed by bullies around the world and stretching back into memories ageless. Bullies enjoy their power because it comes free of personal consequence — it's the hedonism that comes of being able to do whatever you want.

It looks like sociopathy, but it's not. Humans are, after all, essentially bullies, whether we like to think of it that way or not. When our victims can't fight back, and we don't have to deal with the consequences, we're pretty unconcerned with the suffering we cause; if you're aware of your global impact, and don't make radical changes to lessen it, then you're using your position of power to bully others. So it goes.

My point is that it's a natural impulse. Technology hasn't made us less human, it's simply allowed more of us to enjoy the free rein enjoyed by aggressive 7th graders for many years now.

And this is not new to Facebook. Technology has done this for a long time. Condom technology, for example, did not (and does not) somehow increase the atavistic tendencies of the average human. What it does, by reducing the personal consequences of a given action, is to reduce the deterrent to said action as well.

So you may judge that this is not always a bad thing and, lest you think I am about to spin this into a pitch for abstinence-only "education," no. Condoms are great. I'm merely making a point about what happens when technology strips away irreversible personal consequences. It makes us more of who we were already and, frequently, this is more primal than we would like to admit.

Deterrence serves to mediate our reptilian impulses. And in the same way that people can be governed by primal things like social posturing (bullies on Facebook or IRC) or sex (condoms), we can be governed also by anger or fear. In person, we lash out less commonly than we might otherwise, because there exists the possibility for — yes — irreversible personal consequences if we punch someone.

Or kill them.

The TASER is a device that allows police to stun a potential suspect. It's part of a growing arsenal of what are, charmingly, referred to as "less-than-lethal" alternatives to the Beretta or the billy-club. The theory, of course, is that in a situation where you would ordinarily kill someone to death, you simply zap them with force lightning instead.

But, of course, that's not what happens. The use of deadly force imposes the spectre of irreversible personal consequences on the officer — namely, it runs the risk of making them a killer and forcing them to complete what I have to imagine is reams of paperwork. When technology removes this deterrent — by giving a "less than lethal" option — we find that it's employed far more often.

Just like Facebook, the TASER has allowed people to indulge their inner bullies (in this case, the police). Absent consequences for physical aggression, they can employ it more often. Of course, this has the unfortunate side effect of presenting, on occasion, irreversible consequences for their victims, but curiously, despite the TASER's ability to kill, I do not see that possibility as entering into the calculus for use — it still seems to be viewed merely as an occasional and unintended consequence.

Again, though, I would find it overreaching to argue that the TASER makes cops more aggressive. More properly, I think, it simply allows them to exercise the aggression (driven often no doubt by fear) they naturally feel, absent the deterrent of irreversible personal consequence.

But that's small potatoes.

The United States is heavily invested in drone technology. Unmanned aerial vehicles, unmanned ground vehicles, robotic combatants. What we hear about is how many lives this has saved — how much less risk there is to "our men and women in uniform" when killing can be done without putting them in harm's way. One day, we are promised, unmanned drones will allow American soldiers to kill thousands of people without leaving the air-conditioned comfort of a California control room.

(left unstated is who the victims would be, because, as we know, America only kills bad people)

The silly apocalyptic view is that, one day, our robot warriors will, uh, forget that there are humans on the other side of the screen and murder them. I have no doubt that this will not happen. But there's a more reasonable, nearly as dire view built into this paradigm.

War is a tremendously destructive and costly activity, not just in terms of dollars spent and factories repurposed but in lives destroyed. The American experiment in Afghanistan and Iraq has cost us hundreds of thousands of man-years in people killed and irreparably crippled, psychologically scarred and physically maimed. Much of this, actually, remains more or less invisible. The United States abandoned the draft in favor of an all-volunteer army, and "all-vol" — much like social technology — inserted a distance between decision-making and decision-consequences. This, unsurprisingly, permitted us to act as bullies towards our troops and, inexorably, towards the people they were employed against.

But either way, there's a growing awareness of just how many people these wars have destroyed and, again, I'm consciously ignoring the Iraqis and Afghans — remember, personal consequence is the mechanism of deterrent. We're beginning to wake up and see how these wars have affected Americans. Will this cause us to rethink the employment of "force projection" and armed conflict? I'm not sure. "It is well that war is so terrible — otherwise we should grow too fond of it." These words, spoken by Robert E. Lee, are a century and a half old.

One thing people don't always realise is that conflict is just another tool in the arsenal of any group actor, beyond diplomacy or legislation. It's a means of achieving your will. We tend to think of the negotiating room and the war room as being of a fundamentally different character, but they're not. Actors that wish to be taken seriously always back their words with violence, and in the end words and force are essentially the same. Words compel you to pay your income tax or obey the speed limit; if you don't, you will be forcibly deprived of your liberty.

But nations choose to fight less often than they choose to talk because the opportunity cost is much higher. War, large scale war, requires retooling your domestic industry, compelling your people to austerity, and asking them to make what we used to rather quaintly call the "ultimate sacrifice," back when dying in uniform was reserved for people defending their country from attack. Iraq and Afghanistan show what happens when you create a military infrastructure that can operate without these restrictions: fewer personal consequences equals less deterrence.

And so to drones.

"If this were a sci-fi movie," essayist and EFF co-founder John Perry Barlow points out, "flying robot assassins would not be a weapon you'd associate with the good guys."

The United States is not just creating a system in which wars can be fought without human casualty (on our side, anyway) — or at least, that is not the only thing it is doing. In embracing such a system, it is also creating one in which wars can be fought without consequence, and this, inevitably, will reduce those things that deter us from starting them.

Indeed, they already have. "Drone attack" is, after all, what we would've called an "airstrike" some years ago. But because they do not require the commitment of nearly so many human resources, and because the consequences of failure are so much smaller (think Black Hawk Down here) we can embark upon them at our leisure. The United States has clearly demonstrated that it is willing to do so even if such actions are not desired by the host country. Would we be so bold if failure meant having to navigate another Gary Powers incident?

No, of course not.

The problem, of course, is that it's hard to oppose drones. Opposing robot soldiers means that you are, implicitly, in favor of putting real people in harm's way. Appealing to a global sensibility or universal empathy is not a way to win votes. But I will suggest that, just like Facebook, just like TASERS, just like any other tool that empowers our natural bullying instinct, this will come back to haunt us. Consequences, after all, are infrequently avoided and more commonly deferred.

Unless, I suppose, the terminators get us all first.