Friday, October 15, 2010

Bridge-buying with the Post

What happens when clueless journalism runs head-on into jackleg social science? Well, we get the Washington Post buying itself a shiny new bridge to Arlington or Brooklyn or nowhere. Since this tale has brought such tidings of comfort and joy around the nattersphere, let's go for a drive:

Well, that's a relief! I wonder how we know.

A new analysis of political signs displayed at a tea party rally in Washington last month reveals that the vast majority of activists expressed narrow concerns about the government's economic and spending policies and steered clear of the racially charged anti-Obama messages that have helped define some media coverage of such events.

There's a lot to unravel here -- not just what constitutes a "vast majority" and how we tell "narrow concerns" about policy from "racially charged" messages about that Kenyan Muslim socialist colored guy in the White House, but what constitutes the sort of "new analysis" that rises to the attention of the Washington Post.


Emily Ekins, a graduate student at UCLA, conducted the survey at the 9/12 Taxpayer March on Washington last month by scouring the crowd, row by row and hour by hour, and taking a picture of every sign she passed.

Ekins photographed about 250 signs, and more than half of those she saw reflected a "limited government ethos," she found -- touching on such topics as the role of government, liberty, taxes, spending, deficit and concern about socialism. Examples ranged from the simple message "$top the $pending" scrawled in black-marker block letters to more elaborate drawings of bar charts, stop signs and one poster with the slogan "Socialism is Legal Theft" and a stick-figure socialist pointing a gun at the head of a taxpayer.


The journalistic threshold here appears to be "UCLA." In other words, invoke the name of a famous university and we can stop being skeptical. That's too bad, because quite a few questions get overlooked when the Mystic Appeal to Authority is invoked. First, this isn't a survey. It's a content analysis. The researcher isn't asking the signs what they think; she's assigning them to preconstructed categories. That means we -- meaning people who evaluate the "analysis" -- hold it to a particular set of standards for validity and reliability. Some of them are the ones we'd use in a survey and some aren't.

In either case, the first thing we need to deal with is whether our sample is appropriately representative. "Scouring the crowd, row by row, hour by hour" is bog-standard breathless journalism, but it isn't methodology. Which "the crowd," in which rows, in which hours, selected how? Is this a census -- representing all the signs at the rally -- or a sample ("every sign she passed")? If it's a sample, how big a population are we trying to describe with "about 250 signs"? What steps did we take to ensure that "every sign she passed" was a valid way of representing the whole?

The researcher might have done this; she might have written and executed a protocol rigid enough for us to believe that the "rows" and "hours" were chosen for some reason other than convenience -- or, worse, ideological propriety, which is the sort of accusation we try to head off with a competent sampling design. That isn't the sort of question reporters ask,* so it's no surprise this one didn't. But that doesn't mean it's not important. A convenience sample can illuminate a lot of things, but it can't talk about what all the signs at the rally looked like. Which points toward a set of related but distinct flaws.
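For the skeptical, here's a back-of-the-envelope simulation (all numbers invented, not Ekins' data) of why this matters. If the trait you're counting clusters spatially in the crowd -- say, one kind of sign congregates near the stage -- then "every sign she passed" along one route can badly misstate the crowd-wide proportion, while a genuine random sample of the same size lands close to the truth:

```python
import random

random.seed(12345)

# Hypothetical crowd of 10,000 signs. The trait of interest clusters
# near the stage (positions 0-1999). These rates are invented purely
# to illustrate the sampling point.
crowd = []
for position in range(10_000):
    near_stage = position < 2_000
    p = 0.15 if near_stage else 0.02     # the trait clusters spatially
    crowd.append(1 if random.random() < p else 0)

true_rate = sum(crowd) / len(crowd)

# "Every sign she passed": one walking route that happens to miss
# the area near the stage -- a convenience sample of 250 signs.
route = crowd[4_000:4_250]
convenience_rate = sum(route) / len(route)

# A simple random sample of the same size, for contrast.
srs = random.sample(crowd, 250)
srs_rate = sum(srs) / len(srs)

print(f"crowd-wide rate:    {true_rate:.3f}")
print(f"convenience sample: {convenience_rate:.3f}")
print(f"random sample:      {srs_rate:.3f}")
```

The convenience route comes in well under the crowd-wide rate, because it never walks through the cluster. That's the accusation a competent sampling design heads off.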

The hed isn't the reporter's fault, but it expresses a common error in the journalistic process: making value judgments about abstract statistics. I don't know what it means for "few" signs at this rally to express the sort of knuckle-dragging racism the story imputes to Tea Party coverage, because I don't know what the normal figure for knuckle-dragging racism at rallies like this one might be. "Fewer" is a fact; if I have this year's proportion of racist signage and it's smaller than last year's, the matter is settled. But "few" is an opinion. Apparently it's big news for the Post that "only 5 percent" of signs mention the president's race or religion, but absent some sort of baseline, it's meaningless. Editors are paid to know stuff like that.
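And even before the baseline question, "5 percent" of a sample this size is a fuzzier number than the hed lets on. Taking the story's figures at face value (about 250 signs, roughly 12 of them mentioning race or religion), a standard Wilson score interval sketches the uncertainty:

```python
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for a proportion of k successes in n trials."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# The story's figures, roughly: 250 signs, "only 5 percent" (~12 signs)
# mentioning the president's race or religion.
lo, hi = wilson_interval(12, 250)
print(f"95% CI: {lo:.1%} to {hi:.1%}")
```

With those numbers the interval runs from roughly 3 percent to 8 percent. So "only 5 percent" is both an opinion and an estimate -- and that's granting the sample was representative in the first place.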

And thus we're pointed back to -- oh, all right, editors should also know that "survey" and "study" aren't synonyms.** Here's where the Post goes deeply off the rails. Once again, this isn't a "survey." It's a content analysis. The sample isn't putting itself into categories: McCain vs. Obama, freedom vs. socialism, boxers vs. briefs. The sample is being assigned to categories by the researcher. The reporter, again, hasn't bothered to ask how those categories are built or validated, because reporters don't ask questions like that -- especially when the expert is from UCLA(!!!!). But we can make some guesses based on what we see.

Valid and reliable coding is what separates the sheep from the goats in content analysis. You can tell me all day long that something you see in news stories indicates "bias" or "satanism" or "Harvey the six-foot rabbit," but if you can't write your definitions clearly enough that I see what you're claiming to see, you'll have a hard time showing that you measured it. Reporters (and editors), when a "study" lands on your desk that purports to put content into categories the way this one does, ask how "intercoder reliability" was calculated and whether it was corrected for the possibility of agreement by chance.*** If the answer is "huh?", ignore the study.
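The chance-correction point is worth a quick sketch. Cohen's kappa is the standard first pass: it compares the agreement two coders actually achieved against the agreement you'd expect if they were assigning categories at random given their individual rates. The codes below are invented for illustration, not from Ekins' study:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement between two coders, corrected
    for the agreement expected by chance given each coder's category rates."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two hypothetical coders assign 10 signs to categories.
a = ["econ", "econ", "race", "econ", "const", "econ", "econ", "race", "econ", "econ"]
b = ["econ", "econ", "econ", "econ", "const", "econ", "race", "race", "econ", "econ"]

raw = sum(x == y for x, y in zip(a, b)) / len(a)
print(f"raw agreement: {raw:.2f}")            # → 0.80
print(f"Cohen's kappa: {cohens_kappa(a, b):.2f}")  # → 0.57
```

Eighty percent raw agreement sounds impressive until you notice that two coders who slap "econ" on nearly everything would agree most of the time by accident; kappa knocks the figure down accordingly. If a study can't report something like this, it hasn't shown its categories mean anything.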

That's "reliability," and as you've probably guessed, it's easy to get when you measure simple stuff. If you're coding screwball comedies for the presence of male and female characters, you're going to get high reliability. It's different when you get into video games; how do you code aliens, robots and were-creatures if your categories are based on Cary Grant and Roz Russell? Generally, categories get more interesting -- more "valid" -- as they get more complicated. A construct like "racism" is interesting precisely because it has a bunch of potential indicators: some of them blatant, most of them subtle. It takes a lot of work to write those rules. Walking around with the camera is the easy part.

You should see some problems emerging in the slideshow at this point. I have particular issues with 5 and 6. "The only reason you call it a living document is so you can KILL IT" is supposed to represent the health care bill, and "put down the golf clubs and pick up the Constitution" represents the "demise of the Constitution." That's out of tune with the Tea Party talk I see. The first one's about the Constitution; the second is a personal shot at Obama ("look how often he's played golf" was a persistent theme at Fox and talk radio all summer). I don't see how anybody who keeps up with the discourse of American politics can make the calls the "study" does.

That's why you write rules and why you tell the other coders to shut up and follow the codebook. If we saw the rules, we'd at least have a clearer idea of what the concepts mean.**** When the scholar says most of the signs reflect a "limited government ethos," we're at a loss for what she's seeing -- except that it's apparently everything up to and including "concern about socialism." Sorry, but proclaiming that the scary colored guy is giving the Red Army the keys to the White House hardly strikes me as a "narrow concern" about economic policy.

Now, let's not get in the habit of picking on other people's grad students too much, all right? She's still a student, and students are still learning. But along with buying her a good primer on content analysis -- damn, is it too much to ask that doctoral students in PoliSci pass an introductory course in masscomm theory too?

"Really this is an issue of salience," Ekins said. "Just because a couple of percentage points of signs have those messages doesn't mean the other people don't share those views, but it doesn't mean they do, either. But when 25 percent of the coverage is devoted to those signs, it suggests that this is the issue that 25 percent of people think is so important that they're going to put it on a sign, when it's actually only a couple of people."

Well, welcome to the big old world of agenda-setting. We're all about the transfer of salience here! Come to that, the transfer of salience is why half the signs in the sample are convinced their freedom is declining, or that the socialists are coming, or that the Constitution is in peril. That's what talk radio and Fox News have been telling them, and it worked.

What she's talking about is actually the stuff of undergraduate news writing courses. News is stuff that's important and/or doesn't happen very often. Deviance doesn't mean "man humps goat"; it's the academic way of saying "man bites dog." Open racism -- thankfully -- is pretty deviant in modern society, so nobody should be surprised that overt public racism draws media attention. In an analogy you've probably heard, we don't see the hundreds of airliner landings a day that go smoothly; we see the one a month that comes down too hard because of landing gear failure. Safe landings -- and people not making public troglodytes of themselves -- aren't news. Health-care debate carried out through pictures of a black guy with a bone through his nose ... yeah, sorry, that's news.

Is the Tea Party being unfairly portrayed? By the standards of the "protest paradigm," probably not. (A lot of this stuff has actually been studied and written about for a long, long time.) Coverage of out-group protests tends to ignore their message and focus on the deviant and disruptive: those hippies are skinny-dipping in the reflecting pool, screwing up my commute and singing their damn hippie songs in front of the kids. Who cares what they think about "free trade"? Whether Tea Party coverage is more or less respectful than traditional protest coverage is a great question. I'd speculate "more," because the grownup media seem to have been pretty thoroughly cowed when it comes to calling a loony a loony. But that's speculation, and if I was talking to a reporter and didn't make that clear, I'd share in the blame.

That doesn't mean the reporter is absolved of responsibility here. True, a reporter isn't a peer reviewer or a methodology adviser. It isn't the reporter's job to point out that the categories appear to reflect both sloppy design and ideological bias, or to ask whether the coding rules have been tested on some other humanoid entity before being loosed on the world. But it is the reporter's job to have a bloody clue about what a "study" can or can't do -- and to make some basic, triage-like judgments about whether the evidence at hand has some relation to the claims it's supposed to support. When you get an assertion that amounts to, oh, "Hey, only 6 percent of the signs in a sample that's already been screened for out-of-control looniness are openly bigoted = MEDIA FAIL," your job is to be skeptical. Please carry it out.

Social science ought to be a tool that illuminates how the world works. When I look at this set of content, I think I'm seeing stuff that helps show why a fairly significant organized political movement thinks Occupied Dearborn needs to be freed from the fell grip of Islamic law. Both the newspaper and the researcher seem to be missing the point. May both be endowed with clues before long.

* Yes, we ought to fix that.
** True, not every editor has to "know" this. It's possible to have a successful news organization in which many people don't know that baseball and basketball are actually different sports. Srsly! But the people who put the sports agate together had best have a big honking clue in a real hurry.
*** We'll be working on this Wednesday evening if you're in the midtown area.
**** So how would you code "The Kenyan Muslim socialist community organizer rolls joints in the Constitution" -- is that about the "demise of the Constitution" too?
