Moving forward: RQ4. What did voters know about 2016’s major campaign stories?
If the press exists to inform the public, how can its performance be assessed? Journalists often respond to critique by noting that partisans on both sides are unhappy with their reporting. However, this sort of reflexive framework, a fundamentally moon-based journalism that can only reflect back whatever complaints are aimed at it, can't really tell us anything about how well the public has been served. A better way is to take a sun-based approach and figure out what news organizations have actually informed the public about by measuring public knowledge directly.
This isn't as easy as it might sound, and there are a variety of approaches in political science to measuring political and civic knowledge (and other kinds of public knowledge in other areas). Large-scale surveys typically use true/false or multiple-choice questions by necessity, which have the downside of allowing educated guesses, or at least of giving respondents an extra prompt to recall information that might otherwise have been inaccessible in memory. For example, this BuzzFeed survey, while illuminating, doesn't tell us anything beyond whether people think they heard about a story, which could be accurate or could be the result of a bunch of cognitions running together. Another approach is open-ended questions, which ask respondents to describe what they know about a particular topic while giving them as little as possible in the prompt.
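To make the contrast concrete, here is a minimal sketch of how an open-ended response might be scored against a rubric of verifiable details, so knowledge comes out graded rather than as a guessable true/false hit. The rubric and scoring function below are invented for illustration, not an actual coding scheme, which in practice would be built by human coders from pilot responses:

```python
# Hypothetical rubric-based scoring for one open-ended knowledge item.
# The keywords are illustrative placeholders, not a validated dictionary.
RUBRIC = {
    "private_server": ["private server", "personal server"],
    "secretary_of_state": ["secretary of state", "state department"],
    "fbi_investigation": ["fbi", "comey", "investigation"],
}

def score_response(text: str) -> int:
    """Count how many distinct rubric elements a free-text response mentions."""
    text = text.lower()
    return sum(
        any(keyword in text for keyword in keywords)
        for keywords in RUBRIC.values()
    )

print(score_response("She kept a private server while Secretary of State, "
                     "and the FBI investigated it."))  # -> 3
```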
This is a particularly important question for examinations of the 2016 U.S. presidential election, and for elections in liberal democracies going forward. The reason is “fake news,” a term that in the last few weeks has come to cover misinformation, conspiracy theories, propaganda, hoaxes, and, to a certain extent, even parody. Although the political commentary community doesn’t have a great conceptual handle on “fake news” yet, we do know the election featured widespread hacking and leaking of private information, as well as several stories that compounded upon themselves in partisan information loops fed by motivated reasoning. Occasionally these stories came up for air, but it’s reasonably likely that their main aggregate effect was simply to toxify the entire news environment.
Unlike some of my other question generation posts, this is a question I’m actually planning a study around. What I’m really interested in is what information people had available when certain story concepts were primed by events in the campaign, or at least what they have that can still be primed now. Specifically, I’m going to use open-ended questions asking survey participants to describe, in as much depth and detail as they can, three “scandal” stories about each of the two major-party candidates: for Hillary Clinton, her emails, her paid speeches, and the Clinton Foundation; and for Donald Trump, his tax returns, the accusations of sexual assault against him, and the Trump Foundation. I’ll also ask about two fact-focused post-election topics: the percentage of the popular vote won by each candidate, and how turnout compares with 2012. After those knowledge questions, I’ll include an extensive battery of media and information behavior questions, political attitudes and behaviors, and a couple of public opinion perception items.
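For concreteness, the open-ended block could be organized roughly as in the sketch below; the prompt wording is a placeholder for illustration, not the text that will actually be fielded:

```python
# Hypothetical outline of the knowledge questions; prompts are kept sparse
# so respondents get as little recall help as possible.
KNOWLEDGE_TOPICS = {
    "Hillary Clinton": ["her emails", "her paid speeches", "the Clinton Foundation"],
    "Donald Trump": ["his tax returns", "the accusations of sexual assault against him",
                     "the Trump Foundation"],
}

FACTUAL_ITEMS = [
    "About what percentage of the popular vote did each candidate win?",
    "How did turnout in 2016 compare with turnout in 2012?",
]

for candidate, topics in KNOWLEDGE_TOPICS.items():
    for topic in topics:
        print(f"[{candidate}] Describe, in as much depth and detail as you can, "
              f"what you know about {topic}.")

for item in FACTUAL_ITEMS:
    print(item)
```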
These results could be paired with content analysis (or reports such as this one from the Shorenstein Center on campaign news coverage) to see how well knowledge matches news volume, but I expect them to be interesting in their own right. The preliminary question that got me thinking about this study back in October was whether people distinguish the two ongoing Clinton stories involving email — the server she used as Secretary of State, and the hacks of the DNC and John Podesta. I’ve got five dollars American that says that on average they don’t.
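On the coverage-matching point, once the open-ended responses are coded, the knowledge scores could be lined up against per-topic coverage volume with something as simple as a rank correlation. The numbers below are invented placeholders for illustration, not figures from the Shorenstein report or from any fielded survey:

```python
# Hypothetical check of how coded knowledge tracks coverage volume.
# Both columns are made-up placeholder values.
from scipy.stats import spearmanr

topics = ["emails", "paid speeches", "Clinton Foundation",
          "tax returns", "sexual assault accusations", "Trump Foundation"]
mean_knowledge_score = [2.4, 0.9, 1.1, 1.6, 1.8, 0.7]  # mean rubric score per topic
coverage_volume = [310, 40, 70, 120, 150, 30]           # e.g., story counts per topic

rho, p_value = spearmanr(mean_knowledge_score, coverage_volume)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```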