Aside from the rather slippery question of whether a speaker’s expertise is appropriate, very little information is given about that expertise at all. We found a lot of reliance on phrases such as ‘scientists have found’ and ‘experts say’. Personally, I think we need to address this issue before we can even get on to whether the experts are the right ones. Although expertise may be implied through editing, and TV in particular can flag up institutional association and title, we rarely saw a contributor’s disciplinary background specified. Especially significant, I thought, was that in broadcast reports about new research we found little explicit reference to whether or not a particular contributor was involved in the research being reported (online reports often identify someone as ‘lead author’ or ‘co-author’). This lack of definition makes it hard for audiences to judge a contributor’s independence, and whether they are speaking on a topic they have studied in depth or simply working from anecdote.
What's wrong with science journalism? How did it become so dull and predictable? And how do we fix it?
My point was really about predictability and stagnation. The formula I outlined – using a few randomly picked BBC science articles as a guide – isn't necessarily an example of bad journalism. But science reporting is predictable enough that you can write a formula for it that everyone recognises, and once the formula has been seen it's very hard to un-see, like a faint watermark at the edge of your vision.
Professor Nisbet is a social scientist who studies strategic communication in policy debates and public affairs. His current work focuses on scientific and environmental controversies, examining the interactions between experts, journalists, and various publics. In this research, Nisbet examines how news coverage reflects and shapes policy, how strategists try to mold public opinion, and how citizens make sense of controversies.