(p.300) Appendix 5 Visual Bias
To determine the amount of coverage, the campaign story was treated as the unit of analysis. The coding instrument differentiated between multiple-party campaign stories (featuring candidates from more than one political party) and single-party stories (featuring one political party’s candidates). A brief mention of another party’s candidate without visual material was coded as a single-party story. When a second party’s candidate was shown and/or presented in a sound bite, the campaign story was coded as a multiple-party story.
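The decision rule above can be expressed as a small classification function. This is an illustrative sketch only; the field names (`parties_shown`, `parties_in_soundbites`, `parties_mentioned`) are hypothetical and do not come from the study’s codebook.

```python
def classify_story(parties_shown, parties_in_soundbites, parties_mentioned):
    """Return 'multiple-party' or 'single-party' for one campaign story.

    parties_shown:         set of parties whose candidates appear visually
    parties_in_soundbites: set of parties whose candidates speak in a sound bite
    parties_mentioned:     set of parties referenced at all, including
                           verbally-only mentions (hypothetical field)
    """
    # A second party counts toward multiple-party status only if its
    # candidate is shown and/or presented in a sound bite; a brief verbal
    # mention without visual material does not qualify.
    substantively_featured = parties_shown | parties_in_soundbites
    if len(substantively_featured) > 1:
        return "multiple-party"
    return "single-party"

# Party B is only mentioned verbally, never shown or heard:
print(classify_story({"A"}, {"A"}, {"A", "B"}))  # prints: single-party
```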
In the case of a multiple-party story, coders determined which party’s candidates dominated the story, with a “no one party dominates” option available for this category. Dominance meant that a particular party’s candidates were featured more often visually, had more opportunity to speak, or were described more thoroughly than the opposition. We employed these categories rather than attempting to measure the duration of each candidate’s appearance because television news covers multiple-candidate stories in an integrated way, making it often impossible to determine where one candidate’s coverage ends and another’s begins.
Visual weight of coverage was assessed using the individual campaign story as the unit of analysis. Six story types were identified: reader, voice-over (VO), voice-over/sound-on-tape (VO/SOT), interview, package, and in-depth package, along with an “other” category to ensure exhaustiveness. The position of campaign stories was coded as lead, before the first commercial break, or after the first commercial break.
(p.301) Two editing and nine camera maneuver categories were used in the structural features analysis. For editing, the Goldilocks effect was recorded in multiple-party stories; the options included Democrats, Republicans, and other candidates, as well as “no sound bite featured.” Lip flap was recorded in duration (seconds), using the individual candidate as the unit of analysis.
Camera maneuver categories included shot angle, length, and movement, with the individual candidate as the unit of analysis. The codebook featured several photographic examples illustrating the threshold for counting a shot as angled and demonstrating shot lengths. The duration of shots in which candidates appeared in observable low and high angles was recorded in seconds. In addition, four different shot lengths were measured. The extreme close-up shot was defined as one in which the candidate’s face appears to fill the entire screen without the shoulder or chest area visible; this might include a visual cut-off of the candidate’s chin and forehead. The close-up shot shows the candidate’s full head and shoulders and a small portion of the chest area (well above the waist and elbows). A medium shot depicts the candidate’s full head, shoulders, and waist area but not the full body. The long shot reveals the full body, as well as contextualizing information such as other objects and people. The durations of these shots featuring candidates were recorded in seconds.
The frequency and duration of zoom-in (movement from a longer to a closer shot) and zoom-out (movement from a closer to a longer shot) camera perspectives were measured. Finally, the duration of eyewitness camera shots of candidates was recorded in seconds. This viewpoint was defined as one in which the camera is placed on the operator’s shoulder while subjectively pursuing the action. Such shots are relatively easy to distinguish from shots filmed by a stationary camera secured to a tripod.
Coders and Reliability
A post hoc reliability check was performed on 20% of the sample. Coders maintained an acceptable level of agreement (Krippendorff’s alpha = .83), with a minimum of 81% and a maximum of 100% agreement for the categories reported in Chapter 5.
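Krippendorff’s alpha corrects raw percent agreement for the agreement expected by chance. For illustration, the nominal-data version of the coefficient can be computed from scratch as follows; this is a generic sketch of the standard formula, not the software used in the study.

```python
from collections import Counter
from itertools import combinations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    units: list of lists; each inner list holds the values the coders
    assigned to one unit (units coded by fewer than two coders are skipped).
    """
    o = Counter()  # coincidence matrix: o[(c, k)] counts c-k value pairs
    for values in units:
        m = len(values)
        if m < 2:
            continue  # no pairable values in this unit
        for a, b in combinations(values, 2):
            # each pair contributes to both ordered cells, weighted 1/(m-1)
            o[(a, b)] += 1.0 / (m - 1)
            o[(b, a)] += 1.0 / (m - 1)
    n_c = Counter()  # marginal totals per category
    for (c, _k), count in o.items():
        n_c[c] += count
    n = sum(n_c.values())
    # observed disagreement: off-diagonal coincidences (each pair once)
    disagreement = sum(count for (c, k), count in o.items() if c != k) / 2
    # expected disagreement term: sum of n_c * n_k over category pairs
    expected = sum(n_c[c] * n_c[k] for c, k in combinations(n_c, 2))
    if expected == 0:
        return 1.0  # only one category observed; alpha is degenerate
    return 1 - (n - 1) * disagreement / expected

# Two coders, four units, one disagreement:
ratings = [["a", "a"], ["a", "a"], ["a", "b"], ["b", "b"]]
print(krippendorff_alpha_nominal(ratings))  # ≈ 0.533
```

Note how alpha (≈ .53 here) falls well below the raw 75% agreement on these four units: with so few units and a skewed category distribution, much of the observed agreement could arise by chance.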