Measurement

Assessing the influence of military public affairs on media framing is by nature a qualitative undertaking: the magnitude of effect on public opinion is difficult to measure objectively. However, research has shown that the salience ascribed to an issue is a direct result of its frequency of appearance (Iyengar & Simon, 1993). This finding, coupled with the theoretical perspective of agenda-setting, can best be utilized in a pre-test, post-test content analysis design. Content analysis, as described by Berelson (1952), is an objective, systematic, and quantitative description of the manifest content of communication. The ultimate goal of content analysis is to investigate the relationship of independent and dependent variables.
Within the context of this paper, the independent variable being manipulated is the presence or absence of a Joint Crisis Information Response Team (JCIRT). The dependent variable, which will be the subject of content analysis, is civilian media coverage of military involvement in a crisis situation.
Using the techniques developed by Berelson (1952), the first step in producing an effective content analysis is to formulate a hypothesis or research question to be answered. In this study, the original problem statement identified an adverse effect on issue framing by civilian media sources, resulting from the sluggish, ad hoc, and uncoordinated response of military public affairs. Keeping this problem in mind, a suitable hypothesis may be "The organization and operational employment of a military JCIRT creates a positive effect on civilian media framing of crisis situations." In order to test this hypothesis, an original content analysis must be conducted, referring to recent media coverage of military crises. This analysis will constitute the pre-test and will provide the framework of previous research necessary to assess the impact of manipulating the JCIRT independent variable.
The second step in generating an effective content analysis study is the identification of the sample to be analyzed. The potential universe available for analysis in this study consists of all civilian media coverage of military crisis situations. In order to narrow the universe for ease of data gathering and accuracy, the sample may be defined by time of crisis occurrence, and limited exclusively to television broadcast news and print media in the form of newspapers. To further define the universe and aid in data collection, the sample can be confined to broadcast members of the five major networks (ABC, CBS, NBC, CNN, and FOX) and three major national newspapers (The New York Times, The Washington Post, and USA Today). Using the split-half sample size determination technique, the number of media selections analyzed may be reduced if the reliability derived from a pilot sample is high. The sample unit for this study would consist of an entire newspaper article or entire broadcast piece.
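As a minimal sketch of how the sample frame described above might be drawn (the per-outlet piece counts, sample size, and `draw_sample` helper are hypothetical illustrations, not part of the study design):

```python
import random

# The eight outlets named in the sample definition.
OUTLETS = ["ABC", "CBS", "NBC", "CNN", "FOX",
           "The New York Times", "The Washington Post", "USA Today"]

def draw_sample(universe, k, seed=0):
    """Randomly select k sample units (whole articles or broadcast pieces)."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    return rng.sample(universe, k)

# One entry per media piece, restricted to the eight outlets and the
# crisis window; 25 pieces per outlet is a hypothetical placeholder.
universe = [
    {"outlet": outlet, "piece_id": i}
    for outlet in OUTLETS
    for i in range(25)
]

subset = draw_sample(universe, k=50)
```

If a pilot coding of such a subset shows high split-half reliability, the full sample size could then be reduced accordingly.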
The third task in developing a content analysis study lies in defining the categories to be applied. The subject matter category, or substance of the analysis, would be confined to media coverage of military actions in crisis situations. The form of the subject, as conceptualized above, would be limited to newspapers and television broadcasts. The unit of analysis employed for this study would be the presence or absence of a cited military public affairs official (or operational commander) in the entire contents of the media piece. The unit of enumeration for each sample would be conceptualized using a nominal scale to indicate presence or absence. In order to ensure that the study is exhaustive, all other mass media sources would be evaluated for relative advantage and influence on public opinion. Additionally, a crisis situation would be firmly defined. Mutual exclusivity might be addressed by eliminating the potential for coding foreign military, NATO, or allied public affairs officials as cited sources.
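The nominal presence/absence coding described above can be sketched as a simple indicator function. The indicator and exclusion phrases below are hypothetical illustrations, not the study's actual coding instrument; a real code book would enumerate them exhaustively:

```python
# Hypothetical indicators that a U.S. military public affairs official
# (or operational commander) is cited in the piece.
INDICATOR_PHRASES = [
    "public affairs officer",
    "pentagon spokesman",
    "military spokesperson",
    "operational commander",
]

def code_piece(text: str) -> int:
    """Nominal scale: 1 if a military PA official is cited, else 0.

    Foreign, NATO, or allied officials are handled by mutual exclusivity:
    their titles simply never appear in INDICATOR_PHRASES, so they are
    never coded as a presence.
    """
    lowered = text.lower()
    return int(any(phrase in lowered for phrase in INDICATOR_PHRASES))
```

Each coder would apply such a rule to the entire article or broadcast transcript, recording a single 1 or 0 per sample unit.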
The next step in implementing an effective content analysis would be the selection and training of coders and a thorough explanation and test of coding instruments. Coders should be provided with background information regarding the content of the material that they will be coding and how to accurately identify and report units of content analysis. For purposes of this study, coders should have a clear impression of how much of a media piece is subject to analysis and what indicators would constitute the presence or absence of cited military public affairs officials. After training coders and providing an orientation to coding instruments, a test of inter-coder reliability should be conducted on a small sample to prove the reliability of coding instruments and instructions. The fifth step in the process of content analysis is the actual implementation of the coding process. Coders will use a code book to record results; it is imperative that coders work independently to eliminate bias in results.
Following the coding process, tests of reliability and validity must be performed to justify the construct and content of the method of analysis. Tests for inter-coder reliability must yield a result of .85 or greater to prove that the construct of the process and coding instruments used were employed effectively across a variety of coders. Reliability assessments of the coded data can then be conducted using the Holsti formula, Scott's Pi, or Cohen's Kappa. Positive results in reliability tests will indicate that the method may be successfully replicated in future studies, and that the data analyzed has strong concurrent properties. Validity is then tested to determine whether the results of coding can accurately be applied to the hypothesis. If the results are valid, the hypothesis and the study performed should also be generalizable and global in its potential for future applications.
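The three reliability coefficients named above can be computed directly from two coders' paired nominal codes. As a sketch in pure Python (the coder data used to exercise it would be the 1/0 presence codes from the study):

```python
from collections import Counter

def holsti(a, b):
    """Holsti's coefficient: 2M / (N1 + N2), where M is the number of
    agreements; with equal-length code lists this is percent agreement."""
    agreements = sum(x == y for x, y in zip(a, b))
    return 2 * agreements / (len(a) + len(b))

def scotts_pi(a, b):
    """Scott's Pi: corrects observed agreement for chance agreement
    estimated from the pooled category proportions of both coders."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    pooled = Counter(a) + Counter(b)
    expected = sum((count / (2 * n)) ** 2 for count in pooled.values())
    return (observed - expected) / (1 - expected)

def cohens_kappa(a, b):
    """Cohen's Kappa: corrects observed agreement for chance agreement
    estimated from each coder's own marginal proportions."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum((ca[k] / n) * (cb[k] / n) for k in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)
```

When both coders have identical marginals, Scott's Pi and Cohen's Kappa coincide; they diverge as the coders' base rates differ.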
The final step in the development and execution of a content analysis study would be an analysis of the results of the coding process for statistical significance. One of the rudimentary formulas for measuring this significance is the chi-square method. Through advances in computer technology and data processing capabilities, many new and user-friendly tests of statistical significance have been developed. One of the primary benefits of computer-based programs is that they will free the analyst from carrying out the tedious counting, sorting, identification, and analysis tasks inherent in most content studies (Budd, Thorp, & Donohew, p. 95). Ultimately, the greater the number of methods used to corroborate statistical significance, the more acceptable the content analysis results will be.
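As a sketch of the chi-square method applied to the kind of 2x2 table this design would produce (rows: pre-test without JCIRT vs. post-test with JCIRT; columns: presence vs. absence of a cited PA official; the counts below are hypothetical):

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected counts under the null hypothesis of independence.
    expected = [
        [row1 * col1 / n, row1 * col2 / n],
        [row2 * col1 / n, row2 * col2 / n],
    ]
    observed = [[a, b], [c, d]]
    return sum(
        (observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
        for i in range(2)
        for j in range(2)
    )

# Hypothetical counts: 18 of 60 pre-test pieces cite a PA official,
# versus 34 of 60 post-test pieces.
stat = chi_square_2x2([[18, 42], [34, 26]])
```

With one degree of freedom, a statistic above the .05 critical value of 3.841 would let the analyst reject independence between JCIRT employment and source citation.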
This page last updated on July 23, 1998.