An important characteristic of the independent variable is its flexibility. In this experiment, bad news has been operationalized by its nature, not by a particular topic. As will be shown, the variable should have multiple administrations within a set period of time. However, this does not mean that the topic of bad news, or even the release instrument, must be identical for each administration. Each administration of bad news will probably address a separate topic. Nonetheless, because each topic falls within the category of bad news, the independent variable remains consistent and valid.
Procedure: Rule of threes
A general "rule of threes" procedure serves as the rule of thumb in administering the experiment. First, good news and bad news occur naturally; thus, they should be released in the order they occur, even if concurrently. Validity is important in administering the experiment, meaning that it should be free from experimenter bias. Therefore, for every three instances of bad news, there should be three instances of good news. This ratio is reasonable and natural, not artificial. An active military installation should be able to identify three cases each of good and bad news over a three-month period with little difficulty. Consequently, the formula for administering the independent variable and measuring the dependent variable has a 3/3/3 rhythm: three "good news" press releases and three "bad news" press releases over three months. Repeated application of the independent variable over a period of time is imperative to give the receiver time to assimilate the news and alter perceptions (Schultz, Tannenbaum & Lauterborn, 1997). This formula allows the community to absorb the good and bad news naturally and to adjust perceptual judgments, if at all, over time. Choosing the time frame is as much art as science, and saliency may become a limitation of the experiment. However, given the random nature of news, three months should be adequate without loss of saliency.
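The 3/3/3 rhythm described above can be sketched as a simple balance check on an installation's release log. The function name, log entries, and dates below are illustrative assumptions, not part of the original design:

```python
from datetime import date

# Hypothetical sketch of the 3/3/3 check: over a three-month window, the
# release log should contain three "good" and three "bad" releases.
def balanced_window(releases, window_start, window_end):
    """releases: list of (date, kind) tuples, kind being 'good' or 'bad'."""
    in_window = [kind for d, kind in releases if window_start <= d <= window_end]
    return in_window.count("good") == 3 and in_window.count("bad") == 3

# Illustrative log: releases appear in the order the news occurred.
log = [
    (date(2024, 1, 5), "bad"), (date(2024, 1, 20), "good"),
    (date(2024, 2, 3), "good"), (date(2024, 2, 17), "bad"),
    (date(2024, 3, 2), "bad"), (date(2024, 3, 28), "good"),
]
```

A log like this also documents that releases followed the natural order of events, which supports the validity requirement noted above.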
It is important to note that the distribution of good news is normal; it is not part of the manipulation of the independent variable, which is the distribution of bad news. To maintain ecological validity, the experiment must not isolate the independent variable from good news. Good news should be released as it occurs, per standard operating procedures. Bad news, however, which usually is not released but will be in this experiment, is controlled. Simply releasing bad news manipulates this variable, because the norm is nondisclosure. Manipulation of the independent variable, then, is simply disclosure rather than nondisclosure.
Further, the pretest-posttest design has limitations. Ideally, the three months leading up to the pretest measurement should be free from crises or other unpleasant installation occurrences in which the military released substantive bad news to the community. If content analysis of base or community media indicates that negative occurrences or bad news might bias the pretest, then the experiment should be deferred until the pretest can be administered within a three-month window clear of mishaps, crises, or general bad news. Similarly, if an installation crisis erupts within the three-month administration of the independent variable, then the experiment should be discontinued and rescheduled, as external validity will have been lost.
At the end of three months, the survey contained in the appendix should be distributed to and collected from members of the community. This experiment utilizes a convenience sample obtained through nonprobability sampling procedures (Sommer & Sommer, 1997). The goal is still to collect as random a sample as possible, with the sample size representing ten percent of the community's population, not to exceed 3,000 surveys. The sample should be as large as possible, constrained only by the actual size of the community and the availability of military resources. A larger sample increases statistical power and the reliability of any findings, especially given the number of analyses to be performed. Naturally, experiment administrators should distribute more surveys than they expect to be returned, as many people will not participate. Administrators may wish to include a personalized, well-written cover letter with the survey to increase return rates; a cover letter must be included if the survey is sent through the mail.
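The sampling arithmetic above can be sketched as follows. The 30% expected response rate is an illustrative assumption for the oversampling step, not a figure from the original design:

```python
import math

def surveys_to_distribute(community_population, expected_response_rate=0.30):
    """Return (target sample size, number of surveys to distribute)."""
    # Target sample: ten percent of the community, capped at 3,000 surveys.
    target = min(round(community_population * 0.10), 3000)
    # Oversample to compensate for expected nonresponse.
    distribute = math.ceil(target / expected_response_rate)
    return target, distribute

# Example: a community of 25,000 yields a target of 2,500 completed surveys.
target, distribute = surveys_to_distribute(25000)
```

Administrators who consult a professional market research firm would likely substitute the firm's own response-rate estimates for the assumed default.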
Distribution and collection methods vary and can be determined by the funds allocated for the project and by whether the command chooses to consult a professional market research company. If funds are limited and the public affairs office conducts the survey itself, there are several convenient options for distributing and collecting the surveys. First, the command can use mall or sidewalk intercepts. Second, the command can mail the survey "to resident," separated by zip code, and enclose a return envelope. Third, servicemembers residing off base in the community can take surveys home and distribute them to their neighbors.
Measuring instrument: the survey
The survey serving as the measuring instrument was constructed with many variables in mind. It utilizes closed questions arranged in a matrix, which allows easier analysis (Sommer & Sommer, 1997). Further, the survey contains relatively few questions: only 14 in the background section and 21 in the questionnaire section. The questions are balanced, with a nearly equal split among negative, neutral, and positive polarities (see Table 2). The neutral questions tend to measure respondent opinion toward factors outside of military control, such as the respondent's opinion of media objectivity, or of whether the community really desires a relationship with the local installation. These neutral questions serve as additional research to help the local installation interpret findings, and they may assist in future research efforts. All of the negatively- and positively-framed questions relate to one of the three components of credibility, as operationalized; Table 3 contains a breakdown of which questions relate to each component. Some questions relate to more than one component. Last, the survey's question order is random, to prevent respondents from anticipating questions and thus to keep their interest.
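The randomization of question order can be sketched as a simple shuffle that preserves the polarity balance described above. The question labels, the exact 7/7/7 polarity split, and the fixed seed are illustrative assumptions, not details taken from the actual instrument:

```python
import random

# Illustrative question pool: a near-equal split of 21 questions across
# negative, neutral, and positive polarities (placeholder labels).
questions = (
    [("negative", f"N{i}") for i in range(7)]
    + [("neutral", f"U{i}") for i in range(7)]
    + [("positive", f"P{i}") for i in range(7)]
)

rng = random.Random(42)   # fixed seed so every printed survey is identical
order = questions[:]      # copy so the master list stays in canonical order
rng.shuffle(order)        # random order prevents respondents anticipating questions
```

Fixing the seed keeps every distributed copy identical, so the randomization varies question order without introducing differences between respondents.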
The background section of the survey separates respondents by demographics, including gender and age. Respondents are further qualified by their media consumption patterns, their frequency of contact with military personnel, and how long they have lived in the community. Questionnaire responses can then be analyzed against these qualifying background questions.
In addition to the limitations already mentioned, administrators should consider several others before implementation. Administering the survey requires resources that may not be available within most public affairs offices; organizations may need to hire professionals to assist in conducting the survey and interpreting the results. Additionally, in communities where the military provides the vast majority of the economic base, it may be difficult to find people who are not affiliated with the installation. Every experiment has limitations, and the presence of limitations in this experiment should not deter its implementation (Sommer & Sommer, 1997).
Comments? Contact Bill Pierro