As discussed in the measurements section, the survey allows detailed analysis by demographics and by qualifying factors, such as the respondent's opinion of the media and the respondent's perception of the community's attitude toward the military. Analysis of these correlations can reveal respondent bias, strengthening or weakening confidence in the statistical findings.
Statistical analysis of results is imperative and can be accomplished by several means. First, the command can pay a market research company to generate statistical findings based on the raw data. Second, the command may have internal assets to generate findings with commercial statistical software. Third, the command could provide the raw data to the University of Oklahoma's Communications Department. Either the department or military students attending the "DoD Short Course" sponsored by the department can analyze the data and produce statistical results.
Based on the analysis, conclusions and correlations can be drawn to support the alternative hypothesis. The experimenters can also validate the models and theoretical foundations underlying the hypothesis, or revise them accordingly. No finding is universal; results may vary across the country depending on the nature of the installation's community. Because the design is quasi-experimental and the sample is nonrandom, the generalizability of findings is limited. Nonetheless, installations, their public affairs officers, and their commanding officers will find great utility in conducting the experiment and learning from the findings. The command may wish to conduct follow-up surveys to see whether perceptions and attitudes continue to change.
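A follow-up survey invites a simple wave-to-wave comparison. The sketch below, with entirely hypothetical scores, compares mean attitude between a baseline and a follow-up wave using Welch's t-statistic; a full analysis would also compute degrees of freedom and a p-value (e.g., with scipy.stats.ttest_ind).

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical attitude scores (1-5 Likert) from a baseline survey and
# a follow-up wave at the same installation; values are illustrative.
baseline = [2, 3, 3, 2, 4, 3, 2, 3, 3, 2]
follow_up = [3, 4, 3, 4, 4, 3, 4, 4, 3, 4]

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with
    possibly unequal variances."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(b) - mean(a)) / se

shift = mean(follow_up) - mean(baseline)
t = welch_t(baseline, follow_up)
print(f"mean shift = {shift:.1f}, t = {t:.2f}")
```

A large t-statistic would suggest that the shift in mean attitude between waves is unlikely to be sampling noise, though the nonrandom sample still limits how far that inference can be pushed.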
Every experiment has limitations, and this one is no exception. Administrators of this experiment should review the limitations discussed in the measurement section of this study and apply them when analyzing their research data. Limitations, however, do not negate merit and utility (Sommer & Sommer, 1997).
Comments? Contact Bill Pierro