Evaluation of a Report on the Validation of a Consumer Food Behavior Questionnaire
Abstract
The article, “Observation Versus Self-Report: Validation of a Consumer Food Behavior Questionnaire,” seeks to evaluate food safety education programs by comparing consumers’ self-reports with their observed behavior in the kitchen. The study was conducted to determine whether consumers actually practice safe food handling procedures in order to prevent foodborne illness. The participants were low-income graduates of a nutrition education program, all of whom were volunteers. Researchers asked the volunteers to fill out a questionnaire about their food-handling behavior, then had them perform kitchen tasks designed to demonstrate those behaviors, and rated their performance on a scale. The researchers then compared the volunteers’ questionnaire answers with the observation reports to see which was more valid for evaluating consumer food practices. They found that “twenty-eight questions met the validity criterion (≥70% agreement between observed behavior and interviewed responses and self-reported responses)” (Kendall, 2004, p. 2578). The goal of the study is to understand where consumers fall short in food safety and to develop an accurate way to measure those shortcomings in order to guide food safety education programs.
Evaluation
When I started reading the article, I had a difficult time determining what the study’s actual research question was. I couldn’t tell whether the authors were evaluating education programs or the questions they used to evaluate consumer behavior; at one point I thought the aim was to evaluate food protection principles in reference to consumer food behavior. By the end of the introduction on the second page I finally saw a clearly stated goal for the study. The goals included developing validated behavioral questions that could be relied upon and addressing foodborne illness at its source, which the authors found to be the consumer, by strengthening weak areas in food safety education programs. Kendall (2004) identified the research problem by stating, “there have been no published studies designed as criterion-related validation studies of a food-handling behavioral questionnaire” (p. 2579). She goes on to explain how some researchers have compared their observed reports with other researchers’ results of self-reported behavior, but the two have never been done in the same study, and the results of such studies most often contradict one another. One component that was not part of the article, or that I was not able to identify, was a hypothesis; however, I could sense that the authors were working from the assumption that food education programs needed to be brought under an umbrella regulation in order to prevent foodborne illness in consumers’ homes. The article did a great job of defining key terms that were pertinent to understanding the analysis and results, including reliability and validity (and the various forms thereof).
The review of the literature was done fairly well. Kendall cited several sources in her review and covered each of them equally. Two of the articles recommended the behavioral standards on which food safety education should focus (Kendall, 2004, p. 2578), while others were used to show where other researchers had attempted this type of study but never carried it to the full extent. The reference dates span from 1974 to 2004; since the article was published in 2004, I wouldn’t go so far as to say that her references were out of date. All appear to be primary sources, and there is no evidence of bias in the article.
The study used both a survey (individual questionnaire) and semi-controlled observation as its research methodology. It borrowed some design elements from past research but also added elements of its own to address gaps the authors found in that research. The sample was drawn from volunteer graduates of a food safety education program; the criterion was that they had to be low-income. The participants were asked to fill out a behavior questionnaire about their food practices and then to perform several tasks for videotaped observation of their behavior in the kitchen, which was also rated on a scale by research assistants. The participants were aware of everything going on because of an informed consent form they were asked to sign before the study began. No pilot study was conducted. One measure that stood out to me was the “establishment of intercoder reliability” (Kendall, 2004, p. 2580). The article explained how the research assistants were brought to grade within roughly the same range of judgment so that they would produce similar results; they participated in various activities to ensure that they would all judge a particular situation the same way.
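To make the idea of intercoder reliability concrete, here is a minimal sketch, using hypothetical data rather than anything from the study, of one simple way such agreement can be checked: the percentage of items on which two research assistants assign the same rating.

```python
# Illustrative sketch only; the ratings and scale below are hypothetical,
# not taken from the study.

def intercoder_agreement(coder_a, coder_b):
    """Percentage of items on which two coders assigned the same rating."""
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return 100.0 * matches / len(coder_a)

# Hypothetical ratings (1 = unsafe, 2 = partially safe, 3 = safe) for eight
# observed behaviors, scored independently by two research assistants.
coder_a = [3, 2, 3, 1, 2, 3, 3, 1]
coder_b = [3, 2, 3, 2, 2, 3, 3, 1]

print(f"Intercoder agreement: {intercoder_agreement(coder_a, coder_b):.0f}%")
```

The closer this percentage is to 100, the more consistently the coders are applying the rating scale, which is what the training activities described in the article were meant to achieve.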
The data were analyzed with a statistical package “to assess the reliability of individual questions” (Kendall, 2004, p. 2580). The researchers also compared the questionnaire answers to the observations and interviews to determine criterion validity, which the article defined as “the extent to which the results from one instrument are correlated with those of another more accurate (and possibly more expensive) instrument – the criterion measure” (Kendall, 2004, p. 2579). The data were quantitative in terms of the responses and qualitative in terms of the results and analysis, which was necessary to achieve the purpose of assessing validity. The findings clearly supported that purpose.
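The article does not show the actual computation, but the ≥70% agreement criterion quoted earlier can be illustrated with a small sketch using invented data: for each question, count how often a participant’s self-report matches what was observed, and flag the question if agreement reaches 70%.

```python
# Illustrative sketch with hypothetical data; not the authors' actual procedure.

def percent_agreement(self_reported, observed):
    """Percentage of participants whose self-report matched the observed behavior."""
    matches = sum(1 for s, o in zip(self_reported, observed) if s == o)
    return 100.0 * matches / len(self_reported)

# Hypothetical yes/no responses for one question, for ten participants.
self_reported = [True, True, True, False, True, True, True, True, False, True]
observed      = [True, False, True, False, True, True, False, True, False, True]

agreement = percent_agreement(self_reported, observed)
status = "meets" if agreement >= 70 else "fails"
print(f"Agreement: {agreement:.0f}% -> {status} the 70% validity criterion")
```

In this toy example the question would meet the criterion (80% agreement); in the study, twenty-eight of the questions cleared this bar.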
The article discussed several limitations. One was the lack of statistical tests that can be used to determine validity, although some exist for reliability (Kendall, 2004, p. 2582). Another weakness was that the authors could not evenly compare all of their findings because of the lack of comparable research. One weakness I noticed that was only briefly mentioned rather than discussed was that not all of the questionnaire items could be observed in the kitchen during the observation, so validity could not be evaluated across all of the data. However, the authors note that despite their inability to assess the validity of those questions, the questions are still relevant to the conversation and can be applied similarly to the others.
There wasn’t much of a conclusion compared to the amount of discussion they posed. The only implications mentioned were positive ones for the community, including the prevention of foodborne illness. The article did mention whom the results and conclusions would affect, stating that it “identified 28 valid and reliable questions…that food safety educators can use…to evaluate their programs” (Kendall, 2004, p. 2586). The authors also noted that there is now a benchmark that other researchers can use and compare against when collecting larger samples of data (Kendall, 2004, p. 2586). While no direct recommendations were made, the discussion of what could be done with the research serves as a suggestion to readers of the article.
Overall, the article did a great job of explaining key terms and describing the study in detail. I understood how the purpose was significant to a vast majority of people, which made it more relatable. The authors did a great job of explaining why and where there was a lapse in the research and why it was important to try to change that and create a benchmark. Where I felt they fell short was in the conclusion. They had a great discussion of the data and how it compared to past studies, but the conclusion made no effort to recap that information; it only spoke of what was accomplished and suggested how that information could be used. Although I don’t believe this study has any implications other than the one mentioned above, I feel the authors could have tried a little harder to address other possible implications.