This week we are discussing sampling, data collection, and statistical analysis, so the content is a little heavier. I know that statistics is not usually a popular topic. The questions below are meant to help you examine the sampling, data collection, and statistical analyses within your study in a methodical way, so that you can make sense of the data and judge the credibility of the study. Do the best you can as you work through these questions.
Use the article below as the basis for your critique, guided by the following questions.
Stephens, J. D., Yager, A. M., & Allen, J. (2017). Smartphone technology and text messaging for weight loss in young adults: A randomized controlled trial. Journal of Cardiovascular Nursing, 32(1), 39–46. https://doi.org/10.1097/JCN.0000000000000307
Guidelines for Critiquing Quantitative Sampling Plans
Was the population identified? Were eligibility criteria specified?
What type of sampling design was used? Was the sampling plan one that could be expected to yield a representative sample?
How many participants were in the sample? Was the sample size affected by high rates of refusals or attrition? Was the sample size large enough to support statistical conclusion validity? Was the sample size justified on the basis of a power analysis or other rationale?
Were key characteristics of the sample described (e.g., mean age, percentage female)?
To whom can the study results reasonably be generalized?
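To make the sample-size question above concrete, here is a minimal sketch of an a priori power analysis for a two-group trial, using only Python's standard library. The effect size (Cohen's d = 0.5), alpha, and power values are illustrative assumptions, not figures taken from the article, and the formula is a normal approximation rather than an exact t-based calculation.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison
    of means (normal approximation; an exact t-based calculation
    yields a slightly larger n)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Illustrative values: medium effect (d = 0.5), alpha = .05, power = .80
print(n_per_group(0.5))  # about 63 participants per group
```

Reading a power analysis this way helps you judge whether the authors' reported sample was large enough to detect the effect they hypothesized, or whether a nonsignificant result might simply reflect an underpowered study.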
Guidelines for Critiquing Statistical Analyses
1. Did the descriptive statistics in the report sufficiently describe the major variables and background characteristics of the sample? Were appropriate descriptive statistics used? For example, was a mean presented when percentages would have been more informative?
2. Were statistical analyses undertaken to assess threats to the study's validity (e.g., to test for selection bias or attrition bias)?
3. Did the researchers report any inferential statistics? If inferential statistics were not used, should they have been?
4. Was information provided about both hypothesis testing and parameter estimation (i.e., confidence intervals)? Were effect sizes reported? Overall, did the reported statistics provide readers with sufficient information about the study results?
5. Were any multivariate procedures used? If not, should they have been used? For example, would the internal validity of the study be strengthened by statistically controlling confounding variables?
6. Were the selected statistical tests appropriate, given the level of measurement of the variables and the nature of the hypotheses?
7. Were the results of any statistical tests significant? What do the tests tell you about the plausibility of the research hypotheses? Were effects sizeable?
8. Were the results of any statistical tests nonsignificant? Is it possible that these reflect Type II errors? What factors might have undermined the study's statistical conclusion validity?
9. Was information about the reliability and validity of measures reported? Did the researchers use measures with good measurement properties?
10. Was there an appropriate amount of statistical information? Were findings clearly and logically organized? Were tables or figures used judiciously to summarize large amounts of statistical information? Were the tables clear, with good titles and row/column labels?
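Several of the questions above ask about effect sizes and confidence intervals. As a quick illustration of what those quantities look like, here is a sketch that computes Cohen's d and a 95% confidence interval for a difference in mean weight loss between two groups. The group means, standard deviations, and sample sizes are made-up numbers for illustration, not data from the Stephens et al. article, and the interval uses a normal approximation.

```python
from math import sqrt
from statistics import NormalDist

def cohens_d_and_ci(m1, s1, n1, m2, s2, n2, conf=0.95):
    """Cohen's d (pooled SD) and a normal-approximation confidence
    interval for the mean difference; a t-based interval would be
    slightly wider for small samples."""
    sp = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                      # standardized effect size
    se = sp * sqrt(1 / n1 + 1 / n2)         # standard error of the difference
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    diff = m1 - m2
    return d, (diff - z * se, diff + z * se)

# Hypothetical groups: intervention lost 2.5 kg (SD 1.2, n = 30),
# control lost 1.0 kg (SD 1.1, n = 30)
d, (lo, hi) = cohens_d_and_ci(2.5, 1.2, 30, 1.0, 1.1, 30)
print(f"d = {d:.2f}, 95% CI for the difference: ({lo:.2f}, {hi:.2f})")
```

An effect size tells you whether a statistically significant result is also clinically meaningful, and a confidence interval that excludes zero supports the same conclusion as a significant hypothesis test while conveying the precision of the estimate.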