Assessing Quantitative Research Articles Review

I'm working on a Health & Medical exercise and need support. Please read and follow the instructions; I have attached the scholarly article to use for this post:

u9d18060.docx
_scholary_article_practical_assessment_research__evaluation.pdf
chapter_6practitioner_s_guide_to_using_research_for_evidenc…_______pg_132__151_.pdf
step_by__step_guide_to_critquing_researcj_part_1_quantitative_research.pdf

Review the unit readings. Use the attached scholarly quantitative research article, which uses the following research approach: quasi-experimental research (pre/post-test design). For this post, use the attached readings from this unit as a guide and:

• Cite the quantitative research article you selected and provide a brief summary of the article.
• Using Table 1 of the Coughlan, Cronin, and Ryan article, "Step-by-Step Guide to Critiquing Research. Part 1: Quantitative Research," as an assessment checklist, provide answers to the salient questions suggested. Note: The "Nursing Journal Toolkit: Critiquing a Quantitative Research" article provides additional tools for determining the quality of a quantitative journal article.
• Describe what instrument, if any, was used in the study to measure the primary outcome objective. Assess the validity and reliability of this instrument and cite other research used to justify its usage.

In your response, include at least two APA-formatted citations. The citations should be from materials you have read during this unit. They may be from course textbooks, assigned readings, or an outside source. Your initial post must be a minimum of 300 words in length.

A peer-reviewed electronic journal. Copyright is retained by the first or sole author, who grants right of first publication to Practical Assessment, Research & Evaluation.
Permission is granted to distribute this article for nonprofit, educational purposes if it is copied in its entirety and the journal is credited. PARE has the right to authorize third party reproduction of this article in print, electronic and database forms.

Volume 5, Number 14, November 1997. ISSN 1531-7714

True and Quasi-Experimental Designs

Barry Gribbons, National Center for Research on Evaluation, Standards, and Student Testing
Joan Herman, National Center for Research on Evaluation, Standards, and Student Testing

Experimental designs are especially useful in addressing evaluation questions about the effectiveness and impact of programs. Emphasizing the use of comparative data as context for interpreting findings, experimental designs increase our confidence that observed outcomes are the result of a given program or innovation rather than a function of extraneous variables or events. For example, experimental designs help us to answer questions such as the following: Would adopting a new integrated reading program improve student performance? Is TQM having a positive impact on student achievement and faculty satisfaction? Is the parent involvement program influencing parents' engagement in and satisfaction with schools? How is the school's professional development program influencing teachers' collegiality and classroom practice?

As one can see from the example questions above, designs specify from whom information is to be collected and when it is to be collected. Among the different types of experimental design, there are two general categories:

True experimental design: This category of design includes more than one purposively created group, common measured outcome(s), and random assignment. Note that individual background variables such as sex and ethnicity do not satisfy this requirement, since they cannot be purposively manipulated in this way.
Quasi-experimental design: This category of design is most frequently used when it is not feasible for the researcher to use random assignment.

This article describes the strengths and limitations of specific types of quasi-experimental and true experimental design.

QUASI-EXPERIMENTAL DESIGNS IN EVALUATION

As stated previously, quasi-experimental designs are commonly employed in the evaluation of educational programs when random assignment is not possible or practical. Although quasi-experimental designs are used commonly, they are subject to numerous interpretation problems. Frequently used types of quasi-experimental designs include the following:

Nonequivalent group, posttest only. The nonequivalent group, posttest-only design consists of administering an outcome measure to two groups, or to a program/treatment group and a comparison group. For example, one group of students might receive reading instruction using a whole-language program while the other receives a phonetics-based program. After twelve weeks, a reading comprehension test can be administered to see which program was more effective. A major problem with this design is that the two groups are not necessarily the same before any instruction takes place and may differ in important ways that influence what reading progress they are able to make. For instance, if the students in the phonetics group perform better, there is no way of determining whether they were better prepared or better readers even before the program, or whether other factors influenced their growth.

Nonequivalent group, pretest-posttest. The nonequivalent group, pretest-posttest design partially eliminates a major limitation of the nonequivalent group, posttest-only design. At the start of the study, the researcher empirically assesses the differences in the two groups.
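That baseline check can be sketched with a few lines of simulated data. This is only an illustrative sketch, not part of the original article: the group names, score scale, and the assumed six-point baseline advantage are all invented for the example, and a real study would apply an appropriate statistical test rather than eyeballing means.

```python
import random
import statistics

random.seed(1)

# Simulated pretest scores for two intact (non-randomized) classrooms.
# The phonetics classroom is assumed to start with stronger readers.
whole_language_pre = [random.gauss(70, 8) for _ in range(30)]
phonetics_pre = [random.gauss(76, 8) for _ in range(30)]

baseline_gap = statistics.mean(phonetics_pre) - statistics.mean(whole_language_pre)
print(f"Pretest difference: {baseline_gap:.1f} points")
# A large pretest difference warns that posttest comparisons will be hard
# to interpret; a near-zero difference supports (but cannot prove) that
# the groups are comparable.
```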
Therefore, if the researcher finds that one group performs better than the other on the posttest, s/he can rule out initial differences (if the groups were in fact similar on the pretest) and normal development (e.g., resulting from typical home literacy practices or other instruction) as explanations for the differences. Some problems still might result from students in the comparison group being incidentally exposed to the treatment condition, being more motivated than students in the other group, having more motivated or involved parents, and so on. Additional problems may result from discovering that the two groups do differ on the pretest measure. If groups differ at the onset of the study, any differences that occur in test scores at the conclusion are difficult to interpret.

Time series designs. In time series designs, several assessments (or measurements) are obtained from the treatment group as well as from the control group, both prior to and after the application of the treatment. The series of observations before and after can provide rich information about students' growth. Because measures at several points in time prior and subsequent to the program are likely to provide a more reliable picture of achievement, the time series design is sensitive to trends in performance. Thus, this design, especially if a comparison group of similar students is used, provides a strong picture of the outcomes of interest. Nevertheless, although to a lesser degree, the limitations and problems of the nonequivalent group, pretest-posttest design still apply to this design.

TRUE EXPERIMENTAL DESIGNS

The strongest comparisons come from true experimental designs in which subjects (students, teachers, classrooms, schools, etc.) are randomly assigned to program and comparison groups.
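Random assignment itself is mechanically simple; its power is that, with enough subjects, it tends to balance all background characteristics across groups at once. The sketch below is illustrative only (the covariate, sample size, and score scale are invented, not drawn from the article) and shows a shuffle-and-split assignment of 200 simulated students.

```python
import random
import statistics

random.seed(3)

# 200 simulated students, each with a background covariate
# (e.g., prior achievement) that could confound a comparison.
students = [random.gauss(50, 10) for _ in range(200)]

# Random assignment: shuffle, then split into program and control groups.
random.shuffle(students)
program, control = students[:100], students[100:]

imbalance = abs(statistics.mean(program) - statistics.mean(control))
print(f"Baseline imbalance after randomization: {imbalance:.2f}")
# With sufficient n, the groups are comparable in expectation, so observed
# posttest differences can be attributed to the program rather than to
# pre-existing differences.
```

With small samples, a single shuffle can still leave a noticeable imbalance, which is why the article recommends pretesting as a check when few subjects are involved.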
It is only through random assignment that evaluators can be assured that groups are truly comparable and that observed differences in outcomes are not the result of extraneous factors or pre-existing differences. For example, without random assignment, what inference can we draw from findings that students in reform classrooms outperformed students in non-reform classrooms if we suspect that the reform teachers were more qualified, innovative, and effective prior to the reform? Do we attribute the observed difference to the reform program or to pre-existing differences between groups? In the former case, the reform appears to be effective, likely worth the investment, and possibly justifying expansion; in the latter case, alternative inferences are warranted.

There are several types of true experimental design:

Posttest only, control group. Posttest-only, control group designs differ from previously discussed designs in that subjects are randomly assigned to one of the two groups. Given sufficient numbers of subjects, randomization helps to assure that the two groups (or conditions, raters, occasions, etc.) are comparable or equivalent in terms of characteristics which could affect any observed differences in posttest scores. Although a pretest can be used to assess or confirm whether the two groups were initially the same on the outcome of interest (as in pretest-posttest, control group designs), a pretest is likely unnecessary when randomization is used and large numbers of students and/or teachers are involved. With smaller samples, pretesting may be advisable to check on the equivalence of the groups.

Other designs. Some other general types of designs include counterbalanced and matched-subjects designs (for a more detailed discussion of different designs, see Campbell & Stanley, 1966). With counterbalanced designs, all groups participate in more than one randomly ordered treatment (and control) condition.
In matched designs, pairs of students matched on important characteristics (for example, pretest scores or demographic variables) are assigned to one of the two treatment conditions. These approaches are effective if randomization is employed.

Even true experimental designs, however, can be problematic (Cook & Campbell, 1979). One threat is that the control group can be inadvertently exposed to the program; such a threat also occurs when key aspects of the program also exist in the comparison group. Additionally, one of the conditions (groups), such as one of two instructional programs, may be perceived as more desirable than the other. If participants in the study learn of the other group, then important motivational differences (being demoralized, or even trying harder to compensate) could impact the results. Differences in the quality with which a program or comparison treatment is implemented can also influence results (e.g., the teachers implementing one or the other have greater content or pedagogical knowledge). Still another threat to the validity of a design is differential participant mortality in the two groups.

LIMITATIONS OF TRUE EXPERIMENTAL DESIGN

Experimental designs are also limited by the narrow range of evaluation purposes they address. When conducting an evaluation, the researcher certainly needs to develop adequate descriptions of programs, both as they were intended and as they were realized in the specific setting. Also, the researcher frequently needs to provide timely, responsive feedback for purposes of program development or improvement. Although less common, access and equity issues within a critical theory framework may be important. Experimental designs do not address these facets of evaluation. With complex educational programs, rarely can we control all the important variables which are likely to influence program outcomes, even with the best experimental design.
Nor can the researcher necessarily be sure, without verification, that the implemented program was really different in important ways from the program of the comparison group(s), or that the implemented program (and not other contemporaneous factors or events) produced the observed results. Being mindful of these issues, it is important for evaluators not to develop a false sense of security.

Finally, even when the purpose of the evaluation is to assess the impact of a program, logistical and feasibility issues constrain experimental frameworks. Randomly assigning students in educational settings frequently is not realistic, especially when the different conditions are viewed as more or less desirable. This often leads the researcher to use quasi-experimental designs. Problems associated with the lack of randomization are exacerbated as the researcher begins to realize that the programs and settings are in fact dynamic, constantly changing, and almost always unstandardized.

RECOMMENDATIONS FOR EVALUATION

The primary factor which directs the evaluation design is the purpose for the evaluation. Restated, it is critical to consider the utility of any evaluation information. If the program's impact on participant outcomes is a key concern, or if multiple programs (instructional strategies, or something else) are being considered and educators are looking for evidence to assess the relative effectiveness of each to inform decisions about which approach to select, then experimental designs are appropriate and necessary.
Nonetheless, resulting information should be augmented by rich descriptions of programs, and mechanisms need to be established which enable providing timely, responsive feedback (for a detailed discussion of other approaches to evaluation, see Lincoln & Guba, 1985; Patton, 1997; and Reinhart & Rallis, 1994). In addition to using multiple evaluation methods, evaluators should be careful to collect the right kinds of information when using experimental frameworks. Measures must be aligned with the program's goals or objectives. Additionally, it is often much more powerful to employ multiple measures. Triangulating several lines of evidence or measures in answering specific evaluation questions about program outcomes increases the reliability and credibility of results. Furthermore, when interpreting this evidence, it is often useful to use absolute standards of success in addition to relative comparisons. The last recommendation is to always consider alternative explanations for any observed differences in outcome measures. If the treatment group outperforms the control group, consider a full range of plausible explanations in addition to the claim that the innovative practice is more effective. Program staff and participants can be very helpful in identifying these alternative explanations and evaluating the plausibility of each.

ADDITIONAL READING

Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Chicago: Rand McNally College Publishing.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally College Publishing.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage Publications.
Patton, M. Q. (1997). Utilization-focused evaluation (3rd ed.). Thousand Oaks, CA: Sage Publications.
Reinhart, C. S., & Rallis, S. F. (1994). The qualitative-quantitative debate: New perspectives. San Francisco: Jossey-Bass.

Descriptors: *Comparative Analysis; *Control Groups; Evaluation Methods; Evaluation Utilization; *Experiments; Measurement Techniques; *Pretests Posttests; *Quasiexperimental Design; Sampling; Selection

Citation: Gribbons, Barry, & Herman, Joan (1997). True and quasi-experimental designs. Practical Assessment, Research & Evaluation, 5(14). Available online: http://PAREonline.net/getvn.asp?v=5&n=14.

Chapter 6
CRITICALLY APPRAISING QUASI-EXPERIMENTS: NONEQUIVALENT COMPARISON GROUPS DESIGNS

Source: Rubin, A. (2008). Practitioner's guide to using research for evidence-based practice. Retrieved from http://ebookcentral.proquest.com. Copyright © 2008, John Wiley & Sons, Incorporated. All rights reserved.

Chapter contents: Nonequivalent Comparison Groups Designs; Are the Groups Comparable?; Grounds for Assuming Comparability; Additional Logical Arrangements to Control for Potential Selectivity Biases; Multiple Pretests; Switching Replications; Nonequivalent Dependent Variables; Statistical Controls for Potential Selectivity Biases; When the Outcome Variable Is Categorical; When the Outcome Variable Is Quantitative; Pilot Studies; Synopses of Research Studies; Study 1 Synopsis; Study 2 Synopsis; Key Chapter Concepts; Review Exercises; Additional Readings

In the real world of practice, it usually is not feasible to use random procedures to assign clients to different treatment conditions. Administrators might refuse to approve random assignment for several reasons. Practitioners and clients might complain about it. Board members might hear of it and not like or understand it. If some clients are assigned to a no-treatment control group or wait list, they might go elsewhere for service, and this might have undesirable fiscal effects on the agency. Administrators and practitioners alike might view random assignment as unethical, even if it is just to alternate forms of treatment and not to a no-treatment group. Assigning clients to treatment conditions on any basis other than an assessment as to what seems best for the client might be utterly unacceptable.

Fortunately, there are designs for conducting internally valid evaluations of the effectiveness of interventions, programs, and policies without using random assignment. Although these designs are less desirable than using random assignment from an internal validity standpoint, when employed properly, they can be almost as desirable. Such designs are called quasi-experimental. Quasi-experimental designs come in two major forms, each of which employs features that attempt to attain a credible degree of internal validity in the absence of random assignment to treatment conditions. One form is called the nonequivalent comparison groups design. The other form is time-series designs. This chapter focuses on critically appraising nonequivalent comparison groups designs. Chapter 7 focuses on time-series designs. Let's begin by examining the nature and logic of nonequivalent comparison groups designs.

NONEQUIVALENT COMPARISON GROUPS DESIGNS

Nonequivalent comparison groups designs mirror experimental designs, but without the random assignment. That is, different treatment conditions (including perhaps a no-treatment condition) are compared, but the clients are not assigned to each condition randomly. The basis for forming each condition can vary. One option, for example, is to compare two different sites that seem comparable in all key respects except for the type of intervention, program, or policy in place in each.
Thus, the morale of residents in a nursing home that provides a particular intervention can be compared to the morale of similar residents in a similar nursing home without that intervention. Or, the degree of change achieved by the first X number of clients who fill the caseload capacity of a new program can be compared to the degree of change of the next X number of clients who are placed on a waiting list until the first cohort has completed treatment. Likewise, each new case could be referred either to an innovative new treatment unit or to a routine treatment-as-usual unit, depending on which unit has a caseload opening at the time of referral.

The basic diagram for this design is the same as the pretest-posttest control group experiment except that it lacks the R for random assignment. It is as follows:

O1  X  O2
O1     O2

The blank space in the second row of the diagram between O1 and O2 represents the withholding of the tested intervention from comparison group clients. You can imagine the symbol TAU there for studies in which the comparison group receives treatment as usual instead of no treatment or delayed treatment.

Are the Groups Comparable?

As you may have already surmised, the key issue in judging whether a study using this type of design achieves a credible degree of internal validity is the extent to which its authors provide a persuasive case for assuming that the groups being compared are really comparable. Are they really equivalent?
This might strike you as a …
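Under the O1 X O2 / O1 O2 design diagrammed above, one simple analysis compares pretest-to-posttest gains across the two groups. The sketch below is illustrative only and not from the excerpt: the group sizes, score scale, and assumed gains are invented to show why observing both baselines helps even when the groups are nonequivalent.

```python
import random
import statistics

random.seed(2)

def gains(pre, post):
    """Per-client change scores (O2 - O1)."""
    return [o2 - o1 for o1, o2 in zip(pre, post)]

# Simulated O1/O2 scores. The comparison group starts higher (the groups
# are nonequivalent), but both baselines are observed, so gains can be
# compared instead of raw posttests.
treat_pre = [random.gauss(60, 6) for _ in range(25)]
treat_post = [x + random.gauss(8, 3) for x in treat_pre]   # assumed treatment gain
comp_pre = [random.gauss(64, 6) for _ in range(25)]
comp_post = [x + random.gauss(3, 3) for x in comp_pre]     # normal development only

effect = statistics.mean(gains(treat_pre, treat_post)) - statistics.mean(gains(comp_pre, comp_post))
print(f"Difference in mean gains (treatment - comparison): {effect:.1f}")
# Comparing gains controls for stable baseline differences, but not for
# threats such as differential motivation or incidental exposure of the
# comparison group to the treatment.
```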
