Reenvisioning Clinical Science To Improve Public Health Discussion

Read the article "Reenvisioning Clinical Science" (Onken et al., 2014). Write a 250-500 word response that addresses the following:

1. Compare the portrayal of the mental healthcare system in this article to that of the Rosenhan article that you read last week.
2. What surprises you about the authors' evaluation of the current state of the field? What is your response in general?
3. How do you think the changes proposed by the authors would impact the state of our current mental healthcare system if they were all successfully implemented? Why?

Attached files: reenvisioning_clinical_science__onket_et_al_2014_.pdf; on_being_sane_in_insane_places__rosenhan_1973___1_.pdf

Reenvisioning Clinical Science: Unifying the Discipline to Improve the Public Health

Lisa S. Onken (1), Kathleen M. Carroll (2), Varda Shoham (3), Bruce N. Cuthbert (3), and Melissa Riddle (4)

(1) National Institute on Drug Abuse; (2) Yale University; (3) National Institute of Mental Health; (4) National Institute of Dental and Craniofacial Research

Clinical Psychological Science, 2014, Vol. 2(1), 22-34. © The Author(s) 2013. DOI: 10.1177/2167702613497932

Corresponding author: Lisa S. Onken, National Institutes of Health, National Institute on Drug Abuse, 6001 Executive Boulevard, Room 3182 MSC 9593, Bethesda, MD 20892-9593. E-mail: [email protected]

Abstract

We present a vision of clinical science, based on a conceptual framework of intervention development endorsed by the Delaware Project. This framework is grounded in an updated stage model that incorporates basic science questions of mechanisms into every stage of clinical science research. The vision presented is intended to unify various aspects of clinical science toward the common goal of developing maximally potent and implementable interventions, while unveiling new avenues of science in which basic and applied goals are of equally high importance. Training in this integrated, translational model may help students learn how to conduct research in every domain of clinical science and at each stage of intervention development. This vision aims to propel the field to fulfill the public health goal of producing implementable and effective treatment and prevention interventions.

Keywords: intervention development, implementation, stage model, translation

Received 4/15/13; revision accepted 6/23/13

This article presents a vision for clinical science that aims to facilitate the implementation of scientifically supported interventions and to enhance our understanding of why interventions work, which interventions and implementation strategies work best, and for whom they work better. This vision also aims to enhance students' training with a conceptualization that unifies the field in pursuit of these goals. Psychological clinical science encompasses broad and diverse perspectives (McFall, 2007), but there is a fundamental commonality that binds psychological clinical science together: the relevance of its various scientific endeavors to producing and improving treatment and prevention interventions.
As such, all domains of clinical science are integral to the intervention development process, and the vision we propose in this article represents a comprehensive conceptual framework for the intervention development process and for training the next generation of clinical scientists. The framework described in the current article delineates specific stages of intervention development research and is inspired by and consistent with calls to action to identify mechanisms of change as a way of improving interventions (e.g., Borkovec & Castonguay, 1998; Hayes, Long, Levin, & Follette, in press; Kazdin, 2001; Kazdin & Nock, 2003; Rounsaville, Carroll, & Onken, 2001a). This framework inextricably links basic and applied clinical science (Stokes, 1997) and sharpens the distinction between implementation science that is focused on service delivery system research as opposed to intervention generation, testing, and refinement research, while recognizing that both share the goal of getting interventions into the service delivery system. Furthermore, the model presented in this article defines the intervention development process as incomplete until an intervention is optimally efficacious and implementable with fidelity by practitioners in the community. We contend that a reinvigorated stage model unites diverse and fragmented fields of clinical psychological science and can serve as an engine for the production of highly potent and implementable treatment and prevention interventions. We further contend that this framework can do this in such a way as to enrich every aspect of clinical science and the training of clinical scientists.

Where Has Clinical Science Succeeded?

The major accomplishments of psychological clinical science are well documented. Perhaps most remarkable is the development of efficacious behavioral treatments in the past half century. For many of the most severe behavioral health problems, there are efficacious treatments where there once were none. Mid-20th-century reviews of the child treatment literature by Levitt (1957, 1963) and the adult treatment literature by Eysenck (1952) were critical regarding the efficacy of psychotherapy. Levitt (1963) confirmed his 1957 conclusion that "available evaluation studies do not furnish a reasonable basis for the hypothesis that psychotherapy facilitates recovery from emotional illness in children" (p. 49), and Eysenck (1952) stated that the adult studies "fail to support the hypothesis that psychotherapy facilitates recovery from neurotic disorder" (p. 662). As recently as four decades ago, even after the articulation of behavioral therapies for anxiety disorders, Agras, Chapin, and Oliveau (1972) reported that untreated adults suffering from phobia improve at the same rate as treated ones, and Kringlen (1970) reported that most typical obsessional patients "have a miserable life" (p. 418).
Today, anxiety disorders are considered among the most treatable disorders, with behavioral interventions serving as the gold standard, outperforming psychopharmacology (Arch & Craske, 2009; McHugh, Smits, & Otto, 2009; McLean & Foa, 2011; Roshanaei-Moghaddam et al., 2011). Efficacious behavioral interventions developed by clinical scientists include treatments for cocaine addiction, autism, schizophrenia, conduct disorder, and many others (e.g., Baker, McFall, & Shoham, 2008; Chambless & Ollendick, 2001; DeRubeis & Crits-Christoph, 1998; Kendall, 1998). These advances in developing behavioral treatments are especially impressive given that each of these treatments attempts to address seemingly discrete, albeit often overlapping, diagnostic categories (Hyman, 2010). In spite of obstacles created by a diagnostic system in which categories based on clinical consensus exhibit considerable heterogeneity (British Psychological Society, 2011; Frances, 2009a, 2009b), our understanding of the basic processes of psychopathology that should be addressed in targeted treatments has substantially increased (T. A. Brown & Barlow, 2009; Cuthbert & Insel, 2013; Insel, 2012). With this better understanding of mechanisms of disorders, the promise for a new generation of more efficient and implementable interventions is improved.

We now have solid basic behavioral science and neuroscience, good psychopathology research, innovative intervention development, numerous clinical trials producing efficacious treatment and prevention interventions, an extensive set of effectiveness trials aimed to confirm the value of these interventions, and a robust research effort in implementation science. Many clinical psychology training programs include the teaching of empirically supported treatments or at least engage in debating their value (Baker et al., 2008; Follette & Beitz, 2003). All these notable accomplishments raise a question: What is broken that needs fixing?

What Are the Problems and Who Should Fix Them?

Although efficacious behavioral treatments for many mental disorders exist, patients who seek treatment in community settings rarely receive them (Institute of Medicine, 2006). Several factors converge to create this widely acknowledged science-to-service gap, or what Weisz et al. (Weisz, Jensen-Doss, & Hawley, 2006; Weisz, Ng, & Bearman, 2014) call the implementation cliff. For one, we can trace the implementation cliff back to the effect size drop evident in many effectiveness trials (Curtis, Ronan, & Borduin, 2004; Henggeler, 2004; Miller, 2005; Weisz, Weiss, & Donenberg, 1992). There is a clear disconnect between efficacy research that values internal validity and effectiveness research that prioritizes external validity at the expense of internal validity. Despite this gap, many investigators still move directly from traditional efficacy studies (with research therapists) to effectiveness studies, without first conducting an efficacy (i.e., controlled) study with community therapists to ensure that the intervention being studied is implementable with fidelity when administered by community practitioners. This strategy is particularly puzzling in light of the fact that so many efficacious behavioral interventions do not make their way down the pipeline through implementation (Carroll & Rounsaville, 2003, 2007; Craske, Roy-Byrne, et al., 2009).
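To make the effect size drop concrete: trial outcomes are commonly compared with a standardized mean difference such as Cohen's d. The minimal sketch below (in Python; all numbers are hypothetical and are not taken from the article or from any cited trial) shows how the same intervention can yield a large d under the controlled conditions of an efficacy trial and a much smaller d in an effectiveness trial.

```python
def cohens_d(m_tx, sd_tx, n_tx, m_ctl, sd_ctl, n_ctl):
    """Cohen's d: standardized mean difference between treatment and control."""
    pooled_sd = (((n_tx - 1) * sd_tx ** 2 + (n_ctl - 1) * sd_ctl ** 2)
                 / (n_tx + n_ctl - 2)) ** 0.5
    return (m_tx - m_ctl) / pooled_sd

# Hypothetical summary statistics (symptom improvement; higher = better).
# Efficacy trial: research therapists, close supervision, high fidelity.
d_efficacy = cohens_d(m_tx=14.0, sd_tx=6.0, n_tx=50,
                      m_ctl=9.0, sd_ctl=6.0, n_ctl=50)

# Effectiveness trial: community therapists, routine conditions, variable fidelity.
d_effectiveness = cohens_d(m_tx=11.0, sd_tx=7.0, n_tx=50,
                           m_ctl=9.0, sd_ctl=7.0, n_ctl=50)

print(f"Efficacy trial d:      {d_efficacy:.2f}")       # ~0.83 (large effect)
print(f"Effectiveness trial d: {d_effectiveness:.2f}")  # ~0.29 (small effect)
```

The gap between those two numbers is the "implementation cliff" in miniature: the intervention is unchanged, but the conditions of delivery are not.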
A major factor that can explain this drop in effect size is treatment fidelity (also known as treatment integrity), which refers to the implementation of an intervention in a manner consistent with principles outlined in an established manual (Henggeler, 2011; Perepletchikova, Treat, & Kazdin, 2007). Only a small fraction of clinicians who routinely provide interventions such as cognitive behavioral therapy (CBT) are able to do so with adequate fidelity (Beidas & Kendall, 2010; Olmstead, Abraham, Martino, & Roman, 2012). For instance, in one study, CBT concepts were mentioned in fewer than 5% of sessions based on direct observation (Santa Ana, Martino, Ball, Nich, & Carroll, 2008). This may reflect the insufficiency of commonly used training and dissemination methods such as workshops and lectures, which by themselves effect little substantive change in clinician behavior (Miller, Yahne, Moyers, Martinez, & Pirritano, 2004; Sholomskas et al., 2005). Furthermore, even documented acquisition of fidelity skills under close supervision does not guarantee continued, postsupervision fidelity maintenance.

Direct supervision, via review of clinicians' levels of fidelity and skill in delivering evidence-based practice, is rarely provided in community-based settings and is also not reimbursed or otherwise incentivized (Olmstead et al., 2012; Schoenwald, Mehta, Frazier, & Shernoff, 2013). Moreover, community providers' motivation and comfort level with empirically supported treatments are lower than those of research therapists (Stewart, Chambless, & Baron, 2012). For example, in a study of clinical psychologists' use of exposure therapy for posttraumatic stress disorder, only 17% of therapists reported using the evidence-based treatment, and 72% reported a lack of comfort with exposure therapy (Becker, Zayfert, & Anderson, 2004). Research therapists tend to be committed to the therapy they are administering and to the research process, and they are directly incentivized to implement treatments with skill and fidelity. There is no similar incentive system for community therapists.

These and other barriers to implementation contribute to a mounting sentiment that business as usual must change (Institute of Medicine, 2006). Authors such as B. S. Brown and Flynn (2002) have argued that clinical science can and should do much more to implement efficacious treatment and prevention interventions (see also Glasgow, Lichtenstein, & Marcus, 2003; Hoagwood, Olin, & Cleek, 2013). Some have even suggested a 10-year moratorium on efficacy trials (Kessler & Glasgow, 2011). On the other side of the translation-implementation continuum (Shoham et al., 2014), we are missing a tighter link between basic and applied clinical science (Onken & Blaine, 1997; Onken & Bootzin, 1998). Despite substantial advances in the understanding of neurobiological, behavioral, and psychological mechanisms of disorders, these understandings are not sufficiently linked to mechanisms of action in intervention development research (Kazdin, 2007; Murphy, Cooper, Hollon, & Fairburn, 2009). In the absence of a better understanding of how interventions work, efforts to adapt interventions (e.g., dose reduction), a common practice in community settings, may render the interventions devoid of their original efficacy.
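In studies like those cited above, fidelity is typically quantified by having independent raters score recorded sessions against the treatment manual. The sketch below (Python; the item names, the 1-7 rating scale, and the adequacy threshold are illustrative assumptions, not drawn from the article or from any published fidelity scale) shows the basic arithmetic of turning per-item ratings into session-level adherence scores.

```python
# Hypothetical minimum mean adherence rating for "adequate fidelity".
ADEQUATE_FIDELITY = 4.0

# Rater scores (1-7) on hypothetical manual-specified items, one dict per session.
sessions = [
    {"agenda_setting": 6, "cognitive_restructuring": 5, "homework_review": 6},
    {"agenda_setting": 2, "cognitive_restructuring": 1, "homework_review": 3},
    {"agenda_setting": 5, "cognitive_restructuring": 4, "homework_review": 2},
]

def session_adherence(ratings: dict) -> float:
    """Mean adherence rating across manual items for one recorded session."""
    return sum(ratings.values()) / len(ratings)

scores = [session_adherence(s) for s in sessions]
adequate = sum(score >= ADEQUATE_FIDELITY for score in scores)
print(f"Mean adherence per session: {[round(s, 2) for s in scores]}")
print(f"Sessions at adequate fidelity: {adequate}/{len(sessions)}")
```

Aggregating such scores across a clinician's caseload is what allows findings like "only a small fraction of clinicians deliver CBT with adequate fidelity" to be stated quantitatively.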
A meta-level problem is that these problems fall between scientific cracks. Perhaps a reason why there has been such difficulty implementing empirically supported interventions is that no subgroup of clinical scientists has a defined role for ensuring the implementability of interventions. Is it the responsibility of basic behavioral scientists to ensure that interventions get implemented? Surely that is not their job! Their mission is to understand basic normal and dysfunctional behavioral processes, not to directly develop interventions or ensure their implementability. What about the researchers who generate, refine, and test interventions in efficacy trials? Would they say that it is their mission to develop and test the best interventions possible, but it is not their job to strive toward the implementability of those interventions? Would they argue that ensuring implementability is the responsibility of someone else, such as researchers who conduct effectiveness trials? If effectiveness is not sufficiently strong, could it not be that the problem lies within the way the intervention was delivered, or within the design of the effectiveness trial, not with the efficacious intervention? Conversely, effectiveness researchers claim responsibility for real-world testing of interventions that have initial scientific support. If these empirically supported interventions are not viable for use in the real world, is not this the fault of the intervention developers? Should not the intervention developers produce interventions that can be sustained effectively in the real world? Finally, consider the implementation scientists (e.g., Proctor et al., 2009). One can assume that implementation researchers must be responsible for implementation! These scientists are doing all they can to determine how to get interventions adopted and have identified a multitude of program- and system-level constraints and barriers to implementation (e.g., Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Hoagwood et al., 2013; Lehman, Simpson, Knight, & Flynn, 2011). When system-level barriers are addressed by community practitioners who adapt empirically supported interventions for the populations they serve, intervention developers assert that these adapted interventions are no longer the same interventions that were shown to have efficacy. As it turns out, nobody takes charge, and the cycle continues.

Possible Solutions

Changing the System

Suggestions to solve the science-practice gap by changing the service delivery system have encountered formidable barriers. The infrastructure of existing delivery systems may be too weak to provide the complex, high-quality empirically supported therapies practiced in efficacy studies (McLellan & Meyers, 2004). For example, implementation of such treatments may require fundamental changes in the training and ongoing supervision of community-based clinicians (Carroll & Rounsaville, 2007), smaller numbers of patients assigned to each clinician, and increased time allotted per patient. Another particularly unfortunate barrier is that empirically supported therapies are not always preferentially reimbursed, whereas some interventions that have been shown to be ineffective or worse (e.g., repeated inpatient detoxification without aftercare) continue to be reimbursed (Humphreys & McLellan, 2012).
Any one of these systemic barriers could be difficult to change, and a synthesis of the literature suggests that successful implementation necessitates a sustained, multilevel approach (Damschroder et al., 2009; Fairburn & Wilson, 2013; Fixsen et al., 2005), requiring that multiple barriers be addressed simultaneously for implementation to be successful. In the meantime, we turn the spotlight to an alternative and complementary solution.

Changing the Interventions: Adapting Square Pegs to Fit Into Round Holes

Multiple unsuccessful attempts to change service delivery systems raise the possibility that what needs to change is the intervention. Perhaps instead of forcing the square pegs of our evidence-based interventions into the round holes of the delivery system (Onken, 2011), we should consider making our interventions somewhat more round. If efficacy findings are to be replicated in effectiveness studies, perhaps it is time for clinical scientists to accept the responsibility of routinely and systematically creating and adapting interventions to the intervention delivery context as an integral part of the intervention development process. Knowing how to adapt an intervention so that it retains its effects while fitting in the real world requires knowledge about mechanisms and conditions in relevant settings. This solution may require the participation of practitioners in a research team that is ready to ask hard questions regarding why the intervention works and how to preserve its effective ingredients while adapting the intervention to fit broader and more varied contexts (Chorpita & Viesselman, 2005; Lilienfeld et al., 2013). Unfortunately, clinical scientists are not usually the conveyers of such modifications, nor do they always have the tools necessary to retain effective intervention ingredients while guiding adaptation efforts. Often, community practitioners modify the intervention, including those participating in effectiveness studies of science-based interventions (Stewart, Stirman, & Chambless, 2012). Such alterations typically involve delivering the intervention in far fewer sessions, in group versus individual format, and other shortcuts that could diminish potency. Changes to interventions are made with good intentions, often out of necessity (e.g., insurers' demands), and they are frequently based on clinical intuition, clinical experience, or clinician or patient preferences, but not on science (Lilienfeld et al., 2013). Adapting the intervention in response to practical constraints is an inherently risky endeavor: The intervention may or may not retain the elements that make it work. Whether done by clinicians attempting to meet real-world demands or by scientists lacking evidence of the intervention's mechanism of action, practical alteration of evidence-based interventions could very well diminish or eliminate the potency of the (no-longer-science-based) interventions. On the other hand, when clinical scientists uncover essential mechanisms of action, they may be able to package the intervention in a way that is highly implementable. For example, with an understanding of mechanisms, Otto et al. (2012) were able to create an ultra-brief treatment for panic disorder. Another example is the computerized attention modification program that directly targets cognitive biases operating in the most prevalent mood disorders (Amir & Taylor, 2012).
Redefining When Intervention Development Is Incomplete

Intervention development is incomplete until the intervention is maximally potent and implementable for the population for which it was developed. Intervention developers need to address issues of fit within a service delivery system, simplicity, fidelity and supervision, therapist training, and everything else that relates to implementability before intervention development is considered complete. For example, intervention development is incomplete if community providers are expected to deliver the intervention, but there are no materials available that ensure that they administer the intervention with fidelity, or know the level of fidelity required to deliver the intervention effectively. Therefore, materials to ensure fidelity of intervention delivery …
