PROTOCOL: Systematic Review of the Effects of “Pulling Levers” Focused Deterrence Strategies on Crime
2010; The Campbell Collaboration; Volume 6, Issue 1. Language: English
DOI: 10.1002/cl2.70
ISSN: 1891-1803
Authors: Anthony A. Braga, David Weisburd
Topic(s): Criminal Justice and Corrections Analysis
Abstract: This systematic review will be supported through a $25,000 grant to George Mason University from the National Policing Improvement Agency in the United Kingdom. These funds will support the work of Braga, Weisburd, and a research assistant to conduct the work identified in this review protocol. The Program in Criminal Justice Policy and Management at Harvard University's John F. Kennedy School of Government will also be an intramural source of support for this project. The Program in Criminal Justice will support the research through the provision of office space, computer, phone, fax, and paper supplies. As appropriate, we will seek modest support for the research from external sources such as private foundations and government grant-making agencies.

A number of American police departments have been experimenting with new problem-oriented policing frameworks to prevent gang and group-involved violence, generally known as "pulling levers" focused deterrence strategies. These new strategic approaches have shown promising results in the reduction of violence (Braga, Kennedy, and Tita 2002). Pioneered in Boston to halt serious gang violence, the pulling levers framework has been applied in many American cities through federally sponsored violence prevention programs such as the Strategic Approaches to Community Safety Initiative and Project Safe Neighborhoods (Dalton 2002).
In its simplest form, the approach consists of selecting a particular crime problem, such as youth homicide; convening an interagency working group of law enforcement, social-service, and community-based practitioners; conducting research to identify key offenders, groups, and behavior patterns; framing a response to offenders and groups of offenders that uses a varied menu of sanctions ("pulling levers") to stop them from continuing their violent behavior; focusing social services and community resources on targeted offenders and groups to match law enforcement prevention efforts; and directly and repeatedly communicating with offenders to make them understand why they are receiving this special attention (Kennedy 1997, 2006). The pulling levers approach is also consistent with recent theorizing about police innovation, which suggests that approaches that seek both to create more focus in the application of crime prevention programs and to expand the tools of policing are likely to be most successful (Weisburd and Eck 2004).

Nationally, without the support of a formal evaluation, Boston's Operation Ceasefire pulling levers strategy was hailed as an unprecedented success (see, e.g. Butterfield 1996; Witkin 1997). These claims followed a surprisingly large decrease in youth homicide after the strategy was fully implemented in mid-May 1996. However, more rigorous examinations of youth homicide in Boston soon followed. A U.S. Department of Justice-sponsored evaluation of Operation Ceasefire used a quasi-experimental design to analyze trends in serious violence between 1991 and 1998.
The evaluation reported that the intervention was associated with a 63% decrease in the monthly number of Boston youth homicides, a 32% decrease in the monthly number of shots-fired calls, a 25% decrease in the monthly number of gun assaults, and, in one high-risk police district given special attention in the evaluation, a 44% decrease in the monthly number of youth gun assault incidents (Braga, Kennedy, Waring, and Piehl 2001). The timing of the "optimal break" in the time series was in the summer months after Ceasefire was implemented (Piehl, Cooper, Braga, and Kennedy 2003). The evaluation also suggested that Boston's significant youth homicide reduction associated with Operation Ceasefire was distinct when compared to youth homicide trends in most major U.S. and New England cities (Braga et al. 2001).

Other researchers, however, have observed that some of the decrease in homicide may have occurred without the Ceasefire intervention in place, as violence was decreasing in most major U.S. cities. Fagan's (2002) cursory review of gun homicide in Boston and in other Massachusetts cities suggests a general downward trend in gun violence that existed before Operation Ceasefire was implemented. Levitt (2004) analyzed homicide trends over the course of the 1990s and concluded that the impact of innovative policing strategies, such as Operation Ceasefire in Boston and broken windows policing and Compstat in New York, on homicide was limited. He argued that other factors, such as increases in the number of police, the rising prison population, the waning crack-cocaine epidemic, and the legalization of abortion, can account for nearly the entire national decline in homicide, violent crime, and property crime in the 1990s. Using growth-curve analysis to examine predicted homicide trend data for the 95 largest U.S.
cities during the 1990s, Rosenfeld and his colleagues (2005) found some evidence of a sharper youth homicide drop in Boston than elsewhere but suggest that the small number of youth homicide incidents precludes strong conclusions about program effectiveness based on their statistical models. In his examination of youth homicide trends in Boston, Ludwig (2005) suggested that Ceasefire was associated with a large drop in youth homicide but, given the complexities of analyzing city-level homicide trend data, there remained some uncertainty about the extent of Ceasefire's effect on youth violence in Boston.

The National Academies' Panel on Improving Information and Data on Firearms (Wellford, Pepper, and Petrie 2005) concluded that the Ceasefire evaluation was compelling in associating the intervention with the subsequent decline in youth homicide. However, the Panel also suggested that many complex factors affect youth homicide trends, making it difficult to specify the exact relationship between the Ceasefire intervention and subsequent changes in youth offending behaviors. While the DOJ-sponsored evaluation controlled for existing violence trends and certain rival causal factors such as changes in the youth population, drug markets, and employment in Boston, there could be complex interaction effects among these factors, not measured by the evaluation, that could account for some meaningful portion of the decrease. The evaluation was not a randomized controlled experiment. Therefore, the non-randomized control group research design cannot rule out these internal threats to the conclusion that Ceasefire was the key factor in the youth homicide decline. The National Academies' Panel also found that the evidence on the effectiveness of the pulling levers focused deterrence strategy in other settings was quite limited (Wellford et al. 2005). The available evidence on the effects of pulling levers programs in other jurisdictions was scientifically weak.
For instance, sudden large decreases in homicide and serious gun violence followed the implementation of pulling levers in Baltimore (Braga, Kennedy, and Tita 2002), Minneapolis (Kennedy and Braga 1998), Stockton (CA) (Wakeling 2003), and High Point (NC) (Coleman, Holton, Olson, Robinson, and Stewart 1999). Unfortunately, these assessments did not use control groups and relied upon simple pre-post measurements of trends in homicide and gun violence. In East Los Angeles, a DOJ-sponsored replication of Operation Ceasefire experienced noteworthy difficulty keeping the social service and community-based partners involved in the interagency collaboration (Tita, Riley, Ridgeway, Grammich, Abrahamse, and Greenwood 2003). However, the law enforcement components of the intervention were fully implemented and focused on two gangs engaged in ongoing violent conflict. The quasi-experimental evaluation revealed that the focused enforcement resulted in significant short-term reductions in violent crime and gang crime in targeted areas relative to matched comparison areas (Tita et al. 2003). Since the publication of the Panel's report, several rigorous evaluations of the effects of pulling levers on gang violence in other jurisdictions have been completed. For instance, a quasi-experimental evaluation of the Indianapolis Violence Reduction Partnership found that the pulling levers strategy was associated with a 42% reduction in homicide in Indianapolis (McGarrell, Chermak, Wilson, and Corsaro 2006). When compared to homicide trends in the nearby cities of Cleveland, Cincinnati, Kansas City, Louisville, and Pittsburgh, the evaluation found that Indianapolis was the only city experiencing a statistically significant decrease in homicide during the study time period. 
In Chicago, a quasi-experimental evaluation of a Project Safe Neighborhoods gun violence reduction strategy found significant reductions in homicides in treatment neighborhoods relative to control neighborhoods (Papachristos, Meares, and Fagan 2007). The evaluation found that the largest effect was associated with preventive tactics based on the pulling levers strategy, such as offender notification meetings that stress individual deterrence, normative change in offender behavior, and enhanced perceptions of police legitimacy and procedural justice. Given the growing popularity of pulling levers policing, as well as the conflicting views on its crime prevention value, a systematic review of the empirical evidence on the effects of pulling levers interventions on crime is necessary to assess the value of this approach to crime prevention. This review will synthesize the existing published and non-published empirical evidence on the effects of pulling levers on crime and will provide a systematic assessment of the preventive value of this approach.

Pulling levers focused deterrence strategies represent a specific application of deterrence strategies within a problem-oriented policing framework (Kennedy 2006; Braga 2008; see also Goldstein 1990; Eck and Spelman 1987). In short, pulling levers strategies use the iterative problem-oriented policing process (scanning, analysis, response, and assessment) to frame an interagency response to deter groups of chronic offenders from continuing their ongoing violent conflicts. As such, the proposed review will also include a literature review that sets the practical and theoretical context for the development of pulling levers approaches to crime prevention. Only studies that use comparison group designs involving before and after measures will be eligible for the main analyses of this review. In most pulling levers evaluations (e.g. Braga et al. 2001; McGarrell et al.
2006), the control group experiences routine modern police responses to crime. Control areas usually experience a blend of traditional police responses (e.g., random patrol, rapid response, and ad-hoc investigations) and opportunistic community problem-solving responses. While strategic interventions developed from community policing initiatives may be present in certain control areas, none of the control areas employed pulling levers policing strategies to address crime problems. As part of the review, we will also examine and document less rigorous research evaluation evidence (e.g. simple pre-post assessments without control groups). Although we have strong concerns regarding the methodological rigor of such studies, we will identify and analyze them separately from our main analyses.

Pulling levers strategies attempt to influence the criminal behavior of individuals through the strategic application of enforcement and social service resources to facilitate desirable behaviors. However, existing reviews of the crime prevention value of pulling levers strategies have noted that published evaluations only report aggregated measures of underlying levels of criminal behaviors in targeted areas (Wellford et al. 2005). For example, in Boston, the pulling levers intervention targeted violent behavior among gang-involved offenders. The evaluators measured behavioral change among gang-involved offenders by examining city-wide trends in aggregated measures of serious violence (Braga et al. 2001). Since most of the existing evidence on the crime prevention value of pulling levers involves aggregated measures of changes in individual behaviors, this review will be limited to area-level analyses of aggregated crime trends.
If the search strategies reveal new or unpublished studies that examine individual-level analyses of changes in individual criminal behavior, these studies will be included in our examination as additional descriptive information on the crime prevention mechanisms associated with the pulling levers approach. However, individual-level studies will not be included in the formal systematic review. It is important to note here that the research strategy will yield a diverse set of targeted areas across the identified studies. For example, evaluations of pulling levers policing strategies in Boston and Indianapolis compared the homicide trends in the targeted cities to homicide trends in non-randomized groups of control cities (Braga et al. 2001; McGarrell et al. 2006). Evaluations of pulling levers programs implemented in Los Angeles and Chicago, however, compared violent crime trends in treatment neighborhoods to violent crime trends in non-randomized groups of control neighborhoods (Tita et al. 2003; Papachristos et al. 2007). This heterogeneity in the units of analysis across studies could have varying and policy-relevant effects on crime prevention outcomes associated with the pulling levers strategies. As such, the review will also classify the types of areas to ensure that the review is measuring similar findings across the potentially diverse set of locations subjected to treatment.

Eligible studies must measure the effects of police intervention on officially recorded levels of crime at places. Appropriate measures of crime could include crime incident reports, citizen emergency calls for service, or arrest data. Other outcome measures, such as survey, interview, and victimization measures used by eligible studies to measure program effectiveness, will also be coded and analyzed.
While all eligible studies must include a crime outcome measure, we will also collect data on community satisfaction measures such as citizen attitudes towards police, fear of crime, and other outcomes. Particular attention will be paid to studies that measure crime displacement effects and diffusion of crime control benefit effects. Kennedy (2006) describes a place-based application of pulling levers focused on a disorderly drug market operating in High Point, North Carolina. Policing strategies focused on specific locations have been criticized as resulting in displacement (see Reppetto 1976). More recently, academics have observed that crime prevention programs may result in the complete opposite of displacement: crime control benefits may be greater than expected and "spill over" into places beyond the target areas (Clarke and Weisburd 1994). The quality of the methodologies used to measure displacement and diffusion effects, as well as the types of displacement (spatial, temporal, target, modus operandi) examined, will be assessed.

These different sources will complement each other in the identification of eligible pulling levers policing studies. For example, if an eligible study exists that is not captured by one of the search strategies or does not appear in one of the on-line databases, contacts with leading researchers and searches of existing bibliographies are likely to discover it. We will also contact researchers involved in previous systematic reviews, such as the recent problem-oriented policing review by Weisburd et al. (2010), to determine whether any pulling levers interventions were identified. All published and unpublished studies will be considered for this review. Each on-line database will be searched as far back as possible. However, since pulling levers policing is a very recent development in crime prevention, the search strategies described above should be sufficient to identify all relevant studies.
In addition, two existing registers of randomized controlled trials will be consulted. These include (1) the "Registry of Experiments in Criminal Sanctions, 1950–1983" (Weisburd et al. 1990) and (2) the "Social, Psychological, Educational, and Criminological Trials Register" or SPECTR, being developed by the United Kingdom Cochrane Centre and the University of Pennsylvania (Turner et al. 2003). The reviewer will screen abstracts and leads to potentially eligible studies and decide which full-text reports should be acquired. Only the full-text papers of titles and abstracts indicating, or potentially indicating, an evaluation of a pulling levers strategy will be obtained. Studies that use randomized controlled designs or quasi-experimental techniques such as matching, statistical controls, comparison groups, and the like will be considered for inclusion in the review. In cases of ambiguity, the full text of the study will be obtained in order to properly determine whether an eligible study design was used. Correlational and observational studies without control groups that examine the effects of pulling levers policing on crime will be noted and will appear in catalog form in an appendix to the final report. These descriptive studies will not be included in the formal analysis reporting the findings of the review.

Studies meeting the eligibility criteria set forth above will be coded for a range of characteristics related to methodological quality, including the definition criteria used to identify the units of analysis, the statistical tests used to determine crime prevention effectiveness, the measurement of displacement, the violation of randomization procedures, case attrition from the study, and the subversion of the experiment by participants. Farrington (2003) proposes five easily understood methodological criteria to assess the methodological quality of evaluation studies.
These criteria include statistical conclusion validity, internal validity, construct validity, external validity, and descriptive validity. As appropriate and possible, the role of the various methodological factors in the observed empirical results will be assessed. However, it is important to recognize that eligible studies may not detail or even mention implementation issues. Indeed, all field experiments face implementation difficulties, and care will be taken not to artificially downgrade the value of certain studies simply because a study provided an open account of potential process problems.

The reviewer, with the help of a trained research assistant, will extract information from the full-text report on the characteristics of the study using a carefully designed data extraction instrument (see coding instruments included in Appendix A). A content analysis will be conducted on the full text of the report and the data extraction instrument will capture data on the relevant dimensions of this review. These dimensions include: a complete description of the treatment, methods used to define and identify targeted areas, research design and statistical techniques, threats to the research design, crime outcome measures, and alternative outcome measures. When important information is missing from available study reports, the original researchers will be contacted, if possible, to determine if they can supply that information. A single evaluation of a pulling levers policing intervention may provide data on multiple outcome measures. For example, the quasi-experimental evaluation of Boston's Operation Ceasefire intervention presents an array of outcome measures including youth homicide incident data, gun assault incident data, and "shots fired" citizen calls for service data (Braga et al. 2001). Separate studies by Winship and Berrien (1999) and Stoutland (2001) reported community perceptions of the value of Operation Ceasefire in preventing gang violence.
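The extraction dimensions listed above can be pictured as a simple structured record. The sketch below is a hypothetical illustration only; the field names are ours, not those of the actual coding instrument in Appendix A.

```python
from dataclasses import dataclass, field

@dataclass
class StudyRecord:
    """Hypothetical data-extraction record mirroring the review's coding
    dimensions; field names are illustrative, not the actual instrument."""
    treatment_description: str
    target_definition_methods: str
    research_design: str
    design_threats: list = field(default_factory=list)
    crime_outcomes: list = field(default_factory=list)
    alternative_outcomes: list = field(default_factory=list)

# Hypothetical coded study:
record = StudyRecord(
    treatment_description="Pulling levers focused deterrence targeting gang violence",
    target_definition_methods="Gang audit conducted with an interagency working group",
    research_design="Quasi-experimental, non-randomized comparison areas",
    design_threats=["history", "selection"],
    crime_outcomes=["youth homicide incidents", "gun assault incidents"],
)
```

One such record per eligible study would hold the treatment description, targeting methods, design, design threats, and outcome measures side by side for later synthesis.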
Pulling levers interventions could also be targeted at chronic offenders responsible for generating a variety of offenses such as violent crimes, property crimes, and drug crimes. The treatment could have varying effects on offending trends and patterns in different crime categories. For cases such as this, with multiple findings from the same sample, each will be examined independently to decide how to either combine the findings or choose the one that best represents the study. It is likely that most pulling levers interventions will be designed to deal with a specific problem, but some may also target secondary problems and report outcomes for these as well. In these cases the effect size for the primary problem will be reported. For instance, while multiple outcomes are reported, the Braga et al. (2001) evaluation clearly identifies the monthly counts of "youth homicide" (homicide victims 24 and younger) in Boston as the primary outcome measure. However, some studies might have multiple primary outcomes. Analyzing these separately would clearly lead to problems regarding statistical dependence of outcomes. As such, we will code a maximum of three primary outcomes, using the criteria of choosing the maximum, median, and minimum effect sizes to offer flexibility in calculating an overall effect size for such studies. The same strategy will be used for any studies reporting the same outcome multiple times with different types of data (i.e. a study evaluating the impact of a pulling levers intervention on "gun violence" may use gun homicide incidents, gun assault incidents, and "shots fired" calls for service as primary outcome measures). Finally, some studies may involve multiple sites, i.e. a pulling levers program delivered by one police department/taskforce to specific problems in multiple areas within a city.
Such cases will be treated as one study with sub-units, and independent effect sizes for primary outcomes will again be created in the same manner as above. Analysis of outcome measures across studies will be carried out in a uniform manner and, where appropriate and possible, will involve quantitative analytical methods. As described above, existing reviews of the effects of pulling levers policing on crime have identified only a handful of studies that use control group evaluation designs. The expanded search strategy and time period described above may identify a few additional studies. The review will consist of simple descriptive statistics reporting the proportion of studies reporting significant effects on outcome measures, the size of the effect, and the direction of the effect. At a minimum, the systematic review will rely upon "vote counting" procedures to assess the effects of the pulling levers interventions on crime. In this rudimentary approach, each study metaphorically casts a vote for or against the effectiveness of treatment. Unfortunately, vote counting methods of synthesizing results across studies suffer from a number of limitations. These weaknesses include: failing to account for the differential precision of the studies being reviewed (e.g. larger studies, all else being equal, provide more precise estimates); failing to recognize the fundamental asymmetry of statistical significance tests (e.g. a large proportion of nonsignificant findings in the same direction provides evidence that the null hypothesis is false); ignoring the size and direction of observed program effects; and the fact that, if the statistical power of the studies in the area of concern is low, the likelihood of arriving at an incorrect conclusion increases as the number of studies on a topic increases (Wilson 2001: 73–74).
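The rudimentary vote-counting procedure described above can be sketched in a few lines. This is a minimal illustration, not part of the review's analysis plan, and the study results tallied in the example are hypothetical.

```python
def vote_count(studies, alpha=0.05):
    """Tally each study's 'vote': a statistically significant crime reduction,
    a significant increase, or a null finding. `studies` is a list of
    (effect_direction, p_value) pairs, with a negative direction indicating
    a crime decrease. Note this ignores effect magnitude and study precision,
    which is exactly the limitation discussed in the text."""
    votes = {"reduction": 0, "increase": 0, "null": 0}
    for direction, p in studies:
        if p >= alpha:
            votes["null"] += 1
        elif direction < 0:
            votes["reduction"] += 1
        else:
            votes["increase"] += 1
    return votes

# Five hypothetical evaluations: three significant reductions, two null results.
print(vote_count([(-1, 0.01), (-1, 0.03), (-1, 0.20), (1, 0.40), (-1, 0.04)]))
# → {'reduction': 3, 'increase': 0, 'null': 2}
```

Because every study casts an equal vote regardless of sample size or effect magnitude, the tally discards precisely the information that the weighted meta-analytic procedures retain.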
Meta-analyses of program effects avoid these pitfalls by focusing on the size and direction of the effects, not whether the individual effects were statistically significant, and by weighting effect sizes based on the variance of the effect size and the study sample size (Lipsey and Wilson 2001). Meta-analytic procedures will be used to combine data from studies. For eligible studies with enough data present, effect sizes will be calculated using the standardized measures of effect sizes suggested in the meta-analytic literature (e.g. see Lipsey and Wilson 2001). Mean effect sizes will be computed across studies and we will use a correction such as the inverse variance weight for computing the associated standard error. For eligible studies that contrast two groups that have a continuous underlying distribution, the standardized mean difference effect size (also known as Cohen's d; see Rosenthal 1994; Shadish et al. 2003) will be used in our meta-analysis to synthesize results. This effect size measure would be appropriate for eligible evaluations that contrast an outcome measure, such as mean homicide counts, for treatment and control areas over pre-test and post-test observation periods.

However, a priori, we know that several pulling levers quasi-experimental evaluations, such as the Boston Operation Ceasefire evaluation (Braga et al. 2001), use interrupted time series analyses that compare trends in serious violence in treated areas to trends in serious violence in comparison areas. Controlling for other rival factors, seasonal variations, and existing secular trends, program impacts are determined by estimating the change in outcome measures via a dummy variable indicating the absence and presence of the pulling levers program over the time series. The estimated program effect size, direction, and statistical significance level are then compared to similar coefficients estimating pre-post contrasts in outcome trends in comparison areas.
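The standardized mean difference and inverse-variance pooling described above can be sketched as follows. This is a minimal fixed-effect illustration: the variance approximation for d follows the form given in Lipsey and Wilson (2001), and all numbers in the usage example are hypothetical.

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference between treatment and control outcomes,
    scaled by the pooled standard deviation."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / sd_pooled

def d_variance(d, n_t, n_c):
    """Approximate sampling variance of d (Lipsey and Wilson 2001)."""
    return (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))

def pooled_effect(effects):
    """Fixed-effect mean effect size under inverse-variance weighting,
    with its standard error. `effects` is a list of (d, variance) pairs."""
    weights = [1.0 / v for _, v in effects]
    mean = sum(w * d for w, (d, _) in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return mean, se

# Hypothetical monthly homicide means/SDs for treatment vs. comparison areas:
d1 = cohens_d(8.0, 12.0, 4.0, 4.0, 30, 30)          # -1.0: fewer homicides in treatment
mean, se = pooled_effect([(d1, d_variance(d1, 30, 30)), (-0.5, 0.2)])
```

More precise estimates (smaller variances) pull the pooled mean toward themselves, which is how the meta-analysis accounts for the differential precision that vote counting ignores.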
In our proposed systematic review, we will qualitatively examine the assessment of the treatment coefficient relative to comparison coefficients in each eligible study to determine whether the study comes to an appropriate conclusion regarding the absence or presence of an actual treatment effect. Then, for each eligible study, we will calculate standardized mean gain effect size coefficients (a measure that calculates a standardized pre-post contrast that can be compared across samples and studies; see Lipsey and Wilson 2001: 44–45) to be used in our meta-analysis to synthesize results. In our proposed meta-analyses, we will examine the Q statistic to assess heterogeneity of effect sizes across studies, though it is our initial assumption that effect size is a fixed factor in our analysis. Our assumption of fixed effects comes from our view that pulling levers programs are fairly consistent with one another. Nonetheless, we recognize that there may be considerable diversity across programs that has not so far been recognized. If the Q statistic and our reading of the studies suggest this, we will implement a random effects model for all analyses involving effect sizes.

We also hope to examine contextual or moderating features of pulling levers approaches. We are interested in whether the strength of the effect varies across departments or other contextual variables. To assess this, we will use the analog to the ANOVA method of moderator analysis (see Lipsey and Wilson 2001) for categorical moderator variables and meta-analytic regression analysis for continuous moderator variables or analyses involving multiple moderators.

As described earlier, we will also examine and document less rigorous research evaluation evidence (e.g. simple pre-post assessments without control groups) on the crime prevention effectiveness of pulling levers strategies. Following the methods of a recent systematic review by Weisburd et al.
(2010), we will report the percent change for the pre-post studies and the average percent change for all pre-post studies. When more than one primary outcome is present, we will average these outcomes to create a single outcome. To account for variation in sample size, we will also calculate a weighted average percent change by weighting each study by the inverse of its variance (assuming that crime follows a Poisson distribution). The sampling variance will allow us to construct a confidence interval around the percent change of each study and, after weighting each study by the inverse of its variance, we will recalculate the average percent change. This analysis will be separate from the main analyses described in this section and will receive far less emphasis in our conclusions on the crime prevention value of pulling levers strategies.

Qualitative research on crime and disorder outcomes will not be formally included in this systematic review. Qualitative insights on the crime prevention value of pulling levers policing will be included as descriptive information in the review report. If the search strategies reveal a number of qualitative studies, the authors will engage a qualitative researcher to assist in future updates to this review with a synthesis of qualitative evaluation measures.

The estimated timeline for a completed report includes the following benchmarks and anticipated dates:

In accordance with Campbell Collaboration guidelines, we will update this review once every three years.

We would like to thank David B. Wilson, Terri Piggott, and several anonymous reviewers for their advice and helpful comments on earlier versions of this protocol.

David Weisburd does not have any conflicts of interest in conducting a review of the crime prevention effects of pulling levers policing strategies.
However, with colleagues, Braga has conducted a quasi-experimental evaluation that found pulling levers policing to be effective in reducing serious violence among gang members in Boston (see Braga et al. 2001). Although Braga does not have an ideological bias towards the effectiveness of pulling levers interventions, it may be uncomfortable for him to report findings in this review that contradict the findings of his evaluation or related evaluations conducted by his colleagues.

Appendix A: Coding Instrument

A study must meet the following criteria in order to be eligible. Answer each question with a "yes" or a "no."

If the study does not meet the criteria above, answer the following question:
The study is a review article that is relevant to this project (e.g. may have references to other studies that are useful, may have pertinent background information) ____

Eligibility status: ____
Notes: ______________________________________________________________________________________________

Date range of research (when research was conducted): Start: ____________ Finish: ____________

Did the study formally identify the treatment as a pulling levers policing intervention?
If No, what did the study call the intervention? _________________________________________________________________
What crime problem was targeted for the pulling levers intervention? (Select all that apply)
Who were the primary targets of the pulling levers intervention? (Select all that apply)
If the intervention was primarily targeted at "high-risk individuals," please describe the individuals: (Select all that apply)
Specifically, what event(s) make up the problem?
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

Was the pulling levers intervention developed based on an analysis of the targeted problem?
At what unit of analysis was the treatment delivered / at what was the intervention directed? (Select all that apply)
What agency was primarily responsible for the implementation of the intervention? (Select the lead agency only)
What groups were involved in the implementation of the intervention? (Select all that apply)
What key elements of the pulling levers strategy were identified in the program evaluation? (Select all that apply)
If a communications strategy was present, please identify the key elements of the message(s). (Select all that apply)
If a communications strategy was present, how were the message(s) delivered/marketed to the targeted audience? (Select all that apply)
What did the evaluation indicate about the implementation of the response? _____
If the process evaluation indicated there were problems with implementation of the response, describe these problems:
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
If the process evaluation identified inadequate participation by involved agencies, indicate the agencies below that were responsible for weak participation (Select all that apply)

City (and state/province, if applicable) where study was conducted: ____________________
Type of study: _________
How were study units allocated to treatment or comparison conditions?
Explain how independent and extraneous variables were controlled so that it was possible to disentangle the impact of the intervention, or how threats to internal validity were ruled out:
___________________________________________________________________________

The following questions refer to the area receiving treatment:
Geographic area receiving treatment: ______
What is the exact geographic area receiving treatment?
______________________________________________________________________________

The following question refers to the area not receiving treatment:
Geographic area NOT receiving treatment: ______

What were the causal hypotheses tested in this study?
__________________________________________________________________________________________
Please identify any theories from which the causal hypotheses were derived.
__________________________________________________________________________________________

Outcomes reported (note that a separate coding sheet is required for each outcome)
What is the specific outcome recorded on this coding sheet? _______________________________________________________________
Was it the primary outcome of the study? _______
Was this initially intended as an outcome of the study?
______ If no, explain why:
______________________________________________________________________________

Unit of Analysis
What was the unit of analysis for the research evaluation?
Did the researchers collect nested data within the unit of analysis?

Dependent Variable
What type of data was used to measure the outcome covered on this coding sheet? ____
If official data were used, what specific type(s) of data were used? (Select all that apply)
If researcher observations were used, what types of observations were taken? (Select all that apply)
If self-report surveys were used, who was surveyed? (Select all that apply)
For the units of analysis in this study, what time periods were examined for the outcome covered on this coding sheet?
What was the length of the follow-up period after the intervention? ___________________________________________________________________________
Did the researcher assess the quality of the data collected?
Did the researcher(s) express any concerns over the quality of the data? If yes, explain:
______________________________________________________________________________
Do the evaluation data correspond to the initially stated problem? (i.e. if the problem is gang violence, do the evaluation data specifically look at whether gang violence changed?) If no, explain the discrepancy:
______________________________________________________________________________

Dependent Measure Descriptors
Statistical analysis design: _____
Sample size:
Was attrition a problem in the analysis for this outcome?
If attrition was a problem, provide details (e.g. how many cases were lost and why they were lost):
______________________________________________________________________________
What do the sample sizes above refer to?

Effect Size Data
Raw difference favors (i.e. shows more success for): ____
Did a test of statistical significance indicate statistically significant differences between either the control and treatment groups or the pre- and post-tested treatment group? ____
Was a standardized effect size reported?
If no, are data available to calculate an effect size?
Type of data the effect size can be calculated from:
    Means and standard deviations
    Proportions or frequencies
    Significance tests
Calculated effect size: ____

Note that the following questions refer to conclusions about the effectiveness of the intervention with regard to the outcome addressed on this coding sheet.
Conclusion about the impact of the intervention: _____
Did the assessment find evidence of geographic displacement of crime? ______
Did the assessment find evidence of temporal displacement of crime? _____
Did the author(s) conclude that the pulling levers intervention was beneficial? _____
Did the author(s) conclude there was a relationship between the pulling levers intervention and a reduction in crime? _____
Who funded the intervention? ______________________________________________________________________________
Who funded the evaluation research? ______________________________________________________________________________
Were the researchers independent evaluators?
If no, explain the nature of the relationship:
______________________________________________________________________________

Additional notes about conclusions:
______________________________________________________________________________________________

Additional notes about study:
______________________________________________________________________________________________
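Where a study reports only group means, standard deviations, and sample sizes, the effect size items above can be filled in by computing a standardized mean difference directly. The sketch below shows the conventional Cohen's d with a pooled standard deviation; it is offered only as an illustration of the calculation, and the function name and example figures are our own, not drawn from any study under review.

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

# Hypothetical example: a treatment area averaging 8 incidents per month
# (SD 3, 24 monthly observations) versus a comparison area averaging 11
# (SD 4, 24 monthly observations). A negative d favors the treatment area.
d = cohens_d(8, 3, 24, 11, 4, 24)
```

When only proportions, frequencies, or significance tests are reported, the corresponding standard conversion formulas would be applied instead before entering the value under "Calculated effect size."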