Volume 1, Issue 1 • Fall 2011

Table of Contents

Foreword

Measuring Recidivism in Juvenile Corrections

Barron County Restorative Justice Programs: A Partnership Model for Balancing Community and Government Resources for Juvenile Justice Services

Parents Anonymous® Outcome Evaluation: Promising Findings for Child Maltreatment Reduction

Assessing Efficiency and Workload Implications of the King County Mediation Pilot

The Impact of Juvenile Drug Courts on Drug Use and Criminal Behavior

Missouri’s Crossover Youth: Examining the Relationship between Maltreatment History and Risk of Violence

Assessing and Improving the Reliability of Risk Instruments: The New Mexico Juvenile Justice Reliability Model

School Policies, Academic Achievement, and General Strain Theory: Applications to Juvenile Justice Settings

Measuring Recidivism in Juvenile Corrections

Philip W. Harris
Department of Criminal Justice, Temple University, Philadelphia, Pennsylvania

Brian Lockwood
Department of Criminal Justice, Monmouth University, West Long Branch, New Jersey

Liz Mengers
Council of Juvenile Correctional Administrators, Braintree, Massachusetts

Bartlett H. Stoodley
Maine Department of Corrections, Augusta, Maine

Acknowledgements
This article is based on Harris, Lockwood, and Mengers (2009). Development of the Council of Juvenile Correctional Administrators’ (CJCA) standards for measuring recidivism was supported by a conference-support grant to CJCA from the Office of Juvenile Justice and Delinquency Prevention, U.S. Department of Justice (Grant #2007-JL-FX-0049). The opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the U.S. Department of Justice.

Correspondence concerning this article should be addressed to Philip W. Harris, Department of Criminal Justice, Temple University, Philadelphia, PA 19122. E-mail: phil.harris@temple.edu

Key words: Juvenile justice, corrections, recidivism, outcomes measurement

Abstract

Clear communication of program outcomes and system performance in juvenile justice is often hampered by the lack of standard definitions and inconsistent measurement, especially in relation to recidivism. In juvenile corrections, developing knowledge of best practices and effective programs, and obtaining support for the replication of evidence-based programs, depends heavily on an agency’s ability to present performance data clearly and consistently to policy makers. To bring greater consistency and clarity in the use of recidivism as an indicator of system performance, the Council of Juvenile Correctional Administrators (CJCA), the national organization of state juvenile correctional chief executive officers (CEOs), developed standards for defining and measuring recidivism in 2009. This article presents these standards and describes their development and rationale.

Introduction

In juvenile corrections, recidivism—the commission of repeat offenses—is the most commonly used indicator of program and system effectiveness. Preventing recidivism, the goal of most programs for delinquent youth, is informed by criminological theory as well as state and federal policy. Indeed, the phrase “delinquency prevention” is contained in the name of the federal agency that guides national priorities in juvenile justice, the Office of Juvenile Justice and Delinquency Prevention (OJJDP).
The dominance of recidivism as the central measure of juvenile correctional program performance is, some believe, due to the ease with which recidivism data can be obtained (Maltz, 2001). Criminal justice agencies systematically record arrest, conviction, and re-incarceration data. What’s more, recidivism is tied closely to an underlying public concern, that of personal safety.

Demands for performance data on the effectiveness of juvenile justice programs are ubiquitous. State-level policy makers are often interested in reviewing recidivism data to compare their juvenile justice system’s performance with that of other states. States publishing recidivism rates create benchmarks against which every state can compare its own rates (Sentencing Project, 2010). These comparisons allow policy makers not only to establish realistic goals, but more importantly to hold juvenile justice agencies accountable for the outcomes of their taxpayer-funded services.

Juvenile correctional agencies are, therefore, under pressure to produce evidence of their achievements. To sustain and improve results of interest to policy makers, state and local governments must fiscally support agency operations and program enhancements. The support provided to an agency in legislative budget hearings, especially in times of spending cuts, is heavily contingent on its ability to document its effectiveness in reducing recidivism.

This article addresses a problem that seriously hampers the ability of juvenile correctional agencies to produce meaningful performance data for purposes of accountability and to create outcome information that supports quality improvement efforts. Specifically, this article addresses the lack of uniform research and reporting standards for tracking recidivism, provides a new set of standards for measuring recidivism adopted by CJCA, and describes the process by which the standards were developed.

Defining and Measuring Recidivism

Recidivism data have been reported in terms of a variety of measures, including re-arrest, re-adjudication, and re-incarceration. As a result, policy makers are often unable to make sense of seemingly disparate findings or draw meaningful conclusions about program or system performance. Common definitions and indicators are necessary to clearly communicate the meaning of outcome study results, to unambiguously describe the methods used to obtain research findings, to enable replication of research designs, to make possible comparisons across studies and regions, and to facilitate our understanding of program or system effectiveness.

Defining Recidivism

By definition, recidivism comprises two elements: 1) the commission of an offense, 2) by an individual already known to have committed at least one other offense (Blumstein & Larson, 1971). To have a truly operable definition, one must clarify and qualify both parts.

Regarding the first half of the definition, one must ask: What constitutes “commission of an offense?” Does the definition include status offenses? Would parole violations fall under the definition? One might assume that the phrase “commission of an offense” refers to a criminal act and thus excludes status offenses and parole violations. Consistency requires that the phrase “commission of an offense” be defined explicitly.

For the second half of the definition, one must ask: Who is considered to be “an individual already known to have committed at least one other offense?” In the case of a juvenile, must the juvenile have been found guilty of an offense? If the juvenile had been arrested but diverted prior to adjudication, is she included in this definition? A policy maker might argue that diversion does not imply innocence; in fact, it implies or requires admission of guilt. Thus, if a youth who was previously diverted comes before the court on a subsequent offense, is that not recidivism? Evaluators must agree on uniform answers to these questions or their findings will be difficult to interpret or compare.

Measuring Recidivism

The terms “define” and “measure” are often confused by non-researchers. Whereas “define” refers to the meaning of a word—in this case, recidivism—“measure” refers to a method of systematically determining its extent or degree within a given sample. In program evaluations, the measures are the types of data used to determine recidivism levels. These data types include, among others, self-reports of re-offending, arrest records, and court records of adjudication and disposition.

The distinctions among these measures are important, as their use will generate vastly disparate results. Snyder and Sickmund (2006) found that measuring recidivism as re-arrest yields a far higher average rate than measuring it as re-incarceration, which produces an average recidivism rate of only 12% (see also Mears & Travis, 2004). In general, we can expect measures derived from actions occurring later in the case processing system to produce lower recidivism rates. The shrinkage stems from the decisions of officials to remove (dismiss or divert) some cases at each decision point, allowing only the remaining cases to continue to the next stage in the process.
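This funnel effect can be illustrated with a small hypothetical cohort. The data and field names below are invented for illustration only; they are not drawn from any study cited here.

```python
# Illustrative sketch (hypothetical data): the same cohort yields a
# different "recidivism rate" depending on which decision point is
# counted as the measure.

# Each record marks how far a youth's new offense traveled through the
# system; a later stage implies the earlier stages also occurred.
cohort = [
    {"rearrested": True,  "readjudicated": True,  "reincarcerated": True},
    {"rearrested": True,  "readjudicated": True,  "reincarcerated": False},
    {"rearrested": True,  "readjudicated": False, "reincarcerated": False},
    {"rearrested": True,  "readjudicated": False, "reincarcerated": False},
    {"rearrested": False, "readjudicated": False, "reincarcerated": False},
    {"rearrested": False, "readjudicated": False, "reincarcerated": False},
    {"rearrested": False, "readjudicated": False, "reincarcerated": False},
    {"rearrested": False, "readjudicated": False, "reincarcerated": False},
]

def rate(measure):
    """Share of the cohort counted as recidivists under one measure."""
    return sum(youth[measure] for youth in cohort) / len(cohort)

for measure in ("rearrested", "readjudicated", "reincarcerated"):
    print(f"{measure}: {rate(measure):.0%}")
# Rates shrink at each later decision point because officials dismiss
# or divert some cases at every stage of processing.
```

Run on this toy cohort, the re-arrest rate is twice the re-adjudication rate and four times the re-incarceration rate, mirroring the shrinkage described above.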

Choosing appropriate measures of recidivism also requires consideration of implications for which cases are excluded. For example, requiring a finding of guilt excludes diversion cases, a decision option often favored by policy makers and juvenile justice advocates.

Recidivism and Follow-up

A program evaluation’s follow-up methods can also dramatically affect the level of recidivism researchers detect. For example, researchers may measure recidivism from different starting points, such as when juveniles are released from institutions to aftercare or when their cases are terminated by the court (Barnoski, 1997; Maltz, 1984). Researchers also may employ follow-up periods of different durations (longer periods of follow-up are likely to increase the proportion of youths found to reoffend). Furthermore, instead of using the same follow-up duration for each case, researchers may terminate follow-up on a particular date, thus limiting the time of follow-up for some cases.
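The sensitivity of measured recidivism to these follow-up rules can be sketched as follows. The dates, the 24-month window, and the cutoff date are all hypothetical assumptions chosen for illustration.

```python
from datetime import date

# Hypothetical cases: release date and date of first reoffense (None =
# no detected reoffense). All dates are invented for illustration.
cases = [
    {"released": date(2007, 1, 15), "reoffended": date(2008, 10, 1)},
    {"released": date(2008, 6, 1),  "reoffended": date(2009, 9, 15)},
    {"released": date(2009, 3, 1),  "reoffended": date(2010, 8, 20)},
    {"released": date(2007, 7, 1),  "reoffended": None},
]

def within_months(start, event, months):
    """True if the event occurred within `months` of the start date."""
    if event is None:
        return False
    return (event - start).days <= months * 30.44  # approx. month length

# Rule A: every case gets the same 24-month window after release.
rate_fixed = sum(
    within_months(c["released"], c["reoffended"], 24) for c in cases
) / len(cases)

# Rule B: follow-up stops at a single study cutoff date, so recently
# released cases are observed for less than the full 24 months.
cutoff = date(2010, 1, 1)

def observed_before_cutoff(c):
    event = c["reoffended"]
    return (event is not None and event <= cutoff
            and within_months(c["released"], event, 24))

rate_cutoff = sum(observed_before_cutoff(c) for c in cases) / len(cases)

print(f"fixed 24-month window:  {rate_fixed:.0%}")   # counts all three reoffenses
print(f"cutoff-date follow-up:  {rate_cutoff:.0%}")  # misses the 2010 reoffense
```

The same four cases produce a lower rate under the cutoff-date rule because the most recently released youth's reoffense falls outside the observation window.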

Another common oversight is the omission of adult offenses. If a juvenile transitions to the adult system (as a result of age or the case being waived), evaluators will not detect his or her re-offenses if data are restricted to juvenile corrections records. This omission would result in an undercount of recidivism. The major obstacle to obtaining these data is, of course, access. Political and technical barriers need to be removed if recidivism studies are to follow cases for a reasonable period of time. Our review of the juvenile justice program evaluation literature (see below) showed a preference for a follow-up of two years.

There are several reasons researchers may choose not to follow cases for as long as several years. First, data collection costs increase with time; as long as the follow-up period is sufficient to capture a large proportion of new offenses, there is little justification for incurring additional costs. Second, except for studies with randomized or well-matched controls, factors not accounted for in the research design will, as time passes, increase or decrease the probability of a new offense. Finally, internal use of recidivism data for purposes of program improvement is most beneficial if the time between the delivery of services and measurement of their impact is relatively short. Thus, the pros and cons of different follow-up time periods must be weighed when establishing standards.

Contextual Differences Affecting Recidivism Measurement

Sanborn (2004) notes that there is no single American juvenile justice system. Instead, the United States has 52 different juvenile justice systems, each differing in ways that can account for a large proportion of differences in recidivism rates. To this number we can add the numerous tribal juvenile justice systems operating on American Indian reservations. If system differences are not taken into account, anyone comparing state-level recidivism rates is likely to draw erroneous conclusions.

Policy differences among the states, such as jurisdictional age, influence the characteristics of juvenile justice populations. For example, New York, which treats 16-year-olds as adults, will have different recidivism rates from Pennsylvania, where the jurisdiction of the juvenile justice system ends at 18. Pennsylvania’s data would include youths 16 and 17, while New York’s would not. A comparison of New York and Pennsylvania recidivism data would be biased unless youths older than 15 were excluded from the analysis.

Other factors likely to cause recidivism rates to vary include differences in police practices, the quality of aftercare services, arrest and conviction standards, policies on waivers to the adult system, and guidelines for diverting and dismissing cases. For example, Josi and Sechrest’s (1999) experimental study of the “Lifeskills ’95” program in California demonstrated the effectiveness of an aftercare program that combined structured socialization training, positive expectations, individualized treatment, vocational training, and employment on reducing recidivism. At the same time, these researchers observed that negligence on the part of parole and correctional agencies often undermined successful reintegration.

As an example of the potential influence of police practices on the measurement of recidivism rates, Philadelphia recently implemented a training curriculum for law enforcement. Created by local policy makers and supported by the Pennsylvania Commission on Crime and Delinquency and the MacArthur Foundation’s Disproportionate Minority Contact (DMC) Action Network, this four-stage training is designed to equip police officers with the skills and understanding needed to use their discretion more effectively in reducing minority group arrests (Scott & McKitten, n.d.). These examples indicate that innovations in practice that change decision-making at critical points in the official processing of juvenile cases may produce differences in the outcomes we wish to compare.

Recidivism rates may also be affected by environmental factors within a jurisdiction. These include economic conditions, population density, levels of access to health care, and quality of education. For example, economically disadvantaged urban areas lacking social capital are likely to have high rates of crime and delinquency (Sampson & Groves, 1989). More specifically, neighborhoods characterized by sparse friendship networks, unsupervised teens, and low participation in organizations had higher rates of crime and delinquency than neighborhoods with greater social supports. More recently, Mennis and Harris (2011) found that juvenile recidivism is concentrated in specific neighborhoods and that different types of neighborhoods produce different rates, and different types, of offenses. For example, a neighborhood with well-organized drug markets increases the chances of recidivism, especially the commission of drug-related offenses. Thus, characteristics of the neighborhoods in which youth reside can influence patterns of recidivism observed by researchers.

Development of the CJCA Standards

Clear communication and more effective use of performance data would increase if definitions and measures of recidivism were standardized. This belief led CJCA to create standards for defining and measuring recidivism. The purpose of its work was to: 1) increase knowledge needed to reduce recidivism; 2) increase support for evidence-based programs, both proven and promising; and 3) support continuous quality improvement of programs and systems of services.

Methods

The strategy adopted by CJCA to develop standards for defining and measuring recidivism consisted of the following five components:

  1. A Recidivism Work Group was created, consisting of CJCA members and research staff from their respective state agencies, to provide an agency perspective and feedback to the principal researchers;
  2. CJCA included in its annual survey of member agencies questions regarding state-agency practices in measuring recidivism;
  3. The principal researchers reviewed recent program evaluations of juvenile correctional programs and related studies to gather information on how recidivism was defined and measured;
  4. The Recidivism Work Group members contributed to and commented on drafts of the standards and supporting materials; and
  5. Major findings were presented to the full CJCA membership at its January and August meetings in 2008 and 2009 to obtain consensus on standards recommended by the Recidivism Work Group.

The process of standards development was iterative, with the principal researchers reviewing and summarizing recidivism studies, the Recidivism Work Group members requesting clarification and adding measurement challenges not considered, and the CJCA membership examining summaries of findings from the literature reviews and recommendations from the Work Group. The variety of measurement practices found in the literature presented an important challenge, since consensus among the CJCA members was seen as essential. Our emphasis on consensus was based on recognition that successful implementation of the resulting standards would depend heavily on each member’s willingness to champion the standards within their agencies and states. In no case, however, did consensus supersede the body of research reviewed.

Measurement Practices in Juvenile Corrections and Program Evaluation

Occasionally, juvenile correctional agencies conduct in-depth studies of system-wide outcomes, including recidivism rates. Contracted external evaluators generally perform these studies. In January 2009, CJCA’s Recidivism Work Group collected ten of these reports from state agencies, listed in Table 1. Six of these studies measured recidivism as re-incarceration (e.g., “return to custody”), while four used adjudication (“re-adjudication”), and one used re-arrest. Follow-up periods used in these studies also varied, ranging from one to three years, with no particular time period dominating.


Table 1 Recidivism Measurement by State Juvenile Correctional Agencies

State           Recidivism Measure                     Other Information
Arizona         Return to custody                      Up to 36-month follow-up; differentiates new offense from technical violation
Colorado        Filing for new offense                 12-month follow-up
Kansas          Return to custody                      12-month follow-up
Louisiana       Re-adjudication and return to custody  36-month follow-up; includes adult cases
Maine           Re-adjudication                        18-month follow-up; first adjudication cases only
Massachusetts   Re-adjudication                        24-month follow-up
North Carolina  Re-arrest; re-adjudication             24-month follow-up; includes adult cases
Ohio            Return to custody or adult sentence    Not reported
Virginia        Return to custody                      Up to 36-month follow-up
Wisconsin       Return to custody                      Up to 24-month follow-up; differentiates new offense from technical violation

Aside from these occasional reports, most agencies also routinely monitor recidivism data. Through its annual survey, CJCA gathered information on how state agencies conduct such studies. Of the 51 responding agencies, 40 reported that they track recidivism data.

Most agencies use more than one measure of recidivism (see Table 2). Relatively few agencies (28%) use arrest (5% use arrest only), but nearly one-half (48%) use adjudication and/or commitment decisions. Notably, fewer than one-half (45%) follow clients into the adult system, perhaps because of difficulty in obtaining this information; in fact, only 32 of 40 agencies have access to data on youths transferred to the adult system. Moreover, most of the agencies (60%) follow juveniles for at least 24 months after their release to the community. Differences in follow-up periods may also be related to data access, as some agencies may have more difficulty than others in obtaining long-term follow-up data.


Table 2 Recidivism Measures Used by Juvenile Correctional Agencies

Recidivism Measure                                                 Number of States  Percentage of States
Arrest (total)                                                     11                28
Arrest only                                                        2                 5
Arrest plus one or more other actions                              9                 23
Adjudication (total)                                               19                48
Adjudication only                                                  8                 20
Adjudication plus one or more other actions                        11                28
Commitment to juvenile corrections (total)                         19                48
Commitment to juvenile corrections only                            4                 10
Commitment to juvenile corrections plus one or more other actions  15                38
Commitment to adult corrections (total)                            18                45
Commitment to adult corrections only                               2                 5
Commitment to adult corrections plus one or more other actions     16                40

Source: Harris et al., 2009. Copyright © 2009 by the Council of Juvenile Correctional Administrators. Reprinted by permission.



The program evaluation literature, which consistently uses recidivism data to measure program effectiveness in juvenile justice, is another source of information on recidivism research practices. By scouring online databases, the principal researchers found 45 published studies evaluating recidivism outcomes for adjudicated juveniles participating in residential and community-based programs. These studies used a variety of measures and commonly employed multiple measures and data sources to measure recidivism. A list of these studies can be found on pages 21-26 of Harris et al. (2009).

Comparing these studies with the survey data illustrates several differences in how recidivism is quantified, as seen in Table 3. The most striking difference is the use of commitment as a recidivism measure. State correctional agencies use this measure (juvenile commitment, 48%; adult commitment, 45%) with far greater frequency than do program evaluations (juvenile commitment, 4%; adult commitment, 4%). This difference is likely due to state correctional agencies having limited access to police and court data. However, there is some similarity: approximately one-half of program evaluations (51.1%) and state agencies (48%) measure recidivism as either adjudication or conviction.


Table 3 Recidivism Measures by State Agencies and Program Evaluations

Recidivism Measure          State Surveys (n=40)  Evaluations (n=45)
Probation/parole violation  7.5% (3)              13% (6)
Petition                    5% (2)                20% (9)
Arrest                      28% (11)              28.8% (13)
Petition or arrest          28% (11)              48.8% (22)
Adjudication/conviction     48% (19)              51.1% (23)
Juvenile commitment         48% (19)              4% (2)
Adult commitment            45% (18)              4% (2)
Multiple measures           60% (24)              48.8% (22)

Another difference between the program evaluation literature and survey data is the use of re-arrests and court petitions (or filings) to measure recidivism. The CJCA survey did not specifically ask respondents about petitions, but two states reported use of petitions, or filings, in the “other” category. The Work Group therefore created a combined “arrest or court petition” variable. Using this variable, we find a much higher proportion of program evaluations (48.8%) use petition or arrest as a measure of recidivism than do state agencies (28%).

Based on these differences, it can be said that state juvenile justice agencies most often use “back-end” measures (adjudication and re-incarceration) to measure recidivism, while program evaluations most often use “front-end” system decisions (arrest and petition). As Snyder and Sickmund (2006) observed, rates of recidivism decline with each subsequent case processing decision point. Consequently, program evaluators will, on average, find higher recidivism rates than state correctional agencies.

State agencies also use a one-year follow-up period (60%) nearly twice as often as program evaluators (33.3%), as seen in Table 4. Similarly, state agencies were more likely than program evaluators to use follow-up periods of three years or longer. Undoubtedly, time constraints of program evaluation projects reduce the feasibility of longer follow-up periods. A critical observation for the purposes of developing standards is that the average maximum follow-up for both state agencies and program evaluations is more than two years. Moreover, approximately one-third of the studies use multiple follow-up points. These observations suggest a preference for not terminating follow-up before two years have passed.


Table 4 Follow-up Time Periods, State Agencies and Program Evaluation Studies

Follow-up Period           State Surveys (n=40)  Evaluations (n=45)
Less than 1 year           15% (6)               8% (4)
1 year                     60% (24)              33.3% (15)
1.5 years                  5% (2)                4% (2)
2 years                    37.5% (15)            37.7% (17)
3 years                    37.5% (15)            13.3% (6)
More than 3 years          15% (6)               11% (5)
Average maximum follow-up  2.2 years*            2.4 years
Variable                   7.5% (3)              35.5% (16)
Other                      7.5% (3)              2% (1)
Multiple follow-up points  35% (14)              33.3% (15)

*These values are based on 38 of the 40 surveys.

State agencies and program evaluations generally use similar starting points for tracking recidivism: that is, either discharge from a residential facility or discharge from a program. However, in a small number of program evaluations of community-based services, recidivism measurement begins at the court disposition of the case, and thus includes the time period of program participation.

Feedback to CJCA Membership

The CJCA Recidivism Work Group developed standards for recidivism measurement through a collaborative process, beginning with a working document that included a literature review and the findings summarized above. To facilitate shared development of the standards, the Work Group exchanged emails with literature summaries attached. The Work Group participated in monthly conference calls during which emailed materials were clarified, detailed questions were asked and answered, and data elements were identified and recommendations formulated. The draft document was also posted on Google Docs, a service enabling Work Group members to suggest revisions.

The Work Group presented its findings and recommendations to the full membership at the organization’s January and August 2009 meetings, revising the recommended standards between meetings. There was general agreement on a two-year follow-up standard for tracking recidivism, and on starting the follow-up period upon release to the community (for youths in residential care) or at case termination (for youths in community-based programs), both of which were consistent with the bodies of research reviewed.

The one issue on which there was disagreement involved the system decision point used to measure recidivism. Blumstein and Larson (1971) observed that as measurement moves from decision point to decision point, false positives decrease but false negatives increase. If the goal is to estimate actual offending, arrest is most likely to capture real recidivism rates (although Elliot, 1995, argues even arrest rates fail to adequately measure actual delinquency).

State juvenile correctional directors, however, did not consider arrest decisions trustworthy; instead, they indicated a strong preference for adjudication, a decision point at which prosecutors and independent fact-finders conclude that the evidence is sufficient to prove guilt. At the August 2009 meeting of the full membership, CJCA members arrived at consensus giving precedence to adjudication as the measure of recidivism best reflecting correctional goals. At the same time, they strongly recommended the collection of multiple measures of recidivism to facilitate comparisons among studies using different measures. The vote on this decision was unanimous.

The membership further agreed that research on recidivism must clearly distinguish between delinquent offenses and probation or parole violations, as well as between offenses committed following discharge and those committed during a program or confinement. Only post-discharge offending was considered consistent with the common definition of recidivism (see also Barnoski, 1997).

Another concern raised at the August 2009 meeting was the fundamental differences between youth populations of different states. As mentioned earlier, state laws differ in the age jurisdiction of juvenile court, as well as criteria and methods for transfer to the adult system. Furthermore, states differ in a number of significant ways, including population demographics, existence of large urban areas, and the jurisdictional scope of the juvenile justice agency’s authority.

As such, CJCA members agreed on the need to differentiate among youth so that comparisons of recidivism rates are based on similar samples. For one thing, some form of risk assessment data is necessary to contextualize recidivism data within varying expectations of treatment impact. For instance, low-risk youth who are unlikely to reoffend prior to treatment may not demonstrate any program impact. Second, agencies often track subgroups of youth (e.g., first-time offenders) who may not represent all delinquent youth. Selecting population subgroups for data collection will undoubtedly skew interpretation and limit opportunities to compare outcome data. These issues required the Work Group to add standards regarding the collection of individual-level data that would permit analysis of similar subgroups of youth.

CJCA adopted its final standards in October 2009 at a special all-directors’ training funded by OJJDP. Members received a final draft of the white paper (Harris et al., 2009) in advance of the meeting. The standards were designed with database development in mind; consequently, coding instructions were included for some variables. Moreover, the standards are specifically designed to facilitate collection of common data without limiting the ability of agencies to collect more complex or additional data. The final standards, which appear on pages 30-34 of Harris et al. (2009), are appended to this article.
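As a rough illustration of how such standards might translate into a common data structure, the sketch below encodes the main decisions described above: adjudication as the primary measure, multiple measures collected for comparability, a two-year follow-up, and technical violations excluded from recidivism. All field names are invented for illustration and are not CJCA’s actual coding scheme.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record layout reflecting the standards discussed above.
# Field names are illustrative inventions, not CJCA's coding scheme.

@dataclass
class RecidivismRecord:
    youth_id: str
    followup_start: str                      # release to community, or case termination
    followup_months: int = 24                # two-year follow-up standard
    rearrested: Optional[bool] = None        # collected for cross-study comparability
    readjudicated: Optional[bool] = None     # primary measure preferred by CJCA members
    recommitted: Optional[bool] = None
    adult_conviction: Optional[bool] = None  # follow cases into the adult system
    technical_violation_only: bool = False   # violations kept separate from new offenses
    risk_level: Optional[str] = None         # contextualizes rates by subgroup

    def is_recidivist(self) -> bool:
        """Primary standard: a new adjudication, excluding youths whose
        only subsequent event was a probation/parole violation."""
        return bool(self.readjudicated) and not self.technical_violation_only
```

For example, a youth re-adjudicated for a new delinquent offense within the window would count as a recidivist, while one returned to custody solely for a technical violation would not.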

Conclusion

Demands for accountability within juvenile justice will undoubtedly continue to grow as governments become accustomed to increased information and as fiscal concerns mount. Many CJCA members have struggled with the issue of presenting data to policy makers that are appropriate and fairly represent the outcomes of their agency’s work. Unfortunately, comparisons of recidivism rates often appear illogical due to differences in the measures of recidivism applied. The CJCA standards attempt to address the many challenges of providing accurate and fair data on recidivism that can stand up to close examination and that can be used to compare individual programs, types of programs, and correctional agencies.

There are clearly some limitations to the findings we have used to construct these standards. First, many program evaluations are not published and were therefore not included in our literature review. Our findings may have been biased by the selection of studies for publication. Second, the standards grew incrementally through many discussions and revisions, the details of which have not been fully described. A large number of people contributed to this project; their names and affiliations are listed in Harris et al. (2009). A process such as the one used by the CJCA Recidivism Work Group naturally introduces personal experiences, preferences, and perceptions that bring a degree of subjectivity to the end result. CJCA’s goal was to achieve consensus among its members while recognizing the knowledge developed and disseminated by the research community. Finally, the standards were created by and for public juvenile correctional agency leaders. They do not reflect the views of juvenile courts, juvenile probation, or private juvenile correctional agencies. Given that these other juvenile justice organizations have similar needs for outcome data, further development of the standards may be necessary as implementation of the standards moves forward.

Implementation of these standards continues and has already encountered several challenges. First, not all agencies have access to the data recommended by the standards, especially police records and adult court records. A significant implementation task, then, is to improve access to these data and strengthen collaboration among juvenile justice agencies. Second, some agencies lack the technical expertise and support to collect and analyze the data recommended. CJCA is working to share its resources to assist in making this support available. Several states, notably Maine, Rhode Island, Kansas, and Oregon, have already begun implementing the CJCA standards. We and all of the members of CJCA are committed to making measurement of outcomes a core element of CJCA’s strategy for building a more effective juvenile justice system.

About the Authors

Philip W. Harris, Ph.D., associate professor in the Department of Criminal Justice, Temple University, teaches courses on juvenile justice policy, criminal justice organizations and management, and urban minorities and crime. He has served as a juvenile correctional administrator in Quebec, Canada, and has directed research on police and correctional decision making, evaluations of juvenile delinquency programs, prediction of juvenile recidivism, and the development of management information systems. Dr. Harris has written more than 100 book chapters, journal articles, and research reports on juvenile justice, classification of juveniles, program implementation, and information systems.

Brian Lockwood, Ph.D., is assistant professor, Department of Criminal Justice, Monmouth University.

Elizabeth Mengers, M.S., is a homeless services planner for the city of Cambridge, Massachusetts. When she contributed to this article, Ms. Mengers was a research associate for the Council of Juvenile Correctional Administrators.

Bartlett H. Stoodley, M.A., is associate commissioner for juvenile services, Maine Department of Corrections.

References

Barnoski, R. (1991). Standards for improving research effectiveness in adult and juvenile justice. Olympia, WA: Washington State Institute for Public Policy.

Blumstein, A., & Larson, R.C. (1971). Problems in modeling and measuring recidivism. Journal of Research in Crime and Delinquency, 8 (2), 124–132.

Cottle, C.C., Lee, R.J., & Heilbrun, K. (2001). The prediction of criminal recidivism in juveniles. Criminal Justice and Behavior, 28, 367–394.

Elliott, D.S. (1995). Lies, damn lies and arrest statistics. Boulder, CO: Center for the Study and Prevention of Violence, University of Colorado.

Harris, P.W., Lockwood, B., & Mengers, L. (2009). A CJCA white paper: Defining and measuring recidivism. Available at http://www.cjca.net.

Hoge, R.D., Andrews, D.A., & Leschied, A.W. (1996). An investigation of risk and protective factors in a sample of youthful offenders. Journal of Child Psychology and Psychiatry, 37, 419–424.

Ingram, D.D., Parker, J.D., Schenker, N., Weed, J.A., Hamilton, B., Arias, E., & Madans, J.H. (2003). United States Census 2000 population with bridged race categories. National Center for Health Statistics. Vital Health Statistics, 2(135).

Josi, D. A., & Sechrest, D. K. (1999). A pragmatic approach to parole aftercare: Evaluation of a community reintegration program for high-risk youthful offenders. Justice Quarterly, 16, 51–80.

Katsiyannis, A., & Archwamety, T. (1997). Factors related to recidivism among delinquent youths in a state correctional facility. Journal of Child and Family Studies, 6, 43–55.

Lowenkamp, C.T., & Latessa, E.J. (2005). Evaluation of Ohio’s reclaim funded programs, community corrections facilities, and DYS facilities. Cincinnati, OH: University of Cincinnati, Division of Criminal Justice. Available at http://www.uc.edu/ccjr/Reports/ProjectReports/Final_DYS_RECLAIM_Report_2005.pdf

Maltz, M. (1984). Recidivism. Orlando, FL: Academic Press. Internet edition (2001) available at http://indigo.lib.uic.edu:8080/dspace/bitstream/10027/87/1/recidivism.pdf

Mears, D.P., & Travis, J. (2004). Youth development and reentry. Youth Violence and Juvenile Justice, 2(1), 3–20.

Mennis, J., & Harris, P. (2011). Contagion and repeat offending among urban juvenile delinquents. Journal of Adolescence. Available at http://www.sciencedirect.com/science/article/pii/S0140197110001752

Myner, J., Santman, J., Cappelletty, G.C., & Perlmutter, B.F. (1998). Variables related to recidivism among juvenile offenders. International Journal of Offender Therapy and Comparative Criminology, 42, 65–80.

Puzzanchera, C., Adams, B., & Sickmund, M. (2010). Juvenile court statistics 2006–2007. Pittsburgh, PA: National Center for Juvenile Justice.

Rowe, D.C., & Farrington, D.P. (1997). The familial transmission of criminal convictions. Criminology, 35, 177–201.

Sampson, R.J., & Groves, W. B. (1989). Community structure and crime: Testing social-disorganization theory. American Journal of Sociology, 94, 774–802.

Sanborn, J.B., & Salerno, A.W. (2004). The juvenile justice system. Cary, NC: Roxbury Publishers.

Scott, N., & McKitten, R. (n.d.). Pennsylvania Commission on Crime and Delinquency commissions two DMC curricula in March. Models for Change: Systems Reform in Juvenile Justice. Available online at http://www.modelsforchange.net/newsroom/52.

Sentencing Project. (2010). State recidivism studies. Washington, DC: The Sentencing Project. Available online at http://sentencingproject.org/doc/publications/inc_StateRecidivismStudies2010.pdf

Snyder, H.N., & Sickmund, M. (2006). Juvenile offenders and victims: 2006 national report. Washington, DC: Office of Juvenile Justice and Delinquency Prevention.

Stoolmiller, M., & Blechman, E.A. (2005). Substance use is a robust predictor of adolescent recidivism. Criminal Justice and Behavior, 32, 302–328.

Appendix A: The CJCA Standards7

Defining and Measuring Recidivism

The first step in developing standards for the measurement of recidivism is to define the term. Recidivism is defined as commission of an offense that would be a crime for an adult, committed by an individual who has previously been adjudicated delinquent.

Because most delinquent offenses and crimes are not known to the justice system, recidivism is typically measured in terms of actions taken by justice system officials. Below are the actions most likely to be used for the measurement of recidivism.8

  1. Arrest: An arrest for any offense that would be a crime for an adult. Source of information: Police department files.
  2. Filing of Charges: Filing of charges with the juvenile court or adult criminal court based on accusations of an offense that would be a crime for an adult. Source of information: Juvenile court files.
  3. Adjudication or Conviction: Adjudication by a juvenile court or conviction by an adult criminal court of guilt, based on charges filed by the prosecutor. Source of information: Juvenile court files if tried as a juvenile, or criminal court files if tried as adult.
  4. Commitment to a juvenile facility: Commitment to a juvenile residential facility by a juvenile court following an adjudication of delinquency. Source of information: Juvenile court files.
  5. Commitment to an adult facility:9 Commitment to an adult residential facility following a trial in which the defendant was found guilty of a crime. Source of information: Criminal court files.

Standards for Measuring Recidivism (these standards apply to all measures of recidivism)

  1. When reporting program or system outcomes, population parameters of the study should be specified: e.g., age boundaries, public agency programs only (versus a combination of public and private programs), first-time offenders only, secure care programs only. At a minimum, age and gender boundaries of the population should be delineated. Any comparisons of outcome data can, then, take into account differences in populations studied.
  2. The source or sources of data for each data element should be clearly identified as well as who is responsible for collecting the data, and the frequency of data collection.
  3. Adult convictions should be included to ensure that offenses occurring during the follow-up time period are not excluded simply because they were processed in the adult system; whether the offense resulted in adult system processing should not matter.
  4. All recidivism tracking should include adjudication or conviction as a measure of recidivism. More than one measure of recidivism should, however, be used in order to increase opportunities for comparison. Multiple measures of recidivism, such as re-arrest for a new offense, or adjudication and reincarceration for a new offense, make comparisons more meaningful and provide options for selecting appropriate comparison data. Since not all states will collect exactly the same data, and since some data sources are known to store more reliable data than others, reporting several measures of recidivism increases the chances that two states will have collected at least one measure on which comparisons can be made.
  5. Measurement of recidivism should start with the date of disposition. Recidivism should be reported separately, however, for the following categories of cases:
    a. Youths who are adjudicated for new offenses while in custody.
    b. Youths released from custody to the community or who are under court-ordered supervision.
    c. Youths discharged from juvenile court jurisdiction.
      Aggregate recidivism rates should not include category a. above: youths in custody.
  6. The follow-up period for tracking an individual’s recidivism should be at least 24 months from either of the two date options mentioned in Item 5 above, and should include data from the adult criminal justice system. Outcome reports may examine recidivism at shorter time intervals, such as 6 months, 12 months, 18 months, and 24 months. In order to measure known [offense episodes] occurring within 24 months, data collection will need to continue to 30 months to account for a time lag between arrest and adjudication/conviction.
  7. Sufficient data about individual youths should be recorded to make possible appropriate comparisons and future classification. At a minimum, the data recorded should include characteristics often associated with risk of re-offending (see item 13, below), such as demographic information (age [in years], gender, race, ethnicity) and offense history (age at first arrest, number of adjudications, and types of offenses; see item 12, below). Special needs youth (mental health, substance abuse, and special education) should be clearly identified, since the probability of this population being arrested and reincarcerated is disproportionately high.
  8. Timeframes must be clearly recorded, since recidivism is always time specific:
    1. Record date of adjudication or conviction – all cases.
    2. Record date of disposition or sentencing – all cases.
    3. In the case of persons committed to residential facilities, record the date the offender is released to the community.
    4. For all youths, record the date when juvenile court jurisdiction was terminated.
    5. No matter what measure of recidivism is used (e.g., re-arrest, new adjudication/conviction, or reincarceration), it is the date of the offense event that should be used to determine the date when the recidivism event occurred.
    6. In order to determine the completeness of the data, the date that the data were last updated should be recorded.
    7. In order to create the possibility of reporting recidivism following termination of all court-ordered services, the date of discharge from court jurisdiction should be recorded.
  9. Typically, a delinquent event will produce more than one charge. All charges should be recorded if there is more than one; the most serious charge should be identified, and the charges on which the youth was adjudicated or convicted should be recorded.
  10. If more than one offense [episode] is being processed at the same time, the information in item 9, above, should be recorded for each offense.
  11. Probation or parole technical violations confirmed by the court and related dispositions should be recorded separately from data on new offenses. Technical violations may result in incarceration or re-incarceration, but they do not imply commission of an offense.
  12. For system comparison purposes, offense type is more useful than a more precise offense term unique to a given state. The following general offense categories are recommended. In addition, the following ordering of offense categories should be used to reflect offense seriousness, with a. being the highest, and g. being the lowest.
    a. Offense against persons
    b. Property offense
    c. Weapons offense
    d. Drug trafficking/possession (felony)
    e. Other felony
    f. Drug or alcohol use (misdemeanor)
    g. Other misdemeanor or lesser offenses
  13. Different jurisdictions use different risk assessment tools. On occasion, the same tool is used but cut-off scores for classification differ. Consequently, resulting risk scores and levels cannot be used to classify all juveniles. This problem was addressed by Lowenkamp and Latessa (2005), who created a simple measure using commonly available items: age at first arrest and offense history items. We have adopted that method here, adding drug, school, family, and peer items that are known predictors of recidivism.
    In order to group similar cases for comparison of recidivism rates, the following person characteristics should be collected for each youth. The first set of items will be used to identify demographic subgroups. The second set, labeled risk items, will be used to construct a generic risk score. The scoring plan is indicated to the right of each item.
    Demographic Characteristics
    1. Age in years
    2. Gender (female, male)
    3. Ethnicity (Hispanic or Latino: yes or no)
    4. Race (Black or African American, Asian, American Indian or Alaskan Native, Native Hawaiian or Other Pacific Islander, White)10

    Risk Items11
    The risk score based on these items can range from 0 to 9. Risk groups will be defined as: low = 0–3; medium = 4–6; high = 7–8; very high = 9.12

    1. Age at first adjudication, in years (less than 14 = 1; else = 0)13
    2. Total number of prior adjudicated [offense episodes] (3 or more = 1; else = 0)
    3. Number of prior adjudications for felony [offense episodes] (3 or more = 2; 1 or 2 = 1; 0 = 0)
    4. Youth has been diagnosed with a substance abuse problem (yes = 1; no = 0)
    5. Youth has dropped out of school and is currently not attending school (yes = 1; no = 0)
    6. Youth has been the subject of substantiated abuse or neglect (yes = 1; no = 0)
    7. One or both parents have been convicted of a crime (yes = 1; no = 0)
    8. Youth is a gang member or is gang involved (yes = 1; no = 0)
  14. If a formal risk (of recidivism) assessment was conducted near the time of disposition and prior to delivery of services to a youth, record the level of risk (low, medium, or high). Also record the specific risk assessment instrument that was used.
    Risk Classification: Low, Medium, High, Very High
    Name of Risk Assessment Instrument: _______________________________
  15. In addition to an individual’s likelihood of recidivating, neighborhood risk factors should be included in creating comparison groups of youth. The following community risk factors should be attached to each case as neighborhood environmental risk indices:
    1. A higher number of gun violence incidents in last year than average for the larger community
    2. A higher crime rate than average for the larger community
    3. A higher residential mobility rate (U.S. Census data)
    4. A higher than local average percentage living under the poverty level (U.S. Census)
    5. A lower than local average of persons over age 25 with a high school education (U.S. Census)14
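The additive scoring rules in items 13 and 15 are straightforward to implement. The Python sketch below is our own illustration, not part of the CJCA standards; the record fields, function names, and the example values are hypothetical labels for the scored items listed above.

```python
from dataclasses import dataclass

@dataclass
class YouthRecord:
    """Illustrative container for the eight CJCA risk items (item 13)."""
    age_at_first_adjudication: int
    prior_adjudicated_episodes: int      # total prior adjudicated offense episodes
    prior_felony_adjudications: int      # prior adjudications for felony episodes
    substance_abuse_diagnosis: bool
    dropped_out_of_school: bool
    substantiated_abuse_or_neglect: bool
    parent_convicted: bool
    gang_involved: bool

def generic_risk_score(y: YouthRecord) -> int:
    """Sum the eight scored items; possible range is 0 to 9."""
    score = 1 if y.age_at_first_adjudication < 14 else 0
    score += 1 if y.prior_adjudicated_episodes >= 3 else 0
    if y.prior_felony_adjudications >= 3:       # 3 or more = 2; 1 or 2 = 1; 0 = 0
        score += 2
    elif y.prior_felony_adjudications >= 1:
        score += 1
    score += sum(int(flag) for flag in (
        y.substance_abuse_diagnosis,
        y.dropped_out_of_school,
        y.substantiated_abuse_or_neglect,
        y.parent_convicted,
        y.gang_involved,
    ))
    return score

def risk_group(score: int) -> str:
    """Map a 0-9 score to the four risk groups defined in item 13."""
    if score <= 3:
        return "low"
    if score <= 6:
        return "medium"
    if score <= 8:
        return "high"
    return "very high"

def environmental_risk_index(factors: dict) -> int:
    """Item 15 / footnote 14: sum of five yes/no neighborhood items (0-5)."""
    return sum(int(bool(v)) for v in factors.values())
```

For example, a youth first adjudicated at 13 with four prior episodes, one felony adjudication, and three of the five yes/no flags would score 6 and fall in the medium group.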

1 This committee was chaired by the fourth author of this article.

2 In two states, these individuals were independent researchers working under contract with the state agency.

3 The first three authors of this article.

4 Copies of these studies are available from the first author of this article.

5 Survey questions can be found in Harris, Lockwood and Mengers (2009).

6 Including Academic Search Premier, Sage Journals Online, and PsycINFO.

7 Source: Harris et al., 2009. Copyright © 2009 by the Council of Juvenile Correctional Administrators. Reprinted by permission.

8 Other actions are available prior to adjudication in some states. Our aim in developing standards was to limit available decision points to those common to all states.

9 It is possible in some jurisdictions for a juvenile to be tried and convicted as an adult and committed to a juvenile facility to serve some or the entire sentence. This information should be obtained from criminal court files.

10 These racial categories were taken from the 2000 U.S. Census. A discussion of how to bridge different race/ethnicity coding schemes appears in Ingram et al. (2003).

11 Studies examining the predictors of juvenile recidivism have uncovered a number of individual-level factors that influence the likelihood of a juvenile re-offending. We can provide only a small sample of this research here. Research has shown that juveniles at highest risk to offend are those who have done so in the past (Cottle, Lee, & Heilbrun, 2001; Snyder & Sickmund, 2006). Other individual-level predictors of recidivism include substance abuse (Stoolmiller & Blechman, 2005), current age (Snyder & Sickmund, 2006), age at first arrest (Katsiyannis & Archwamety, 1997), participation in education (Katsiyannis & Archwamety, 1997; Myner, Santman, Cappelletty, & Perlmutter, 1998), delinquent peer relations (Hoge, Andrews, & Leschied, 1996; Myner et al., 1998), parental criminality (Rowe & Farrington, 1997), and family conflict (Hoge et al., 1996). We have selected obvious indicators of these constructs.

12 This risk measurement design is considered by CJCA to be preliminary and will be revised once data are available to be analyzed.

13 Based on 2007 data, the case rate for 13-year-olds (36.3 per 1,000) is substantially lower than for 14-year-olds (61.1), after which the increase in case rate declines in size (Puzzanchera, Adams, & Sickmund, 2010). Still, juveniles under 14 represent a small proportion of adjudicated youth.

14 These risk factors were adapted from the risk factors utilized by Communities that Care (http://beta.ctcdata.org/?page=static_files/risk_factors.html). The first two items are often available on police department Web sites. The others are common census data items. Each item should be scored yes (= 1) or no (= 0). The total score of these items should be used as an index of environmental risk. Each item requires a comparison. This comparison can be at the census tract level, in the case of a city, or the county level in a suburban or rural area.
