Accountability Identification Is Only the Beginning

Monitoring and Evaluating Accountability Results and Implementation


Executive Summary

The passage of the Every Student Succeeds Act (ESSA) marked the beginning of a new development cycle for accountability systems. State leaders once again have an opportunity to redesign their accountability systems based on the provisions included in ESSA and to ensure that those systems improve outcomes for all students. While accountability systems differ in their theories of action and design, state systems were designed following the Council of Chief State School Officers’ (2011) principles that guide the development and improvement of accountability systems.

As states begin implementing and monitoring the accountability systems they created under ESSA requirements, stress points across those systems become more evident. Additionally, effective accountability implementation extends beyond identifying the right schools or obtaining approval for a system that can then be treated as “set it and forget it.” Most states’ experiences with accountability systems over the past few decades support the view that:

  • Merely rating and/or identifying schools will not lead to the desired outcomes; rather, some active supports are required; and
  • State, district, and school leaders are still learning what supports really work, especially at the needed scale and desired scope.

The correct identification of schools is a necessary but insufficient condition for building capacity and delivering support to local systems. Systems of accountability, support, and continuous improvement contain a series of feedback loops and information hand-offs that offer opportunities to collect evidence that systems are working as intended. By identifying activities and their relevant evidence throughout the design, development, and implementation of accountability systems, we can begin to develop validity arguments for our accountability and improvement systems. This paper presents a framework that can support a systematic examination of the design, development, and implementation stages of accountability identification. This framework can be applied to the activities associated with each stage as follows:

Design Stage

  1. Refining the system’s overall vision (e.g., policy priorities, educational system goals, the role of accountability),
  2. Specifying indicators based on the system’s intended signals (e.g., growth and achievement, college readiness vs. career readiness, engagement), and
  3. Defining policy weights that represent state education agency (SEA) values and priorities (e.g., weighting growth equally with achievement), as illustrated below.
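
For illustration only (the weights and scores here are hypothetical, not drawn from any state’s plan): a state that values growth and achievement equally might assign each a weight of 40 percent of a school’s composite score, reserving the remaining 20 percent for other indicators such as school quality or student success. A school scoring 70 on growth and 80 on achievement would then earn (0.40 × 70) + (0.40 × 80) = 60 of its composite points from those two indicators.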

Development Stage

  1. Clarifying indicator measures and relationships among indicators through analysis (e.g., descriptive and inferential analyses, qualitative reviews of data and processes),
  2. Identifying potential data gaps or capacity concerns through the use of simulations (e.g., projections, historical data examinations, mock accountability runs; see the example following this list), and
  3. Specifying performance expectations over time by setting defensible performance standards.
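
As a hypothetical example of such a simulation: before finalizing its business rules, a state might run two or three prior years of assessment and attendance data through the proposed calculations (a “mock accountability run”) to see how many schools would have been identified each year, how stable identification is from year to year, and where missing data (e.g., small schools falling below a minimum n-size) would prevent a designation.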

Implementation Stage

  1. Supporting the calculation and release of school designations,
  2. Helping people access, use, and interpret accountability data, which in turn informs local inquiries and information use, and
  3. Helping the SEA and local education agencies (LEAs) deliver support to schools.

Within each of these stages and activities, SEAs can widen or narrow what they monitor to expand or limit system claims. Claims are statements or assertions about the accountability system and its impact. While claims will likely differ in granularity depending on the level of focus, they clarify the kinds of questions state leaders could be asking and the types of evidence they can consider collecting and evaluating.

By developing a set of claims associated with accountability and improvement systems, SEAs can begin constructing a logic model that identifies the assumptions, questions, data considerations, and possible evaluation approaches. These claims can help states establish a validity argument for their accountability and improvement systems. However, the validity of the full system rests on the confidence states have in the validity of each activity, as well as in every step that precedes it.

The remainder of this paper describes a framework to help states systematically evaluate identification decisions. It first presents example claims associated with each activity listed above (e.g., “Indicators provide fair and accurate information that informs the accountability system in the manner intended”). For each claim, a series of guiding questions is then provided to help SEAs clarify the intended purpose, use, and process associated with the claim and to surface its underlying assumptions. Those assumptions, in turn, help practitioners and designers identify sources of information, methods, or analyses that can be used to collect evidence in defense of each claim. The framework is not intended to be prescriptive; rather, it offers examples of how states can begin establishing validity arguments for their accountability systems.
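
To make that chain concrete (a hypothetical illustration, not an excerpt from the framework itself): for the example claim above, a guiding question might be whether indicator scores differ systematically across school types; the underlying assumption is that indicators function comparably for all schools; and supporting evidence might come from the kinds of descriptive and inferential analyses noted in the development stage, disaggregated by school characteristics.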

