Since the passage of the Every Student Succeeds Act (ESSA) in December 2015, state education agencies (SEAs) have invested considerable time, effort, and resources in designing accountability systems that reflect each state’s vision and priorities and comply with federal statute. States, and SEA staff members in particular, should be applauded for reaching this point – the development and approval of a state ESSA plan is an important milestone and key to supporting a variety of educational initiatives. It is, however, only the start. Meeting the vision, priorities, and goals of the state accountability system requires an effective implementation plan. Much like the design phase, implementation requires thoughtful consideration and intentional planning so that the system’s annual outcomes reflect its intended design. And, as with the design phase, most states have little time to get their systems up and running: ESSA requires states to begin identifying schools for comprehensive support and improvement (CSI) by the 2018-2019 academic year. This means that operational infrastructures, such as data and reporting systems, processes, business rules, and validation procedures, need to be in place soon.
In support of the state’s validity argument for accountability, its implementation plan should reflect the design of the accountability system with fidelity. This means that scores and ratings for schools and districts are computed correctly, that schools and districts in need of support are appropriately identified, and that the claims the system makes about schools and districts are accurate. This paper outlines considerations for meeting these goals. Because each state has unique priorities and requirements for its accountability system, the paper is not a step-by-step “how-to” manual or specification for operational implementation and quality control. Instead, it describes a framework that states can use to guide the development of their accountability implementation plan and to put guardrails in place to validate the various outcomes of the accountability system.
Figure 1 is a visual representation of a framework for a state’s accountability implementation workflow. The workflow includes three main stages: input, process, and output. Each stage includes components such as data files, data systems, business rules, reported data, and reporting system.
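To make the three stages concrete, the workflow can be sketched as a simple pipeline in which each stage validates its inputs before handing results to the next. The sketch below is purely illustrative: the file fields, the minimum-n rule, and the proficiency threshold are hypothetical assumptions, not elements of any actual state system.

```python
# Hypothetical sketch of the input -> process -> output workflow.
# All field names, thresholds, and business rules are illustrative only.

def load_input(records):
    """Input stage: accept only school records with an ID and an in-range score."""
    validated = []
    for rec in records:
        if rec.get("school_id") and 0 <= rec.get("proficiency_rate", -1) <= 100:
            validated.append(rec)
    return validated

def apply_business_rules(records, min_n=20):
    """Process stage: apply a (hypothetical) minimum-n rule and assign a rating."""
    results = []
    for rec in records:
        if rec["n_students"] < min_n:
            rating = None  # too few students to rate reliably
        else:
            rating = "Meets" if rec["proficiency_rate"] >= 60 else "Below"
        results.append({"school_id": rec["school_id"], "rating": rating})
    return results

def report_output(results):
    """Output stage: shape results for a downstream reporting system."""
    return {r["school_id"]: r["rating"] for r in results}

data = [
    {"school_id": "S1", "n_students": 150, "proficiency_rate": 72.5},
    {"school_id": "S2", "n_students": 12, "proficiency_rate": 80.0},
    {"school_id": "", "n_students": 50, "proficiency_rate": 55.0},  # rejected at input
]
print(report_output(apply_business_rules(load_input(data))))
# -> {'S1': 'Meets', 'S2': None}
```

The value of framing implementation this way is that each stage has a clear contract, so errors can be traced to the stage where they entered the workflow.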
Figure 1. An Accountability Implementation Workflow Framework
The full paper explicates the framework by first stating the high-level objective for each component and then elaborating on the specifics through guiding questions organized around five W’s, grounded in the state’s organizational structure and processes:
- What: the key tasks or elements in this stage;
- Who: the people, departments, or organizations responsible for the tasks or elements in this stage. Note that this can include people outside of an SEA, which would require additional collaboration and coordination efforts;
- When: the timeline or due dates for key tasks or elements in this stage;
- Where: the sources of data, documentation and other relevant materials and resources for this stage; and,
- Why: the rationale for key decisions related to this stage.
The objectives and guiding questions in the framework are important for practitioners to consider not only in the initial planning and implementation of the state’s accountability system, but also for the ongoing monitoring, evaluation, and continuous improvement of the system.
A key characteristic that underlies the objectives for all stages of the operational implementation workflow is the commitment to quality. Threats to the quality of an accountability system include errors in assessment or accountability data, misspecification or misunderstanding of business rules, and lack of stability in the data or reporting systems. Without a comprehensive and actionable plan for quality control, the state’s accountability system is vulnerable to errors or outages. Given the broad impact, high profile, and politically charged nature of assessment and accountability in K-12 education, a few quality issues could lead to mistrust or fuel opposition, undermining even the most carefully designed and technically sound accountability systems.
The final part of this paper offers recommendations for best practices to help mitigate or minimize the threats to quality in accountability implementation. The recommended practices are rooted in tried-and-true quality control procedures from operational assessment programs and include:
- Issues tracking logs to record defects or unexpected activities and outcomes so that follow-up can occur to mitigate any potential risks;
- Specifications that document in detail the steps for all tasks in the implementation plan;
- A replication process in which multiple people are assigned to independently carry out the steps described in the specifications and verify that they yield the same results;
- Test cases that represent common or typical sets of conditions, as well as atypical, extreme, or even out-of-bounds conditions, to determine whether the accountability system is operating as intended; and,
- A reasonableness review process that takes a more macro view of the accountability results by considering the meaning and implications of the outcomes and looking for patterns or trends that are unusual.
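Two of the practices above, the issues tracking log and the replication process, can be sketched together: two teams independently produce school ratings from the same specifications, and any disagreement becomes an open issue to resolve before results are released. The data and structures below are hypothetical illustrations, not a prescribed format.

```python
# Hypothetical sketch of a replication check feeding an issues tracking log.
# The rating values and school IDs are illustrative only.

def replication_check(primary, replica):
    """Compare two independently produced {school_id: rating} maps.

    Returns an issues log entry for every school where the two
    independent computations disagree (or a school is missing from one).
    """
    issues = []
    for school_id in sorted(set(primary) | set(replica)):
        a, b = primary.get(school_id), replica.get(school_id)
        if a != b:
            issues.append({
                "school_id": school_id,
                "primary": a,
                "replica": b,
                "status": "open",  # must be resolved before release
            })
    return issues

primary = {"S1": "Meets", "S2": "Below", "S3": "Meets"}
replica = {"S1": "Meets", "S2": "Meets", "S3": "Meets"}  # S2 disagrees
log = replication_check(primary, replica)
print(log)
# -> [{'school_id': 'S2', 'primary': 'Below', 'replica': 'Meets', 'status': 'open'}]
```

The point of the sketch is the workflow, not the code: independent computation surfaces specification misunderstandings that a single team reviewing its own work would likely miss, and logging each discrepancy ensures follow-up occurs before results reach schools.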
The full paper expands on each of these practices by providing guidelines and considerations for implementation in an accountability setting.
A solid implementation plan and sound quality control processes are instrumental to meeting the vision, priorities, and goals of the state’s accountability system. Many states already have implementation plans and processes in place for their accountability systems. States are encouraged to share their resources and lessons learned with one another to leverage their experiences and to promote a culture of collaboration as each state enters the next phase of implementing its thoughtfully-designed accountability system.