Scheduling SAT and ACT Practice Assessments for Maximum Impact
Assessments help us measure student learning and plan for the next round of instruction, but how do you know when you should schedule them?
Most educators agree on the value of assessments: they help us measure student learning, describe student growth, and plan for the next round of instruction. Yet we often build our assessment calendars around when it is most convenient to administer the assessment.
Admittedly, assessing students interrupts the instructional flow and takes real effort. Even the smallest assessments take time to plan and analyze, whether it’s an eight-question quiz or a two-question ticket-out-the-door. Selecting the right questions is also time-consuming; each assessment item must be a valid way for a student to demonstrate their learning. Then we have to account for the instructional time students spend taking the assessment. And finally, scoring the assessments and reviewing the data takes time.
Any act of assessment is a commitment of time and effort. So why do we build our calendars around the assessment dates rather than the analysis and application dates?
Types of Assessments
Before we get into it, let’s first go over the various types of assessments we use for data-informed instruction. They include:
- Screeners
- Diagnostics
- Short-cycle formative assessments (measurements as or for learning)
- Larger formative assessments for progress monitoring and benchmarking
- Summative assessments (measurements of learning)
Selecting the right assessment is a key step in building a data-informed instructional plan for your learners. The next, and often overlooked, component is timing the assessment so the data can serve its purpose.
Most educators understand the timing of a summative assessment: it comes at the end of the learning cycle, and its purpose is to describe what was learned during the instructional cycle. A common analogy is that a summative assessment is an “autopsy.” It comes after the learning is done.
Formative assessments deserve the same clarity. Educators place short-cycle and benchmarking assessments throughout the learning cycle to help drive instruction, but exactly when to schedule them isn’t as obvious. Schools across the nation have fallen into a predictable trap:
We schedule assessments based on when it is convenient to collect the formative data, not when it is convenient to apply the formative data.
Assessment Timing
Let’s look at a real-life example of a school whose formative assessment timing could be improved.
School A used Horizon Education assessments to support ACT readiness. In their first year, they used PreACT/ACT-aligned grade-level assessments to establish a baseline for test readiness in all grades. The tests were administered in mid-December, in the final week of school before winter break, and the school had an 88% test completion rate, which matched their attendance rates for that week.

School A met with Horizon Education for their Data Debrief in the second week of January. Department chairs reviewed the data in advance of department meetings scheduled for January 30th, with the intention of having teachers use the data for item review starting in February. The strategy was sound: the school had dedicated time on January 30th to review the data, and teachers understood the expectation to use it to inform instruction.
What can we learn from School A?
- Strength: The school was thinking about ACT readiness in advance. As early as December, they knew they wanted February instruction to be data-responsive.
- Strength: The school empowered team leaders (department chairs) to guide data analysis for each department. Data literacy was a distributed responsibility across the entire campus.
- Opportunity: By January 30th, the data was 45+ days stale. The long gap between test administration and data analysis meant teachers were not looking at recent data, and the data-responsive instruction in February did not account for any of the learning that happened in January.
- Opportunity: The assessment dates were picked because, in the principal’s own words, “Nobody is introducing new content that week. It isn’t the best week for instruction around here.” That sentiment also calls the quality of the data into question: school leaders placed the assessment window where it wouldn’t interrupt instruction, yet still expected it to yield high-quality data.
School A’s large gap between administering and analyzing the data is understandable for a school in its first year of implementation; adopting a new test and learning how to make use of the data is a process. But while a large gap between data collection and application may be acceptable in year one, School A should work to shorten it in subsequent years. Ideally, assessment administration, analysis, and instructional changes should take place within days of each other, not weeks.
Data Quality
Additionally, School A should think about data quality. School leaders are relying on this data to inform an eight-week effort to improve test readiness, and because part of School A’s accountability rating is based on the spring ACT administration, leaders are focused on improving scores. Given that emphasis on test readiness, it may not be to the school’s advantage to collect data days before winter break, when they can anticipate that students will be absent or distracted.

Absent or unengaged students create “noise” in the data. From a data analysis perspective, it is difficult to tell an unengaged learner from a struggling one, and amplified across a classroom or a grade level, that noise can lead teachers and leaders down the wrong path when planning changes to the instructional program. In some cases, bad data is more damaging than no data at all; the whole idea of data-informed instruction hinges on the quality of the data.
As School A’s experience shows, assessment timing is crucial to data-informed instruction. Schedule assessments not for convenience but so the data can be analyzed and applied while it is still fresh. Plan the instructional changes that follow the assessment from the start, and protect data quality by testing when students are present and engaged.
Let us help you create an effective assessment schedule and show you how to capture and understand your students’ data. Schedule a demo of the Horizon Education product to see how we can help you make better use of student data.