Sunday, March 12, 2023

Should we take the interim? And then what? Part One

Draft

Should we take the interim? 

Yes. 

Reason One: It's a giant research study to see if it is possible to measure growth over time instead of on a one-day, high-stakes assessment. That's what the legislation originally asked TEA to study. Part of me wants to say it can be done. And that's not really how the interim is being used right now. Reminds me of Dr. Ian Malcolm in Jurassic Park, who says, "Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should." Right now, the interim is supposed to predict the likelihood of passing the STAAR at the end of the year. So many variables are in play socio-emotionally, culturally, academically, and within each subject domain and test design, that I fear we are not measuring what we think we are anyway. It's a good idea to see if this works or not.

Reason Two: Prove to them that teachers are the ones who know best, not an assessment. I'd really like to see the data that says it predicts what it claims to. But from what I've seen in reports from campuses last year, the STAAR results didn't match any of the interim's projections for ELAR. So...let's take the thing and then bust out our best to make sure kids are learning beyond the progressions and predictions.

Reason Three: It gives the kids an "at bat" with the new format and item types. I'm ok with that rationale...except: Have we explicitly taught digital reading skills and the transfer of knowledge and strategies for the new item types, TEKS, and skills? Have we had enough massed and distributed practice on these skills before weighing the baby again? If we used the interim as an instructional tool, maybe. We could use the interim as guided or collaborative practice. But as another source of decision-making data? Not sure that accomplishes our goals when we make kids do things alone that we already know they don't have enough experience to do well. Sounds like a good way to disenfranchise struggling learners with further beliefs about how dumb they are. It's like signing up for fiber internet and paying for it before the lines get to your neighborhood.

No. It's a giant waste of time for kids and teachers. 

Reason One: After examining the data, I have NO idea what I'm supposed to do in response to help the teachers or the kids. More on that later. 

Reason Two: It's demeaning and demoralizing. Do I really want to tell a kid in March, a few days before the real deal, that they have abysmal chances of meeting expectations? Do I really want to tell teachers that x of their z kids aren't in the right quadrant to show growth when they have less than two weeks after spring break to do something about it? That is, if they even believe the kids took the exam seriously. They already know the kids didn't use their strategies and purposefully blew off three or more days of precious instructional time while taking the dang thing.

Reason Three: Did we do something about the last data we collected on the interim? Do the kids know their results? Have they made a plan to improve? Do we have a specific plan? Have we fixed the problems that caused the first set of results? People are having data digs and meetings to tell teachers what to do and how these predictions will play out for accountability. We're running tutorial programs and hours to meet 4545. We're doing some stuff, but is it really a detailed and reasoned response to resolve the causes behind the data? Have we fed the baby enough to cause weight gain before weighing it again? No.

Reason Four: The data is correlational, not causal. The interim data tells us the correlations between one data collection (last year's STAAR) and the next assessment. Results are correlated with the probability of success or failure and do not pinpoint the cause of either. When working with human subjects, is it humane to use correlational data to make instructional decisions about nuanced human intricacies for individuals, in such complex settings and under such soul-crushing accountability for personal and collective judgments?

An additional problem with the interim is that you don't have a full trend line until you have three data points. Statistically, it doesn't make sense to take last year's STAAR results (which came from a different test using different standards) and pair them with a second interim. There is no trend line until the third assessment, even if the assessments were measuring the same thing.

Yet, that's what teachers were asked to do: treat the data as an indictment of their instructional practices and the resulting student performance, based on data that doesn't mean what they were told it meant. Furthermore, teachers are told to revisit previous CBAs and other data to determine what needs reteaching. The advice is well meaning, but in practice it is too unwieldy and flawed to do anything other than make teachers want to pull their hair out and cry out in desperation and stress.

More on that in Part Two: We took the interim. Now what?