Wednesday, February 18, 2026

Online Tests for Reading Level and Instructional Needs

An unpopular opinion...about the online thingies that tell teachers what their kids' reading levels are and what they need for instruction - and about what STAAR reports tell you about the same.

From my experience and conversations: 

Online reading level screeners - I've NEVER seen one give a valid level. I've seen them spit out lists of things a kid supposedly can and can't do. But it's ALWAYS wrong.

STAAR - even TEA says the assessment results don't tell you a level or what a kid knows. Basically, STAAR tells you how well a kid learned the whole curriculum. And the SST will do the same - just on more than one day.

A Real-World Scenario

Let's examine a real-world scenario. Anna is in 4th grade. Her school uses IXL. Maybe it uses MAP. Maybe it uses something even more inferior, like Star Renaissance. It doesn't matter.

Kids go to a computer lab. Oh - they don't have those anymore unless there's no library. 

I'll paint another picture. Kids take out their devices. They are in their Reading Language Arts Classroom. Sometimes, all the kids from the whole school are in the cafetori-gymnasi-torium at the same time. All the kids in the class are there. The teacher tells them where to go for the test. Some of the computers work. Others need to be charged. Some of the keys don't work. None of them have mice. Some of the browsers connect. Others don't. 

Then the kids sit there and read everything on the screen and do the best they can while the computer dutifully adapts to what they are clicking on. Kids carefully consider the answer choices and check back into the text to make sure they are right. Other kids finish and get to leave, but they keep on working. Some schools let the kids leave the room. Others make all people stay in place until all are finished with the exam. The kids who are finished read their own books quietly. The kids still testing all sit there and read everything on the screens the best they can. They are not distracted by what others are doing around them. They know they need to do the best they can and everyone is different. 

Nope. None of that happens. That is not the world any of us live in. Never have. 

Human Subjects and Cognitive Actions

So reading. It's all in the head. It's quiet. No one knows what's actually going on - nothing is observable from a process standpoint. And there's nothing really from a product standpoint other than selecting the right answer. We never really know WHY the data says what it does until we sit with a kid and talk with them - we weren't there, not in the room where it happened. There are simply too many other variables: setting, individuality, background, emotional maturity, other abilities, what happened in the hall or at home beforehand.

If you are really going to have valid data that tells you WHY a kid can't do a particular standard, you have to understand the reading process in general and the ways kids get it confused, and you have to make the act of reading observable. You have to talk to a kid. You have to hear what they read. You have to analyze the actions and reactions.

A Chart to Think and Compare

As I was trying to explain the differences between IXL and what we really need to make instructional decisions about kids, I started making this chart with my learner. We brainstormed these ideas.

IXL (Computer Based) vs. IRI (Paper Based)

How do they determine the placement level? For the IRI: a set of words to read and code, then a match to a leveled passage.

- IXL: Cannot capture fluency or accuracy; collects data about speed and time spent
  IRI: Fluency and accuracy measured and analyzed

- IXL: Comprehension questions and TEKS-based items
  IRI: Underlying causes

- IXL: Quick
  IRI: Time consuming

- IXL: Masses
  IRI: Individuals

- IXL: Adaptive leveling for future computerized lessons
  IRI: Diagnostic for theories of reading and approaches to difficulty

- IXL: Unmonitored
  IRI: Monitored by an adult; relational

- IXL negatives: Flawed data; kid thinks they are dumb; learned helplessness; assigned to lessons they don't need
  IRI negatives in practice: Students stay at the same level; static grouping based on the level instead of their actual behaviors and needs; teachers don't know how to use them properly

An Experiment

I'm working with a learner right now whose student has a lot of STAAR and computer-generated reading level data. The kid she's working with is in 7th grade. The data says she reads at a 4.5 reading level. But I watched her read during an IRI (an Informal Reading Inventory from a well-known source). She's not reading at the 4.5 level. I'll let you know what we discover.


