Wednesday, March 6, 2024

TEA Communication Regarding December ECR Scores

TEA Communication to Testing Coordinators

Not all of us receive the notifications that testing coordinators receive, and often the information just doesn't "trickle down."

Background

So here's what happened. 

1. Many people shared concerns about the scores on the December retests because the scoring method was different (not AI in the machine-learning sense, but an automated engine that doesn't learn) and because there were a lot of zeros.

2. We don't see responses from the December tests because the tests aren't released. Releasing an exam is expensive, and TEA may need to use the passages and questions again.

3. Responses from the December test were scored by two Automated Scoring Engines (ASEs). Twenty-five percent of them were also routed to and scored by a human for "various auditing purposes." (That sounds like the human scores were used to check that the engines' ratings were accurate, not for reporting.)

4. If the two engines' scores were not the same or adjacent (same: 0-0, 1-1, 2-2, 3-3, 4-4, 5-5; adjacent: 0-1, 1-2, 2-3, 3-4, 4-5), the essay was routed to a human for rescoring (see the sketch after this list).

5. The ASE (the computer) sent some responses to the humans flagged with condition codes. These are the codes (I'm assuming there are other codes not associated with zeros, but that information is unclear, and it's also unclear how the engine is programmed other than with the rubric):

    a. response uses just a few words

    b. response uses mostly duplicated text

    c. response is written in another language

    d. response consists primarily of stimulus material

    e. response uses vocabulary that does not overlap with the vocabulary in the subset of responses used to program the ASE

    f. response uses language patterns that are reflective of off-topic or off-task responses

 Additional language describes "unusual" responses that could trigger a condition code for review. 

6. Responses routed to a human for rescoring keep the human's rating. The language is unclear about whether the ratings from the 25% scored by humans for auditing are also kept, because that was addressed in a paragraph that did not mention condition codes.
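
To make the routing rules in items 4-6 concrete, here's a minimal sketch in Python. Everything in it is my own naming and my own assumption (especially how the two engine scores get combined when no human is involved, which the letter doesn't say); it is not TEA's actual system.

```python
# A minimal sketch of the scoring flow described in items 3-6, as I read the letter.
# Purely illustrative: the names and the score-combining step are my assumptions,
# not TEA's system, and the real engines do far more than compare two numbers.

def needs_human_rescore(engine1_score: int, engine2_score: int,
                        condition_codes: list[str]) -> bool:
    """Return True if a response would be routed to a human rater."""
    # Any condition code (few words, duplicated text, another language, etc.)
    # sends the response to a human for review.
    if condition_codes:
        return True
    # "Same" means identical scores (0-0, 1-1, ... 5-5);
    # "adjacent" means the scores differ by exactly one point (0-1, 1-2, ... 4-5).
    return abs(engine1_score - engine2_score) > 1

def final_score(engine1_score: int, engine2_score: int,
                condition_codes: list[str], human_score: int | None = None) -> int:
    """If a human rescored the response, the human's rating is kept (item 6)."""
    if needs_human_rescore(engine1_score, engine2_score, condition_codes):
        assert human_score is not None, "routed responses get a human rating"
        return human_score
    # The letter doesn't say how two agreeing/adjacent engine scores are combined
    # for reporting, so averaging here is a placeholder assumption.
    return round((engine1_score + engine2_score) / 2)

# Engines disagree by two points, so the human's rating of 3 is what counts.
print(final_score(1, 3, [], human_score=3))   # -> 3
# Engines agree at 4 with no condition codes, so no human is involved.
print(final_score(4, 4, []))                  # -> 4
```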

Implications for Instruction: Revising

Instruction can address each element of the rubric as well as the information in TEA's letter to testing coordinators, listed above.

We can have students ask themselves some questions and revise for these elements: 

  1. Does your response use just a few words? (Pinpoint these areas with the highlighting tool: Where did you 1) address the prompt and 2) provide evidence?) How could you expand your words into sentences? How could you use a strategy such as looping to add more sentences?
  2. Does your response use mostly duplicated text? This means repetition. Go back into your writing and use the strikethrough tool as you read through your work. Strike through any repeated words and phrases. Then go through each sentence: does it say something new? If not, strike it through as well. Did you use an organizational framework for your writing? QA12345 helps you write something new for each segment.
  3. Is your response written in another language? Highlight the words in your home language. Then translate your writing into English, doing the best you can to recreate your thoughts. Leave the text in your home language in place as you work. When you finish, go back and delete anything that is not written in English.
  4. Does your response mainly use words from the text or passage? Good! That means you are using text evidence. First, make sure the evidence helps answer what the prompt asks you to write about. Second, add a sentence that explains how that text evidence connects to the ideas in the prompt.

Implications for Instruction: Comprehension and Preparation

Condition code (e) above indicates that the engine is programmed with a subset of sample responses. This means that our instructional materials and questions must also consider what answers we expect and how we would put those answers into words. It also means we should be prepared for the ways students might misinterpret the text, a section of it, or the prompt.

  1. Teach students how to diffuse the prompt for words of distinction that change the conditions of the prompt focus (today vs past). This also includes how we decide what genre to use for the response: informational, argumentative, correspondence (informational or argumentative). 
  2. Discuss the differences between prompts that ask for responses about the whole text and prompts that ask about a section of the text (an aquifer vs. a salamander).
  3. In developing prompts, be sure to compose prompts that can have multiple answers and multiple pieces of text evidence to support them. Also compose prompts that can have WRONG interpretations and associated evidence.
  4. Develop sets of exemplars that writers can use to match the evidence to the thesis to the prompt and finally to the passage. They need to SEE all the ways in which their thinking can go astray. 
  5. Teach them about vocabulary and bots that scan for key words and synonyms. We may not like this, but how could it be a bad thing for students to think about the key ideas in the prompt and passage and make sure the vocabulary in their responses matches? (A toy sketch of this kind of overlap check follows this list.)
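
Since we're on the subject of bots and key words, here is a toy overlap check, just to show students (and ourselves) what "vocabulary overlap" can mean mechanically. The key terms and the student response are made up for the example, and the real ASE certainly does something more sophisticated.

```python
# A toy illustration of "vocabulary overlap" -- not how the ASE actually works.
# The key terms and the student response below are invented for the example;
# a real check would also need to handle synonyms and word forms.

def vocabulary_overlap(response: str, key_terms: set[str]) -> float:
    """Return the fraction of key terms that appear in the response."""
    words = set(response.lower().split())
    hits = sum(1 for term in key_terms if term in words)
    return hits / len(key_terms) if key_terms else 0.0

# Hypothetical key ideas pulled from a prompt and passage about an aquifer.
key_terms = {"aquifer", "groundwater", "recharge", "spring"}

student_response = "The aquifer stores groundwater that feeds the spring during dry years."
print(round(vocabulary_overlap(student_response, key_terms), 2))  # -> 0.75
```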

Implications for Instruction: Creativity

Most of the research about machine scoring says we really aren't ready for machines to score diverse answers. But the language in the letter suggests that "unusual" responses might receive a condition code, and we don't have a description of what a condition code for creative responses might look like beyond what was described in the previous bullet points. IDK.

What to Ask Your Testing Coordinator

Because of the concerns we all raised, TEA isn't going to let us see the ECRs, but they are going to let us have some data. (I'd like to see the statewide data, wouldn't you?) Ask your coordinator about the report they can request from TEA. It will include information about scores on ECRs from December:
  • How many students turned in a blank response? 
  • How many responses were unscorable with a condition code? 
  • How many responses received a zero? 
Coordinators can then ask for an appointment to VIEW responses with text that received a zero. They won't be able to ask questions about scoring or the rubric, or ask for responses to be rescored; they'll only be able to see the zero responses. Not sure that will help much, and understand that seeing more would compromise test security and cost a lot of money, because the passage and question could not be reused.