Thursday, December 6, 2018

Using TEKS Resource System to Teach Drama


In the Year at a Glance, it's easy to get lost in the title of the unit and the list of TEKS, as if they were some kind of checklist. The KEY standard that guides all of them in this unit is 4A. All of the work you do with the rest of the standards hinges on this one:


Students must look at dramatic conventions to understand, make inferences, draw conclusions, and provide text evidence about how these structures and elements enhance the text. 

The key part here is that the dramatic elements have a purpose: to enhance style, tone, and mood (2A, 2D, and to some extent 1B). More specifically, the standard gives us instances that help us know what kind of dramatic conventions we might consider: monologues, soliloquies, and dramatic irony. 

So our work with this standard can't really be boiled down to putting a single TEK on the board as the objective. It's more complicated and interwoven than that because we are working with author's purpose and craft and literary analysis. 

The performance assessment can also confound planning. If you're not careful, the way this assignment is presented can turn into a low-rigor assessment that better fits the 6th grade TEK that asks students to compare how the setting influences plot. We are doing more than that here in 9th grade English. 



Notice that this performance assessment asks students to recreate a scene in a different time period. We can't leave out the tone and mood part either. But what is not mentioned here is that the whole point of our drama unit is to examine how writers, directors, or actors can communicate the theme, tone, and mood through the dramatic elements. 

This lesson can easily devolve into a simple retelling instead of a purposeful use of dramatic conventions. And the assessment can devolve into an activity or assignment if the teacher has not addressed the lessons about dramatic conventions during the reading phase of this process. 

So if we have read The Importance of Being Earnest, we are going to have to address the dramatic elements that are present in the text. I did a little research and found several categories of dramatic conventions: 

Dramatic Conventions

Rehearsal: hot seating, role on the wall, still images
Theatrical: split focus, flashback, flashforward, narration, soliloquy, spoken thought, aside, song, passing of time, use of music
Technical: lighting, dialogue, monologue, set, costuming, entrance/exits

Since the standards suggest monologues and the play has several, let's just pick that one. 

How does Wilde use the monologues to communicate his theme? That has to be the focus of the modeling and gradual release of the analysis as students read the scenes and selected monologues.

Applying it to writing: 

When it's time to write for the assessment, the writers must first explore what theme they wish to convey. THEN they can write a scene where a monologue is used purposefully to convey the theme. Students can make decisions about what character would best deliver those lines and describe why. They can make decisions about how and why a character would compose and deliver those lines. They can lift particular lines and phrases from the monologue that best support the theme. 

Students could then trade their compositions and see if the other groups or teams could discern the theme and debate why the character was or was not the best choice to deliver the message. Students can discuss the evidence that was or was not effective in delivering the author's message. 

Our work here is NOT about adapting a play to a new setting. What are kids actually learning? They are learning how authors use the dramatic elements and conventions of drama to make meaning. And they are learning how to do the same for their own messages. 

Note: Students in English II could read the same play, but focus on the motifs and archetypes used by Wilde to communicate the themes. 


STAAR Lexiles and Passing Standards for Meets and Masters


WARNING: Facts and opinions ahead...

Good morning Shona,
Can you help me?  My director asked this question this morning:
“Can you tell me anything about crazy Lexile for reading STAAR grade 3 for Meets and Masters?”
I don’t know what she means by Meets and Masters as the students would all see the same passage.  I guess maybe she wants to know how the Lexile measure is determined for STAAR???
Can you help me with any of this?

Just the kind of question I like to geek out about! 

Two different "things" are at play here. Lexiles actually have nothing to do with the cut points for passing. Those cut points come from a separate calculation. And Lexiles don't really tell you how hard something is to read either. It's a bunch of hooey, but don't tell anyone I said that. More on that at the end. 

Two Uses of Lexiles


1. So the passages go through a review process. Lexiles are only one of the measures used to see if passages are at the right reading level. I wrote a blog post about that recently. These Lexiles for the passages are not reported by TEA. But you can cut and paste the released test text, put it through a Lexile calculator, and find out. You probably won't like the results. That's how we traditionally see Lexiles used: to measure the complexity of a text. But that's not what we see above on the Scale Score conversion chart. 

2. 3rd-8th grade students will get a confidential student report after taking STAAR that also reports their Lexile level. The Lexile assigned to the student comes from some wonky psychometric stuff. Basically, TEA and the Lexile folks did a research study. They gave kids another test and compared the results to how kids score (scale scores) on STAAR. This is supposed to tell us what level of books these kids can read with ease. 

Lexile to Scale Score Study

Here's how TEA explains it: "TEA partnered with MetaMetrics to conduct a series of studies to examine the relationship between the Lexile® scale and the STAAR reading scale. Student participants were representative of Texas student population in reading ability and were similar to Texas student population in demographics such as gender, ethnicity, economic status, and ELL status. Students were given a paper/pencil Lexile® Linking Test that contained multiple-choice reading comprehension questions. Students´ results on the Lexile® Linking Test were examined in relation to the students’ results on the STAAR reading test. Researchers were able to establish a link between the STAAR reading scale and the Lexile® scale. Although no high-stakes was associated with the Lexile® measure on STAAR report card, it can be used as a resource for parents and educators. With Lexile® measures, parents and educators now have information that can be used to promote and encourage growth in reading."

On the confidential student report and on the scale score conversion chart, Lexiles from that study are reported. They add that Lexile "link" from the research study to the Raw Score conversion table for grades 3-8. Lexiles aren't used in English I and II on the student report or the Raw Score conversion table because the measure doesn't work there - especially with authentic vs. engineered texts and for the qualitative features of text complexity. But that's my opinion. 

Do Lexiles tell us who will meet or exceed STAAR passing expectations? 

The cut points between who does not meet, approaches, meets,  or masters the assessment have NOTHING to do with Lexiles. Who passes and by how much is calculated from THREE separate processes. 

  • First, there is a process by which the state compares the current test to previous tests. In this way, they can see which tests are harder or easier and ensure that a passing score is fair even when the tests themselves differ somewhat. That's why you see the raw score for passing change by one or two questions each year. 
  • Second, there is a process to determine where the cut points for passing are made. The cut points for does not meet, approaches, meets, and masters are established AFTER the test results are all in. It's a separate bit of mathematical magic from even the scale score conversion. 
  • Third, there are graduated percentages of passing that rise each year for accountability purposes. 
The Lexiles are associated with the scale score and research study alone. They aren't connected to the cut point calculations or passing percentages. The reported numbers for student Lexiles are just where the numbers fall from the Lexile linking study.  

The hope is that by adding Lexiles to the confidential student report, teachers and parents can find books that are at the right reading levels for kids - stuff that's not at the frustration level - so they can grow. A reading sweet spot.
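That matching idea can be sketched in a few lines of code. The book titles and measures below are invented, and the default bounds (100L below to 50L above the reader's measure) are my reading of MetaMetrics' commonly cited "targeted reading range" - treat them as an assumption, not gospel:

```python
def books_in_sweet_spot(student_lexile, books, below=100, above=50):
    """Return titles whose Lexile measure falls in the reader's range.

    The default bounds (100L below to 50L above the reader's measure)
    follow the commonly cited Lexile "targeted reading range" -- an
    assumption here, so verify against MetaMetrics' own guidance.
    """
    low, high = student_lexile - below, student_lexile + above
    return [title for title, measure in books if low <= measure <= high]

# Hypothetical shelf: (title, Lexile measure) pairs, measures invented
shelf = [
    ("The Giver", 760),
    ("Holes", 660),
    ("Hatchet", 1020),
]
print(books_in_sweet_spot(800, shelf))  # titles near an 800L reader
```

Nothing here replaces teacher judgment; it just shows how narrow the "sweet spot" band really is.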

Are Lexiles Valid in Determining Text Complexity?

Now, whether or not Lexiles are an appropriate method to select reading levels is another argument. TEA uses the measure, but relies primarily on teacher judgment. I've written about that here: Click on the names of the reading level instruments in this presentation to see how Animal Farm stacks up with multiple forms of readability assessment. Lexiles used without teacher discretion are hogwash. 

Bottom Line? I wouldn't pay a bit of attention to what Lexiles match which levels of passing on STAAR. 

More Resources: 
Here are some more resources to help you understand how Lexiles are reported and used for STAAR.



Friday, November 30, 2018

Saphire's Prove It

I've never seen a student compose a Prove It quite like this:

Her original text: Music feeds my soul. 

Her revision:

Music feeds my soul. When I hear "September 16" by Russ, I feel relieved. When I hear "Hypnotized" by NBA Youngboy, I feel empowered. When I hear, "Changes" by Xxxtentacion, I feel sad. 

Powerful.







If Prewriting Doesn't Lead to a Better Draft...

I was working with a group of students to apply the lessons I had learned from Victoria about how we need to avoid listing and clustering ideas that lead to shallow development. One of the things she said really struck me: "If our prewriting doesn't lead to a better draft, then we are wasting our time."

We started with a lesson I learned from Jennifer Wilkerson, where kids create anchor drafts and then shift them to match the prompts and genre charges. The link to the lesson is here. 

What I realized is that the prewriting should help establish links between the ideas and begin to help the writer shape the text structure/format. One of the things I'm thinking about a lot is that we ask kids to put ideas in these graphic organizers that have text structure formulas that just don't fit. You can't organize ideas in a graphic organizer if you don't have any ideas yet. You don't know what structures will fit your ideas until you understand and think about how your ideas are related or connected. You don't know what structure your ideas will need until you understand your purpose. Let me say it again: You can't organize nothing.

For this essay, I took some prewriting I had already done and shifted it to the STAAR prompt about a time you faced a challenge. I looked at the ideas on my chart and found the connection. The dumpster example was the perfect fit for the purpose: a challenge. 
After thinking about it, I realized that the ideas that connected for this purpose were a natural fit for a problem/solution essay structure. That structure linked the ideas in a way that would solve the problem we see in a lot of student writing, where they just list the ideas they have brainstormed. 

Here's the essay that I modeled for kids: 

Dumpster diving is an embarrassing hobby, but it has become an important and entertaining part of my life. Somebody once asked me why in the world I would dig in the trash. 

It began during a difficult time in my life. I was starting over after a divorce. The house was empty except for my son's bedroom stuff and a rocking chair from his nursery. There wasn't much money to buy new stuff. But, there was still no place to sit and no place to eat. I needed a cheap and quick solution. 

I noticed that the neighbors were moving out of the rent house and left tons of junk by the dumpster. There was a broken coffee table from 1980 something. Ugly. But it had a nice shape. The legs were broken, but I could use the top. I dragged it into the back yard, jumped in my son's 1989 Dodge Ram truck and headed down other neighborhood alleys - hoping no one would see me. 

It wasn't too long until I found a rolling table with no top. A few blocks later, I found the side of an old dresser with the most beautiful blue finish. At home, I screwed the pieces together and added trim from the frame of a broken mirror. Now I had a kitchen table. I painted a checkerboard on the top, thinking of the games my son and I could play after dinner. The discarded trash turned into a useful, creative centerpiece in the kitchen. One room down...more dumpsters to visit! 

Wednesday, November 14, 2018

Beyond Heat Maps: Data Analysis is More than "They Failed"

Data analysis must show us the CAUSES of student misunderstandings and lead us to instructional decisions about HOW we can respond. Heat maps are only the first layer of analysis. The cut points on the heat map are correlated to passing standards. How does knowing they failed tell us what the kids don't understand and what part we can play in making that better?  The cut points have nothing to do with WHY the scores landed there or WHAT we should do to fix that. I don't wanna rant about all that in this post. If you want me to explain, come have coffee. I like coffee.

We have to go beyond the heat map.

Because what we really need to see is what the data analysis should show us about what we are going to DO when there are 30 faces looking at us in the classroom. 

Because what we really need from data is something practical that puts feet to our prayers and rubber to the road. 

First, statistical analysis over multiple items and years (trend data) can tell us what level of the gradual release model needs to be addressed to correct the misconceptions. In the data I analyzed for a district about F19B, the data indicated that additional work was needed in Quadrant One (Modeling, Thinking Aloud, Direct Instruction/Delivery) and Quadrant Two (Shared and Interactive processing of the content AND the processes/steps used to complete the tasks). Here's the lesson we used to help teachers understand the nuances between each phase as applied to ELAR texts and instruction. 

Second, statistical analysis can also tell us what kind of instructional strategies we should employ because of where the kids are in the learning process. Fisher, Frey, and Hattie talk about how important it is to choose the right instructional strategy for when and where the students are in the learning process: Surface, Deep, and Transfer. Basically, if you are using the wrong strategy at the wrong time, the data will show it. You can read about that here: Visible Learning for Literacy. Someday, when I have more time, I'll explain how you can use the data to point to which strategy level should be used. 

Third, item analysis patterns and trends over multiple items help us name the cognitive gaps in reasoning, content, alignment to curriculum and assessment, mismatches in materials, and even test taking processes. When you give a NAME to the thinking error, you can design a response. When we looked more deeply at the item analysis (the spread of answer choices) for F19B items, we realized some important issues about our daily instruction that we could change and make a huge difference - and quickly.

We analyzed three items, y'all. Three. But what we learned changed everything about how we were going about our work. We found simple, clear and articulated understandings about what we needed to change about what we were reading and how we were reading it. The analysis showed us what content we needed to cover, what processes we needed to teach, and what reasoning and thinking lessons we had left untaught. 
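The "spread of answer choices" tally behind that kind of item analysis is simple enough to sketch. The responses and answer key below are invented for illustration; the point is that the distractor distribution, not just the percent correct, is what names the thinking error:

```python
from collections import Counter

def choice_spread(responses, key):
    """Tally the spread of answer choices for one multiple-choice item.

    `responses` is a list of the letters students chose; `key` is the
    correct answer. Both are hypothetical here -- a sketch of the
    distractor analysis described above, not anyone's official tool.
    """
    counts = Counter(responses)
    total = len(responses)
    return {
        "correct_pct": round(100 * counts[key] / total),
        "distribution": {c: counts.get(c, 0) for c in "ABCD"},
    }

# Invented responses for one item whose key is "B"
item_responses = list("BBCBDCCBBA")
print(choice_spread(item_responses, "B"))
```

When one wrong answer draws nearly as many students as the key, that distractor is pointing at a specific misconception worth teaching into.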

And no one complained that no one understands ELAR. No one blamed the SPED kids. No one argued that the questions were mean or tricky. No one made excuses. Because they could see what the problems were AND they didn't feel helpless to find a solution. Data analysis that day was more than "they failed" because we knew why and what to do next. 


Tuesday, November 6, 2018

Answers About STAAR Readability and Vocabulary

I participated in a prompt study at TEA recently. While there, I learned about some important resources that are used for evaluating passages that are chosen for STAAR. Some - we already knew: There is an entire PROCESS that is used to make sure everything is in line with what students will need developmentally and in terms of fairness. The primary tool used in this process is the TEACHER'S decisions. We should be proud of that.

Several readability measures are used, including Flesch-Kincaid grade level, the ETS Text Evaluator, and Lexiles. Qualitative judgments are used as well. (The ETS Text Evaluator was new to me.)
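Unlike Lexiles, the Flesch-Kincaid grade level is a published formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. Here's a rough sketch of it in code; the vowel-group syllable counter is my own naive stand-in, since real implementations lean on pronunciation dictionaries:

```python
import re

def count_syllables(word):
    # Naive heuristic: count vowel groups, drop a trailing silent "e".
    # Real readability tools use pronunciation dictionaries instead.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level from the published formula."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

print(flesch_kincaid_grade("The cat sat on the mat."))
```

Run it on a released STAAR passage and on a sentence full of polysyllabic words and you'll see why sentence length and syllable counts alone can't capture what makes a text hard.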

Specific word lists are also used for passages, questions, and prompts. These help ensure that the vocabulary won't interfere with the content that is being assessed. You might check out these resources. I didn't know about any of them.

EDL Reading Core Vocabulary 


The American Heritage Dictionary - This is the resource used to build the dictionary items for the assessment, too. 


Dead Lesson Routines for ELAR

Does the reading class routine look like this?

1. Teacher gives background powerpoint/lecture about culture and history.
2. Teacher activates schema and vocabulary for the text. Kids copy definitions and forget.
3. Kids read one at a time, or teacher reads, or a recording is played. (Notice the purposeful use of passive voice there.)
4. The teacher stops periodically and interrogates with leading and funneling questions.
5. Kids annotate the text. Or sleep. Or google the answers.
6. Kids take a multiple choice reading quiz to judge comprehension.
7. Sometimes they write a literary analysis essay.
8. The teacher complains about how these kids can't read or write.


Uhhh... Ew.

It sounds exactly like what Jennifer Gonzalez of Cult of Pedagogy was ranting about on Sunday. To Learn, Students Need to DO Something. 

No wonder kids hate to read. And no wonder they have nothing to write about. GEEZ!

Problem One: I tour schools and ask to look in the classrooms and bookrooms.  This is what I see. There are no books. In the bookroom, there are tattered sets of TKM and book sets with unbroken spines from the last textbook adoption. There are no contemporary texts. There are no diverse texts.

There is nothing to read, y'all. Unless you count NewsELA passages.

Solution One: Buy books kids want to read.

Problem Two: Administrators and teachers don't really know what else to do. And some believe, "My English classes looked like this, so by God these kids ought to comply." And then they blame the kids for being awful human beings: passive, apathetic and unmotivated, and ignorant.

But good teaching is about getting KIDS to do the work. Good literacy teaching is not about what the teacher is doing. And really - heresy to some so hang on - the ELAR classroom is not even about what we are reading. Effective literacy instruction is about how the teacher helps the KIDS do the work and the thinking about any text. Look at the left side of the TTESS Rubric if you need language to describe what this looks like.

Solution Two: Teachers need models of what classroom instruction could look like. And we have to do more than tell them to give kids choice and to implement "workshop." Most teachers don't know what that means. (And it certainly doesn't fit well in the linear lesson plan mandates, but that's a problem for another day.) What could/should the classroom routine look like? Teachers need sample design features that help them know what the sequences should/could look like in a 45 minute period with kids like theirs.

I've been collecting some options on workflowy.  Click the link to see them.

So. Ditch the dead  ELAR reading routine from the 19th century and pick something that might actually work. Even better, send me your ideas. I'll add them to the list.