Monday, January 29, 2024

Is it Research? Is it Scientific Research? An Example with AI Scoring Citations

Y'all. Some of you are going to get mad at me for some of this. But stick around for a minute or two to follow the ideas. I'm going to be really direct: We are being told that things are researched. We are told that ideas are scientific. Some of that isn't true. We MUST develop the skills to evaluate what is behind the statements, policies, programs, politics, and published material. We MUST know what is research and what isn't. We must also develop skills to find the red flags and warnings. 

First, some background and note of affiliation: 

I'm a member of a group called the International Literacy Educators Coalition (ILEC). We've been working on this stuff:


ILEC advocates for educators and families to make responsive, research-informed decisions for literacy learning that realize the potential of diverse learners to be literate, critical thinkers.

We are an international group of teachers, teacher educators, literacy scholars, researchers, and concerned parents who are dedicated to promoting literacy learning practices that enable all children and youth to realize their full potential as literate, thinking human beings. We reject top-down, science of reading mandates that force teachers to commit educational malpractice. We believe that teachers should be able to make the research-based decisions that are best for their students.

And recently, we've been focused on what research really is and says about the science of reading. Controversial. I know.

An Example of ILEC Work Regarding Research:


Here's a recent contribution from Dr. Paul Thomas and some of his colleagues. It provides a bit of background for how research isn't always saying what we are told it does.

Stories Grounded in Decades of Research: What We Truly Know about the Teaching of Reading. Catherine Compton-Lilly, Lucy K. Spence, Paul L. Thomas, Scott L. Decker. 2023. Reading Teacher. ILA.

LINK: https://ila.onlinelibrary.wiley.com/doi/10.1002/trtr.2258


ARTICLE OVERVIEW

Stories Grounded in Decades of Research illustrates key ideas, including research findings that challenge the “Science of Reading” (SoR) debates. The authors draw from a “multi-faceted and comprehensive” view of literacy, using reputable research support that includes authentic, student-centered observations. In combination, this offers a platform for understanding not just the WHAT but the WHY of responsive professional decision-making that includes child-informed references. Contrary to the ‘simple and settled’ view of the Science of Reading, the authors position literacy as “complex, multidimensional, and mediated by social and cultural practices.”

How Do Laypeople Tell if It's Research? An AI Example

So, these folks are admirable geniuses. But I'm a regular kind of folk. How am I supposed to figure this out for myself?

Lately, I've been studying AI and essay scoring. So I went to the Pearson website to read more about it. I know that TEA has contracted with Pearson and Cambium to work on assessment. I do NOT know if they are using Pearson's tool to score STAAR stuff. More on that later.

Have a look at Pearson's website about AI scoring. Looking at all the items on the site, what is their purpose?
Persuasive: Sales. There's a shopping cart and links to other things you can purchase.
Informational: Note the breadcrumb: Large Scale, K-12 Assessments, and Automated Scoring.
Argumentative: The good things it provides and its development over time.
Informational/Persuasive: The concept of continuous flow and why that's a good thing.
Persuasive: It has solutions and is innovative.

Questionable: But then we get to the bottom of the page, with Selected Research and White Papers and an article called Automated Scoring: 5 Things to Know, And A History Lesson (sic). Is it informational, persuasive, argumentative, or propaganda? Is it even research?

Let's take the components one at a time and call out guidelines for telling the difference between research, science, genre, and purpose.

Internal Links and Articles are Not Often Research:

The article, Automated Scoring: 5 Things to Know, And A History Lesson, goes to another page on the Pearson website. It's stuff they wrote about their own product. That's called bias. And it's a red flag when you are looking for research.

The pages give a bit of background information about who is involved, where they worked, and on what topics. The pages answer some basic questions (and concerns) about what AI scoring is, clearing up frequently asked questions. It's definitely worth reading. But it isn't research. Its purpose is to make us think good things about the product. Its purpose is to put us at ease.

Un-cited, Unlinked, and Unproductive Searches are Not Often Research


Citations, Please? 
Note also the callout and quoted text at the beginning of the text. It looks like they are citing something from LearnEd: News about Learning, as if LearnEd is a company and News about Learning is a publication. But there's no citation. There's no link. And when you try to find the source...nothing like that pulls up on Google.

And that's another red flag when you are looking for research. Stuff with no links or references isn't research. And stuff that's made to look like something it isn't is flat out deception.

We can contact the company and ask for this publication. But a conscientious and respectable approach would already include references.

We do find helpful information in this article. We find out the background of Pearson's VP of Automated Scoring. Karen Lauchbaum sounds like a person I would admire - a PhD from Harvard and a person who was involved from the beginning in research about how computers understand language. Impressive. And Dr. Lauchbaum sounds like a person I would like, one who wanted to be a part of solutions. But note: she has a PhD in computer science, not assessment. She studies how computers understand language, not how language is evaluated for high-stakes assessment. She writes software for Pearson in the automated scoring division - and gets a paycheck from the people who sell this stuff to Texas.

The webpage is a place to begin researching. But it is not research.

Note also that the webpage promotes a program called WriteToLearn, also a Pearson product. That's another example of sales purposes and bias - a red flag for evaluating research.

White Papers are Not Research


Note the titles of the research and white papers included before the citations at the bottom of Pearson's webpage on Automated Scoring.

White papers might include research, but they also include the author's point of view. Often, that point of view is the entity's - in this case, Pearson's. That's an example of bias and a red flag for research. White papers frequently give position statements and the approach a company takes with an idea or product. You can read more in the LibGuide the University of Massachusetts Lowell compiled about white papers. Bottom line: white papers are often persuasion and function as mini commercials in marketing programs. White papers are not research.

Next Up:

I presented this information at CREST at the end of January. I worked with a group of colleagues to evaluate the citations and to develop guidelines for helping laypeople evaluate research claims. In upcoming blog posts, I'll share what we found in each of the citations Pearson listed as research and white papers to support its AI automated scoring programs.
