Guess what? When it comes to the Common Core SBAC test and other unfair and discriminatory standardized tests, students from rich families tend to do better and students from poor families tend to do worse.
The following assessment of what influences standardized test scores comes from education researcher Christopher Tienken via education blogger Peter Greene.
Peter Greene is a fellow education advocate, an educator and one of the country’s leading education bloggers. His blog is called Curmudgucation. Christopher Tienken is an associate professor of Education Administration at Seton Hall University, a former school administrator and teacher and an expert on the factors that influence standardized test scores. His work can be found at http://christienken.com/
Because the information presented below is academic, fact-based and intellectual, some elected officials won't take the time to read it, or perhaps to understand it, but it confirms what opponents of the Common Core SBAC testing and other inappropriate standardized testing schemes have long understood and discussed.
The information proves – yet again – that standardized test scores are driven primarily by factors far beyond the control of the classroom teacher. Poverty, English language proficiency and unmet special education needs are all key factors in producing lower test scores.
As Greene and Tienken explain, standardized test scores are NOT related to grades, teaching techniques, pedagogical approaches, teacher training, textbook series, administrative style or curriculum evaluation. They are a product of the socio-economic characteristics of the students taking the test.
Thanks to the Common Core and the Common Core testing scam, Connecticut – even while raising taxes and cutting education programs – will spend approximately $100 million on SBAC testing this year to tell us that the rich do well and the poor do poorly on this fraud of a test.
Just take a look at the following:
Good News! We Can Cancel The Tests Now! (By Peter Greene)
Christopher Tienken is a name you should know. Tienken is an associate professor of Education Administration at Seton Hall University in the College of Education and Human Services, Department of Education Leadership, Management, & Policy. Tienken started out his career as an elementary school teacher; he now edits the American Association of School Administrators' Journal of Scholarship and Practice and the Kappa Delta Pi Record. He and his colleagues have done some of the most devastating research out there on the Big Standardized Tests.
Tienken’s research hasn’t just shown the Big Standardized Tests to be frauds; he’s shown that they are unnecessary.
In “Predictable Results,” one of his most recent posts, he lays out again what his team has managed to do over the past few years. Using US Census data linked to social capital and demographics, Tienken has been able to predict the percentage of students who will score proficient or better on the tests.
Let me repeat that. Using data that has nothing to do with grades, teaching techniques, pedagogical approaches, teacher training, textbook series, administrative style, or curriculum evaluation – in short, data that has nothing to do with what goes on inside the school building – Tienken has been able to predict the proficiency rate for a school.
“For example, I predicted accurately the percentage of students at the district level who scored proficient or above on the 2011 grade 5 mathematics test in 76% of the 397 school districts and predicted accurately in 80% of the districts for the 2012 language arts tests. The percentage of families in poverty and lone parent households in a community were the two strongest predictors in the six models I created for grade 5 for the years 2010-2012.”
Tienken's work is one more powerful indicator that the BS Tests do not measure the educational effectiveness of a school – not even sort of. That wonderful data that supposedly tells us how students are doing and provides the measurements that give us actionable information – it's not telling us a damn thing. Or more specifically, it's not telling us a damn thing that we didn't already know (Look! Lower Poorperson High School serves mostly low-income students!!)
In fact, Tienken's work is great news – states can cut out the middle man and simply give schools scores based on the demographic and social data. We don't need the tests at all.
Of course, that would be bad business for test suppliers, and it would require leaders to focus on what's going on in the world outside the school building, so the folks who don't want to deal with the issues of poverty and race will probably not back the idea. And the test manufacturers would lose a huge revenue stream, so they'd lobby hard against it. But we could still do it – we could stop testing tomorrow and still generate pretty much the same data. Let's see our government embrace this more efficient approach!!
And for the original source of information, read Predictable Results (by Christopher Tienken):
Colleagues and I used US Census data to predict state test results in mathematics and language arts as part of various research projects we have been conducting over the last three years. Specifically, we predicted the percentage of students at the district and school levels who score proficient or above on their state’s mandated standardized tests, without using any school-specific information such as length of school day, teacher mobility, computer-to-student ratio, etc.
We use basic multiple linear regression models along with factors in the US Census data that relate to community social capital and family human capital to create predictive algorithms. For example, the percentage of lone parent households in a community and percentage of people in a community with a high school diploma are two examples of community social capital indicators that seem to be strong predictors of the percentage of students in a district or school that will score proficient or above. The percentage of families in a community with incomes under $25,000 a year is an example of a family human capital indicator that has a lot of predictive power.
In all, our regression models begin with about 18-21 different indicators. We clean the models and usually end up with 2-4 indicators that demonstrate the greatest predictive power. Then we enter those indicators into an algorithm that most fourth-graders, with an understanding of order of operations, could construct and calculate. Not complicated stuff.
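The mechanics Tienken describes – a basic multiple linear regression over a handful of community indicators – can be sketched in a few lines. The example below uses entirely synthetic district data and illustrative indicator names and coefficients (Tienken's actual data and fitted models are not published here); it only shows how such a model is fit and used to predict district-level percent-proficient figures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the kinds of Census indicators described above
# (all names, ranges, and coefficients are illustrative assumptions):
# x1 = % lone-parent households, x2 = % adults with a HS diploma,
# x3 = % families with income under $25,000.
n_districts = 400
x1 = rng.uniform(5, 40, n_districts)
x2 = rng.uniform(60, 98, n_districts)
x3 = rng.uniform(2, 30, n_districts)

# Simulated "percent proficient," driven mostly by the demographic
# indicators plus noise -- mimicking the claim that out-of-school
# factors dominate the scores.
pct_proficient = 55 - 0.5 * x1 + 0.4 * x2 - 0.6 * x3 \
    + rng.normal(0, 4, n_districts)

# Ordinary least squares: solve for an intercept and three coefficients,
# then generate a predicted percent proficient for every district.
X = np.column_stack([np.ones(n_districts), x1, x2, x3])
coefs, *_ = np.linalg.lstsq(X, pct_proficient, rcond=None)
predicted = X @ coefs

print(coefs.round(2))
```

Nothing here is beyond an introductory statistics course, which is exactly Tienken's point: the predictive machinery is trivial; the predictive power comes from the community data itself.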
Our initial work at the 3rd-8th and 11th grade levels in NJ, and grades 3-8 in CT and Iowa, has proven fairly accurate. Our prediction accuracy ranges from 62% to over 80% of districts in a state, depending on the grade level and subject tested.
In one study soon to be published in an education policy textbook co-edited with Carol Mullen, Education Policy Perils: Tackling the Tough Issues, I report on a study in which I predicted the percentage of students in grade 5, at the district level, who scored proficient or above on New Jersey's former standardized tests, NJASK, in mathematics and language arts for the 2010, 2011, and 2012 school years for the almost 400 school districts that met the sampling criteria to be included in the study.
For example, I predicted accurately the percentage of students at the district level who scored proficient or above on the 2011 grade 5 mathematics test in 76% of the 397 school districts and predicted accurately in 80% of the districts for the 2012 language arts tests. The percentage of families in poverty and lone parent households in a community were the two strongest predictors in the six models I created for grade 5 for the years 2010-2012.
Colleagues and I predicted the percentages of students scoring proficient or above for grades 6, 7, and 8 during the 2009-2012 school years as well. For example, we predicted accurately for approximately 70% of the districts on the 2009 NJ mathematics and language arts tests. Recently, another colleague and I predicted the grade 8 NJ mathematics and language arts percentages proficient or above for over 85% of the almost 400 districts in our 2012 sample.
The results from Connecticut and Iowa are similar, with accurate predictions in CT on all tests grades 3-8 ranging from approximately 70% to over 80%. The Iowa predictions were accurate in approximately 70% of the districts.
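The excerpt does not spell out exactly what counts as an "accurate" district-level prediction; the forthcoming studies define their own criterion. As a purely illustrative stand-in, one could score a prediction as accurate whenever it lands within a fixed tolerance of the observed percent proficient – the tolerance value here is an assumption, not the authors' actual standard:

```python
import numpy as np

def accuracy_rate(predicted, observed, tolerance=5.0):
    """Share of districts whose predicted percent proficient falls
    within +/- tolerance percentage points of the observed figure.
    (An illustrative criterion, not the one used in the studies.)"""
    hits = np.abs(np.asarray(predicted) - np.asarray(observed)) <= tolerance
    return hits.mean()

# Toy example: five hypothetical districts.
pred = [62.0, 48.5, 71.2, 55.0, 80.3]
obs = [60.1, 47.0, 79.0, 56.5, 81.0]
print(accuracy_rate(pred, obs))  # 4 of 5 within 5 points -> 0.8
```

Under any such criterion, the headline result is the same: a majority of districts' scores can be called in advance from community data alone.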
Being a “rich” district or a “poor” district had no bearing on the results. We accurately predicted scores for “rich” and “poor” alike. The details will be published in upcoming books and journals so stay tuned.
The findings from these and other studies raise some serious questions about using results from state standardized tests to rank schools or compare them to other schools in terms of standardized test performance. Our forthcoming results from a series of school level studies at the middle school level produced similar results and raise questions about the appropriateness of using state test results to rank or evaluate teachers or make any potentially life-impacting decisions about educators or children.
So, Connecticut parents and taxpayers:
When you are being abused, or hearing about children and parents being abused and harassed for opting out of the unfair and discriminatory Common Core SBAC test, or when you are paying more in taxes while watching important school programs and services get cut, remember this: thanks to our elected and appointed officials, we are pissing away $100,000,000.00 a year forcing children to take a test that will tell us that students from rich families tend to do better and students from poor families tend to do worse on standardized tests.