Cost of SBAC testing in Connecticut is unconscionable, unnecessary (by John Bestor)

Connecticut educator and education advocate John Bestor has written another powerful commentary piece, this time dealing with the utter waste of scarce taxpayer funds on the unfair, inappropriate and discriminatory Common Core Smarter Balanced Assessment Consortium (SBAC) testing scheme that is designed to fail a vast number of our state’s children.

With Governor Malloy implementing unprecedented cuts to vital state services, including public education, Malloy and the legislature should have started by eliminating the funding for the SBAC testing scheme…long before they attacked the programs that are really helping Connecticut’s children, parents, teachers and public schools.

Published in the CTMirror and entitled, Cost of SBAC testing in Connecticut is unconscionable, unnecessary, Bestor writes:

Education activists have been speaking out and pushing back against the misguided Common Core State Standards and the flawed Smarter Balanced Assessment Consortium (SBAC) statewide test protocol for several years now, as they have become more aware of the billionaire-driven, media-complicit, and politically-entrenched “corporate education reform” agenda.

Although the computer-adaptive Smarter Balanced Assessment remains unproven and developmentally-inappropriate, proponents of the controversial test have been unable to demonstrate that SBAC is a psychometrically valid or reliable measure of student academic progress, let alone college- and career-readiness.  Nor have they convincingly countered claims that SBAC is unfair and discriminatory to students who are required to suffer through hours of supposedly “rigorous” and often incomprehensible test questions.

Despite a charge from the Connecticut Legislature’s Education Committee to evaluate the efficacy of SBAC, the Mastery Examination Task Force has failed to address the fundamental psychometric criticisms associated with SBAC which have been convincingly presented by Dr. Mary Byrne in her testimony in the Missouri lawsuit against SBAC.

The Task Force has also failed to consider the findings of over 100 California researchers who called “for a moratorium on high-stakes testing broadly, and in particular, on the use of scientifically discredited assessment instruments (like the current SBAC, PARCC, and Pearson instruments).”   Is there any chance that the Task Force would review the College Board executive’s whistle-blower commentary on the unprofessional and fraudulent development of the newly-redesigned SAT?

Although these findings resonate with education activists and an increasing number of parents across the nation, they have fallen on deaf ears with leadership in our state, even as many other states have dropped their membership in the consortium or stopped tying results to high-stakes decisions until such findings are substantiated.  Perhaps an understanding of the exorbitant costs associated with the controversial SBAC and Statewide SAT will gain the public’s attention.

Gov. Dannel Malloy and former Education Commissioner Stefan Pryor signed the NCLB waiver agreement that coerced and committed the CSDE to (at the time) unidentified costs associated with the “next generation” SBAC assessment in order to escape unrealistic NCLB expectations.  The SBAC membership contract is renewed annually for $2.7 million a year (now an estimated $2.3 million with 11th-graders excluded, assuming the CSDE was able to recover the costs for not testing juniors).

In addition, $13.5 million is paid to AIR (American Institutes for Research) to administer the SBAC test.  Another $15.3 million has been allocated to AIR (over four years, including this year’s pilot) to cover CMT/CAPT Science Test administration.  An adjustment to the original SBAC agreement was necessary when the CSDE switched to the unproven, newly-redesigned Statewide SAT for 11th graders, which resulted in a $4.4 million three-year contract with the College Board.  Under the current state testing protocol, these expenditures will recur and are likely to increase in future contract renewals.  These estimates do not include the substantial costs to districts for implementation, teacher time for test preparation, and student time lost to meaningful instruction.

During the recent government budget crisis, and with future budgets likely to be just as or even more difficult, this CSDE/CSBE cost is both unconscionable and unaffordable.

Bottom line: this is an unnecessary expense as the Mastery Examination Task Force can re-design the course of statewide assessments.

Task Force members need to look afresh at the federal testing mandate required by the recently passed Every Student Succeeds Act.  This re-authorization of the Elementary and Secondary Education Act, passed in late 2015, empowers each state to determine its own assessment practice as long as the state meets its federal obligation by measuring Reading and Math achievement annually in grades 3–8 and 11, and Science achievement three times during that same grade span.

No longer are we required to give one extensive summative test each year, when the requirement can be met by using interim assessments that are already given in schools and combining those with more authentic forms of assessment that are far more meaningful to students.

Rather than expending millions of dollars in massive giveaways to the greedy test industry and its lobbying business partners in the charter-school movement, there is no doubt that the state could meet this assessment expectation more simply and more cost-effectively.

Education activists and the parents who have courageously opted their children out of the unproven SBAC understand the tangled web of deceit with which the proponents of “corporate education reform” are remaking, some say destroying, American public education.

You can read and comment on his piece at: http://ctviewpoints.org/2016/06/29/cost-of-sbac-testing-in-connecticut-is-unconscionable-unnecessary/

 

NEWS FLASH – Common Core PARCC tests get an “F” for Failure

Despite the rhetoric, promises and hundreds of millions of dollars in scarce public funds, a stunning assessment of the data reveals that the Common Core PARCC test DOES NOT successfully predict college success.

The utter failure of the PARCC test suggests that the same may be true for those states that have adopted the Common Core SBAC testing scheme.

Here is the news:

The Common Core PARCC tests get an “F” for Failure (By Wendy Lecker and Jonathan Pelto)

The entire premise behind the Common Core and the related Common Core PARCC and SBAC testing programs was that they would provide a clear-cut assessment of whether children were “college and career ready.”

In the most significant academic study to date, the answer appears to be that the PARCC version of this massive and expensive test is an utter failure.

William Mathis, Managing Director of the National Education Policy Center and member of the Vermont State Board of Education, has just published an astonishing piece in the Washington Post. (Alice in PARCCland: Does ‘validity study’ really prove the Common Core test is valid?) In it, Mathis demonstrates that the PARCC test, one of two national Common Core tests (the other being the SBAC), cannot predict college readiness, and that a study commissioned by the Massachusetts Department of Education demonstrated the PARCC’s lack of validity.

This revelation is huge and needs to be repeated. PARCC, the common core standardized test sold as predicting college-readiness, cannot predict college readiness. The foundation upon which the Common Core and its standardized tests were imposed on this nation has just been revealed to be an artifice.

As Mathis wrote, the Massachusetts study found the following: the correlations between PARCC ELA tests and freshman GPA range from 0.13 to 0.26, and for PARCC Math tests, the range is between 0.37 and 0.40. Mathis explains that correlation coefficients “run from zero (no relationship) to 1.0 (perfect relationship). How much one measure predicts another is the square of the correlation coefficient. For instance, taking the highest coefficient (0.40), and squaring it gives us .16.”

This means the variance in PARCC test scores, at their best, predicts only 16% of the variance in first-year college GPA.  SIXTEEN PERCENT!  And that was the most highly correlated aspect of PARCC.  At the low end, PARCC’s ELA tests have a correlation coefficient of 0.13, which squared is roughly .02. This number means that the variance in PARCC ELA scores can predict as little as 2% of the variance in freshman GPA!
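Mathis’s arithmetic is easy to verify: squaring each correlation coefficient gives the coefficient of determination, the share of variance in freshman GPA that the test scores can account for. A quick sketch, using the coefficient ranges quoted above:

```python
# Square each reported PARCC correlation coefficient to get the
# coefficient of determination (the share of GPA variance explained).
correlations = {
    "Math, high end": 0.40,
    "Math, low end": 0.37,
    "ELA, high end": 0.26,
    "ELA, low end": 0.13,
}

for label, r in correlations.items():
    print(f"{label}: r = {r:.2f}, variance explained = {r ** 2:.0%}")
```

Even the best case (Math, r = 0.40) explains only 16% of the variance in freshman GPA; the ELA low end explains about 2%.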

Dr. Mathis notes that the PARCC test-takers in this study were college freshmen, not high school students. As he observes, the correlations for high school students taking the test would no doubt be even lower. (Dr. Mathis’ entire piece, Alice in PARCCland: Does ‘validity study’ really prove the Common Core test is valid?, is a must-read.)

Dr. Mathis is not an anti-testing advocate. He was Deputy Assistant Commissioner for the state of New Jersey, Director of its Educational Assessment program, a design consultant for the National Assessment of Educational Progress (NAEP) and for six states.   As managing director for NEPC, Dr. Mathis produces and reviews research on a wide variety of educational policy issues. Previously, he was Vermont Superintendent of the Year and a National Superintendent of the Year finalist before being appointed to the state board of education. He brings expertise to the topic.

As Mathis points out, these invalid tests have human costs:

“With such low predictability, you have huge numbers of false positives and false negatives. When connected to consequences, these misses have a human price. This goes further than being a validity question. It misleads young adults, wastes resources and misjudges schools.  It’s not just a technical issue, it is a moral question. Until proven to be valid for the intended purpose, using these tests in a high stakes context should not be done.”

PARCC is used in New Jersey, Maryland and other states, not Connecticut. So why write about this here, where we use the SBAC?

The SBAC has yet to be subjected to a similar validity study.  This raises several questions.  First and most important, why has the SBAC not been subjected to a similar study? Why are our children being told to take an unvalidated test?

Second, do we have any doubt that the correlations between SBAC and freshman college GPA will be similarly low?  No; it is more than likely that the SBAC is also a poor predictor of college readiness.

How do we know this? The authors of the PARCC study shrugged off the almost non-existent correlation between PARCC and college GPA by saying the literature shows that most standardized tests have low predictive validity.

This also bears repeating: it is common knowledge that most standardized tests cannot predict academic performance in college.  Why, then, is our nation spending billions developing and administering new tests, replacing curricula, buying technology, textbooks and test materials, retraining teachers and administrators, and misleading the public by claiming that these changes will assure us that we are preparing our children for college?

And where is the accountability of these test makers, who have been raking in billions, knowing all the while that their “product” would never deliver what they promised, because they knew ahead of time that the tests would not be able to predict college-readiness?

When then-Secretary Arne Duncan was pushing the Common Core State Standards and their tests on the American public, he maligned our public schools by declaring: “For far too long, our school systems lied to kids, to families, and to communities. They said the kids were all right — that they were on track to being successful — when in reality they were not even close.” He proclaimed that with Common Core and the accompanying standardized tests, “Finally, we are holding ourselves accountable to giving our children a true college and career-ready education.”

Mr. Duncan made this accusation even though there was a mountain of evidence proving that the best predictor of college success, before the Common Core, was an American high school GPA.  In other words, high schools were already preparing kids for college quite well.

With the revelations in this PARCC study and the admissions of its authors, we know now that it was Mr. Duncan and his administration who were lying to parents, educators, children and taxpayers. Politicians shoved the Common Core down the throat of public schools with the false claim that this regime would improve education.  They forced teachers and schools to be judged and punished based on these tests.  They told millions of children they were academically unfit based on these tests. And now we have proof positive that these standardized tests are just as weak as their predecessors, and cannot in any way measure whether our children are “college-ready.”

The time is now for policymakers to stop wasting hundreds of millions of dollars, and thousands of school hours, on a useless standardized testing scheme, and to instead invest our scarce public dollars in programs that actually ensure that public schools have the capacity to support and prepare students for more fulfilling and successful lives.

A lesson about Garbage In, Garbage Out and turning classrooms into testing factories

Fellow columnist and public education advocate Sarah Darer Littman left Governor Dannel Malloy, the corporate education reform industry and their obsession with standardized testing no room to hide in her latest MUST READ article in CT Newsjunkie entitled, Garbage In, Garbage Out: A Reminder for PEAC and the State Board of Education

Invoking the adage “Garbage In, Garbage Out” (GIGO), the principle that flawed inputs produce useless or even dangerous outcomes, Littman highlights a series of recent examples that reveal the very real and serious ramifications of the corporate greed and testing mania being pushed by Malloy and his “education reform” allies.

While the corporations win and the politicians collect big campaign donations, our children, teachers and public schools lose … along with the taxpayers whose scarce resources get diverted from educating children to pumping up profits for the testing companies.

In one example, she explains:

Justice Roger D. McDonough of the N.Y. Supreme Court’s 3rd District provided a reminder of this on Tuesday when he ruled in the case of Sheri G. Lederman that the N.Y. Education Department’s growth score and rating of her as “ineffective” for the 2013-14 school year was “arbitrary and capricious and an abuse of discretion.”

Lederman is a fourth-grade teacher in Great Neck, Long Island. Great Neck’s Superintendent of Schools at the time she filed the lawsuit, Thomas Dolan, described her as “highly regarded as an educator” with “a flawless record,” whose students consistently scored above the state average on standardized math and English tests. In 2012-13, more than two-thirds of her students scored as proficient or advanced. Yet in 2013-14, despite a similar percentage of students meeting or exceeding the standards, Lederman was rated “ineffective” as a teacher.

The problem with the testing program in New York parallels the problem in Connecticut.

Despite the massive expenditure of public dollars, including more than $20 million a year in Connecticut state funds, the SBAC test and its sister version, the PARCC test, fail to adequately measure student achievement and have no appropriate role in the teacher evaluation process.

But the truth is irrelevant when it comes to Malloy, his Commissioner of Education, his political appointees on the State Board of Education or, for that matter, the members of the Connecticut General Assembly.

For them, the perceived value of looking “tough” on teachers and schools is more important than the reality of doing what it takes to actually ensure that every child gets the quality education they need and deserve.

As Sarah Darer Littman explains,

Four years ago, in a meeting with the CTNewsJunkie editorial board, Gov. Dannel P. Malloy made the outrageous, nonsensical claim that teachers leaving the profession had nothing to do with such punitive policies, and when provided with research to the contrary his reply was silence and a determination to stay his clearly detrimental course.

And there is more, much more.

Sarah Darer Littman’s Garbage In, Garbage Out: A Reminder for PEAC and the State Board of Education is an extremely powerful piece.

Go read it at: http://www.ctnewsjunkie.com/archives/entry/op-ed_garbage_in_garbage_out_a_reminder_for_peac_and_the_state_board_of_ed/

The fraud of computer scoring on the Common Core exams

Leonie Haimson is one of the nation’s leading public education advocates.  She leads the group Class Size Matters, is the co-chair of the Parent Coalition for Student Privacy and is a member of the Board of Directors of the Network for Public Education.

As part of their effort to raise awareness about the problems associated with the Common Core testing schemes – PARCC and SBAC – the NPE and public education advocates released the following report:

Note to Connecticut readers:  Governor Dannel Malloy’s State Department of Education has failed to respond to multiple requests for clarification about how the SBAC essays written by Connecticut students are being scored!

The fraud of computer scoring on the Common Core exams  (From Leonie Haimson)

On April 5, 2016 the Parent Coalition for Student Privacy, Parents Across America, Network for Public Education, FairTest and many state and local parent groups sent a letter to the Education Commissioners in the PARCC and SBAC states, asking about the scoring of these exams.

We asked them the following questions:

  • What percentage of the ELA exams in our state are being scored by machines this year, and how many of these exams will then be re-scored by a human being?
  • What happens if the machine score varies significantly from the score given by the human being?
  • Will parents have the opportunity to learn whether their children’s ELA exam was scored by a human being or a machine?
  • Will you provide the “proof of concept” or efficacy studies promised months ago by Pearson in the case of PARCC, and AIR in the case of SBAC, and cited in the contracts as attesting to the validity and reliability of the machine-scoring method being used?
  • Will you provide any independent research that provides evidence of the reliability of this method, and preferably studies published in peer-reviewed journals?

Our concerns had been prompted by seeing the standard contracts that Pearson and AIR had signed with states. The standard PARCC contract indicates that this year, Pearson would score two thirds of the students’ writing responses by computers, with only 10 percent of these rechecked by a human being. In 2017, the contract said, all of PARCC writing samples were to be scored by machine with only 10 percent rechecked by hand.

This policy appears to contradict the assurances on the PARCC scoring FAQ page, which says:

“Writing responses and some mathematics answers that require students to explain their process or their reasoning will be scored by trained people in the first years.”

On another Pearson page, linked from the FAQ and called “Scoring the PARCC Test,” the informational sheet goes on at great length about the training and experience levels of the individuals selected to score these exams (which is itself quite debatable) without even mentioning the possibility of computer scoring. In fact, we can find no page on the PARCC website that a parent would be likely to visit that makes it clear that machine-scoring will be used for the majority of students’ writing on these exams.

In an Inside Higher Ed article from March 15, 2013, Smarter Balanced representatives said that they had retreated from their original plans to switch rapidly to computer scoring, “because artificial intelligence technology has not developed as quickly as it had once hoped.” Yet the standard AIR contract with the SBAC states calls for all the written responses to be scored by machine this year, with half of them rechecked by a human being; next year, only 25 percent of writing responses will be re-checked by a human being.

In both cases, however, for an additional charge, states can opt to have their exams scored entirely by real people.

The Pearson and AIR contracts also promised studies showing the reliability of computer scoring. After we sent our letter and a reporter inquired, Pearson finally posted a study from March 2014. The SBAC automated scoring study is here. Both are problematic in different ways.

According to Les Perelman, retired director of a writing program at MIT and an expert on computer scoring, the PARCC/Pearson study is particularly suspect because its principal authors were the lead developers for the ETS and Pearson scoring programs. Perelman observed, “it is a case of the foxes guarding the hen house. The people conducting the study have a powerful financial interest in showing that computers can grade papers.” Robert Schaeffer of FairTest observed:

“The PARCC report relies on self-serving methodology just as the tobacco industry did to ’prove’ smoking does not cause cancer.”

In addition, the Pearson study, based on the Spring 2014 field tests, showed that the average scores received from either a machine or a human scorer were “very low, below 1 for all of the grades except grade 11, where the mean was just above 1.” This chart shows the dismal results:

[Chart: mean human and machine scores by grade, Spring 2014 field tests]

Given the overwhelmingly low scores, the results of human and machine scoring would of course be closely correlated in any scenario.

Les Perelman concludes:

“The study is so flawed, in the nature of the essays analyzed and, particularly, the narrow range of scores, that it cannot be used to support any conclusion that Automated Essay Scoring is as reliable as human graders. Given that almost all the scores were 0’s or 1’s, someone could obtain close to the same reliability simply by giving a 0 to the very short essays and flipping a coin for the rest.”
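Perelman’s coin-flip point is simple to demonstrate with a toy simulation. The score distribution below is hypothetical, chosen only to echo the field-test pattern of mostly 0s and 1s; it is not the actual Pearson data:

```python
import random

random.seed(0)

# Hypothetical distribution echoing the field tests: mostly 0s and 1s
# on a 0-4 scale, so the mean sits below 1.
population = [0] * 55 + [1] * 35 + [2] * 7 + [3] * 2 + [4] * 1
human_scores = [random.choice(population) for _ in range(10_000)]

# A "scorer" that never reads a single essay: flip a coin between 0 and 1.
coin_scores = [random.choice([0, 1]) for _ in human_scores]

exact = sum(h == c for h, c in zip(human_scores, coin_scores)) / len(human_scores)
adjacent = sum(abs(h - c) <= 1 for h, c in zip(human_scores, coin_scores)) / len(human_scores)

print(f"Exact agreement with human scorers: {exact:.0%}")
print(f"Agreement within one point: {adjacent:.0%}")
```

With scores this compressed, the coin flipper agrees with the humans almost half the time exactly, and well over 90 percent of the time within one point, which is why close machine-human agreement on these essays proves very little.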

As for the AIR study, it makes no particular claims as to the reliability of the computer scoring method, and omits the analysis necessary to assess this question.

As Perelman observes:

“Like previous studies, the report neglects to give the most crucial statistics: when there is a discrepancy between the machine and the human reader, when the essay is adjudicated, what percentage of instances is the machine right? What percentage of instances is the human right? What percentage of instances are both wrong? … If the human is correct, most of the time, the machine does not really increase accuracy as claimed.”

Moreover, the AIR executive summary admits that “optimal gaming strategies” raised the scores of otherwise low-scoring responses by a significant amount. The study then concludes that because one computer scoring program was not fooled by the most basic of gaming strategies, repeating parts of the essay over again, computers can be made immune to gaming. The Pearson study doesn’t mention gaming at all.

Indeed, research shows it is easy to game these systems by writing nonsensical long essays with abstruse vocabulary. See, for example, this gibberish-filled prose that received the highest score from the GRE computer scoring program. The essay was composed by the BABEL generator, an automatic writing machine that generates gobbledygook, invented by Les Perelman and colleagues. [A complete pair of BABEL-generated essays, along with their top GRE scores from ETS’s e-rater scoring program, is available here.]

In a Boston Globe opinion piece, Perelman describes how he tested another automated scoring system, IntelliMetric, which similarly was unable to distinguish coherent prose from nonsense and awarded high scores to essays containing the following phrases:

“According to professor of theory of knowledge Leon Trotsky, privacy is the most fundamental report of humankind. Radiation on advocates to an orator transmits gamma rays of parsimony to implode.’’

Unable to analyze meaning, narrative, or argument, computer scoring instead relies on length, grammar, and arcane vocabulary to assess prose. Perelman asked Pearson if he could test its computer scoring program, but was denied access. Perelman concluded:

“If PARCC does not insist that Pearson allow researchers access to its robo-grader and release all raw numerical data on the scoring, then Massachusetts should withdraw from the consortium. No pharmaceutical company is allowed to conduct medical tests in secret or deny legitimate investigators access. The FDA and independent investigators are always involved. Indeed, even toasters have more oversight than high stakes educational tests.”

A paper dated March 2013 from the Educational Testing Service (one of the SBAC sub-contractors) concluded:

“Current automated essay-scoring systems cannot directly assess some of the more cognitively demanding aspects of writing proficiency, such as audience awareness, argumentation, critical thinking, and creativity…A related weakness of automated scoring is that these systems could potentially be manipulated by test takers seeking an unfair advantage. Examinees may, for example, use complicated words, use formulaic but logically incoherent language, or artificially increase the length of the essay to try and improve their scores.”

The inability of machine scoring to distinguish between nonsense and coherence may lead to a debasement of instruction, with teachers and test prep companies engaged in training students on how to game the system by writing verbose and pretentious prose that will receive high scores from the machines. In sum, machine scoring will encourage students to become poor writers and communicators.
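To see concretely why scoring on surface features invites exactly this kind of gaming, consider a deliberately crude sketch of a scorer that rewards only length and arcane vocabulary. This is an illustrative toy, not the algorithm used by e-rater, IntelliMetric, or any real product; the word list, thresholds and weights are invented for the example.

```python
# Toy illustration only: a scorer that, like the systems Perelman describes,
# rewards surface features (length, long words) rather than meaning.
# No real scoring engine uses this exact formula.

COMMON_WORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "it", "on"}

def toy_score(essay: str) -> float:
    """Score an essay purely on length and 'arcane' vocabulary."""
    words = essay.split()
    if not words:
        return 0.0
    # Longer essays earn more points, capped at 4.
    length_points = min(len(words) / 50.0, 4.0)
    # Uncommon words of 9+ letters count as "arcane vocabulary," capped at 2 points.
    rare = [w for w in words if w.lower() not in COMMON_WORDS and len(w) >= 9]
    vocab_points = min(len(rare) / len(words) * 10.0, 2.0)
    return round(length_points + vocab_points, 2)

coherent = "The test was short and the students did well on it."
gibberish = ("Radiation on advocates to an orator transmits gamma rays "
             "of parsimony to implode, notwithstanding epistemological "
             "circumstances of unquantifiable magnanimity. " * 10)

# The repeated, polysyllabic gibberish outscores the coherent sentence,
# the same failure mode the BABEL generator exploits.
assert toy_score(gibberish) > toy_score(coherent)
```

Because the score depends only on measurable surface features, padding an essay with repeated, pretentious nonsense reliably inflates it, while a short, clear, accurate sentence earns almost nothing.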

Only five state officials responded to our letter after a full month.

Dr. Salam Noor, the Deputy Superintendent of Oregon; Deputy Commissioner Jeff Wulfson of Massachusetts; Henry King of the Nevada Department of Education; and Dr. Vaughn Rhudy from the Office of Assessment in West Virginia informed us that their states were not participating in automated scoring at this time. Wyoming Commissioner Jillian Balow also replied to our letter, saying that she shared our concerns about computer scoring, and that Wyoming was not using the SBAC exam as we had mistakenly believed.

In contrast, Education Commissioner Richard Crandall responded to local parent activist Cheri Kiesecker that Colorado would be using computer scoring for two-thirds of students’ PARCC writing responses:

“Automated scoring drives effective and efficient scoring of student assessments, resulting in faster results, more consistent scoring, and significant cost savings to PARCC states. This year in Colorado, roughly two-thirds of computer-based written responses will be scored using automated scoring, while one-third will be hand-scored. Approximately 10 percent of all written responses will receive a second hand scoring for quality control.”

He added that parents would never know if their child’s writing was scored by a machine or a human being, because different items on each individual test sheet are apparently randomly assigned to machines and humans.

On April 5, 2016, the same day we sent the letter, Rhode Island Education Commissioner Ken Wagner spoke publicly to the state’s Council on Elementary and Secondary Education about the automated scoring issue. He claimed that:

“the research indicates that the technology can score extended student responses with as much reliability – if not more reliability – than expert trained teacher scores …” (Here’s the video; watch from about 11 minutes on.)

He repeated this astonishing claim once again – that the machines outperform even the most highly trained and experienced teachers:

“The research has … not just looked at typical teacher scores but expert trained teacher scores and then compared the automated scoring results to the expert trained teacher scores and the results are either as good or if not…better….”

This appears, on the face of it, to be an absurd claim. How can a machine do better than an expert, trained teacher in scoring a piece of writing?

Wagner went on to insist that:

“SAT GRE GMA, those kinds of programs have been doing this stuff for a very long time.”

Yet as we have seen, the GRE scoring method is unable to distinguish nonsense from meaningful prose. And to its credit, the College Board uses trained human scorers exclusively on writing samples for the SAT and AP exams.

The following 18 states and districts have failed to respond to our letter or those of other parents as to whether they are using computers to score writing samples on their PARCC and SBAC exams: CA, CT, DE, DC, HI, ID, IL, LA, MD, MI, MT, NH, NJ, NM, ND, SD, VT, and WA.

The issue of computer scoring — and the reluctance of the states and companies involved in the PARCC and SBAC consortia to be open with parents about this—is further evidence that the ostensible goal of the Common Core standards to encourage the development of critical thinking and advanced skills is a mirage. Instead, the primary objective of Bill Gates and many of those promoting the Common Core and allied exams is to standardize both instruction and assessment and to outsource them to reductionist algorithms and machines, in the effort to make them more “efficient.”

Essentially, the point of this grandiose project imposed upon our nation’s schools is to eliminate the human element in education as much as possible.

As a recent piece by Pearson on Artificial Intelligence (or AI) argues:

“True progress will require the development of an AIEd infrastructure. This will not, however, be a single monolithic AIEd system. Instead, it will resemble the marketplace that has developed for smartphone apps: hundreds and then thousands of individual AIEd components, developed in collaboration with educators, conformed to uniform international data standards, and shared with researchers and developers worldwide. These standards will enable system-level data collation and analysis that help us learn much more about learning itself and how to improve it.

If we are ultimately successful, AIEd will also contribute a proportionate response to the most significant social challenge that AI has already brought – the steady replacement of jobs and occupations with clever algorithms and robots. It is our view that this phenomena provides a new innovation imperative in education, which can be expressed simply: as humans live and work alongside increasingly smart machines, our education systems will need to achieve at levels that none have managed to date.”

Here, Pearson appears to be suggesting that a robust marketplace in data-mining computer apps powered by artificial intelligence will lead to a proliferation of jobs for ed tech entrepreneurs and computer coders, to make up for the proportional loss of jobs for teachers. This provides further evidence that the ultimate goal of Pearson, as well as of its allies in the foundation and corporate worlds, is to maximize the mechanization of education and minimize personal interaction between teachers and students, and among students themselves, in classrooms throughout the United States and abroad.

More information about the lack of evidence for machine scoring is in this issue brief here. If you are a parent from one of these states: please send in your questions, especially bullet points #1 to #3 above. The email addresses of your State Commissioners are posted here. And please let us know if you get a response by emailing us at [email protected].

The Common Core SBAC Test is a poor measure for kids and teachers alike

Comments Off on The Common Core SBAC Test is a poor measure for kids and teachers alike

Mia Dimbo is a Connecticut educator and public school advocate.  As a teacher in the Bridgeport, Connecticut Public School System, Ms. Dimbo works in an environment in which many of her students face the significant challenges associated with poverty, a lack of proficiency in the English Language and unmet special education needs.

In this powerful commentary piece she explains why the simplistic “test and punish” strategies espoused by the corporate education reformers are failing to have a positive impact on students, parents, teachers and public schools in Connecticut and across the nation.

The 2016 Session of the Connecticut General Assembly is coming to a close today, but Connecticut’s state legislators still have time to approve legislation reducing Connecticut’s overemphasis on standardized testing, legislation that would require Dannel Malloy and his administration to set aside their disastrous teacher evaluation program and develop one that is not dependent on the unfair, inappropriate and discriminatory Common Core SBAC testing scheme.

If any elected official is uncertain how to proceed on this important issue they should read Mia Dimbo’s, Test a poor measure for kids and teachers alike.  The courageous teacher writes;

When I sit at dinner with my family, I often think about my students. I have been a teacher in Bridgeport for many years and have seen the disparities between my own son, who lives in a suburban home and attends a suburban school, and the challenges my students face in a high-poverty, urban community.

I know my students have the potential to succeed. I also know that my students go home praying that no bullets will pass through their windows, and hoping they will have food to eat. I understand that it is often a world of “haves and have nots.” So I work hard to provide the education and knowledge they will need to grow and achieve. My students deserve an academic experience that lifts them up and helps them overcome the obstacles they face.

Respecting the potential and humanity of each student should be at the heart of our public school education system. Far too often, however, students in high-poverty schools must confront not only the challenges in their community, but also the burden of an impersonal, standardized testing scheme that too often results in the wrong priorities and fails to identify and address their needs.

My students deserve assessments that are free from bias and are designed to benefit them — not testing corporations. That’s why the idea of linking the state mastery exam, the Smarter Balanced or SBAC test, to teacher evaluation is wrong for both teachers and students. The State Department of Education admits that SBAC “is not meant as a diagnostic measure to directly inform a teacher’s classroom instruction on a daily or weekly basis.” It in no way helps inform the instruction of my students.

A mastery exam is supposed to measure knowledge in a uniform and fair manner, and not discriminate against students on the basis of income or whether they have desktops, laptops and computer tablets at home. It is especially punishing and developmentally inappropriate for special-education students, English language learners, students below grade level, and younger students, as they must stare into a computer screen for many hours and become discouraged and frustrated with a test that does not accommodate their needs. For some, it is a crushing experience.

This is an important civil rights issue. I recently joined several of my urban teacher colleagues, who are members of the Connecticut Education Association’s Ethnic Minority Affairs Commission, and met with representatives of the Connecticut African-American Affairs Commission and state lawmakers who are members of the Black and Puerto Rican Caucus.

We explained the harmful effects of SBAC on all students, but especially on students in low-income districts like ours. We discussed the research that shows how the awkward, computerized format of the SBAC test creates a significant technology gap for students in high-poverty schools.

We talked about the unintended consequence of linking this unfair and biased test to a teacher’s evaluation, especially for urban teachers. There are much better, more accurate tools to measure the effectiveness of teachers. Urban districts like mine are often training grounds for talented, beginning educators who leave urban schools for jobs in the suburbs, where resources and learning conditions are more conducive to school success.

My colleagues and I told the legislators that the state requirement linking the invalid SBAC test and teacher evaluations is a disincentive to committed educators who want to stay in city schools. We urged them to focus their energies on enabling our cities to retain these educators, and develop innovations for cities seeking to attract and retain high-quality teachers, especially minority teachers.

Teachers know what matters most: providing engaging instruction and promoting skills that lead to lifelong learning, such as collaboration, communication, critical thinking and creativity. These skills are measured poorly, or not at all, by standardized tests. Connecticut should join the majority of states that have already rejected the SBAC test, and refuse to undermine the integrity of teacher evaluations. Senate Bill 380, currently before the state legislature, would do just that.

Eliminating SBAC from teacher evaluation will increase reliability and validity. Evaluations currently include the review of multiple measures of student performance, growth and development, including tests that are designed specifically to measure the progress of classroom learning. I assess my students using classroom-based projects, assignments and tests that give me immediate feedback so that I can target my instruction to help them achieve at the highest levels. I want to be evaluated based on the growth of my students during the course of the school year, in the subjects and skills that I teach.

As a teacher, I have chosen to dedicate my life to helping my students achieve within and outside of the classroom. There is nothing more important than the education of our children, and we owe it to our students to assess that education in a manner that is honest, valid and fair.

It’s what we should all want. Legislators must reach this same conclusion for the sake of our children and our future.

This commentary piece first appeared in the CT Post on May 2, 2016.  You can read and comment on Mia Dimbo’s article at: http://m.ctpost.com/opinion/article/Test-a-poor-measure-for-kids-and-teachers-alike-7388492.php

Connecticut Legislators – Now is the time to act on the inappropriate SBAC testing program!

4 Comments

Anne Manusky is a Connecticut parent, education advocate and trained academic researcher.  In this commentary piece she lays out why the Common Core Smarter Balanced Assessment (SBAC) testing system fails to provide accurate and useable information about student performance, why it should not be used as part of an effective teacher evaluation system and why Connecticut’s elected officials should defund the SBAC testing madness and use those funds to help address Connecticut’s budget crisis.

Anne Manusky writes;

As a parent and former psychological research assistant, I have had great concerns with education reform: Common Core implementation and its reportedly ‘innovative’ tests – in Connecticut’s case, the Smarter Balanced Assessments.

The concerns have become real, and as our elected officials review and make legislative decisions, a critical element must be examined: the credibility of the state test (statutorily, the “state Mastery test”), as well as the questionable Smarter Balanced Assessment Consortium interstate compact.

Currently there are two CT General Assembly bills which consider the Smarter Balanced Assessments the state “Mastery Test” (requirement of state statute):

SB 380, ‘An Act Concerning the Exclusion of Student Performance Results on the Mastery Examination on Teacher Evaluations’ – https://www.cga.ct.gov/asp/CGABillStatus/cgabillstatus.asp?selBillType=Bill&bill_num=SB380

And,

HB 5555, ‘An Act Concerning the Minimum Budget Requirement and Prohibiting the Inclusion of Participation Rates for the State Wide Mastery Examination in the Calculation of a School District’s Accountability Index Score’ – https://www.cga.ct.gov/asp/CGABillStatus/cgabillstatus.asp?selBillType=Bill&bill_num=HB5555

The Smarter Balanced Assessment (SBA) has no psychometric analyses providing the validity and reliability of the assessment; no independent verification of this assessment exists.

These analyses are necessary to determine credibility of this test.

An FOIA request was recently submitted to the CT State Department of Education for

1) Any and all materials providing validity and reliability of the Smarter Balanced Assessments; and 2) the “deep psychometric study” the State claims to have completed.

Commissioner Wentzell made a point of stating that Smarter Balanced testing was being reduced as a result of a so-called “deep psychometric study.” (See “Top state officials lop almost two hours off the ‘Smarter Balanced’ test,” Hartford Courant, http://www.courant.com/education/hc-state-officials-cutback-smarter-balanced-test-20160225-story.html.)

The materials the State Department of Education provided in response did not include validation evidence, nor any further psychometric analysis demonstrating that a “deep psychometric study” had actually been conducted.

On the other hand, 100 education researchers from California have concluded: “The assessments have been carefully examined by independent examiners of the test content who concluded that they lack validity, reliability, and fairness, and should not be administered, much less be considered a basis for high-stakes decision making.” (Common Core State Standards Assessments in California: Concerns and Recommendations, CARE-ED, February 2016; see http://media.wix.com/ugd/1e0c79_2718a7f68da642a09e9244d50c727e40.pdf)

Testing that has no credible basis should not be used to assess children in California, or anywhere else for that matter. It is an even greater problem that a test never shown to be suitable for assessing children is being used to evaluate classroom teachers.

Unfortunately, Commissioner Wentzell even acknowledged as much in her testimony on HB 5555 before the Connecticut General Assembly:

“…Without reliable measurements … it would be difficult to measure improvement and growth among our students from one year to the next.”

Why wouldn’t the State Department of Education’s highest officer and/or staff review the validity, reliability and construct validity of what is being considered the “state Mastery test”?

How much time have CT’s children wasted on taking this “test”?

At this time, the State of Connecticut is party to a three-year SBAC compact. A Missouri court found this compact to be an unlawful interstate compact that was never approved by Congress. https://www.washingtonpost.com/news/answer-sheet/wp/2015/02/26/judge-rules-missouris-membership-in-common-core-testing-group-is-illegal/

The fiscal stakes are also significant: the Smarter Balanced Consortium membership fee is $8,080,331 over three years, and the fees paid to the American Institutes for Research (AIR, which administers the Smarter Balanced test) total $13,555,173 over three years.

These are serious issues which need to be addressed by our Connecticut Legislature.

Defund the Smarter Balanced Assessment Consortium compact and cut the unproven, invalid Smarter Balanced testing, saving Connecticut taxpayers over $7 million in a single year.

In addition, an investigation should be conducted into the decisions by the State Department of Education and the State Board of Education to use this “test” without any review of its basic statistical soundness.

Anne Manusky is absolutely correct.  As Connecticut’s elected officials grapple with Connecticut’s ongoing budget crisis, they can make a significant and positive difference for Connecticut’s students, parents, teachers, public schools and taxpayers by passing Senate Bill 380 and requiring that the Malloy administration stop using the unfair, inappropriate and discriminatory SBAC test as part of the state’s mandated teacher evaluation program.

In addition, as Anne Manusky points out, the Connecticut legislature should stop funding the failed SBAC testing program and use those funds to preserve some of the vital services that Connecticut’s most vulnerable citizens need and deserve.

If Connecticut’s state senators and state representatives are committed to doing the right thing for Connecticut, they should start by reading this commentary piece and acting on its recommendations.

With money from Walmart’s Walton Foundation – They call themselves Democrats for Education Reform

1 Comment

Today’s CT Mirror includes a deceitful and extraordinarily misleading commentary piece entitled, “This legislative session, let Connecticut children win for a change.”

Shavar Jeffries, the mouthpiece for a corporate funded, New York based, charter school advocacy group that calls itself “Democrats for Education Reform (DFER)” uses the space to urge Connecticut legislators to DEFEAT a bill that, if passed, would require Governor Dannel Malloy and his administration to develop an honest and effective teacher evaluation system rather than continue with Malloy’s present program that is dependent on the results of the unfair, inappropriate and discriminatory Common Core Smarter Balanced Assessment Consortium (SBAC) testing scheme.

Jeffries, who is the founding Board President of Newark’s Team Academy Charter Schools, a board member of the charter school front called Students for Education Reform (SFER) and a Director for Eva Moskowitz’s infamous Success Academy charter school chain, instructs Connecticut’s elected officials to “stay the course” with Dannel Malloy’s failed anti-student, anti-parent, anti-teacher and anti-public school agenda.

In the face of overwhelming evidence that reveals that the SBAC testing scam is not an appropriate measure of student academic achievement or an effective tool for evaluating teachers, the highly paid spokesman for the charter school industry opines,

“Will Connecticut beat back the progress it made in adopting a modern educator evaluation system in 2012? That system recognizes great teachers for a job well done, while providing support to struggling teachers. Or will lawmakers cave to a power structure that wants to keep things the same?”

The charter school fan’s incredible statement speaks volumes. 

The truth is that it is Malloy’s shameful corporate education reform initiative of 2012, and his utter failure to properly fund public education that is taking Connecticut in the wrong direction.

Malloy, who has proposed record-breaking cuts to Connecticut’s public schools while diverting more and more scarce taxpayer funds to privately owned and operated charter schools, has become a poster boy for the insidious and devastating impact that the education reform and privatization effort is having on public education in Connecticut.

The negative consequences of Malloy’s actions are particularly evident when it comes to the absurd teacher evaluation system that he has championed.  To better understand the problems with Malloy’s teacher evaluation program start with the following Wait, What? posts;

Wendy Lecker explains – Again – Why the Malloy-Wyman teacher evaluation system is a terrible farce

Speaking out for decoupling Common Core testing from the teacher evaluation process

Why Common Core SBAC results SHOULD NOT be part of the teacher evaluation process

New York Superintendents call for an end to evaluating teachers on standardized test results

However, when it comes to DFER and its allies, the truth has no value.

In fact, it is the truth that serves as the most serious impediment to their goals.

DFER’s plan to “transform” public education by handing it over to Wall Street investors, elite hedge fund owners, and private companies that seek to make money off the backs of our children, teachers and public schools requires a political and public policy environment in which the truth is not allowed to get in the way.

Speaking of that dystopian approach to governance, George Orwell summed it up sixty-seven years ago writing in his once fiction – now non-fiction – epic titled 1984;

WAR IS PEACE
FREEDOM IS SLAVERY
IGNORANCE IS STRENGTH

Of course, when it comes to the real actors behind the effort to undermine public education, Shavar Jeffries is but a two-bit player.  His commentary piece in today’s CTMirror is a reminder that he is just someone who will carry the water for those that would prefer to remain hidden in the dark.

It is in the dark, amid its associated “dark money,” that DFER flourishes.

Much has been written here at Wait, What? and elsewhere about DFER and those behind the charter industry.

An early description of the group appeared in December 2010, when the UFT’s Michael Hirsch wrote;

Among the group’s eight-person board is hedge-fund manager John Petry of Gotham Capital, who with Eva Moskowitz co-founded the Harlem Success Academy Charter School. The board also includes Tony Davis of Anchorage Capital, the board chair of Brooklyn’s Achievement First East New York school; Charles Ledley of Highfields Capital Management; and Whitney Tilson, chief of T2 Partners and Tilson Funds and vice chairman of New York’s KIPP Academy Charter Schools.

[…]

Of DFER’s seven-person advisory board, five manage hedge funds: David Einhorn of Greenlight Capital, LLC; Joel Greenblatt, founder and managing partner of Gotham Capital and past protégé of fallen junk-bond icon Michael Milliken; Vincent Mai, who chairs AEA Investors, LP; Michael Novogratz, president of Fortress Investment Group; and Rafael Mayer, the Khronos LLC managing partner and KIPP AMP charter school director.

Orbiting the group is billionaire “venture philanthropist” and charter school funder Eli Broad, whose foundation gave upwards of $500,000 to plug advocacy related to the documentary “Waiting for Superman,” and another charter-touting film, “The Lottery.” Though not himself a DFER board member, Broad is a major funder of Education Reform Now, DFER’s nonprofit sister organization, also headed by Joe Williams.

Meanwhile, Andrew Rotherman, recently retired DFER director and EduWonk blogger, is co-founder of and a partner in for-profit Bellwether Education, described as “offering specialized professional services and thoughtful leadership to the entrepreneurial education reform field.” Rotherman sits on the Broad Prize Review Board, while DFER board member Sara Mead is a senior associate partner at his Bellwether Education and sits on the Washington, D.C., Public Charter School Board.

DFER is actually part of a much larger multi-headed beast that also includes Education Reform Now and Education Reform Now Advocacy, two tax-exempt entities that allow the billionaires and corporate elite behind the charter school industry to funnel hundreds of millions of dollars into political, lobbying and advocacy efforts.  (For an example of their approach see Wait What? post, Figures that the super-rich would turn privatization of public schools into a game)

As noted previously, DFER is also a key player behind SFER – Students for Education Reform.  The SFER story explains a lot about just how far the corporate education reformers are willing to go to corrupt the system.

For more on SFER read;

SFER – The $7 million+ “student run” Corporate Education Reform Industry Front Group

MORE ON SFER – Corporate Money in the 2015 Denver Board of Education Election

Perhaps most telling of all is that when it comes to Malloy’s disastrous SBAC tests and his dangerously warped teacher evaluation program, the only entities supporting them are the groups and individuals funded, directed, or at the beck and call of these hedge fund managers and the corporate elite.

NOTE:  Who else has taken Walton money?

Governor Dannel Malloy and Governor Andrew Cuomo.

Are school administrators bullying and abusing your child over the SBAC testing frenzy?

1 Comment

There is an extremely serious problem taking place in some school districts across Connecticut and parents, teachers, child advocates and elected officials must act immediately to protect our children from the corporate education reform industry and their lackeys.

With the state-sponsored Common Core SBAC testing scheme now in full swing throughout the state, parents and guardians in numerous school districts are reporting that Connecticut public school children continue to be abused by local school administrators, who are following orders from Governor Dannel Malloy, Lt. Governor Nancy Wyman, Education Commissioner Wentzell and the State Department of Education.

In addition to lying and misleading parents about their fundamental and inalienable right to opt their children out of the unfair, inappropriate and discriminatory Common Core SBAC tests, a disturbing number of school districts are unethically and immorally punishing students who have been opted out of the tests, while some districts are ordering students to “log-in” to the SBAC tests before the schools will honor the parents’ directive that their child is not to participate in the tests.

As previously reported, some school districts are bullying children who have been opted out by forcing them to stay in the testing rooms despite the fact that the SBAC testing regulations clearly and strictly prohibit students who are not taking the test from remaining in the testing locations.

The practice of forcing students to stay in the testing room, despite having been opted out of the SBAC program by their parents, is an ugly strategy to embarrass, humiliate and ostracize children who are inappropriately being required to sit in the testing room for hours while their peers are taking the defective and high-stakes SBAC tests that are designed to unfairly fail a significant number of the state’s children.

School administrators who refuse to place children in an alternative safe and appropriate environment (such as the school library) during the testing period are not only engaging in an unethical form of bullying, but they are violating the State of Connecticut’s Smarter Balanced Assessment Consortium (SBAC) regulations.

The SBAC protocol reads;

 “Students who are not being tested or unauthorized staff or other adults must not be in the room where a test is being administered.”

See:  Smarter Balanced: Summative Assessment Test Administration Manual English Language Arts/Literacy and Mathematics 2015–2016 Published January 3, 2016 (page 2-4) which notes;

Violation of test security is a serious matter with far-reaching consequences… A breach of test security may be dealt with as a violation of the Code of Professional Responsibility for Teachers, as well as a violation of other pertinent state and federal law and regulation.

Under the law, regulations and SBAC protocol, the Connecticut State Department of Education is required to investigate all such matters and pursue appropriate follow-up action. Any person found to have intentionally breached the security of the test system may be subject to sanctions including, but not limited to, disciplinary action by a local board of education, the revocation of Connecticut teaching certification by the State Board of Education, and civil liability pursuant to federal copyright law.

In addition, superintendents, principals or other school administrators who require or permit students who have been opted out of the SBAC tests to remain in the testing room not only risk losing their certification to work in Connecticut pursuant to Connecticut State Statute 10-145b(i)(1), but also violate their duties under Connecticut state regulations that require school administrators to adhere to a Professional Code of Conduct.

As if forcing students who have been opted out to remain in the testing rooms during the testing period wasn’t serious enough, some school districts are actually using an additional technique to make it appear that they are achieving extremely high participation rates.  These school districts have taken the despicable step of telling children who have been opted out of the SBAC testing by their parents that they must first sign-in (log-in) to the SBAC test program before the school will honor their parent’s directive that they are not to take the SBAC test.

This strategy is a direct result of the Malloy administration’s threat that any school district in which more than 5 percent of parents opt out will lose money – in this case, federal funds that are supposed to be used to provide extra support to the poorest children in the local school system.

IMPORTANT:

If your child or a child you know is being forced to remain in the testing room during the SBAC testing, please provide that information as soon as possible so that appropriate action can be taken to stop the abuse and to hold local school officials accountable for their reprehensible actions.

Information about any abusive practices related to the SBAC testing should be sent to Wait, What? via [email protected] or mailed to Wait, What? at PO Box 400, Storrs, CT 06268.

Bullying and abuse have no place in Connecticut’s public schools.

Please help ensure that those engaged in these abusive tactics are held accountable.

No evidence standardized testing can close ‘achievement gap’


In a commentary piece entitled, No evidence standardized testing can close ‘achievement gap’, and first published in the CT Mirror, Connecticut educator and public education advocate James Mulholland took on the absurd rhetoric that is being spewed by the corporate funded education reform industry.

Collecting their six-figure incomes, these lobbyists for the Common Core, the Common Core testing scam and the effort to privatize public education in the United States claim that more standardized testing is the key to improving educational achievement.

Rather than focus on poverty, language barriers, unmet special education needs and inadequate funding of public schools, the charter school proponents and Malloy apologists want students, parents, teachers and the public to believe that a preoccupation with standardized testing, a narrow focus on math and English, “zero-tolerance” disciplinary policies for students and the undermining of the teaching profession will force students to “succeed” while solving society’s problems.

Rather than rely on evidence, or even the truth, these mouthpieces for the ongoing corporatization of public education are convinced that if they simply repeat an untruth long enough, it will become the truth.

In his recent article, James Mulholland takes them on – writing;

In a recent commentary piece, Jeffrey Villar, Executive Director of the Connecticut Council for Education Reform, praises the Connecticut State Board of Education’s support for using student SBAC results in teacher evaluations. He claims, “The absence of such objective data has left our evaluation system light on accountability.” He further contends, “Connecticut continues to have one of the worst achievement gaps in the nation, the SBE appears committed to continuing to take this issue on.”

Contrary to Mr. Villar’s assertion, there is little, if any, evidence to support the idea that including standardized test scores in teacher evaluations will close the so-called achievement gap.

In some ways, it is a solution looking for a problem. Mr. Villar writes, “recently released evaluation results rated almost all Connecticut teachers as either proficient or exemplary. That outcome doesn’t make much sense.”

Other education reform groups express similar disbelief that there are so many good teachers in the state. In her public testimony on Connecticut’s 2012 education reform bill, Jennifer Alexander of ConnCAN testified that too few teachers were being dismissed for poor performance: “When you look at the distribution of ratings in those systems, you again see only about two percent of teachers, maybe five max, falling at that bottom rating category.” (Transcript of legislative testimony, March 21, 2012, p. 178.)

Education reform groups seem dismayed that they have been unable to uncover an adequate number of teachers who are bad at their jobs and continue to search for a method that exposes the boogeyman of bad teachers. But that’s exactly what it is: a boogeyman that simply doesn’t exist.

Regardless of the methodology that’s used, the number of incompetent teachers never satisfies education reform groups. They see this as a flaw in the evaluation system rather than a confirmation of the competency of Connecticut’s teachers.

However, Connecticut isn’t alone. After both Tennessee and Michigan overhauled their teacher evaluation systems, 98 percent of teachers were found to be effective or better; in Florida it was 97 percent. The changes yielded only nominal differences from previous years.

Mr. Villar believes that including SBAC scores in teacher evaluations will decrease the achievement gap. Yet there is no evidence to support the belief that doing so will lessen the differences in learning outcomes between the state’s wealthier and less-advantaged students.

In 2012, the federal Department of Education, led by Secretary Arne Duncan, granted Connecticut a waiver from the draconian requirements of No Child Left Behind. To qualify for the waiver, Connecticut had to include the results of standardized tests in teacher evaluations.

However, the policies of the secretary, which he carried with him from his tenure as Superintendent of Schools in Chicago to Washington D.C., never achieved the academic gains that were claimed. A 2010 analysis of Chicago schools by the University of Chicago concluded that after 20 years of reform efforts, which included Mr. Duncan’s tenure, the gap between poor and rich areas had widened.

The New York Times reported in 2011 that, “One of the most striking findings is that elementary school scores in general remained mostly stagnant, contrary to visible improvement on state exams reported by the Illinois State Board of Education.”

Most striking is a letter to President Obama signed by 500 education researchers in 2015, urging Congress and the President to stop test-based reforms. In it, the researchers argue that this approach hasn’t worked. “We strongly urge departing from test-focused reforms that not only have been discredited for high-stakes decisions, but also have shown to widen, not close, gaps and inequities.”

Using standardized test scores to measure teacher effectiveness reminds me of the time I saw a friend at the bookstore. “What are you getting?” I asked. “About 14 pounds worth,” he joked. Judging books by their weight is a measurement, but it doesn’t measure what is valuable in a book. Standardized tests measure something, but it’s not the effectiveness of a teacher.

To read and comment on James Mulholland’s commentary piece go to:  http://ctviewpoints.org/2016/04/20/opinion-james-mulholland/
