Time to protect your children by opting them out of the unfair, inappropriate and discriminatory SBAC testing scheme

We are once again coming up on the time of year that Connecticut public school students will be told to stop learning and start testing.

Students in grades 3-8 and high school juniors will have their time and attention diverted from instructional activities in order to prepare for and take the Common Core Smarter Balanced Assessment Consortium (SBAC) test and the SAT.

These tests are useless and unscientific. They fail to provide teachers and parents with any usable information about how to improve teaching or students’ academic performance in relation to what is actually being taught in Connecticut’s classrooms.

Equally disturbing, these unfair and discriminatory tests are being used to categorize, rank and punish students, teachers and public schools.

As Wendy Lecker explained in her recent piece, Failed common core SBAC/SAT tests punish students,

Neither the SBAC nor the SAT is valid to measure student “growth.”

Administrators overwhelmingly agree that the SBAC and SAT are not user-friendly for students with disabilities or English Language Learners.

They are a worthless measure of how students are doing with what is actually taught in Connecticut classrooms.

And most troubling of all, the Common Core Smarter Balanced Assessment Consortium (SBAC) test is literally designed to fail many of Connecticut’s children.

As academic studies have clearly proven, although standardized tests are fraught with discriminatory elements, the Connecticut Mastery Test (CMT) was at least intended – more or less – to measure how Connecticut’s children were doing on the curriculum that was being taught in Connecticut’s schools.

On the other hand, the SBAC test is aligned to the Common Core, a set of developmentally inappropriate standards created by the corporate education reform industry and forced upon the states by those who seek to privatize our schools and turn our classrooms into little more than testing factories and profit centers for the massive testing industry.

Costing taxpayers tens of millions of dollars, the SBAC test is worse than a colossal waste of time and money because it is being used in an underhanded attempt to tell students, especially those who utilize special education services, those who need help learning the English language, and those who come from poor households, that they are failures.

Connecticut’s children deserve much better…

And Connecticut’s parents can have a profound impact on this situation by telling their child’s teacher and principal that their son or daughter will not be participating in this year’s SBAC testing farce nor will they be allowed to waste their time in the SBAC preparation lessons.

Now is the time to do what is right for Connecticut’s children….Opt them out of the Common Core testing scam.

A simple letter to your child’s teacher and principal refusing to allow your child to participate in the SBAC tests is the best way to stand up for Connecticut’s public school students.

Failed common core SBAC/SAT tests punish students by Wendy Lecker

In a weekend commentary piece in the Stamford Advocate entitled, Failed common core tests punish students, education advocate Wendy Lecker writes,

Across the country, states are re-examining their embrace of the hastily implemented common core tests. Membership in the Smarter Balanced Assessment Consortium (SBAC) has dwindled from 31 to 14 states. West Virginia is the latest state to consider dropping the test for all grades.

Last year, Connecticut convened a committee to review Connecticut’s standardized tests, the SBAC and SAT. However, the committee’s final report ignored serious validity problems and concluded Connecticut should plow ahead with these expensive and questionable standardized tests.

Connecticut’s teachers’ unions, CEA and AFT, dissented from this report, because these committee members did their homework. Their enlightening minority report is based on an examination of the evidence on the SBAC, as well as surveys of teachers, administrators, parents and students conducted across Connecticut.

The minority report highlights the evidence ignored by the Mastery Committee. It notes that experts across the country admit that computer adaptive tests such as the SBAC are “in their infancy” and their validity cannot yet be established. Compounding the validity problems is the inconsistency in computer skills among different populations in Connecticut, with poor kids at a particular disadvantage; and the inconsistency in devices used. Shockingly, the minority report emphasizes Connecticut has not proven alignment between the SBAC and our state standards. There is also no evidence that the SBAC is valid to measure student “growth.”

Administrators overwhelmingly agree that the SBAC is not user-friendly for students with disabilities or English Language Learners.

The SBAC is a bust. But, though recent federal law allows Connecticut to explore other types of assessments, Connecticut remains wedded to the SBAC.

The Mastery Committee report itself reveals the problems with the SAT. The technical report on which the committee relied to “prove” validity for use in Connecticut does not mention Connecticut once. It is worthless for determining the validity of the SAT as Connecticut’s high school accountability test. Moreover, the report the committee cited to show alignment between the SAT and Connecticut high school standards revealed only a 71-percent match to Connecticut English standards, with entire categories having no strong alignment or none whatsoever. Math had an abysmal 43 percent strong alignment between the SAT and Connecticut Standards. We know what would be in 100-percent alignment: a teacher’s end-of-year test and what students learned in that class. And since a high school GPA is a much stronger predictor of college success than the SAT, Connecticut would do well to explore high school tests that match what students actually learn.

But instead the Mastery Committee recommends blind adherence to the SAT.

Continuing these invalid tests comes at a steep price. As the minority report noted, 90 percent of teachers stated that testing and test prep has resulted in lost learning time and restricted access to computer labs. The impact is particularly devastating in our poorest districts. A majority of districts reported technical problems during testing, again with poorest districts suffering the most.

Contrary to Connecticut’s goals, these tests drive instruction, especially in poor schools. Disadvantaged districts are most vulnerable to sanctions such as school or district takeover based on poor test results. Thus, they have resorted to interim computerized tests for test prep. Children in Bridgeport and other districts suffer through multiple administrations of i-Ready tests and/or MAP tests, and prep for these tests. They lose additional weeks of learning time. Some of these districts have direct pressure from the state to use these tests, as their Alliance District funding depends on student improvement on these measures.

Yet, according to researchers from Johns Hopkins, there is a complete “lack of a research base on i-Ready and MAP as means for improving student learning” which they find “both surprising and disappointing given their widespread use as well as their cost.”

These same districts are deprived of proven interventions that actually help students learn. For example, the judge in the CCJEF school funding case found a lack of reading and math intervention staff throughout the CCJEF districts, as well as shortages of space, time and supplies for reading and math intervention. While districts cannot afford to provide real help for kids, they are forced to spend money and time on invalid measures of student performance.

It has been three years since Connecticut implemented the SBAC and there is still no evidence that it is valid. And Connecticut implemented the SAT knowing it was invalid for use as an accountability test. As long as our leaders keep failing to learn this expensive lesson, our neediest children will continue to pay the price.

This commentary piece was first published in the Stamford Advocate. You can read and comment on it at http://www.stamfordadvocate.com/news/article/Wendy-Lecker-Failed-common-core-tests-punish-10906971.php

Mastery exam task force report due soon — its findings ‘predetermined’ by John Bestor

John Bestor, an educator and education advocate, recently wrote a piece for the CT Mirror educating readers about the farce called the Mastery Exam Task Force. The following is his commentary piece, which you can read and comment on at: http://ctviewpoints.org/2017/01/04/mastery-exam-task-force-report-due-soon-its-findings-predetermined/

Mastery exam task force report due soon — its findings ‘predetermined’

In a few days the Mastery Examination Task Force will be submitting its Final Report and Recommendations to the Connecticut Legislature’s Education Committee, which had asked for a study of student assessment practices in our public schools. Having monitored the progress of this task force during its one-and-a-half years of meetings, I contend that its findings were predetermined at, or even before, the time the task force began its deliberations.

My reasons for this presumption lie in 1) the composition of committee membership serving on the task force, 2) the choice of topics discussed, and 3) the available evidence that was purposefully withheld from task force consideration.

Just why is the final report and recommendations of the mastery examination task force so important?

For starters, it is a perfect example of much of what is wrong in government today and certainly in how education policy is developed in our state. As the 2015 legislative session drew to a close, the Education Committee of the legislature passed Public Act 15-238, An Act Concerning Student Assessments. It was stipulated that “the committee must study various aspects of Connecticut’s mastery test system” and report back to the Education Committee.

Our legislators responded to persistent parent and teacher concerns surrounding the efficacy of statewide student testing that had resulted in increased opting-out across our state as it had in many states across the country. As reported in the CT Mirror, the co-chairs of the Education Committee called for, in Sen. Gayle Slossberg’s words: “The group established by this legislation will look at a number of factors including age appropriateness of the exam and how much time it is taking students and report back to the legislature.”

Rep. Andy Fleischmann echoed her concerns, saying: “We are hearing that children are spending days and days with the SBAC exam. We have to find out if it’s a good exam in terms of its impact on students and teachers.”

Unfortunately, the very design of the Task Force itself doomed this committee from the start, since the Education Committee stipulated that specific education stakeholder groups would be represented on the task force. As a result, the same group of “education reform” advocates found themselves — once again — on an influential committee that was tasked with determining the future course of what student testing in our state would look like for years to come.

The Mastery Examination Task Force was comprised of four members from the State Department of Education including the Commissioner herself as chair, two representatives from the State Board of Education, two from the Connecticut Association of Boards of Education, two from the CT Association of Public School Superintendents, two from the CT Association of Schools (which oversees The Principal’s Center), three from the CT Parent Teacher Association, and two chosen at-large by the Commissioner: the Executive Director of the CT Council for Education Reform and a Southern Connecticut State University professor with a math/technology background. Another appointed education leader who joined the task force after it had started represented the State Board of Regents for Higher Education. And, four representatives from the two professional teacher organizations were included on the task force.

By my calculation, task force members predisposed to maintaining (with some minor tweaks) the current statewide student assessment protocol outnumbered the teacher representatives, 18 to four.

So, what are some of the initial concerns associated with statewide test practices that parents and teachers brought to their legislators?

Many had expressed concern:

  • That testing was taking valuable instruction time away from students.
  • That young children were sitting for days taking high pressure tests that were not developmentally appropriate or meaningful in student learning.
  • That students were feeling stressed out over multiple days of high-stakes testing.
  • That daily school routines were disrupted by test prep, schedule changes for all students to ensure computer availability, and additional days of make-up testing for those who were absent.

Are the tests even sound? Critics of these statewide assessment protocols have raised serious concerns about their validity, reliability, fairness and discriminatory effects. Without considering all sides of this controversial test debate, the Mastery Examination Task Force has failed its mission and simply continues to fuel the resolve of the many parents who refuse to allow their children to participate in these unproven statewide assessment practices.

Nowhere in the minutes of Task Force meetings is there evidence that the committee even acknowledged any findings that were contrary to its predetermined course. Although fully aware of Dr. Mary Byrne’s 2015 report to the Missouri legislature, entitled “Issues and Recommendations for Resolution of the General Assembly Regarding Validity and Reliability of the Smarter Balanced Assessments Scheduled for Missouri in Spring 2015”, SDE leadership of this Task Force failed to consider or review its revealing and compelling arguments.

Early in her report, Dr. Byrne writes: “No independent empirical evidence of technical adequacy is available to establish external validity and reliability of the SBAC computer-adaptive assessment system. Test scores derived from tests that have no demonstrated technical adequacy are useless data points, and plans to collect evidence of validity and reliability provide no assurance that technical adequacy will be established.” Appendix B of her report contains 18 pages that outline “Steps To Generate Evidence That Test Items Are Valid And Reliable.” Based on her research, Dr. Byrne had recommended that Missouri withdraw from the Smarter Balanced Assessment Consortium (which it subsequently did).

The Task Force also failed to review and discuss the research findings of over 100 California researchers who published “Research Brief #1” in February 2016, entitled “Common Core State Standards Assessments in California: Concerns and Recommendations.”

Writing as CARE-ED [California Alliance of Researchers for Equity in Education], these highly-regarded education research professionals confirmed what many others had been saying. “Testing experts have raised significant concerns about all (SBAC, PARCC, Pearson) assessments, including the lack of basic principles of sound science, such as construct validity, research-based cut scores, computer-adaptability, inter-rater reliability, and most basic of all, independent verification of validity. Here in California, the SBAC assessments have been carefully examined by independent examiners of the test content who concluded that they lack validity, reliability, and fairness, and should not be administered, much less be considered a basis for high-stakes decision making.”

The waiver granted to the state that allowed the administration of the Statewide SAT to 11th grade students had been applauded by the CSDE as a worthy adjustment. But, just as they had ignored the revelations of SBAC critics, the Task Force refused to acknowledge or consider Manuel Alfaro’s series of posts in May and June, 2016.

As a former executive director of assessment design and development at The College Board, Alfaro raised serious concerns about the rushed and faulty psychometric development of the redesigned SAT. In his May 17 post, he stated that, despite public claims of transparency, “public documents, such as the Test Specifications for the Redesigned SAT contain crucial statements and claims that are fabrications.”

In a later post (5.26.16), Alfaro reported that The College Board’s own Content Advisory Committee expressed concerns about “Item Quality,” “Development Schedule,” and “Development Process” which went unaddressed by College Board executives. Part Six (9.21.16) in a series of revealing reports by Renee Dudley in Reuters Investigates further reinforced many of Alfaro’s allegations and concerns about the redesigned SAT’s built-in unfairness to the neediest students taking the test.

To the best of my knowledge, this evidence — though readily available on the Internet — has not been considered or reviewed by this Task Force. For a group of educators and stakeholders demanding “rigorous” work and “higher critical thinking skills” from our students, many of the Task Force members have been remarkably complicit in accepting whatever agenda and information SDE officials chose to put forth.

These allegations deserve acknowledgement and review if there is to be any expectation that the final recommendations from this Task Force will be valued and accepted by the many parents who refuse to allow their children’s reading and math skills to be measured by these still unproven and controversial assessments. It is no surprise that only 14 of the 32 states that had originally committed to SBAC continue to administer this flawed test.

One issue always left out of the assessment conversation is the historic role of testing in sustaining a discriminatory message that further promulgates ideas of inferiority, eugenics, and racial bigotry. Former President Bush’s catchy rhetoric about the “soft bigotry of low expectations” belies the hard evidence of discriminatory assessment practices across our country.

Continued reliance on the current statewide testing protocols only serves as a “rank and sort” mechanism that devalues our diversity, undercuts our essential humanity, and plagues our state and nation. The use of flawed test results to force school closures and encourage the proliferation of privately-run, publicly-funded charter schools is an outright disgrace.

Under whatever name you call it: “turnaround schools,” “Commissioner’s Network,” or “schools of choice,” the bottom line is that citizens in the communities designated as under-performing based on these inadequately-developed and unproven tests have been stripped of their civil rights.

As hard-pressed, economically-challenged communities are starved of financial resources, the ability of citizens to control the education destiny of their local schools is being systematically taken away from their duly elected public servants. It is imperative that Connecticut legislators no longer tolerate inadequately studied and one-sided committee recommendations, like the forthcoming Mastery Examination Task Force report, and stop colluding in the failed “corporate education reform” takeover of public education in our state.

What is the purpose of the State-sponsored Smarter Balanced Assessment Consortium (SBAC) “Mastery” Test?

The Common Core SBAC testing scheme is the unfair, inappropriate and discriminatory annual testing system mandated by Governor Dannel Malloy and his administration.

Designed to fail a vast share of Connecticut’s students, the SBAC test is aligned to the Common Core, rather than what is actually taught in Connecticut’s classrooms.

If Governor Malloy and his allies in the corporate education reform industry get their way, the SBAC test will continue to be used to rate and rank order students, teachers and schools. For them, it is a mechanism to ensure students and teachers are deemed to be failures, thereby paving the way to turn even more Connecticut public schools over to privately owned, but publicly funded, charter school companies and others that seek to profit off the privatization of public education.

With the Connecticut legislature’s approval, the Malloy administration has been busy turning Connecticut’s public schools into little more than testing factories and profit centers for private entities, many of which have become some of Malloy’s biggest campaign donors.

One of the areas that remains unresolved is how the SBAC testing scam will be used in Connecticut’s teacher evaluation process.  Malloy and his ilk want to require that the results of the unfair tests be used as a key tool in determining how well teachers are doing in the classroom.

Teachers, their unions and public school advocates recognize that there are much better teacher evaluation models that could be used and don’t rely on the use of standardized tests to determine which teachers are succeeding, which teachers need additional training and which individuals should be removed from the classroom.

As the CT Mirror reported earlier this week in an article entitled, Grading teachers: Tempers flare over use of student test scores:

In 2010, state legislators created the PEAC (Performance Evaluation Advisory Council) to come up with guidelines for evaluating teachers. In January 2012, the panel agreed to have nearly one-quarter of a teacher’s rating linked to the state exam scores.

Consensus then vanished, however, after the governor proposed linking the new evaluations to teacher certification and pay, and union leaders grew wary that the tests were becoming too high stakes. Complicating the issue further was the rollout of a controversial new state exam that engendered even more skepticism among union officials and many teachers about using the tests for evaluations.

Despite the overwhelming evidence that the SBAC test is NOT an appropriate tool to evaluate teachers, the Malloy administration remains committed to implementing their policy of failure.

The controversy has meant that the Performance Evaluation Advisory Council (PEAC) has been unable to come to a consensus on how to proceed with the implementation of Malloy’s teacher evaluation plan.

As a way to move the debate forward, the Connecticut Education Association and the American Federation of Teachers tried, unsuccessfully, to use this week’s PEAC meeting to push the group to, at the very least, define the purpose of Connecticut’s so-called Mastery Testing system.

In a recent CEA blog post, the union explained that at the meeting CEA’s Executive Director told the group,

“The threshold question is, ‘What is the role of the mastery test?’ I hold that it’s to give a 50,000-foot view that can inform resource allocation, curriculum alignment, professional development, and instructional strategies at the district level, at the building level, or even the classroom level.”

Adding,

“That is where we gain knowledge about things like social justice, about fiscal or community needs…

The president of the New Haven Federation of Teachers concurred, saying that the tests

“were never designed to evaluate teachers,” adding, “If we return to that, we’re going to return to teachers teaching to the test, because their jobs depend on it.”

The CEA and AFT leadership are absolutely right on this one.

SBAC is an “inappropriate tool for evaluating teachers.”

As mentioned, there are plenty of teacher evaluation models that the state could and should be using.

Rather than maintaining their war on Connecticut’s children, teachers and schools, Connecticut’s elected and appointed officials should dump Malloy’s proposed teacher evaluation program and shift to one that is fair, efficient and effective.

With Election Day close at hand, candidates for the Connecticut State Senate and Connecticut House of Representatives should be making it clear that if elected on November 8th they’ll shift gears and actually do what is right for Connecticut’s students, teachers and public schools.

Opting out of testing in Connecticut — now a civic duty by Drew Michael McWeeney

Drew Michael McWeeney is an Early Childhood Education major and teacher candidate at Southern Connecticut State University.  His powerful commentary piece first appeared in the CTMirror.  You can read and comment on it at: http://ctviewpoints.org/2016/10/13/opinion-drew-michael-mcweeney/

Opting out of testing in Connecticut — now a civic duty

Since implementation of the new teacher evaluation system by Gov. Dannel Malloy and the legislature, I have believed opting out of standardized testing was a student right. I now see it as a civic responsibility.

Under the current system, 45 percent of a teacher’s evaluation is based on student test scores. According to a 2014 Brookings Institution study, however, teachers can elect not to be evaluated on the scores if a significant number of students do not show up to take their standardized tests. This is because having too few test takers can cause the test data to produce false results, incorrectly labeling a teacher’s classes as either high- or low-performing.
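[A rough, back-of-the-envelope sketch of why small numbers of test takers make class-level results unreliable; the figures below are hypothetical and are not drawn from the Brookings study:]

```latex
% Standard error of a class-average score shrinks with the number of test takers n:
\[
\mathrm{SE}(\bar{x}) \;=\; \frac{\sigma}{\sqrt{n}}
\]
% Hypothetical example: suppose individual scale scores vary with sigma = 15 points.
%   Full class, n = 25 test takers:    SE = 15 / 5 = 3 points
%   After opt-outs, n = 9 test takers: SE = 15 / 3 = 5 points
% With only nine test takers, the class average can land 10 points (two standard
% errors) above or below the class's true level by chance alone, which is enough to
% flip a class from a "high-performing" label to a "low-performing" one, or back.
```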

What Malloy and the legislature did was a direct attack on public education under the guise of raising standards. Because of this, here is the narrative the system creates: Since students are failing tests, teachers must be poor performers. Therefore since public school teachers are poor performers, let us close down public education and privatize public schools.

Having observed countless Connecticut classrooms, I can tell you that basing almost half of a teacher’s rating on student test scores is too much in the first place. Then, when Gov. Malloy makes it impossible for us teacher candidates and teachers to present other evidence to establish our effectiveness — by eliminating lesson plans from consideration, for example — he compounds the problem.

Finally, researchers at the University of Connecticut’s Neag School of Education, in a study released last year, reported that only 58 percent of teachers surveyed felt the rating they received from the state’s new evaluation system was accurate. Of the 533 teachers surveyed, more than half found no added value in the time they spent on their evaluations.

With these and other problems, the teacher evaluation system is a catastrophe. Although our state tried addressing many shortcomings through customization, it is the highly-destructive effects of accountability reform that teachers must resist. I insist – must resist.

Yes, teachers need to be evaluated. I would expect nothing less in any job. It is even more critical, especially in fields such as education, when a teacher receives job protection under union contract. It costs school districts hundreds to thousands of dollars to both hire and retain teachers. You want to protect your community investment.

Now, I understand teachers have to follow their district evaluation plan or they could be fired for insubordination. However, what is interesting is that the reauthorization of the Elementary and Secondary Education Act, formerly known as No Child Left Behind and now called Every Student Succeeds Act, does not require that teachers be evaluated by student test scores. That was what Race to the Top required in order for states to be eligible to apply for Race to the Top money; so states incorporated student test scores in their teacher evaluation process.

Is it time for the fight to end? No. This is only the beginning. We need to fight this war on common sense. We need to fight the war Connecticut and other states, such as New York, have declared on public education by supporting a better, fairer, evaluation system for teachers. Before we demand better comprehensive education reform, we must shout battle cries of “Opt-Out.”

We need these evaluations to fail if we want public school teachers to succeed.

Judge botched rulings on education policy by Wendy Lecker

Education advocate and columnist Wendy Lecker returns to the recent CCJEF v. Rell legal decision in her weekend piece in the Stamford Advocate.  You can read and comment on her piece at:  http://www.stamfordadvocate.com/news/article/Wendy-Lecker-Judge-botched-rulings-on-education-9945947.php

Judge botched rulings on education policy by Wendy Lecker

In issuing his decision in the CCJEF school-funding case last month, Judge Thomas Moukawsher claimed he was faithfully following the dictates of the Connecticut Supreme Court. However, it is clear that the judge ignored a major warning by our highest court: that the judiciary is “ill-equipped” to deal with educational policy matters. Nowhere is this disregard of the court’s warning more evident than in Moukawsher’s rulings on high school and teacher evaluation. In these rulings, the judge contravened the mountain of academic and experiential evidence showing that what he proposes is dead wrong.

First, the judge declared that Connecticut should institute standardized high school exit exams. The judge decided that because Connecticut does not have “rational” and “verifiable” high school standards, meaning standards measured by a high school exit exam, Connecticut diplomas for students in poor districts are “patronizing and illusory.” He concluded that the cure for this problem is standardized, “objective” exams that students must pass to graduate.

In pushing this proposal, the judge relied heavily on one defense witness, Dr. Eric Hanushek, a witness whose testimony has been flatly rejected in school funding cases across the country. Hanushek claimed that Massachusetts’ status as the “education leader” in the country was a result of instituting an exit exam.

Had the judge examined the evidence, however, he would have discovered that Massachusetts’ high school exit exam has increased dropout rates for the state’s most vulnerable students. In fact, as the New America Foundation reported, decades of research on exit exams nationwide show two things: students are not any better off with exit exams, and exit exams have a disproportionately negative impact on the graduation rates of poor students and students of color. That is why the trend among states is to drop exit exams. Exit exams would widen the graduation gap in Connecticut.

Again, had the judge examined the evidence, he would have also learned that the actual major factor in Massachusetts’ improvement was the very measure he refused to order Connecticut to implement: school finance reform that dramatically increased the amount of school funding statewide. No fewer than three studies have shown that increasing school funding significantly improved student achievement in Massachusetts. Recent major studies confirmed those findings nationwide, demonstrating that school finance reform has the most profound positive impact among poor students.

The judge also missed the mark by a wide margin in his ruling on teacher evaluations, which again he insisted be “rational” and “verifiable” from his unstudied perspective. Anyone who has been paying attention to education matters the past few years has surely noticed the understandable uproar over the attempt to rate teachers based on student standardized test score “growth.”

Experts across the country confirm, as the American Statistical Association pointed out, that a teacher has a tiny effect on the variance in student test scores: from 1 percent to 14 percent. Thus, it is now widely understood that any system that attempts to rate teachers on student test scores, or the “growth” in student test scores, is about as “rational” and “verifiable” as a coin toss.

Courts that have actually examined the evidence on systems that rate teachers on student test scores have rejected these systems. Last year, a court in New Mexico issued a temporary injunction barring the use of test scores in that state’s teacher evaluation system. And in April, a court in New York ruled that a teacher’s rating based on her students’ “growth” scores — the foundation of New York’s teacher evaluation system — was “arbitrary and capricious;” the opposite of “rational” and “verifiable.”

Yet despite the reams of evidence debunking the use of student growth scores in evaluating teachers, and despite these two court rulings, Judge Moukawsher insisted that rating teachers on student “growth” scores would satisfy his demand that Connecticut’s system for hiring, firing, evaluating and compensating teachers be “rational” and “verifiable.” His ruling defies the evidence and logic.

These and all of the judge’s other rulings are now being appealed to the Connecticut Supreme Court by both sides: the state and the CCJEF plaintiffs. One can only hope that our highest court will steer this case back on course, away from these ill-advised educational policy rulings and toward a proper finding that the state is failing to provide our poorest schools with adequate funding and is consequently failing to safeguard the educational rights of our most vulnerable children.

Wendy Lecker is a columnist for the Hearst Connecticut Media Group and is senior attorney at the Education Law Center.  Her column  can be found at: http://www.stamfordadvocate.com/news/article/Wendy-Lecker-Judge-botched-rulings-on-education-9945947.php

Attention Connecticut:  SATs are WORTHLESS – A report by whistleblower Manuel Alfaro

Tomorrow, October 1, 2016, thousands of Connecticut children will – once again – be taking the SATs in the hopes of acquiring a high enough score that they can attend the college of their choice.

However, more and more colleges and universities are going test optional.  According to Fairtest, the national test monitoring entity, more than 900 colleges and universities across the country have dropped the requirement that students provide an SAT or ACT test score with their application. Colleges have taken this step because they recognize that it is a student’s grade point average – not their standardized test score – that is the best predictor of how well a student will do in college.

Meanwhile, it was just last year that Governor Dannel Malloy and the Connecticut General Assembly mandated that every Connecticut high school junior take the SAT, despite overwhelming evidence that the test is unfair, inappropriate and discriminatory, not to mention designed to fail a vast number of children.

Instead of promoting a sophisticated student and teacher evaluation program, Malloy and other proponents of the corporate education reform agenda have been pushing a dangerous reliance on standardized testing as one of the state’s primary mechanisms to judge and evaluate students, teachers and public schools.

Below are two statements that were recently posted by Manuel Alfaro on his LinkedIn account.  Alfaro is an outspoken whistleblower and critic of the College Board and their SAT.

Before coming forward to report the College Board’s unethical, and arguably illegal activities, Alfaro served as the Executive Director of Assessment Design & Development at The College Board (The SAT).

Considering Connecticut’s public officials made a profound mistake by mandating that schools use the SAT scores to evaluate students and teachers, Mr. Alfaro’s information and warnings are particularly important.

Manuel Alfaro Post #1

Residents of CO, CT, DE, IL, ME, MI, and NH, the heads of the Department of Education of your states have failed to protect the best interests of your students and your families, opting instead to protect their own interests and the interests of the College Board.

Over the last five months, I have written about several serious problems with the redesigned SAT. The problems include:

  • Development processes that do not meet industry standards; false claims made (in public documents) by the College Board about those processes; false claims made (in state proposals and contracts) by the College Board about those processes.
  • Poor quality of items—documented in letters and comments from content committee members.
  • Extensive revisions of a large percentage of operational items—the College Board claims that this happens only on the RAREST of occasions.
  • Test speediness resulting from the use of the wrong test specifications during the construction of operational SAT forms—use of the wrong specifications resulted in operational tests that, according to formal timing studies conducted by the College Board, require an additional 21-32 minutes (on top of the 80 minutes already allowed) to complete.

Under normal circumstances, the department of education of the client states would have imposed heavy penalties on the College Board; suspended administration of the flawed SATs; and demanded immediate corrective actions.

For example, in 2010, the state of Florida fined Pearson nearly $15 million, which Pearson paid. (Source: www.tampabay.com/news/education/k12/florida-hits-fcat-contractor-pearson-with-another-12-million-in-penalties/1110688.) The nearly $15 million fines were imposed because the FCAT results were delivered late. Imagine what the fines would have been if the problems had been as severe as the ones I’ve disclosed about the SAT.

The reason you are not seeing this type of reaction from the states administering the SAT for accountability is that they are partly responsible for the problem. Allow me to elaborate: Typically, to protect both the state and the testing company, an assessment contract that includes the use of an assessment created for the intent and purpose of college admission, not state accountability, would include a clause requiring that the test items be reviewed and approved for use by a content committee from the client state.

This additional step, however, costs money; requires that custom SAT forms be created for each state; and impacts administration schedules. So, even though it is in the best interest of the state, the College Board, and—most importantly—students, state officials opted not to do it. What are the ramifications of this decision?

  • The inclusion of items unfit for use in the target state
  • Performance level descriptors that are meaningless
  • Students spending time on items that should not have been included on the test
  • Teachers being evaluated (partially) using results from tests that may or may not accurately assess student performance

To illustrate the 4 points above, consider the following item (from Practice SAT Forms):

This item targets two different clusters of the Common Core Standards for Math:

Understand and apply theorems about circles

Find arc lengths and areas of sectors of circles

If students get this item wrong, it is impossible to tell whether students got it wrong because they don’t understand and are unable to apply theorems about circles to determine the measure of angle O or because they are unable to compute the length of minor arc LN, after they’ve determined the measure of angle O.

To be included in a state assessment, items have to clearly align to a single standard. The item in this example cannot be aligned even to a single cluster, much less to the individual standards within each of the two clusters. Thus, this item would be deemed to exceed the content limits of the standards and would be excluded from the state tests.

If students get this item wrong, the performance level descriptor associated with their scores will state that the students are unable to compute arc lengths; they are unable to apply theorems about circles; or both. But this is meaningless as it is impossible to determine what exactly led to the incorrect answer.

This impacts teachers in a similar way: You cannot tell if they are doing a great job at teaching students to compute arc lengths; apply theorems about circles; neither; or both. How useful are the teacher reports generated from an assessment that includes such items? They certainly cannot be used to let teachers know what they need to improve on.

The SAT contains many items like the one in the example above. University researchers should analyze all the practice SAT tests to determine the full scope of the problem. If I continue to provide examples, we will get more of the same glib responses from the College Board.

Next Steps

Demand that the heads of department of education of your states take immediate action by either:

  • Suspending SAT administrations until the College Board addresses the problems
  • Resigning, and letting an individual willing to protect the interests of your students and families take over (and fix the problems).

Manuel Alfaro Post #2

To minimize advantages resulting from the use of calculators with computer algebra systems (CAS), the College Board uses a simple trick to keep students from directly solving math questions using their CAS calculators. For example, in Item 1 below the correct solution requires a simple substitution before solving a linear equation. This item is counted towards one of the “linear equation” dimensions under Heart of Algebra. However, that simple substitution (“k=3”) makes this item a system of equations (see Item 29 below, for an example of a similar item with a different arrangement of the equations), which makes it count toward a different dimension within Heart of Algebra.

(Source: SAT Practice)
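[A hypothetical item in the same spirit as the one described, offered purely for illustration and not the actual SAT item pictured above:]

```latex
% Hypothetical item (illustration only; not the actual SAT item):
% If k = 3, what is the value of x in the equation 2x + k = 11 ?
%
% Solved as intended -- substitute first, then solve a single linear equation:
\[
2x + 3 = 11 \quad\Longrightarrow\quad x = 4
\]
% Formally, however, the student is solving a system of two equations in two unknowns:
\[
\begin{cases} k = 3 \\ 2x + k = 11 \end{cases}
\]
% so the item behaves like a "system of equations" item even though it was written
% and classified under a "linear equation" dimension -- the misalignment described above.
```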

As the College Board uses this trick frequently, each operational form contains several items that are artificially “enhanced” to defeat CAS advantages. This leads to the construction of operational SAT forms that do not meet SAT content specifications because the “enhanced” items are misaligned. In the case of the form containing Item 1, the form would have too few items targeting the “linear equations” dimension and too many items targeting the “system of equations” dimensions. In some cases (Item 9, below), the enhancements push the items completely outside the scope of the entire SAT content specifications—this item should not have been included in the test at all, as it is a system of more than two equations.

(Source: SAT Practice)

Items like these are unfit for use under the classification the College Board originally assigned them because they target two different skills. As I mentioned in my September 26, 2016 post, if students get these items wrong, it is impossible to tell whether it is due to their inability to solve equations or their inability to evaluate expressions. Sadly, these enhanced items don’t promote best instructional practices, as the College Board aims to do.

In any case, the inclusion of these types of items results in SAT operational forms that are not:

  • Parallel psychometrically, as the pretest item statistics were invalidated when the College Board revised the items (and did not pretest them again) during operational form construction;
  • Parallel content-wise, as each operational form may contain several items that are misclassified or are beyond the scope of the SAT content specifications.

What does this mean? It means that the SATs are WORTHLESS; INDEFENSIBLY WORTHLESS.

Connecticut’s students, parents, teachers and public schools deserve better. Governor Malloy and Connecticut’s elected officials should immediately repeal the requirement that Connecticut students take the SAT and replace that mandate with an evaluation system that actually measures whether students are learning what is being taught in Connecticut’s classrooms.

Top Utah Republicans join corporate education reform groups to attack anti-Common Core school board candidates

In May 2016, Utah’s Republican Governor, Gary Herbert, called on the Utah Board of Education to “move past Common Core standards and get rid of mandatory SAGE testing for high school students.”

Governor Herbert wrote,

“I am asking the State Board of Education to consider implementing uniquely Utah standards, moving beyond the Common Core to a system that is tailored specifically to the needs of our state.”

The Utah Governor’s strong action in opposition to the Common Core standards and its related Common Core testing scheme won him praise from conservatives and educators, but some of the state’s top Republicans are now joining the Utah business community and the state’s corporate education reform allies to try and keep pro-Common Core incumbents on the Utah Board of Education.

Following the loss of some pro-Common Core incumbents during the state’s summer primary, corporate education reform allies are now raising money to defend the remaining pro-Common Core, pro-corporate education reform candidates on the Utah School Board.

Earlier this month, Utah Policy.com, a Utah-based political blog, reported,

“Get ready for partisan, big money, races for the Utah State School Board…

[…]

[T]he primary race this year caught some GOP leaders off guard, as several well-liked (at least on Capitol Hill) incumbents were beaten June 28.

And now a “last ditch” effort is being made to save a few of the other incumbents as a group of business/reform groups are looking to raise money and set up PACs to help those endangered school board members.

Utah Policy.com added:

Over the weekend a quickly-formed school board candidate fund-raiser was put together by the Utah Technology Council, among others, with House Speaker Greg Hughes, R-Draper, and Senate President Wayne Niederhauser, R-Sandy, called in to help raise money for some of the remaining school board incumbents feeling the heat from the Utah Education Association – the main teacher union in the state.

For Hughes it is an old battle – remember the failed private school voucher fight of 2007?

There are eight seats on the Utah School Board up for election this year. The Utah Education Association, which endorsed Republican Governor Gary Herbert over his Democratic rival Mike Weinholtz this year, is supporting candidates in six of those races.

Rather than find common ground with the teachers’ union over support for the governor and opposition to the Common Core, the Republican elected officials and corporate education reform advocacy groups are now targeting the union-endorsed candidates for defeat, including those running on an anti-Common Core agenda.

As one Utah-based anti-Common Core group wrote in a post entitled Common Core’s Role in Hot State School Board Race,

The State School Board race has never drawn much attention before. But this year, the Salt Lake Tribune reported, businesses and even top-tier elected officials are personally campaigning and fundraising for and against certain candidates.

Yesterday’s headline was: “Niederhauser and Hughes ask Business Leaders to Help Defeat UEA-Backed School Board Candidates.” Yesterday, too, business organizations such as the Utah Technology Council and the School Improvement Association joined Niederhauser and Hughes in a fundraising webinar that promoted a slate of pro-Common Core candidates who happen to be not favored by or funded by national teachers’ unions.

The anti-Common Core blogger added,

“…I don’t understand why these groups have chosen to campaign against both the anti-Common Core candidates as well as against the UEA-backed candidates…

[…]

Nor do I understand why our House Speaker and Senate President don’t see the hypocrisy in speaking against big money buying votes (NEA) while both of them are personally funded by big business money (Education First).

But my bigger questions are: how do the Speaker and the Senate President dare to campaign for Common Core candidates, thus going directly against Governor Herbert’s call to end Common Core alignment in Utah?

How do they dare campaign against the resolution of their own Utah Republican Party that called for the repeal of the Common Core Initiative?

Have they forgotten the reasons that their party is strongly opposed to all that the Common Core Initiative entails?

Have they forgotten Governor Herbert’s letter that called for an end to Common Core and SAGE testing just four months ago? (See letter here.)  For all the talk about wanting to move toward local control and to move against the status quo, this seems odd.

Of course, the answer to the anti-Common Core blogger’s lament is that the Common Core has always had strong support from mainstream Republicans. In fact, Republican governors and the business community across the country helped foist the Common Core and the Common Core testing program upon the nation’s public schools.

It should come as no surprise to the education advocates in Utah that even when their Republican governor calls for an end to the Common Core, there will be some top Republican leaders, along with the business community and pro-corporate education reform groups, that would seek to undermine his position.

The sad reality is that when it comes to the federalization and privatization of public education, many Republican and Democratic elected officials have no problem undermining their local students, parents, teachers and public schools.

Will the SAT become Rhode Island’s high school “exit exam”?

Check out the Rhode Island Department of Education (RIDE) website and you’ll see that they proudly declare:

Rhode Island has implemented a statewide diploma system to ensure access for all middle and high school students to rigorous, high quality, personalized learning opportunities and pathways.

An announcement about the details surrounding the “new diploma system” is expected later this fall, now that the public comment process on the proposed regulations has concluded (Rhode Islanders had until September 15, 2016 to weigh in on the proposed changes).

Earlier this month, pro-education reform governor Gina Raimondo, whose husband is part of the education reform and charter school industry, announced that she was “open” to using the unfair, inappropriate and discriminatory SAT testing scheme as a graduation requirement in Rhode Island.

As numerous academic studies have revealed, grade point averages, not standardized test scores, are the best predictors of college success.

In fact, these studies show that the SAT correlates with the income of the student’s parents and does not predict how a student will do in college.

Over the last few years, more than 850 colleges and universities have decided not to require applicants to even provide SAT scores and this list includes well over 180 “top-tier” universities and colleges.

But defending her indefensible position, Governor Raimondo claimed that the NEW SAT was better because it was aligned to the Common Core, a statement that indicates how little the governor understands about the shortcomings associated with the Common Core and its Common Core testing scheme.

Rhode Island state officials had already announced plans to drop the requirement that students pass the Common Core PARCC tests in order to graduate, a decision they reached based on the evidence that the PARCC test is not an appropriate indicator of what the child has been taught or whether they are college ready.

Governor Raimondo’s openness to the use of the SAT is a sad reminder of the level of ignorance on the part of some elected officials in this country. It is also an indicator that far too many officials see students as little more than profit centers for the charter school and corporate education reform industries.

Hey Malloy, what’s the deal with the new Common Core SBAC test results?

With great fanfare and self-congratulation, Governor Dannel Malloy and his administration recently released the results of last spring’s Common Core Smarter Balanced Assessment Consortium (SBAC) tests. Their claim is that the Governor’s anti-teacher, anti-public education, pro-charter school agenda is succeeding.

The SBAC test is succeeding?

The Common Core Smarter Balanced Assessment Consortium (SBAC) testing scheme is the unfair, inappropriate and discriminatory national testing system that the Malloy administration instituted and that is now being used to evaluate and label students, teachers and public schools.

As if to give the charade some credibility, Governor Malloy, Lt. Governor Wyman and their team call it Connecticut’s “Next Generation Accountability System.”

However, the testing and evaluation system is a farce that fails to properly measure how students, teachers and schools are really doing, nor does it properly evaluate the impacts that are associated with poverty, language barriers and unmet special education needs.

To showcase the extraordinary problems with Malloy’s testing scheme, the following chart highlights the results from two of Malloy’s favorite charter schools, the Achievement First Hartford charter school and the Achievement First New Haven charter school, which is called Amistad Academy.

Percent of students reaching “proficiency” in Math as measured by the 2015 SBAC tests:

DISTRICT                                              GRADE 3   GRADE 4   GRADE 5   GRADE 6   GRADE 7   GRADE 8
Achievement First Inc. Hartford                        56.8%     44.4%     16.2%     20.3%     17.5%     33.9%
Achievement First Inc. New Haven – Amistad Academy     63.3%     54.4%     34.4%     40.0%     46.1%     46.9%

Here are the core results:

  • Approximately 60% of students in both charter schools were labeled “proficient” in MATH in grade 3.
  • The percent deemed “proficient” dropped by about 10 points in Grade 4.
  • The percent “proficient” dived in Grade 5, with only 1 in 6 students deemed “proficient” in Hartford and only 1 in 3 at the “proficient” level in New Haven.
  • The number reaching a “proficient” level remained extremely low at Achievement First Hartford in grades 6, 7 and 8.
  • While the percent of students labeled proficient at Achievement First New Haven was slightly better than at its sister school in Hartford, less than 50 percent of Amistad Academy’s 6th, 7th and 8th grade students were deemed to be “proficient.”

According to Malloy’s policies, these SBAC results allow us to determine how students are doing, whether teachers are performing adequately and whether any individual school should be labeled a great school, a good school, a school that is doing fairly well or a failing school.

So, according to Malloy, which of the following statements are true:

  1. As measured by the SBAC proficiency numbers, while students at these two Achievement First schools are doing “okay” in grade 3, the two schools are falling short in grades 4, 5, 6, 7 and 8.
  2. The results indicate that Achievement First Inc. has apparently hired talented teachers in grade 3, but that teachers in grades 4-8 are simply not equipped or capable of doing their jobs. Grade 5 teachers are particularly weak, and the data indicates that Achievement First’s teachers should be evaluated as ineffective and that the charter school chain should remove and replace all teachers other than those teaching in grade 3.
  3. Achievement First, Inc. proclaims that its students do much better on standardized tests; however, the SBAC results reveal that its schools are failing and should be labeled as failing schools.

According to Connecticut policymakers, all three statements are true, but of course, the truth is much more complex and the test results provide no meaningful guidance on what is actually going on in the classrooms.

Perhaps most disturbing of all is that these results provide no useful information about the impact of poverty, language barriers and unmet special education needs.

One question rises to the top.

What if the students and teachers are not the problem? What if the problem is that the testing scam really is unfair, inappropriate and discriminatory and that the entire situation is made worse by Malloy’s absurd “Next Generation” Accountability system?