Tomorrow, October 1, 2016, thousands of Connecticut children will – once again – be taking the SAT in the hope of earning a score high enough to attend the college of their choice.
However, more and more colleges and universities are going test optional. According to FairTest, the National Center for Fair & Open Testing, more than 900 colleges and universities across the country have dropped the requirement that students submit an SAT or ACT score with their applications. Colleges have taken this step because they recognize that a student’s grade point average – not a standardized test score – is the best predictor of how well that student will do in college.
Meanwhile, it was just last year that Governor Dannel Malloy and the Connecticut General Assembly mandated that every Connecticut high school junior take the SAT, despite overwhelming evidence that the test is unfair, inappropriate and discriminatory – and, not least, designed to fail a vast number of children.
Instead of promoting a sophisticated student and teacher evaluation program, Malloy and other proponents of the corporate education reform agenda have been pushing a dangerous reliance on standardized testing as one of the state’s primary mechanisms for judging and evaluating students, teachers and public schools.
Below are two statements recently posted by Manuel Alfaro on his LinkedIn account. Alfaro is an outspoken whistleblower and critic of the College Board and its SAT.
Before coming forward to report the College Board’s unethical and arguably illegal activities, Alfaro served as the Executive Director of Assessment Design & Development at the College Board, the maker of the SAT.
Because Connecticut’s public officials made a profound mistake by mandating that schools use SAT scores to evaluate students and teachers, Mr. Alfaro’s information and warnings are particularly important.
Manuel Alfaro Post #1
Residents of CO, CT, DE, IL, ME, MI, and NH, the heads of the Department of Education of your states have failed to protect the best interests of your students and your families, opting instead to protect their own interests and the interests of the College Board.
Over the last five months, I have written about several serious problems with the redesigned SAT. The problems include:
- Development processes that do not meet industry standards; false claims made (in public documents) by the College Board about those processes; false claims made (in state proposals and contracts) by the College Board about those processes.
- Poor quality of items—documented in letters and comments from content committee members.
- Extensive revisions of a large percentage of operational items—the College Board claims that this happens only on the RAREST of occasions.
- Test speediness resulting from the use of the wrong test specifications during the construction of operational SAT forms—use of the wrong specifications resulted in operational tests that, according to formal timing studies conducted by the College Board, require an additional 21-32 minutes (on top of the 80 minutes already allowed) to complete.
Under normal circumstances, the departments of education of the client states would have imposed heavy penalties on the College Board, suspended administration of the flawed SATs, and demanded immediate corrective action.
For example, in 2010 the state of Florida fined Pearson nearly $15 million – which Pearson paid – simply because FCAT results were delivered late. (Source: www.tampabay.com/news/education/k12/florida-hits-fcat-contractor-pearson-with-another-12-million-in-penalties/1110688.) Imagine what the fines would have been if Florida’s problems had been as severe as the ones I’ve disclosed about the SAT.
The reason you are not seeing this type of reaction from the states administering the SAT for accountability purposes is that they are partly responsible for the problem. Allow me to elaborate: typically, to protect both the state and the testing company, a contract covering the use of an assessment created for college admissions – not state accountability – would include a clause requiring that the test items be reviewed and approved by a content committee from the client state.
This additional step, however, costs money; requires that custom SAT forms be created for each state; and affects administration schedules. So, even though it is in the best interest of the state, the College Board, and—most importantly—students, state officials opted not to do it. What are the ramifications of this decision?
- The inclusion of items unfit for use in the target state
- Performance level descriptors that are meaningless
- Students spending time on items that should not have been included on the test
- Teachers being evaluated (partially) using results from tests that may or may not accurately assess student performance
To illustrate the four points above, consider the following item (from the Practice SAT Forms):
This item targets two different clusters of the Common Core Standards for Math:
Understand and apply theorems about circles
Find arc lengths and areas of sectors of circles
If a student gets this item wrong, it is impossible to tell whether the error occurred because the student does not understand and cannot apply theorems about circles to determine the measure of angle O, or because the student cannot compute the length of minor arc LN after determining the measure of angle O.
To be included in a state assessment, an item has to align clearly to a single standard. The item in this example cannot be aligned even to a single cluster, much less to the individual standards within each of the two clusters. Thus, the item would be deemed to exceed the content limits of the standards and would be excluded from state tests.
If students get this item wrong, the performance level descriptors associated with their scores will state that they are unable to compute arc lengths, unable to apply theorems about circles, or both. But this is meaningless, as it is impossible to determine what exactly led to the incorrect answer.
This affects teachers in a similar way: you cannot tell whether they are doing a great job of teaching students to compute arc lengths, to apply theorems about circles, both, or neither. How useful are the teacher reports generated from an assessment that includes such items? They certainly cannot be used to let teachers know what they need to improve.
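The two-step dependency described above can be sketched numerically. This is a hedged illustration only: the actual SAT item is not reproduced in this post, so the radius and angle below are invented; only the structure – a circle theorem first, an arc-length computation second – mirrors the argument.

```python
import math

# Hypothetical version of the circle item (the real item is not
# reproduced above; radius and angle are assumed for illustration).
radius = 6            # assumed radius of circle O
inscribed_angle = 40  # assumed inscribed angle (degrees) subtending arc LN

# Step 1 -- "understand and apply theorems about circles":
# the central angle O is twice the inscribed angle subtending the same arc.
central_angle = 2 * inscribed_angle  # 80 degrees

# Step 2 -- "find arc lengths ... of circles":
# arc length = (central angle / 360) * circumference.
arc_LN = central_angle / 360 * 2 * math.pi * radius

print(round(arc_LN, 2))  # 8.38

# A wrong final answer cannot distinguish a Step 1 error (wrong central
# angle) from a Step 2 error (wrong arc-length computation).
```

Because the final score records only the end result, a grader or psychometrician sees the same wrong answer whichever step failed.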
The SAT contains many items like the one in the example above. University researchers should analyze all the practice SAT tests to determine the full scope of the problem. If I continue to provide examples, we will get more of the same glib responses from the College Board.
Demand that the heads of your states’ departments of education take immediate action by either:
- Suspending SAT administrations until the College Board addresses the problems
- Resigning, and letting an individual willing to protect the interests of your students and families take over (and fix the problems).
Manuel Alfaro Post #2
To minimize advantages resulting from the use of calculators with computer algebra systems (CAS), the College Board uses a simple trick to keep students from solving math questions directly on their CAS calculators. For example, in Item 1 below, the correct solution requires a simple substitution before solving a linear equation. The item is counted toward one of the “linear equation” dimensions under Heart of Algebra. However, that simple substitution (“k=3”) actually makes the item a system of equations (see Item 29 below for an example of a similar item with a different arrangement of the equations), which means it should count toward a different dimension within Heart of Algebra.
(Source: SAT Practice)
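Since Item 1 itself is not reproduced here, a minimal sketch of the kind of “trick” described above – with invented equations and numbers – might look like this:

```python
# Hypothetical item mimicking the anti-CAS trick described above.
# (Item 1 is not reproduced in this post; the equations are invented.)
#
# Presented form:  2x + k = 11,  with the stem also stating  k = 3.
# Entered alone, the first equation cannot be solved for x, so a CAS
# calculator gives no direct answer. The student must substitute k = 3
# first -- which formally turns the item into a two-equation system.

def solve_system(a, b, c, k_value):
    """Solve the system  a*x + b*k = c,  k = k_value  by substitution."""
    return (c - b * k_value) / a  # x, after eliminating k

x = solve_system(a=2, b=1, c=11, k_value=3)
print(x)  # 4.0
```

The point is purely classificatory: the solving work is trivial, but the substitution step changes which Heart of Algebra dimension the item actually measures.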
As the College Board uses this trick frequently, each operational form contains several items that are artificially “enhanced” to defeat CAS advantages. This leads to the construction of operational SAT forms that do not meet SAT content specifications, because the “enhanced” items are misaligned. In the case of the form containing Item 1, the form would have too few items targeting the “linear equations” dimension and too many items targeting the “system of equations” dimension. In some cases (see Item 9, below), the enhancement pushes the item completely outside the scope of the entire SAT content specifications—this item should not have been included on the test at all, as it is a system of more than two equations.
(Source: SAT Practice)
Items like these are unfit for use under the classifications the College Board originally assigned them, because they target two different skills. As I mentioned in my September 26, 2016 post, if students get these items wrong, it is impossible to tell whether it is due to an inability to solve equations or an inability to evaluate expressions. Sadly, these “enhanced” items do not promote best instructional practices, as the College Board aims to do.
In any case, the inclusion of these types of items results in SAT operational forms that are not:
- Parallel psychometrically, as the pretest item statistics were invalidated when the College Board revised the items (and did not pretest them again) during operational form construction;
- Parallel content-wise, as each operational form may contain several items that are misclassified or are beyond the scope of the SAT content specifications.
What does this mean? It means that the SATs are WORTHLESS; INDEFENSIBLY WORTHLESS.
Connecticut’s students, parents, teachers and public schools deserve better. Governor Malloy and Connecticut’s elected officials should immediately repeal the requirement that Connecticut students take the SAT and replace that mandate with an evaluation system that actually measures whether students are learning what is being taught in Connecticut’s classrooms.