Commissioner Pryor’s Education Department: Connecticut experts need not apply

With little or no fanfare, Governor Malloy’s Commissioner of Education, Stefan Pryor, has picked Shannon Marimon to serve as the Director for Educator Effectiveness and Professional Learning at the Connecticut State Department of Education.

Shannon Marimon’s job will be to oversee Connecticut’s new Educator Effectiveness and Professional Learning Program.

Although Marimon lacks any meaningful classroom or teaching experience, she is a proven member of the corporate education reform aficionados’ club.

The new position is probably one of the top three or four most important positions in the Connecticut Department of Education.

Marimon was hired by Pryor and started on August 31, 2012, with a starting salary of $110,145.

She was promoted by Pryor to her new Director’s position on November 29, 2013, with a new salary of $136,141.

The job was officially posted in June 2013 with a closing date less than 30 days later.

According to the legal job posting, the Minimum Experience and Training Required was for “an earned advanced degree and eleven (11) years of professional experience in the field of education or related areas.”

According to the job posting, it was also required that “At least one (1) year of the professional experience must have been in a managerial capacity in an educational agency, organization, system or school.”

Now, as you watch the bouncing ball, note that although one can’t imagine that Pryor was trying to “doctor” the job posting, the advertisement did include the rather odd addition that, “A 092 Certificate (Intermediate Administrator), or 093 Certificate (Superintendent), Sixth Year Diploma in Educational Leadership, or an Ed.D. (Doctorate in Educational Leadership) may be substituted for one (1) additional year of the General Experience” and “An advanced degree and five (5) years of managerial experience in the oversight of the development or administration of an educational bureau, system, operation, school or service may substitute for the General Experience and the Special Experience.”

So, on the one hand the job posting required an advanced degree and 11 years of professional experience while the fine print apparently lowered the level of experience to an advanced degree and five years of managerial experience in just about anything related to a school or service.

So who was finally selected for this critically important and coveted role?

Shannon Marimon, who has served for just over a year in Pryor’s operation, has now been promoted to the job that will pay between $117,084 and $149,403 a year plus benefits.

The “only” issue is that Marimon’s professional experience is somewhere between none and three or four years at the most.

In fact, she doesn’t even come close to having the experience that was legally mandated in the job posting.

And perhaps the most important fact of all is that she has no meaningful teaching experience and yet is now serving as the Director for Educator Effectiveness and Professional Learning for the State of Connecticut.

Marimon graduated from the Yale School of Management in 2010.

Before that, in 2007 – 2008, she served as the Assistant Director of Development at the Yale School of Art.

In 2009, Marimon served as a summer intern for the National Park Service, where she was based in the Washington, DC Commercial Services Program.

From September 2010 – October 2011, Marimon worked for thirteen months for the education reform consulting company TNTP (The New Teacher Project) in Brooklyn, New York and Ann Arbor, Michigan. Her job was to help “ensure smooth, successful launch of technology, marketing and recruitment campaigns.”

From January 2012 – August 2012, Marimon continued to work for the same education reform consulting group for another eight months, this time working in Knoxville, Tennessee as well as Brooklyn, New York.  In this position she, “Managed alternate-route certification contracts for Milwaukee Public Schools and Arizona statewide initiative,” was “Responsible for the goal-setting and official launch process of all new TNTP Academy contracts, working closely with state and city Departments of Education, including Georgia, Pittsburgh, PA, and Charlotte, NC,” and “Maintained high-level client relationships to ensure investment and support of data-driven work.”

In none of those positions did she spend any significant amount of time teaching in a classroom.

Yet despite her utter lack of experience, Marimon was hired by Connecticut Commissioner Stefan Pryor in August 2012 to serve as an Education Consultant at the Connecticut State Department of Education.

And last month, despite the fact that the job posting sought someone with at least 11 years of relevant experience, Marimon, with her one year of experience in the State Department of Education, was promoted to her new role as the State Department’s Director for Educator Effectiveness and Professional Learning.

The leadership of the Connecticut Public School Superintendents Association may claim all is well in the Land of Oz, but they’d be hard pressed to claim that someone with no teaching experience and virtually no management experience should be in charge of the State of Connecticut’s new Educator Effectiveness and Professional Learning Program.

Are you telling me that out of 45,000 public school teachers, 8,000 public school administrators and hundreds of world-class education professionals working at Connecticut’s institutions of higher education, there was not a single person better prepared to develop and implement Connecticut’s new Educator Effectiveness system?

Education Commissioner Pryor hires accounting firm to “validate” accuracy of standardized test scores?

In the midst of the excitement yesterday about the results of this year’s Connecticut Mastery Test scores came the strange report that Malloy’s Commissioner of Education, Stefan Pryor, and the State Department of Education had hired the accounting firm of Blum Shapiro to “validate” Connecticut’s test scores.

At the very least, the move is a strange one, considering that Connecticut spends at least $25 million on the Connecticut Mastery Test, that the vendor responsible for creating and scoring the test has not changed, and that this is the “4th generation” of the test, which has been used by the state before.

In fact, Governor Malloy, Commissioner Pryor and the Connecticut General Assembly were so confident in the Connecticut Mastery Test that Malloy’s education reform law mandated that all towns start using the test results as part of each public school teacher’s performance evaluation starting this coming year.

Malloy originally proposed that the local teacher evaluation process utilize the Connecticut Mastery Test results to determine fifty percent of a teacher’s evaluation but that number was eventually reduced to twenty-two and a half percent.

Even more recently the implementation of the program was delayed a year in order to postpone the excessive local costs that the new evaluation program will have on local school budgets.

But despite the previous confidence that the Malloy Administration had in Connecticut’s standardized testing program, the Connecticut Post is now reporting that due to human error, test results were reported incorrectly last year and that, “The Department hired Blum Shapiro, an auditing firm, to look at the state’s calculations and processes relating to test data and accountability.”

According to the Connecticut Post, “… unceremoniously last week, the State Department of Education pulled down the School Performance Reporting website and Tuesday, Commissioner of Education Stefan Pryor told reporters that the site, and index, contained mistakes.”

The Connecticut Post goes on to explain, “State test scores usually come out in July. This year, they were late, released on August 13. Part of the reason, apparently, was that the 2013 CAPT and CMT assessment data released Tuesday was independently verified by an external auditor at the request of the state Department of Education.

The analysis confirmed and validated the accuracy of 2013 CAPT and CMT student assessment scores.

Why the audit? Well it seems that some mathematical errors were discovered recently in the state’s year-old School Performance Index (SPI).

Introduced last year, the index takes all scores from all students in a school (and district) over a three year period and turns it into a single number.  The new system is better, it is argued, because it doesn’t just measure students who manage to clear the goal and proficiency hurdles on tests but also captures progress of other students who are in the less desirable basic or below basic categories, as well as those in the advanced range.”

None of the other Connecticut media outlets are reporting on this latest problem so it is unclear whether the problem was just associated with the School Performance Index or if some broader issue with the validity of the testing process had been identified.


Courant examines effort to develop home-grown teacher evaluation plans…But…

In a news article today, the Hartford Courant examines the effort to develop home-grown teacher evaluation plans…But the Courant article fails to explore or discover why the Malloy administration added new legislative language to limit the rights of parents, teachers, boards of education and local towns during the recent legislative session.

The Hartford Courant’s education reporter, Kathy Megan, writes about how some Connecticut communities are working to develop their own teacher evaluation programs rather than having to adopt the state’s more unfair and complex system.

The Courant writes, “Over the past few months, school systems have been developing their own versions of evaluation plans that will be used for many teachers in the fall. Most are following state guidelines, but a few, like Madison, hope the state will approve their plans even though they don’t include certain components that the state says are core requirements.”

But some communities that are looking into their options are finding that the so-called “core requirements” that have been developed by Commissioner Pryor and his non-educator staff aren’t the type of programs that are best for their communities.

In fact, Malloy’s education reform law was originally designed to allow towns the option of developing their own teacher evaluation plans, but in the last hours of the recent 2013 legislative session, the Malloy administration persuaded the legislature to adopt new legislative language that severely limits a town’s options.

One change required towns to submit a letter of intent to develop their own plans no later than July 1, 2013, even though the state of Connecticut was delaying the implementation of its own mandated program.  The second change removed the role of the State Board of Education to review and approve or reject town requests in public, instead substituting a system in which the Commissioner can simply make the decision about the town’s fate on his own.

The Hartford Courant story doesn’t explore why the changes were made but does quote Malloy’s Education Commissioner, Stefan Pryor, as saying, “The department will set a high bar for the granting of waivers…We are requesting that districts express a rationale for variances from the core requirements. Such rationales must be compelling and the model must be exemplary or exceptional in its quality.”

The system developed by the state requires that 45 percent of a teacher’s evaluation be measured through testing of student achievement, 22.5 percent of which must be done using the Connecticut Mastery Test or other state standardized test scores.  The rest of a teacher’s evaluation would be based on a new teacher observation system, a survey of students and parents and measurements related to the entire school.
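The weighting just described lends itself to a simple weighted-sum sketch. Only the 45 percent and 22.5 percent figures come from the article; the split of the remaining 55 percent, and every score below, are illustrative assumptions rather than the state’s actual rubric.

```python
# Sketch of a weighted teacher-evaluation score.
# 45% student achievement, half of it (22.5%) from state test scores,
# per the article; the remaining 55% split among observation, surveys,
# and school-wide measures is an assumption for illustration only.
WEIGHTS = {
    "state_test_scores": 0.225,   # CMT or other state standardized tests
    "other_achievement": 0.225,   # other indicators of student achievement
    "observation": 0.40,          # assumed weight for classroom observation
    "surveys": 0.10,              # assumed weight for student/parent surveys
    "school_wide": 0.05,          # assumed weight for whole-school measures
}

def composite_score(scores: dict) -> float:
    """Combine component scores (each 0-100) into one weighted rating."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Hypothetical teacher: strong observations, weak test-score component.
example = {
    "state_test_scores": 40.0,
    "other_achievement": 70.0,
    "observation": 85.0,
    "surveys": 80.0,
    "school_wide": 75.0,
}
print(round(composite_score(example), 2))  # a single composite rating
```

Even in this toy version, a volatile test-score component moves the final number substantially, which is part of the objection districts raise.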

Madison, Hamden and Bethel are three school districts that have developed their own proposed teacher evaluation plans and will be seeking permission to use their own plans rather than the state’s mandated system.

The Courant story provides a quick highlight of some of the proposed differences between the state’s plan and how Hamden and Bethel would like to evaluate their teachers.

For example, the Courant explains that Fran Rabinowitz, Hamden’s Superintendent of Schools, “is asking the state to approve a plan that would use standardized test scores in a markedly different way from that set out in the state’s core requirements…Rabinowitz says research shows that test scores are subject to fluctuation from year to year, even for the same teachers…’All of the research out there tells us that that type of metric is faulty, at best,’ she said.”

The Courant story provides a quick snapshot of the status of the teacher evaluation issue, but unfortunately doesn’t get into the important question of why Malloy, Pryor and the legislature decided to adopt language that will have such a negative impact on the rights of individual towns to develop their own, more locally appropriate plans.

Check back for much more about this developing story. In the meantime, you can read more about the issue in the Courant’s article.

Or in an earlier Wait, What? post here: Teacher Evaluation Program: Malloy, Pryor and General Assembly slam door on locally developed plans

“It’s about time legislators stopped listening to propaganda and started paying attention to research” (Sarah Darer Littman)

The sentence comes from columnist and fellow education advocate Sarah Darer Littman’s latest commentary piece in this weekend’s CTNewsjunkie.

The topic:  Education Reform in Connecticut

Compared to what is actually taking place in Hartford and state capitols around the country, she might have begun her piece with the term, “when pigs fly” or “when Hell freezes over” or any number of other adynata. [Turns out the phrase is called an Adynaton, a figure of speech in the form of hyperbole that is taken to such extreme lengths as to suggest a complete impossibility].

Sarah Darer Littman’s piece stands as a beacon of truth compared to the drivel Rae Ann Knopf, the executive director of the corporate driven, Connecticut Council for Education Reform, had published on CTNewsjunkie earlier in the week.  The two pieces should be read in tandem to get the full effect.  Read Knopf’s corporate education reform argument and then Sarah Darer Littman’s piece entitled Legislate Based On Research, Not Hyperbole.

The corporate education reform advocates falsely claim that not only will Malloy’s education reform legislation be good for children and our schools, but that the cost of these unfunded mandates will be negligible, a claim that couldn’t be further from the truth.

As Darer Littman writes,

“One hopes our legislators have been paying attention to the experience of our neighbors in New York as they listen to advocates from the Big Six (ConnCan, CCER, CBIA, CAPSS, CAS, and CABE). According to a March report by the New York State School Boards Association and based on an analysis of data from 80 school districts, the districts outside the state’s five largest cities expect to spend an average of $155,355 on the state’s new evaluation system this year.

That’s $54,685 more than the average federal Race to the Top grant awarded to districts to implement the program.

“Our analysis . . . shows that the cost of this state initiative falls heavily on school districts,” says Executive Director Timothy Kremer of the New York State School Boards Association. “This seriously jeopardizes school districts’ ability to meet other state and federal requirements and properly serve students.”

At a time when Connecticut’s towns and cities already face the potential for significant state aid reductions based on Gov. Dannel P.  Malloy’s proposed budget, is it any wonder that the Connecticut Conference of Municipalities testified in favor of delaying a system that is proving costly and problematic elsewhere?”

Darer Littman then turns her attention to the even more important point that Malloy’s entire teacher evaluation system is a farce and insult to the notion of creating better schools and ensuring that our state’s children are provided with the educational opportunities they need and deserve.

Calling Darer Littman’s piece a “must read” is truly an understatement.


Connecticut’s teacher evaluation plan – even worse than we thought (by Wendy Lecker)

When you take a break from digging out from the “Great Blizzard of 2013,” I strongly urge you to take a moment today to read Hearst Newspapers columnist and fellow blogger Wendy Lecker’s latest commentary piece entitled “Connecticut’s teacher evaluation plan – even worse than we thought.”

Wendy’s article is the clearest description to date of the dishonest, disastrous and counter-productive evaluation system that Governor Malloy, Malloy’s Commissioner of Education, Stefan Pryor, and his State Board of Education are trying to foist upon the teachers, administrators, students and parents who are part of Connecticut’s public education system.

The waste of time, energy and money associated with this abomination is staggering.

Even in a time of unlimited public resources, the program that Malloy and his Department of Education are pushing would be inappropriate, but now, as Connecticut continues to struggle through the greatest economic troubles of our times, their plan is nothing short of a criminal waste of taxpayer funds.

As Wendy Lecker writes:

“It turns out the state’s proposed teacher evaluation program is far worse than I originally believed it to be.

Connecticut’s plan involves using “indicators of student growth” to form 22.5 percent of an evaluation. For grades and subjects covered by the CMTs or CAPTs, teachers must use those scores as a basis for their evaluation.

In my previous columns, I wrongly assumed that Connecticut would use the unreliable “value-added” model (VAM) as the foundation of this 22.5 percent. However, it has come to light that Connecticut’s model is much worse.

The value-added model would be bad enough. VAM is a flawed attempt to isolate the teacher effect on a student’s test scores. We have all heard that a teacher is “the most important in-school influence on students.”

There is no denying that teachers have a profound influence on students’ ways of thinking, their emotional development and other crucial aspects of children’s intellectual growth that cannot be measured on standardized tests. However, those who trumpet this claim refer to a teacher’s influence on a student’s test scores.

But decades of evidence prove that out-of-school factors account for the vast majority of a student’s test scores. Even those claiming teachers’ outsize influence on test scores only attribute 7.5 percent to 8.5 percent of a test score to variation in teacher quality.

Therefore, VAM’s goal is to tease out that 8.5 percent. As I have previously shown, a large and growing body of evidence proves that VAM fails at this task. VAM ratings based on test scores have a 50 percent misclassification rating, with a variance based on the test, the year, the class and even the statistical model used. It is dangerous to use this measure for even 22.5 percent of a rating because it is so unstable. Because it varies so wildly, the test-score-based rating will become the tipping point in most evaluations, despite its small percentage. Moreover, being a so-called hard number, it will inevitably be the main focus of evaluations.

Apparently, in thinking that state education officials would use VAM, I was giving them too much credit.

Connecticut is not using VAM. Instead, Connecticut is using something much worse: a “student growth” model.

Here is how it works. At the beginning of the year, a teacher in a subject covered by the CMTs or CAPTs chooses a goal. It can be that X percent of the class will move from proficiency to goal. Or, it can be that the average vertical-scale score of the class will increase by X percent. (Recall from an earlier column that vertical-scale scores basically only measure whether a child is a good test-taker.) Testing experts use statistical models to predict test-score increases. Teachers, I guess, are supposed to use their intuition — about children they have just met. Then, the teacher will be evaluated on whether she meets that goal.

Let us put aside the lunacy of having a teacher predict score increases and focus on Connecticut’s model. Unlike VAM, which tries and fails to isolate teacher effect, “student growth models” do not even attempt to isolate that 8 percent. There is no mechanism in Connecticut’s system that even tries to distinguish between all the factors affecting student test scores and the one factor upon which a teacher’s job will depend.”

Lecker provides even more details in her latest commentary piece.

In the coming weeks we’ll dig even deeper into this absurd plan, but if you want a basic primer on how the education reformers are wasting our tax dollars, undermining the teaching profession and destroying our public schools, I urge you to start by reading – and then re-reading – Wendy Lecker’s great piece.

Wendy Lecker: Connecticut’s teacher evaluation plan – even worse than we thought


Evaluate this…

Last Tuesday the Connecticut State Department of Education held a meeting to outline the new Teacher Evaluation Program that will be tested in 16 towns this year.  The plan is then to expand the evaluation process to every district in the state.

According to a statement issued by Governor Dannel Malloy, the State Board of Education’s approval of the new teacher evaluation system was “a significant step forward in the implementation of our education reform program. We look forward to the upcoming pilot of the new system.”

Now, Malloy’s Commissioner of Education, Stefan Pryor, and his new administrative team are rushing to put the evaluation system in place.  Heading up the effort for Pryor is the new “interim chief talent officer for the state Department of Education.”  You know the “corporate reformers” have taken over when they start creating titles like “interim chief talent officer.”  The title alone warrants a six figure salary.

Local education officials have correctly raised concerns that the rush to put the new evaluation system in place means there will be insufficient time to develop a proper plan and implementation process.

But as usual, the Commissioner of Education has turned a deaf ear. Having never run a classroom, let alone a school or school system, he seems unconcerned about what layering an evaluation system on top of a day-to-day education program will mean.

In addition to a lack of time, state officials are also overlooking the lack of resources to pay for this massive experiment.

As to those financial implications, the best line of the day goes to Joe Cirasuolo, the executive director of the state’s superintendents association.

Cirasuolo recently said, “We don’t have enough administrative personnel to carry this out [statewide]. We are going to be laying off teachers to carry out these evaluations.”   Cirasuolo was one of the strongest supporters of Governor Malloy’s “education reform” plan.

Readers may recall that after writing that I believed that the associations representing the superintendents and boards of education were doing their members a tremendous disservice supporting Malloy’s bill, Cirasuolo was so incensed that he sent out a number of emails attacking my comments.

Odd that now, after the damage is done, he has the gall to note that the lack of funding means that, as a result of Malloy’s bill, there will be fewer teachers to educate our children and higher taxes paid by Connecticut’s middle class.

Meanwhile, I’m still waiting for Commissioner Pryor to answer what I consider the most fundamental question of all, but alas another month has gone by without an answer.

Considering that standardized test scores are driven primarily by poverty, language barriers and special education needs, the question is: how do you compare the following situations?

Teacher A has no change in their CMT scores from year 1 to year 2. Teacher A works in a suburban classroom where there is virtually no poverty, there are no language barriers, and individualized special education plans (IEPs) are properly implemented. More than 8 in 10 of Teacher A’s students are at goal. The number of students in Teacher A’s class remains at 18.

Teacher B’s CMT scores go up 5%.  Teacher B works in an urban classroom where most students are poor and minority, but students have few language barriers. Teacher B’s class size drops from 29 to 27 students.

Teacher C’s CMT scores drop by 1%. Teacher C works in an urban classroom where most students are poor and minority and more than 40% of the students go home to households that don’t speak English. Teacher C’s class goes from 28 to 32 students. The extra four students in Teacher C’s classroom are all non-English speaking.

When it comes to using standardized test scores to measure teacher performance, which teacher did better, Teacher A or Teacher B or Teacher C?

Commissioner Pryor, please just answer the question…

CMT Scores and Teacher Evaluations – But Wait – That’s Like Comparing Apples and Tomatoes

Governor Malloy, Education Commissioner Stefan Pryor and the rest of the “education reformers” continue to claim that Connecticut needs a “one-size-fits-all” approach to teacher evaluations in which teachers are, at least in part, rewarded, promoted or let go based on how well their students do in Connecticut’s standardized tests.

Malloy, now famous for his “I don’t mind if they teach to the test as long as the test scores go up” statement, has been leading the mob mentality claiming that it is imperative that 20-40% of a teacher’s annual evaluation be based on their students’ annual test scores.

What is never articulated is what counts as a “good” or a “bad” change in test scores.

Think of the following example as if it were a question on a standardized test:

Teacher A is a 4th grade teacher in New Britain.  This year, 25.7 percent of teacher A’s class scored “at goal” on the Connecticut Mastery Test in reading (up from 22 percent last year).

Teacher B is a 4th grade teacher in Hamden.  This year, 56 percent of teacher B’s class scored “at goal” in the CMT in reading (up from 54 percent last year).

Teacher C is a 4th grade teacher in Fairfield.  This year 78 percent of teacher C’s class scored “at goal” in the CMT in reading (down from 79 percent last year).

Presently, in New Britain, 22 percent of 4th graders are at goal in reading and 27 percent are at goal in math.  On the other hand, in Hamden, where poverty and language barriers are not as great as in New Britain, 54 percent of 4th graders are at goal in reading and 58 percent are at goal in math. Finally, in Fairfield, 79 percent of 4th graders are at goal in reading and 84 percent are at goal in Math.

Do any of the three teachers deserve a merit bonus?  Do any of the teachers need some extra professional development support? Do any of the teachers need to be put on the “watch list” for unsatisfactory performance?

If the number of students testing at goal is going up – is that a sign of the teacher’s success? If a teacher maintains test scores, is that good or bad? What about a teacher who sees the number of students testing at goal actually drop?

In this case, the New Britain teacher saw roughly a 17 percent relative increase in the number of students testing at goal, the Hamden teacher roughly a 4 percent increase and the Fairfield teacher a slight decline. Which teacher is succeeding? Which is failing?

Of course, without knowing the total number of students taking the test in each class we can’t even be sure the information is statistically significant.  It may be that in all three situations the change is within the standard margin of error and therefore no conclusion can be reached in any of the cases.
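The margin-of-error point can be made concrete. A rough sketch, assuming a hypothetical class of 20 test-takers (the article gives no class sizes) and treating the share of students at goal as a simple binomial proportion:

```python
import math

def pct_point_change(last_year: float, this_year: float) -> float:
    """Change in percentage points (e.g. 22% -> 25.7% is +3.7 points)."""
    return this_year - last_year

def relative_change(last_year: float, this_year: float) -> float:
    """Relative change in percent (e.g. 22% -> 25.7% is about +16.8%)."""
    return (this_year - last_year) / last_year * 100

def margin_of_error(p_at_goal: float, n_students: int) -> float:
    """One standard error for a binomial proportion, in percentage points."""
    p = p_at_goal / 100
    return math.sqrt(p * (1 - p) / n_students) * 100

# New Britain example from the article: 22% -> 25.7% at goal.
# The class size of 20 is an assumption, not a figure from the article.
change = pct_point_change(22.0, 25.7)   # +3.7 percentage points
moe = margin_of_error(22.0, 20)         # roughly +/- 9.3 points at one SE
print(round(change, 1), round(moe, 1), change < moe)
```

With only 20 test-takers, the one-standard-error band of roughly nine percentage points dwarfs the observed 3.7-point gain, so no conclusion about the teacher can be drawn from it.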

Meanwhile, what would we do if one 4th grade class in Meriden has a 7 percent drop in the number of Latino students and sees a 5 percent increase in the number of students at CMT goal, while the same sized class in an elementary school across town has a 10 percent increase in the number of Latino students and the number at goal in that class drops by 2 percent?  Which teacher has done a better job?

These are very real issues.  In New York, the failure to account for these issues has destroyed the entire credibility of their new teacher evaluation effort.

But don’t let the details stand in the way of progress.

While the “reformers” continue to yap about the need to link test scores to teacher evaluations, they still haven’t begun to articulate how including standardized test scores in teacher evaluations is going to help determine who gets a bonus in pay or who gets punished.

What is clear is that among those who profess to know that attaching test scores to teacher evaluations is definitely the way to go is the Connecticut Association of Public School Superintendents.

The spokesperson for the Superintendents has repeatedly joined in Malloy’s claim that teachers must be held accountable for their students’ standardized test scores — despite the fact that test scores are driven by a wide range of factors far beyond the teachers’ control.

The logically absurd claims being made by the superintendents, and other reformers, got me wondering about how they could consistently get away with comparing apples and tomatoes without ever admitting that the comparison is fundamentally bogus.

Then again, maybe they are on to something…

These superintendents are paid big bucks to run their local school systems.

True, they are supporting legislation that undermines the rights of their own boards of education, local elected officials and taxpayers, but they must know what they are doing. They are all certified to be superintendents (well, all but one).

So, perhaps this whole apple vs. tomato approach might also serve as a useful mechanism to judge the effectiveness of Connecticut’s superintendents.

Let’s look at the data.

The following chart shows what taxpayers are getting for the money they pay superintendents.

The data measures the superintendents’ cost per student, their cost per school employee, their cost per poor student (that is, students who receive free or subsidized lunches) and their cost per student who doesn’t speak English.

We might say that it is a good way to determine how superintendents are allocating their resources.  There will certainly be differences from town to town, but the fundamental cost per unit has to be somewhat similar, right?

Like mastery tests, these costs per unit measurements will provide the state (and taxpayers) with an opportunity to determine which superintendents are doing well and should be rewarded for their efficient operation of services, which need a dose of professional development to help them get a hold of their financial operation and which need to be removed for their failure to get their job done correctly.

The data tells us:

  • A superintendent’s cost per student ranges from a low of $9 in Waterbury and $11 in Hartford to a high of $85 in Weston and $62 in Wilton.
  • A superintendent’s cost per school employee ranges from $60 in Waterbury and $73 in Hartford to $537 in Wilton and $463 in Brookfield.
  • A superintendent’s cost per low-income student ranges from $11 in Hartford and $12 in Bridgeport to $6,396 in Weston and $5,831 in Wilton.
  • And a superintendent’s cost per non-English-speaking student ranges from $61 in Hartford and $87 in Waterbury to $16,071 in Darien and $13,591 in Weston.
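The per-unit figures above are just a superintendent’s salary divided by the relevant headcount. A minimal sketch with entirely made-up inputs; none of the numbers below come from the article’s data:

```python
# Sketch of the cost-per-unit metrics described above.
# All inputs are hypothetical; the article's actual salaries and
# enrollment counts are not reproduced here.
def per_unit(salary: float, count: int) -> int:
    """Superintendent salary divided by a headcount, rounded to dollars."""
    return round(salary / count)

salary = 240_000          # hypothetical annual superintendent salary
students = 18_000         # hypothetical district enrollment
employees = 2_400         # hypothetical school-system headcount
low_income = 12_000       # hypothetical free/reduced-lunch students
non_english = 3_000       # hypothetical non-English-speaking students

print(per_unit(salary, students))     # cost per student
print(per_unit(salary, employees))    # cost per employee
print(per_unit(salary, low_income))   # cost per low-income student
print(per_unit(salary, non_english))  # cost per non-English speaker
```

The same salary divided by a headcount of only a few dozen low-income or non-English-speaking students balloons into thousands of dollars per student, which is exactly the apples-and-tomatoes distortion the comparison is meant to expose.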
[Table: superintendent compensation by town, with columns for Annual Salary, Salary per Student, Salary per Employee, Salary per Low-Income Student and Salary per non-English-Speaking Student. The town-by-town figures did not survive in this copy; only a few row labels remain: New Fairfield, New Haven, New London, New Milford, Region #15 (Southbury, Middlebury), West Hartford and West Haven.]
Now, while it is true that all this may be comparing apples and tomatoes, certainly there is validity in the saying that what’s good for the goose should be good for the gander.

If teacher evaluations are going to be dependent, at least in part, on standardized test scores, then certainly superintendent evaluation should be dependent, at least in part, on how well they do handling standardized per unit expenditures.

Let’s face it, are you really telling me that the Darien superintendent should be spending $16,000 for a student who doesn’t know English when Hartford is only spending $61?