Five Stages of Education Policy

Anger. Denial. Bargaining. Depression. Acceptance.

Yeah, I know. It is overdone. It is almost trite. But golly, it works so well for so many things besides death and dying. Marriage is a good example, at least according to M. Scott Peck in the “Golf of Your Dreams.”

So, does it apply to education policy? Yep.

First, someone gets angry about something. “This metric doesn’t mean anything to me. Compare it to someone else. Yep, it’s low. Shameful. We have to fix this.”

“Nope. No way. You got it wrong. Your metric is in error. You are comparing us to the wrong thing.” 

“Tell you what. Let’s measure it this way. We deserve to get credit for these students. Sure, I understand it is against the intent of the measure, but this is really only a fraction of our students.” “You know, everyone else is doing it this way.”

Sigh. Heavy Sigh. Bitch. Gripe. Sigh. Bitch. Gripe.

“Fine. We’ll do it this way. It’s just that the measures aren’t really good enough and the financial rewards aren’t really large enough. This is really an ineffective way to fix education, but it is a good compromise that won’t really hurt anything.”

Of course, that last bit is what frustrates me most. Consensus policy development rarely seems to lead to something useful. Even when it appears that it might, the leadership of the institutions and organizations involved generally spends a lot of energy neutralizing it through back channels or by gaming the metrics.

With yesterday’s announcement by Senator Lamar Alexander (R-TN) that he will push an amendment to block PIRS, joining the effort of Representative Bob Goodlatte (R-VA), it seems possible PIRS will soon be a dead issue. That is, if Congress can actually come together and pass something. You know, like a budget.

PIRS could be a useful tool. The President and the Department blew it, though, by promising a draft based on existing data. You can boil my very long presentation on PIRS down to a single sentence: “How dare you rate institutions when the data you have does not compare at all to what we have in Virginia, and we don’t rate institutions.”

If the Department had presented a well-crafted model representing a theory of what institutional performance should be, along with a plan for developing the necessary data, I suspect the outcome would have been much different. Sure, I and others would have criticized both aspects, but we would also have been more likely to offer constructive criticism to improve it. Endeavors such as this need, and deserve, a model to inform development, not a bunch of data forced to fit a model that seems to look right. And the latter is what the Department is doing.

It is kind of like the difference between “All the news that’s fit to print” and “All the news that fits.”

There might be time for the President and Department to save the ratings system – if they are willing to say “We took the wrong approach. Let’s start over.”

No blenders this time.