Duncan doesn’t get it. Apparently the president does not get it either, if this is all coming from the top.
Duncan did ask the ratings system’s many critics to reserve judgment until they actually knew what it would look like.
“This system that people are reacting against doesn’t exist yet,” he said. “Tell us what you like; help us do this.… Let’s not be against something that has not been born yet.”
One of the many things the Administration does not get is that some of us are objecting to the ratings system because the currently available data are inadequate. Completely inadequate. Bloody embarrassingly inadequate. These data, for the most part, are the same data that brought us to the current state of affairs.
Asked by Leonhardt to respond to the criticism that the government can’t rate colleges intelligently and effectively, Duncan reiterated that department officials know they have a hard job ahead. “We’re going into this with a huge sense of humility,” he said, and recognize that “intellectually it is difficult.”
Bullshit. Humility is not based on ignoring the advice you receive and doing it anyway.
“I say to the Department this: We need better data. Let me rephrase that. YOU need better data. This should be the Department’s first priority.”
-Tod Massa (me), PIRS Technical Symposium, Feb 6, 2014.
The Department called together 19 experts in this arena to advise on the proposal. Perhaps it was just a show to support the pretense of listening and engagement. That’s fine, we all do that (every time we nod when our spouse is talking to feign involvement, for example). However, since the lack of humility irritates me, I am choosing to assume they wanted advice, but are unwilling to accept it. (It is too much like a smug, snotty adolescent saying “Uh-huh, I know” when you are trying to convey important information to him. I already have one of those in the house and another en route.) By the way, push-back from those you intend to rate does not prove the rightness of your cause.
At least the interview demonstrates that the talking points, and perhaps the structure of the ratings system itself, are changing.
“Are you increasing your six-year graduation rate, or are you not?” he said. “Are you taking more Pell Grant recipients [than you used to] or are you not?” Both of those metrics, if they were to end up as part of the rating system, would hold institutions responsible for improving their performance, not for meeting some minimum standard that would require the government to compare institutions that admit very different types of students.
This is a step in the right direction, perhaps the second-best approach. Encouraging a constant cycle of improvement is a good thing. Perhaps they will look to states that have experience with accountability measures of this type, you know, like Virginia. One of the things we know on this topic is that there are annual variations that are little more than statistical noise, especially at smaller institutions. When such accountability measures are first introduced, it takes a number of years before institutional policy and operational decisions catch up. Thus, improvement on all measures every year is unlikely – unless the measures are weak to begin with. Remember, the key decisions about the graduation rates of the entering cohort of 2014 have already been made – who was admitted, how much aid they received, the institutional budget, and the student support programs that will be in place. The decisions impacting the next available graduation rates, those of the 2008-09 cohort, have long been audited and stashed away in a dusty mausoleum.
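The noise claim is easy to quantify. Treating an observed graduation rate as a binomial proportion, the year-to-year sampling error for an entering cohort of n students with true rate p is roughly sqrt(p(1-p)/n). A minimal sketch of that arithmetic (the cohort sizes and the 60% rate are illustrative, not Department data):

```python
import math

def grad_rate_stderr(p, n):
    """Standard error of an observed graduation rate when the true
    rate is p and the entering cohort has n students (binomial model)."""
    return math.sqrt(p * (1 - p) / n)

# Illustrative cohorts: a small college vs. a large university,
# both with an assumed true six-year graduation rate of 60%.
for label, n in [("small college", 250), ("large university", 5000)]:
    se = grad_rate_stderr(0.60, n)
    print(f"{label}: n={n}, stderr = {se:.3f} "
          f"(~±{1.96 * se:.1%} at 95% confidence)")
```

Under these assumptions the small college’s rate can swing roughly ±6 points from one cohort to the next by chance alone, while the large university’s swings stay near ±1.4 points. Rewarding or punishing the small school for a one-year move inside that band is rating noise, not performance.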
Another issue that arises is “the pool of possibles.” Is it reasonable to expect every institution to increase its proportion of Pell-eligible students? Probably not, at least not after some critical proportion of overall enrollment is reached, without dramatically expanding the definition of Pell-eligibility. Over time a lot of students will wind up being shifted among institutions. Probably not before the next re-authorization of the Higher Ed Act, but I don’t know. It will depend on institutional reaction to the ratings and Congressional action to tie them to student financial aid. Anyhow, at some point one ends up creating a floor value to say, “If an institution is at or above this point, improvement on this measure is not required.”
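That floor rule can be stated precisely. A hedged sketch of the logic, with an invented 40% threshold purely for illustration:

```python
def meets_rating_bar(current, previous, floor):
    """Sketch of the floor rule described above: an institution passes
    the measure if it is at or above the floor, or if it improved over
    the prior year. The function name and threshold are illustrative,
    not anything the Department has proposed."""
    return current >= floor or current > previous

# Hypothetical 40% floor on Pell enrollment share.
print(meets_rating_bar(0.45, 0.44, 0.40))  # above the floor: passes
print(meets_rating_bar(0.30, 0.28, 0.40))  # below, but improving: passes
print(meets_rating_bar(0.30, 0.32, 0.40))  # below and declining: fails
```

The point of the floor is the first branch: once an institution is already doing its share, demanding perpetual growth on the measure becomes arithmetic nonsense.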
When we start talking about improvement on multiple measures, we get this argument: “If we enroll more [under-represented students, students with Pell], our graduation rates will go down.” This is a complete and utter bullshit argument that I have heard over and over again. We go back to 2008 on this, when I pushed the board to adopt as a performance measure the comparison of graduation rates of Pell students against students without Pell but with other aid, and against students without any aid.
There is absolutely no necessity for institutional graduation rates to decrease. “But Tod,” I heard, “we know students that are Pell-eligible are less academically qualified most of the time.”
“What do you do about that?”
“Therein lies the problem.”
That was a fun conference call. This experience, and more like it, is why I believe a ratings system is best built around comparisons of internal rates at each institution. There is no legitimate or moral reason why family income should be a primary predictor of student success in college. We know that disadvantaged students have different needs than wealthy students, so let’s meet those needs.
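In miniature, an internal-rate comparison could score each institution on the gap between its own subgroup graduation rates, so that income stops being the predictor. The subgroup names and rates below are invented for illustration, not a proposed formula:

```python
def completion_gap(subgroup_rates):
    """Gap between the best- and worst-served subgroups at ONE
    institution. A small gap means aid status is not predicting who
    graduates; the comparison never leaves the institution, so schools
    with very different student bodies are not ranked against each other."""
    return max(subgroup_rates.values()) - min(subgroup_rates.values())

# Invented illustrative rates for a single institution.
institution = {"pell": 0.52, "other_aid_no_pell": 0.61, "no_aid": 0.64}
print(f"internal completion gap: {completion_gap(institution):.2f}")
```

A school that closes its own gap gets credit regardless of where its absolute rate sits, which is exactly the behavior the measure should reward.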
I am glad to see an evolution in thinking on the ratings system. As long as the Department plans to release a draft based on existing data this fall (an ambiguous time frame to begin with – fall semester or seasonal fall? I have seen both “in the fall” and “by fall,” the latter implying before), I will continue to oppose and agitate against the current plan. I continue to be willing to help and engage with the Department, but my phone lines and multiple email accounts remain strangely silent.