Yesterday, at the summer hearing of the Advisory Committee on Student Financial Assistance about PIRS (the proposed rating system), there was plenty of talk about some kind of adjustment for inputs, or weighting based on the types of students enrolled. As I heard things, there are four positions on this topic as it relates to the use of PIRS as a consumer information product and as an accountability tool.
1) Everything must be input-adjusted for fairness (both for consumer information and accountability).
2) Input-adjustments are only appropriate for accountability.
3) Consumers need to see non-adjusted numbers, particularly graduation rates, to know their likelihood of finishing.
4) Institutions that serve predominantly low-income, under-prepared students (or a disproportionate share of such students – I guess they all feel entitled to a righteous share of smart, rich students) are doomed to fail with a significant number of these students.
The fourth point just makes my teeth ache. Part of me wants to scream out in public, “If you don’t feel you can be successful with these students, quit taking their money and giving them false hope. Get out of this business.” I know that is somewhat unfair. Also, I believe that a certain amount of failure should be allowed and expected, especially in the name of providing opportunity. Further, each student does have to do the work and make an effort – but I believe that most want to do so. Still, publicly stating that at some point your institution simply won’t be able to do any better (especially if that point is short of 100%) strikes me as conceding the battle before fully engaging.
There is so much ongoing effort and research focused on improving student outcomes that it is hard for me to believe we won’t someday reach the point where every student who wants to succeed can do so.
As you might surmise, I disagree with point one. I can live with the concept of input-adjustment for accountability, especially given differences in public support and student/family wealth. But providing students with input-adjusted scores that attempt to level the comparison between VSU and UVa doesn’t make sense to me. They are radically different institutions with different mixes of students, faculty, and programs. And costs.
I’m also not a big fan of comparisons in general. They are overly simple for big decisions and too easily misleading. At SCHEV, our institution profiles are designed to avoid the comparison trap and ignore the concept of input-adjustment. We do provide graduation rate data (a variety of measures on the “Grad Rates” tab) on a scale anchored by the sector’s lowest and highest values in the state.
Likewise, when we released the mid-career wage reports this week, we created them only at the state level. While there might have been more interest in comparing institutions, we think policy discussions deserve something more.
However, the U.S. News & World Report Best Colleges rankings get 2,500 (or more!) page views for every page view these reports get*. The PayScale Mid-Career Rankings have also gotten far more coverage. I think this is a pretty strong values statement from the higher ed community: despite what the faculty and faculty-researchers say and teach, the great bulk of the community wants rankings and comparisons.
*What, you think I don’t know that non-higher-ed people look at the rankings? Of course they do. But given the number of colleges and universities ranked, the number of administrators at each, and the number of journalists writing stories about rankings, it doesn’t take long to get to a half-million page views in a day.
So, quit whining about input-adjustments and focus on becoming exceptional at teaching and graduating students. Quit whining about government ratings if you are going to keep feeding the economic engine that saved U.S. News & World Report.
We are going to fail with some students. We don’t have to fail with most, which some institutions manage to do.