A Response to Schuman and Warner

I love all the coverage that the proposed Postsecondary Institutional Ratings System (PIRS, #PIRS) is getting these days. Rebecca Schuman over at Slate has written a nuanced support of the plan here. John Warner, over at Inside Higher Ed, has written an opposing viewpoint to Schuman’s. Warner wrote the previous day about Jamienne Studley’s unfortunate comparison of colleges and blenders.

Both are well worth the read.

I have neither the following nor the writing skills of either Schuman or Warner, but that has never stopped me from expressing my opinion. Nor will it now.

Both authors are right and wrong.

First off, while I am glad everyone is having such fun with the blender comment, where were you months ago when the comment was made and reported in Politico and elsewhere? Those of us in the higher ed data world have been shuddering for months about her use of Cook's Illustrated as a model for PIRS. The resurgence of the comment and the announcement of the first delay in the ratings have been amusing to watch.

Warner thinks the ratings will empower the already powerful on campus by giving presidents even greater leverage for their policies. Absolutely. With the data currently available to USED, any thought of nuanced, targeted approaches to improving student outcomes will go right out the window. There will be more sledgehammer approaches to institutional policies, especially as institutions try to ensure they are in the same rating as their peers.

Warner also suggests that new deanlets will be created to collect and manage all the new data required. Maybe eventually, but that depends on what happens with reauthorization of the Higher Education Act (HEA). If the unit record ban is lifted, and something like the Student Right to Know Before You Go Act is passed, most institutions could experience a reduction in burden. In the near term, USED still has to get OMB clearance to expand collections, which is subject to burden review. Unfortunately, reporting burden is going to increase anyway, with or without the ratings system, because, well, just because. There is always more data to collect, and lots of organizations asking USED to collect more, and at some point, with enough increases, the institutions will demand to report student-level data because it will be easier and less burdensome. (Something like 45 states have unit record collections, with about 90 different collectors. SC public institutions report student-level data to the state. Sending a similar file to USED would cost less than the current IPEDS submissions.)

Schuman suggests that it "is time to come at higher education with a sledgehammer." I tend to agree, but it depends on who is swinging the sledgehammer. The problem with the way higher education and USED have traditionally approached ratings, rankings, benchmarking, and the like, is through the use of direct institutional comparisons and peer comparisons. This is the kind of madness that has led us to today. Driving the bus by looking at the other buses is just silly. As I argued in my presentation at the PIRS Technical Symposium, the proper comparisons are intra-institutional. Rather than worry about how institution A compares to institution B on graduation rates, let's focus instead on the difference in graduation rates between Pell recipients and non-recipients, and encourage policies to bring those numbers in line with each other, thus increasing graduation rates across the board.

As for Schuman's suggestion about ratings including values such as the percentage of courses taught by full-time, tenure-track instructors, fine. Just be warned that, using the existing data for community colleges, a number of research projects have found no direct correlation between community college graduation rates and either the number or ratio of full-time TT faculty. So, depending on the biases of the folks in the department, such a rating component might do more to support the status quo. Further, state lawmakers may well push back against ratings that use such components because they are inherent cost drivers for higher education funding.

Which is also part of the reason we are here. Not everyone wants to pay what it costs to support higher education.

I am glad that people are talking a lot more about PIRS. I think a good ratings system can be built, just not with the existing data nor the traditional mindset towards evaluation of higher education. We in Virginia know far more about outcomes of students in Title IV aid programs than USED does – and that is only an off-shoot of our other work. If PIRS is done badly, it will empower presidents to have more of their way on campus and perhaps further damage the concept of shared governance.

The most important thing to keep in mind is that all of this is taking place under the umbrella of reauthorization of the HEA, with the added context of Gainful Employment. Whatever happens will be with us for years and, if historical trends hold, federal control on campus will be more intrusive. This may not be bad, but it will not be easy.
