The Gainful Employment Ratings System?

On my solo road-trip to Orlando today for AIR Forum 2014, it occurred to me that people are still not thinking about the proposed Postsecondary Institution Ratings System (PIRS) and gainful employment (GE) the way I am.

I am not convinced that PIRS is a real thing. Certainly the Department has spent a lot of time and effort to create something, including the illusion that PIRS is real.

Why? Because both GE and PIRS attempt to do the same thing – eliminate bad actors and low-performing programs/institutions from Title IV eligibility. Since we don’t have details on PIRS, the only difference we can really point to is that GE is about disqualifying programs and PIRS is about disqualifying institutions.

So, if you are a member of Congress or congressional staff working on the reauthorization of the Higher Education Act, are you going to pick one or both? Given the nature of the lobbying and letters of support and opposition you might receive, would you consider combining the two? Especially in light of the argument that GE should apply to all programs.

If you were a lobbyist and became convinced that something would be done, would you grudgingly support program-level effects over institution-level ones? Especially if student preparation is factored into the mix along with measures other than earnings and debt repayment?

Do policy-makers and law-makers want two different consumer ratings systems that accomplish essentially the same thing? To me, two ratings systems seem to add more confusion than help for students and families.

The cynic in me wonders if PIRS is simply a way to justify GE for all programs as a reasonable compromise.


The damage of relying on federal data

The IPEDS graduation rates are based on the following formula:

Number of graduates from Cohort A within x years / (Number in Cohort A − approved exclusions [Peace Corps, AmeriCorps, military service, religious mission, death, etc.])

Where Cohort A is all first-time students who entered in the fall of a given year (including those whose enrollment began in the preceding summer) and were initially enrolled full-time.

The calculation is performed with x equal to four, five, and six years post-entry for students pursuing four-year degrees, and two and three years for students at two-year colleges.

There is no magic here, simply a set of assumptions about normal expectations: full-time students are likely to remain full-time, their initial enrollment signals their academic plans, and they should therefore complete in four years, with additional years “allowed” for incidental delays along the way.
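To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. The Student fields and the sample cohort are hypothetical illustrations of the formula above, not the actual IPEDS survey format.

```python
# A minimal sketch of the IPEDS-style graduation-rate arithmetic described above.
# The Student fields and the sample cohort are hypothetical, not the IPEDS format.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Student:
    years_to_degree: Optional[int]    # None if the student has not (yet) graduated
    approved_exclusion: bool = False  # Peace Corps, AmeriCorps, military, religious mission, death, etc.

def grad_rate(cohort, x):
    """Graduates within x years / (cohort size minus approved exclusions)."""
    base = [s for s in cohort if not s.approved_exclusion]
    grads = [s for s in base if s.years_to_degree is not None and s.years_to_degree <= x]
    return len(grads) / len(base) if base else 0.0

# Hypothetical fall cohort of five first-time, full-time students at a four-year institution.
cohort_a = [
    Student(4),                              # graduated in four years
    Student(5),                              # graduated in five years
    Student(6),                              # graduated in six years
    Student(None),                           # has not graduated
    Student(None, approved_exclusion=True),  # excluded (e.g., military service)
]

for x in (4, 5, 6):
    print(f"{x}-year rate: {grad_rate(cohort_a, x):.0%}")
# 4-year rate: 25%, 5-year rate: 50%, 6-year rate: 75%
```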

Institutions, especially community colleges, frequently complain that this measure is imperfect.

It seems that most fail to realize that just because the law requires Title IV-participating institutions to report these figures to USED each year, THEY DON’T HAVE TO STOP THERE.

If an institution wants to report graduation rates for part-time students, transfer students, or star-bellied Sneetches and Sneetches without stars, they are completely free to do so.

And they should.

Most don’t, I guess because they can’t compare themselves to anyone else, such as their self-defined peer groups. Isn’t this insane? Does it really matter how one institution performs against another when an increase in the metric over time is clearly the desired change? Institutions should be focused on improving student success and understanding who doesn’t graduate on time, and why. From there they can work to better support students and address structures to improve their success.

It simply isn’t that difficult. Comparisons and benchmarking have stopped institutional development, or at least slowed it, for institutions that rely on federal data. The only way to change that is to change the data available – with something like the Student Right to Know Before You Go Act. States can also get involved, as we do in Virginia.