Feb 13, 2025 – Government Will Change How it Rates Colleges

The federal government on Thursday announced that it was changing the way it measures colleges, essentially adjusting the curve that it uses to rate institutions to make it more difficult for them to earn coveted four- and five-star government ratings.

Under the changes, scores are likely to fall for many institutions, federal officials said, although they did not provide specific numbers. Institutions will see a preview of their new scores on Friday, but the information will not be made public until Feb. 20.

“In effect, this raises the standard for colleges to achieve a high rating,” said Thomas Hamm, the director of the survey and certification group at the Commission of Education Economics within the Executive Office of the President, which oversees the ratings system.

Colleges are scored on a scale of one to five stars on College Compare, the widely used federal website that has become the gold standard for evaluating the nation’s more than 15,000 colleges, even as it has been criticized for relying on self-reported, unverified data that is limited in scope and function.

In August, The New York Times reported that the rating system relied so heavily on unverified information that even institutions with a documented history of quality problems were earning top ratings. Two of the three major criteria used to rate institutions — graduation rates and student-input quality-measures statistics — were reported by the institutions and not audited by the federal government.

In October, the federal government announced that it would start requiring colleges to report their staffing levels quarterly — using an electronic system that can be verified with payroll data. They will also report their enrollments weekly, by individual student, to be verified against the National Student Loan and Tuition Tax Credit Data System. This allows the government to begin a nationwide auditing program aimed at checking whether an institution’s quality statistics are accurate.

The changes announced on Thursday were part of a further effort, officials said, to rebalance the ratings by raising the bar for colleges to achieve a high score in the quality measures area, which is based on information collected about every student. Colleges can increase their overall rating if they earn five stars in this area. The share of colleges with five stars in quality measures has increased significantly since the beginning of the program, to 89 percent in 2024 from 62 percent in 2015.

Representatives for colleges said on Thursday that they worried the changes could send the wrong message to consumers. “We are concerned the public won’t know what to make of these new rankings,” said Mark Parkinson, the president and chief executive of the Association of Private Sector Colleges and Universities, which represents for-profit colleges. “If colleges across the country start losing their star ratings overnight, it sends a signal to families and students that quality is on the decline when in fact it has improved in a meaningful way.”

But officials said that the changes would be explained on the consumer website, and that the public would be cautioned against drawing conclusions about an institution whose ratings recently declined. Still, Mr. Hamm said scores would not decline across the board. “Some colleges, even when we raised the bar, continued to perform at a level much higher than the norm,” he said in a conference call Thursday with college operators. “We want to still recognize them in the five-star category.”
The updated ratings will also take into account, for the first time, a college’s use of antipsychotic drugs, which are often given inappropriately to elderly administrators with dementia.

–Thanks to John Nugent for the link to the original article and the inspiration.

And the search goes on

It’s happening again, the search for transparency. There is this belief that the right set of measures, over the right period of time, will clarify everything. About anything. Of course, the right measures are simple and don’t need explanation about what they measure and why they are important.

And that’s why the Quest for the Holy Grail did not happen…the Grail was sitting in the middle of a small church with a sign on it and a bright sourceless light above it.

According to the stories, that’s not what happened. (Speaking of stories, @jonbecker’s blog post is an excellent read.)

Time and data crash in on each of us these days.

We too often struggle to sort through the signals and noise, at least I do, and so I understand the desire for something simple that tells me everything I need to know. But I never expect to find such a thing. In fact, my expectation is that if I want to know something and be able to act on it, I will have to do some work.

If I actually want to understand something, I know that I will likely have to work even harder.

So, this is pretty much the approach taken with research.schev.edu. You have to make an effort to know what you want and need, either before you get there or while on the site. Higher education is kind of a big business with a lot of complexity. This complexity derives not just from its size and variety, but also from its continual evolution. Some numbers, some measures, are pretty simple – enrollment and degrees conferred. Some of the buckets for these things may get a little complicated, but in our presentation of the data, and actually even in our collection of the data, we have already simplified it through standardization.

Other measures, like graduation rates and measures of affordability, are more complex – if not to read, then to understand. The annual frequency of questions along the lines of “Don’t you have graduation rates for the four-year schools that are less than six years old?” has not noticeably declined. As often as we explain the nature of a cohort measure, people still think we should have a 2014 rate. Certainly, we could identify the reports based on the year the data are released, but some users will insist on being confused that the 2014 reports are about students who started at least six years prior, or three years for the two-year colleges. And in 2016 they would likely be confused again.
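
For what it’s worth, the arithmetic behind the confusion is trivial. A minimal sketch (my own illustration; the six- and three-year windows are the ones described above):

```python
def cohort_entry_year(report_year: int, two_year_college: bool = False) -> int:
    """Entry year of the cohort covered by a graduation-rate report,
    assuming a six-year window for four-year schools and a three-year
    window for two-year colleges, as described above."""
    return report_year - (3 if two_year_college else 6)

print(cohort_entry_year(2014))                         # 2008 cohort
print(cohort_entry_year(2014, two_year_college=True))  # 2011 cohort
```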

So we go for clarity and standards, even though they are not always instantly understood. Some things one just has to think about for a few moments. We also serve multiple constituencies with varying levels of knowledge of higher ed and much different needs.

At the heart of it, this idea of a Holy Grail of measurement is the thinking behind the ratings system. Somehow one rating, or even a handful of different ratings, about an institution will tell us all we need to know. Or at least, all we need to know about an aspect of the institution related to the undergraduate experience. Except the educational aspect, because that is not measured consistently and reported systematically to USED.

PIRS though is only the natural evolution of the 2008 Higher Education Opportunity Act (HEOA). The reporting and disclosure requirements that came out of the HEOA are huge. In some ways they have transformed institutional websites, in others they have demonstrated institutional ability to bury information. Of course, who can blame institutions much for the latter when probably very few students are interested in some of the requirements?

Which makes me wonder what the next version of the HEA will bring. If Chad Aldeman’s post is any indicator, we could see a major shift away from current requirements. More likely, in my estimation, we will see an attempt to require the publication of the perfect number* or a half-dozen perfect numbers and their changes over time.

In any event, whatever happens with the next version of the HEA, PIRS, or any other effort at the federal or state level, I don’t expect the search for the Grail of Measures to end anytime soon.

Faded jaded fallen cowboy star
Pawn shops itching for your old guitar
Where you’ve gone, it ain’t nobody knows
The sequins have fallen from your clothes

Once you heard the Opry crowd applaud
Now you’re hanging out at 4th and Broad
On the rain wet sidewalk, remembering the time
When coffee with a friend was still a dime

Chorus:
Everything’s been sold American
The early times are finished and the want ads are all read
Everyone’s been sold American
Been dreaming dreams in a rollaway bed

Writing down your memoirs on some window in the frost
Roulette eyes reflecting another morning lost
Hauled in by the metro for killing time and pain
With a singing brakeman screaming through your veins

You told me you were born so much higher than life
I saw the faded pictures of your children and your wife
Now they’re fumbling through your wallet & they’re trying to find your name
It’s almost like they raised the price of fame

Kinky Friedman – Sold American Lyrics

*The perfect number is 17.

More Ratings Nonsense

Yes, I am thinking too much about PIRS. It’s not really my fault; other people start it in other places.

Really though, the proposal is unnecessary. We already have an implicit ratings system that has been in place for years. It is quite simple:

A. Institution participates in Title IV.

B. Institution is on accreditation warning/probation/other status less than fully accredited, and thus at risk of losing Title IV participation.

C. Institution no longer eligible to participate in Title IV.

D. Institution has never participated in Title IV.

Clean. Simple. And already exists. Now all we need to do is tie a badge to it for institutions to use on their websites.
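
A minimal sketch of that implicit system in code – my own illustration, with the status fields as hypothetical inputs, not an actual USED feed:

```python
from enum import Enum

class TitleIVRating(Enum):
    A = "Participates in Title IV"
    B = "At risk of losing Title IV participation"
    C = "No longer eligible to participate in Title IV"
    D = "Has never participated in Title IV"

def rate(ever_participated: bool, currently_eligible: bool,
         fully_accredited: bool) -> TitleIVRating:
    """Classify an institution using only its Title IV history and
    accreditation status, per the four categories above."""
    if not ever_participated:
        return TitleIVRating.D
    if not currently_eligible:
        return TitleIVRating.C
    return TitleIVRating.A if fully_accredited else TitleIVRating.B
```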

The only thing missing, apart from marketing, is some kind of objective criteria to allow USED to sort institutions into the categories. Quite frankly, the department could have done that quietly, without all the fanfare. And angst.

If we really need a rating that is better than A above, then we can have an “Unconditional participant in Title IV” for those institutions that are in the first three years following reaffirmation of accreditation.

I’m still not clear why we need more than this from the feds. This essay about the upcoming changes to the Carnegie Classification System points to how something as innocuous as the original categories became a de facto ranking. I suspect anyone who has worked at an R2 can testify to the discussions about how to become an R1. Over time, the classifications have become more complex. I have little doubt the same would happen to PIRS, and in a dozen years or so we would wind up with something that is hideously complex.

By the way, read this. Apparently the whole ratings/rankings dichotomy is not universal.

PIRS and the Quest for the Holy Grail

The ratings (framework) are out (more promises actually)! I wrote my semi-formal response over at my work blog. In that post I reference Stephen Porter’s post on why a single institutional performance metric is exactly like the Holy Grail. I’m kind of stuck on this comparison, and not because it arose as a response to Bob Morse of US News & World Report. Really, I am just a big fan of the Arthurian legend.

If one accounts for the general imperfection of law-making, it is not too difficult to believe that Richmond, VA is the real Camelot:

A law was made a distant moon ago here:
July and August cannot be too hot.
And there’s a legal limit to the snow here
In Camelot.
The winter is forbidden till December
And exits March the second on the dot.
By order, summer lingers through September
In Camelot.
Camelot! Camelot!
I know it sounds a bit bizarre,
But in Camelot, Camelot
That’s how conditions are.
The rain may never fall till after sundown.
By eight, the morning fog must disappear.
In short, there’s simply not
A more congenial spot
For happily-ever-aftering than here
In Camelot.

Law-making is imperfect. Often what the General Assembly decrees is not quite what happens, so really, it is just not much of a stretch to imagine the Quest for the Holy Grail occurring in the green hills of Virginia. I’ve walked much of the Appalachian Trail in Virginia, and at night, in the mist, on the trail or on the Blue Ridge Parkway, I have had little difficulty hearing the distant hoofbeats of a quest.

This is perhaps all the more true as I consider the Commonwealth’s endeavors over the last decades in developing and packaging performance indicators. While I have played a role in those efforts the last 14 years, I have always tried for a package of measures, generally more rather than fewer. Institutions are simply too complex to be represented by a single aspect, let alone a single measure. In fact, discussions of such measures quickly become rather intense and political.

But now we have the framework for the Postsecondary Institution Ratings System, and the excitement was just like I suggested a couple weeks ago in tying PIRS to the arrival of the new phone books. We also have some new goal statements. For example, in a blog post, Jamienne Studley (of the It’s Just Like Rating a Blender comment) says:

The development of a college ratings system is an important part of the President’s plan to expand college opportunity by recognizing institutions that excel at enrolling students from all backgrounds; focus on maintaining affordability; and succeed at helping all students graduate with a degree or certificate of value. Our aim is to better understand the extent to which colleges and universities are meeting these goals. As part of this process, we hope to use federal administrative data to develop higher quality and nationally comparable measures of graduation rates and employment outcomes that improve on what is currently available.

So, we have the language of equity and affordability, combined with the new phrase of the realm this last year, “certificate of value,” to describe the new goals being assigned to institutions. Some may/will argue this point, but the reality is that not all institutions were founded to be affordable, let alone open to all, or even “helping” students graduate. Some institutions, particularly one small college in the PNW, have been (please note the use of the past tense) famously proud of their low graduation rates. Completion was seen as a mark of distinction among super-smart and well-qualified students. But, these are all worthy goals, and those footing the bill (or a large chunk of it through gifts and financing) get to make the rules. That is the Golden Rule: He who has the gold makes the rules. Also, sticking to our Arthurian theme, Might Makes Right.

“we hope to use federal administrative data to develop higher quality and nationally comparable measures of graduation rates and employment outcomes that improve on what is currently available.”

So, they are going to use the National Student Loan Data System (NSLDS) to create measures of graduation rates for Title IV students. This means they will build estimates that assume a student’s first appearance as a Title IV recipient marks his or her first enrollment in college. For many students this will work, but not all. To incorporate transfers into the mix will be a much greater challenge, particularly those from California community colleges, where so very few students use Title IV to attend. There will be some estimation possible using annual loan amounts, since maximum subsidized Stafford loans are different for third- and fourth-year students. However, the great many students transferring in fewer than two years from a community college will be damned hard to identify.
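
As a rough illustration of the inference involved – a minimal sketch, with the record layout, field names, and loan-limit threshold all assumed for the example, not the Department’s actual method:

```python
from dataclasses import dataclass

# Assumed threshold: a first observed subsidized amount at or above the
# upper-division annual limit hints at prior enrollment elsewhere.
UPPER_DIVISION_SUB_LIMIT = 5500

@dataclass
class AidRecord:
    student_id: str
    award_year: int
    subsidized_amount: float

def inferred_entry_year(records: list[AidRecord]) -> int:
    """Treat first appearance in the aid data as first enrollment --
    the weak assumption noted above; it fails for transfers who did not
    use Title IV at their first institution."""
    return min(r.award_year for r in records)

def likely_transfer(records: list[AidRecord]) -> bool:
    """Flag students whose first observed subsidized loan already hits
    the upper-division limit."""
    first = min(records, key=lambda r: r.award_year)
    return first.subsidized_amount >= UPPER_DIVISION_SUB_LIMIT

recs = [AidRecord("s1", 2009, 5500.0), AidRecord("s1", 2010, 5500.0)]
print(inferred_entry_year(recs), likely_transfer(recs))  # 2009 True
```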

Of course, all this can be fixed going forward by making changes to the NSLDS collection.

Using NSLDS data to match to Social Security earnings is already tested at the program level for Gainful Employment. It should not be a stretch to do that at the institution level. The interesting thing will be to see how these figures compare to what states like Virginia, Texas, and others are reporting using UI Wage data. And Payscale data. I don’t know about my colleagues in the other states, but I am ready to assist.

To do this well though, they are really going to have to do more than ask for comments. They need to bring people together. (About 2 minutes in on the next clip.)

I appreciate what the president is trying to do. I just don’t think ratings are the way to go for a government, save for this caveat: as long as the ratings are billed and described solely as Title IV Performance Ratings and not Institutional Ratings, then I am happy and fully supportive. I have said all along it is completely appropriate for the Department to evaluate institutions based on their performance under Title IV. Program evaluation is part and parcel of government programs. Or should be. Let’s just keep the focus where it belongs and not try to be all things to all people, especially when neither the data nor the legitimate bounds of authority warrant more than that.

In any event, the Student Right-to-Know Before You Go Act is a better solution to the goals Studley’s post articulates and the goals presented within the draft framework. Better data, better information, within an appropriate scope.

We can achieve a version of Camelot in the cult-world of higher ed data.

Just choose wisely.

Cults in Higher Ed

I was at a super-exclusive, informal meeting-type thing this week. I have to call it a meeting as there was no beer. There should have been beer.  At one point, I was explaining how cultish higher education is. Really.

And this phrase didn’t originate with me. More’s the pity.

Back in 2010, shortly after I returned to work following my adventure in neuroscience, there was a subcommittee meeting for Governor McDonnell’s higher education reform committee. As is often the case for these things (in Virginia, at least), it was standing room only for the audience. No matter how often we try to explain to the meeting host that there will be a crowd, there is huge interest in higher ed policy and we always need more seats for the audience than the normies think. As one legislative liaison pointed out, “It is a cult, it really is. We want to be here, even more than our institutions want us to be here. We need to be here.”

Part of the attraction is the desire to be involved and to avoid damage to one’s institution. It’s also fascinating. There is very little as intrinsically interesting and mind-consuming as higher ed policy. It’s powerful stuff, too often polluted with overly simple explanations or overly complex solutions. And the people are fun to watch.

The only thing that is clearly more interesting and drives even greater passion is higher ed data & data policy. If you don’t believe me, show up at an IPEDS Technical Review Panel (TRP) and just observe. The level of passionate discourse and argument over a minor change in definition can go on for hours. It is almost obscene. Hell, just read tweets from any of the IR people or the higher ed researchers, or follow #HiEdData. These are people deeply invested in what they do and what they want to know from data. And what they can know. And what they do know.

This is what Secretary Duncan and President Obama did not know, or failed to understand, when #PIRS was proposed.

There are hundreds, more like thousands, of people who are experts in IPEDS data. They know what can and can’t be done with IPEDS data. And what shouldn’t be done. In Ecclesiastes, the Preacher said, “There is nothing new under the sun.” That is how people felt about the prospect of using IPEDS data for a ratings system. What could be done that would be substantively different from what now exists? As big as it is, it is an exceedingly limited collection of data that was never intended for developing rankings or ratings.

Just to make this post kind of academic-like (undergraduate-style), let’s look at the definition of a cult:

cult
  1. a system of religious veneration and devotion directed toward a particular figure or object.
    “the cult of St. Olaf”
  2. a misplaced or excessive admiration for a particular person or thing.
    “a cult of personality surrounding the IPEDS directors”
    synonyms: obsession with, fixation on, mania for, passion for, idolization of, devotion to, worship of, veneration of

    “the cult of eternal youth in Hollywood”

The only thing really lacking is any type of charismatic leader(s). Or charisma, really. (Again, observe a TRP.) Kool-aid generally comes in the form of caffeinated beverages. Everything else is there.

Of course, there are more than just these two higher ed cults. We have the new cult of Big Data, and it seems full of evangelists promising the world and beyond. Of course, this cult transcends higher education.

While I like the idea of Big Data, I like the idea of Big Information/Bigger Wisdom even more. That’s the cult I am waiting for.

I hope they have cookies. Without almonds.

IPEDS is not GRS

Say it with me, “IPEDS is not GRS. GRS is a part, a small part, of IPEDS.”

Matt Reed (@DeanDad) set me off a bit this morning with his Confessions piece over at InsideHigherEd, and I kind of piled on with my long-time colleague Vic Borden: IPEDS and GRS are not simply one and the same with a focus (or fetish, if you prefer) on first-time, full-time undergraduates. It really ticks me off when I read something like this, since it takes my mind off the very good points he was trying to make. I could have written thousands of words of comments about how what we are doing in Virginia is so different, and so much better.

Every time someone says they wouldn’t be counted in IPEDS because they transferred, or took eight years (like yours truly), I cringe. It is just not true. It is false. It is wrong.

Yes, that person would not be in the GRS metric. However, she certainly would show up in the Completions survey if she finished a degree or certificate, whether it took one year or 20. Likewise, she would show up in the fall enrollment survey anytime she was enrolled in a fall term.

As important as a graduation rate is, there is not much more important than the degree conferrals, the completions, themselves. That is something folks should keep in mind.

Now I could brag about some of the things we are doing at research.schev.edu, but instead I will simply highlight this tweet from the ACSFA PIRS Hearing:

I think Matt has the right ideas, and I would support them in a Technical Review Panel, although I would probably offer supportive amendments. The problem is getting to a TRP. The type of collection required to support these measures, if not Student Unit Record (or IPEDS-UR, although being a Thomas Covenant fan, I want to lean towards ur-IPEDS), would be so burdensome, the collection would never happen without Congressional action. And that’s the rub. USED only controls the details. Congress makes the ultimate determination and that is where AACC and ACCT (and probably a bunch of groups representing four-year colleges) need to get involved.

The easiest thing at this point is to pile on to support the Student Right-to-Know Before You Go Act.

Just quit whining

Yesterday, at the Summer Hearing of the Advisory Committee on Student Financial Assistance about PIRS (the proposed rating system), there was plenty of talk about some kind of adjustment for inputs, or weighting based on the types of students enrolled. As I heard things, there are four positions on this topic as they relate to PIRS for use as a consumer information product and as an accountability tool.

1) Everything must be input-adjusted for fairness (both for consumer information and accountability).

2) Input-adjustments are only appropriate for accountability.

3) Consumers need to see non-adjusted numbers, particularly graduation rates, to know their likelihood of finishing.

4) Institutions that serve predominantly low-income, under-prepared students (or a disproportionate share of such – I guess they all feel they are entitled to a righteous share of smart, rich students) are doomed to fail with a significant number of these students.

The fourth point just makes my teeth ache. Part of me wants to scream out in public, “If you don’t feel you can be successful with these students, quit taking their money and giving them false hope. Get out of this business.” I know that is somewhat unfair. Also, I believe that a certain amount of failure should be allowed and expected, especially in the name of providing opportunity. Further, each student does have to do the work and make an effort – but I believe that most want to do so. To publicly state that at some point your institution just won’t be able to do any better (especially if that is short of 100%) just strikes me as conceding the battle before fully engaging.

There is so much ongoing effort and research focused on improving student outcomes, it is hard for me not to believe that someday every student who wants to succeed will be able to do so.

As you might surmise, I disagree with point one. I can live with the concept of input-adjustment for accountability, especially given differences in public support and student/family wealth. But providing students input-adjusted scores that attempt to level the comparisons between VSU and UVa doesn’t make sense to me. They are radically different institutions with different mixes of students, faculty, and programs. And costs.

I’m also not a big fan of comparisons in general. They are overly simple for big decisions and so easily misleading. At SCHEV, our institution profiles are designed to avoid the comparison trap, and ignore the concepts of input-adjustment. We do provide the graduation rate data (a variety of measures on the “Grad Rates” tab) on a scale anchored with the sector’s lowest and highest values in the state.

[Figure: pirs6 – SCHEV institution profile, graduation rates shown on the sector-anchored scale]
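
The anchoring itself is just min-max scaling; a minimal sketch of the idea (my illustration, not SCHEV’s production code):

```python
def anchored_position(rate: float, sector_min: float, sector_max: float) -> float:
    """Place an institution's rate on a 0-1 scale anchored at the
    sector's lowest and highest values in the state."""
    return (rate - sector_min) / (sector_max - sector_min)

print(anchored_position(0.55, 0.28, 0.93))  # ~0.42 of the way up the sector range
```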

Likewise, when we released the mid-career wage reports this week, we created these only at the state level. While there might have been more interest in comparing institutions, we think policy discussions deserve something more.

However, the US News & World Report Best Colleges rankings get 2500 (or more!) page views for every page view these reports get*. The PayScale Mid-Career Rankings have also gotten far more coverage. I think this is a pretty strong values statement from the higher ed community: despite what the faculty/faculty-researchers say and teach, the great bulk of the community wants rankings and comparisons.

*What, you think I don’t know that non-highered people look at the rankings? Of course they do. But given the number of colleges and universities ranked, the number of administrators at each, and the number of journalists writing stories about rankings, it doesn’t take long to get to a half-million page views in a day.

So, quit whining about input-adjustments and focus on becoming exceptional in teaching and graduating students. Quit whining about government ratings if you are going to keep feeding the economic engine that saved US News & World Report.

We are going to fail with some students. We don’t have to fail with most, which some institutions manage to do.

If I were the type to rate colleges…

The question has arisen a few times now. “Tod, if you were at USED and absolutely had to build a college ratings system with current data, what would you do?”

It’s a tough question, since I don’t believe the existing IPEDS data are up to the task. So, I would attempt to use the National Student Loan Data System and develop a result set about Title IV recipients. But that may not be possible just yet.

First, I would start with three categories, as I have written before. These would be Title IV Eligible, Title IV Eligible – Conditional, and Title IV Ineligible. The federal government’s implicit authority to rate colleges based on Title IV participation and success I think is a given. Rating colleges outside of Title IV performance I think is not. However, given the lack of data specific to Title IV participants, some standard IPEDS measures will have to be used.

It seems to me that what is really at stake is Title IV eligibility. So let’s start by establishing minimum standards for participating in Title IV, and assume that institutions have five years to meet these standards initially. It is neither fair nor appropriate to establish standards and apply them immediately. (I sketch the resulting three-tier test in code after the lists below.)

Title IV Eligibility Minimum Standards – Four-year Institutions

  • First-year Retention Rate Greater than or Equal to 60%
  • Six-year Graduation Rate Greater than or Equal to 30% (All Students)
  • Six-year Graduation Rate Greater than or Equal to 35% (Students with Pell grants at Entry)
  • Six-year Graduation Rate Greater than or Equal to 35% (Students with Stafford Loans (Subsidized and Unsubsidized) at Entry) (Requires use of NSLDS or adding the required institutional disclosures to IPEDS)
  • Six-year Graduation Rate Greater than or Equal to 40% (Students with PLUS Loans at Entry) (Requires use of NSLDS)
  • Cohort Default Rate Less than 10%
  • 80% of Graduates with Federal Loans in active repayment (including income-based options) or in-school deferment.
  • And as data become available through NSLDS, minimum 60% graduation rates for Title IV students in graduate and professional programs.

However, since the administration has made it clear that part of the desire is to increase the enrollment of Pell students at high-performing institutions, I might add a fourth category of Title IV Eligible – Unconditional for institutions that meet or exceed all the standards above and enroll a number of undergraduates receiving Pell grants at entry equaling or exceeding 25% of the traditional on-campus population. (In other words, lesser respected branch campuses or distance students that rarely, if ever, step on campus would not count.)

Title IV Eligibility Minimum Standards for Conditional Status  – Four-year Institutions

  • First-year Retention Rate Greater than or Equal to 50%
  • Six-year Graduation Rate Greater than or Equal to 20% (All Students)
  • Six-year Graduation Rate Greater than or Equal to 20% (Students with Pell grants at Entry)
  • Six-year Graduation Rate Greater than or Equal to 20% (Students with Stafford Loans (Subsidized and Unsubsidized) at Entry) (Requires use of NSLDS or adding the required institutional disclosures to IPEDS)
  • Six-year Graduation Rate Greater than or Equal to 20% (Students with PLUS Loans at Entry) (Requires use of NSLDS)
  • Cohort Default Rate Less than 15%
  • 60% of Graduates with Federal Loans in active repayment (including income-based options) or in-school deferment.
  • And as data become available through NSLDS, minimum 60% graduation rates for Title IV students in graduate and professional programs.
  • Requires a ten-year improvement plan. If the institution is unable to move into full-eligibility status after that point, it loses 50% of available Title IV funds.

Title IV Ineligible – Four-year Institutions

  • Failure to meet any one of these standards constitutes an ineligible institution. Institutions failing no more than two standards would be able to appeal and be placed on a five-year remediation plan – after posting a bond equal to the Title IV funding at risk of loss for the number of students it would take to move into a passing score.
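
As promised above, a minimal sketch of the three-tier test, covering the numeric core of the lists (it omits the graduate-program standard and the improvement-plan machinery). The metric names are mine, and the inputs are assumed to be precomputed from IPEDS/NSLDS:

```python
FULL = {"retention": 0.60, "grad_all": 0.30, "grad_pell": 0.35,
        "grad_stafford": 0.35, "grad_plus": 0.40, "repayment": 0.80,
        "max_default": 0.10}
CONDITIONAL = {"retention": 0.50, "grad_all": 0.20, "grad_pell": 0.20,
               "grad_stafford": 0.20, "grad_plus": 0.20, "repayment": 0.60,
               "max_default": 0.15}

def meets(metrics: dict[str, float], standards: dict[str, float]) -> bool:
    """True if every floor is met and the default rate is under the cap."""
    floors_ok = all(metrics[k] >= v for k, v in standards.items()
                    if k != "max_default")
    return floors_ok and metrics["default_rate"] < standards["max_default"]

def tier(metrics: dict[str, float]) -> str:
    if meets(metrics, FULL):
        return "Title IV Eligible"
    if meets(metrics, CONDITIONAL):
        return "Title IV Eligible - Conditional"
    return "Title IV Ineligible"

inst = {"retention": 0.55, "grad_all": 0.25, "grad_pell": 0.22,
        "grad_stafford": 0.21, "grad_plus": 0.24, "repayment": 0.65,
        "default_rate": 0.12}
print(tier(inst))  # Title IV Eligible - Conditional
```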

So, these standards are incredibly arbitrary. They would negatively affect a significant number of institutions. Further, with the addition of an “Unconditional” rating, some of the highest performing institutions in the nation (and Virginia) would not be in that highest status.

In one way though, they are not arbitrary. I have dealt with enough policymakers over the years to know how they react to graduation rates below 30% (shucks, many get outraged at rates below 50%). These are rates I feel that *I* could generally defend for about a decade. After that, I foresee necessary increases.

I have not suggested standards for community colleges. I’ll save that for another time, but they would be differently constructed.

Oh, I would also use the rating report to link each college to any state profile data and require that each college link to both federal and state reporting websites specific to that college.

Defining and Disclosing Financial Health

George Cornelius blogged this morning over at Finding My College about the desirability and need for private colleges to disclose their financial health.

No matter how you look at it, we, the U.S. taxpayers, pay dearly to support our higher ed system. Yet, when it comes to the so-called private institutions (the quasi-public colleges), there isn’t much shared with us or with prospective students about the spending and financial health (or lack thereof) of the institutions.

While I can quibble with the notion of quasi-public, it is a familiar argument. Bob Morse at US News & World Report tried making a similar argument years ago that private institutions should be subject to FOIA based on the large amounts of public money they receive. However, the money technically goes to the students, though it does seem to be a bit of a shell game. The money (student financial aid) can only be used at a qualifying provider and that provider makes the determination of eligibility and award. And controls disbursement.

The U.S. Department of Education, and any state that subsidizes quasi-public colleges, should compel the recipients of this largess to disclose conspicuously on their websites their financial statements for the past five years as well as data and information about student learning and outcomes. In other words, prospective students and their families should be given information with which to distinguish the performing institutions from the underperforming ones, and the ones with a future from the ones that are likely to find themselves in the junkyard of failed institutions before the current restructuring of higher ed has run its course.

The Department, and Congress, already require oodles of disclosures. The conspicuousness of these disclosures tends to leave much to be desired. This is also true for their usability. However, when I read George’s post this morning I was intrigued by the thought of what this might look like. In Virginia, when it comes to student-oriented data, the public and nonprofit institutions have no place to hide at the undergraduate level. We publish an awful lot of data, very detailed, with student outcomes out to 10 years. However, it does not touch the student learning issue, nor the financial stability issue.

I’ve mentioned before that there are two criteria that put a private institution on my at-risk list: a first-to-second-year retention rate of less than 60%, and fewer than 1,500 students at an undergraduate-only institution (or 2,000 at a predominantly undergraduate one). A significant endowment can compensate for these risks, but most institutions with these issues have little to no endowment.
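
A minimal sketch of that screen – field names are mine, and I treat the two criteria as jointly required, since they are listed together above:

```python
def at_risk(retention_rate: float, undergrad_enrollment: int,
            undergrad_only: bool) -> bool:
    """Flag a private institution as at-risk: first-to-second-year
    retention below 60% and enrollment under the size floor
    (1,500 undergrad-only; 2,000 predominantly undergrad)."""
    size_floor = 1500 if undergrad_only else 2000
    return retention_rate < 0.60 and undergrad_enrollment < size_floor

print(at_risk(0.52, 1200, undergrad_only=True))  # True
```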

What can’t compensate is an inability to pay bills if the big checks are late. Such as those from the federal government. This is what happened to Virginia Intermont, despite the president personally loaning nearly a half-million dollars to the college. We could require some kind of disclosure as to the percentage of an institution’s total revenues represented by state and federal sources, including student financial aid. This is similar in nature to the 90/10 rule the Department has established for the for-profit institutions.

Even with five years of such numbers, that is only a minimal warning. It seems to me that a “cash-on-hand” warning trend added to this might be appropriate. Every 90 days the institution reports on its website how many days it can operate with current expenditure commitments and cash available. While this may not be directly meaningful to most families and students, it would certainly tell agencies and accreditors something important about the viability and sustainability of an institution.
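
One simple formulation of that metric – my sketch, not a prescribed federal measure – divides available cash by a daily burn rate:

```python
def days_cash_on_hand(cash_available: float, annual_operating_expense: float) -> float:
    """Days the institution could operate on cash alone,
    using a simple daily burn rate (annual expense / 365)."""
    return cash_available / (annual_operating_expense / 365)

# e.g., $4.5M in cash against a $30M annual budget:
print(round(days_cash_on_hand(4_500_000, 30_000_000)))  # ~55 days
```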

I would also add a measure that explains how much of tuition revenue is used to fund institutional aid, and, on average, how much students who have to borrow to pay for their attendance and do not receive gift aid contribute in debt towards gift aid for other students.
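
A back-of-the-envelope version of that measure – my construction, with made-up numbers; the inputs would come from institutional financial statements:

```python
def tuition_funded_aid_share(tuition_revenue: float, tuition_funded_aid: float) -> float:
    """Share of gross tuition revenue recycled into institutional aid."""
    return tuition_funded_aid / tuition_revenue

def debt_toward_others_aid(avg_borrowed: float, share: float) -> float:
    """Rough average debt a full-pay borrower contributes toward
    gift aid for other students, given the share above."""
    return avg_borrowed * share

share = tuition_funded_aid_share(40_000_000, 12_000_000)  # 30% of tuition funds aid
print(debt_toward_others_aid(20_000, share))              # $6,000 of a $20,000 loan
```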

Why?

This excellent article from Forbes describes many of the associated problems.

If the whole idea of jacking up a price and then selectively discounting seems a bit nefarious, Crockett takes issue: “Students on campuses pay all kinds of different price points, just like people sleeping in a hotel or flying on airplanes pay all kinds of different prices.”

Does that make it right? Or ethical for a nonprofit?

Given the distorted model in place, perhaps this kind of distorted solution has merit. So long as obtaining student loans is easy and universities continue to chase rankings by leveraging aid and beefing up campus amenities, published prices will continue to rise along with tuition discounts. Thousands of schools will continue to struggle, and enrollment consultants like Noel-Levitz will be more than happy to lend a helping hand.

Yep. So perhaps the president’s proposed rating system (#PIRS) should focus only on financial stability.

By the way, speaking of #PIRS, since no one else has picked up on this, Valerie Strauss published this tidbit on her blog entry regarding 50 Virginia presidents signing on to a letter opposing PIRS:

Education Department spokeswoman Dorie Nolt issued this comment about the letter:

I noticed you wrote about the letter from the Virginia college presidents. Here is a statement from me (Dorie Nolt, no Turner necessary) on it:

“We have received the letter and look forward to responding. As a nation, we have to make college more accessible and affordable and assure that students graduate with an education of real value, which is the goal of the College Rating System. In an effort to build this system thoughtfully and wisely, we are listening actively to recommendations and concerns, which includes national listening tour of 80-plus meetings with 4,000 participants. We hear over and over — from students and families, college presidents and high school counselors, low-income students, business people and researchers – that, done right, a ratings system will push innovations and systems changes that will benefit students and we look forward to delivering a proposal that will help more Americans attain a college education.”

She also said there was more information about the development of the rating system on the Education Department website here.

“College Rating System” as opposed to “Postsecondary Institution Ratings System” – this seems like two changes to me: one rating system and only colleges and universities – not the thousands of other postsecondary institutions.

What I said to the GAO

Should the Federal Government Rate Colleges (and Universities)?

President Obama, Secretary Duncan, Deputy Under-Secretary Studley, and others, have called “foul” on those of us opposing the proposed ratings system since it does not exist. Their position is that our response should be constructive and supportive until there is something to criticize.

They don’t understand why many of us, the data experts, are opposed.

It’s not that a ratings system can’t be developed. It can. At issue is the quality and appropriateness of the available data. The existing data are simply inadequate for the proposed task. IPEDS was not designed for this and organizations like US News & World Report have taken IPEDS as far as it can go and added additional data for their rankings. Also at issue is the appropriateness of the federal government providing an overall rating for an institution over aspects for which it has no authority.

I think it is fully a right course of action for the Department and Administration to rate colleges as to their performance with Title IV financial aid funding and required reporting. Ratings based on graduation rates, net price, and student outcomes of students with Title IV aid would be very useful and appropriate. Placing additional value factors in ratings relevant to compliance and accuracy of required reporting under Title IV would add meaning to a system designed to determine continued eligibility for Title IV programs.

After all, if continued eligibility and amount of available aid under Title IV are the ultimate goals, aren’t these the things to measure?

This is decidedly less politically exciting than saying Institution A has a higher overall rating than Institution B, but it establishes a clear relationship between what is being measured and what matters. It also has the advantage of being able to use existing student-level data on Title IV recipients in the same manner as is being done for Gainful Employment. From where I sit, PIRS is simply Gainful Employment at the institution level as opposed to the program level.

And that is appropriate.

Using ratings to develop performance expectations to participate in the federal largesse that is Title IV would be a good thing. Regional accreditation and state approval to operate is clearly no longer adequate for gate-keeping, if, indeed, it ever was.

The difficulty is determining what those expectations should be. It is quite reasonable to subdivide institutional sectors in some manner, calculate graduation rate quartiles or quintiles for each group of students in Title IV programs, and require institutions in the bottom tier to submit a five-year improvement plan with annual benchmarks. Any institution failing to meet annual benchmarks two years running could then be eliminated from Title IV. Using multiple measures of success, including wage outcomes from records matched to IRS or SSA, we can reduce any tendency towards lowering standards to survive.
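
A minimal sketch of that bottom-quartile screen – sector grouping, names, and numbers are mine, with the rates assumed to be computed for Title IV students:

```python
import statistics

def improvement_plan_flags(rates: dict[str, float]) -> set[str]:
    """Within one sector, flag institutions whose Title IV graduation
    rate falls at or below the first-quartile boundary; these would owe
    a five-year improvement plan with annual benchmarks."""
    cutoff = statistics.quantiles(rates.values(), n=4)[0]  # 25th percentile
    return {inst for inst, rate in rates.items() if rate <= cutoff}

sector = {"A": 0.62, "B": 0.41, "C": 0.55, "D": 0.28, "E": 0.71, "F": 0.33}
print(improvement_plan_flags(sector))  # {'D'}
```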

In an ideal world, with “complete” data, a ratings system would be focused on intra-institutional improvement. In fact, this is the language that Secretary Duncan is beginning to use, as he did in a recent interview:

“Are you increasing your six-year graduation rate, or are you not?” he said. “Are you taking more Pell Grant recipients [than you used to] or are you not?” Both of those metrics, if they were to end up as part of the rating system, would hold institutions responsible for improving their performance, not for meeting some minimum standard that would require the government to compare institutions that admit very different types of students.

The problem is that simple year-to-year improvement measures tend not to be very simple to implement. We have substantial experience with this in Virginia, especially this week, as we work through institutional review of performance measures in preparation for next week’s meeting of the State Council. On any measure, annual variance should be expected. This is especially true for measures that have multiple-year horizons for evaluation. It is even truer when institutions are taking action to improve performance, as sometimes such actions fail.

A better approach is to focus on student sub-groups within an institution. For example, is there any reason to accept that Pell-eligible students should have a lower graduation rate than students from families with incomes greater than $150,000? We generally understand why that is currently the case, but there is no reason to accept that it must be so. I would argue, vociferously, that if the Department’s goal is to improve access and success, this is where the focus belongs. Rate colleges on their efforts and success at ensuring all students admitted to the same institution have the same, or very similar, opportunity for success. Provide additional levers to increase the access of certain student groups to college. To do this would require IPEDS Unit Record – a national student record system – perhaps as envisioned in the Wyden-Rubio bill, The Student Right-to-Know Before You Go Act.
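
One possible operationalization – mine, not the Department’s – is a within-institution equity-gap measure comparing subcohort graduation rates; the numbers below are hypothetical:

```python
def equity_gaps(rates: dict[str, float], reference: str) -> dict[str, float]:
    """Percentage-point gap between each subcohort's graduation rate and
    the reference group's, within one institution. Smaller is better."""
    ref = rates[reference]
    return {group: round((ref - rate) * 100, 1)
            for group, rate in rates.items() if group != reference}

# A 10-point Pell gap, like the one in the UVa example further down.
print(equity_gaps({"pell": 0.83, "non_pell": 0.93}, reference="non_pell"))
# {'pell': 10.0}
```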

This means overturning the current ban on a student record system. It also means taking a step that brings USED into a place where most of the states are. From my perspective, it is hard to accept an overall rating system of my colleges from the federal government when I have far, far more data about those colleges and choose not to rate them. Instead, we focus on improvement through transparency and goal attainment.

I think few reasonable people will disagree with the idea of rating institutions on performance within the goals and participation agreement of Title IV. It is when the federal government chooses winners and losers beyond Title IV that disagreement settles in.

We will face disagreement over what standards to put in place, if we go down this path. That is part of the rough and tumble of policy, politics, and negotiated rulemaking. You know – the fun stuff.

Let’s take a quick look at four very different institutions. These images come from our publicly available institution profiles at http://research.schev.edu/iprofile.asp

Germanna Community College does not have high graduation rates (note these are not IPEDS GRS rates, as they include students starting in the spring as well as part-time students). All of these are toward the lower end of the range of Virginia public two-year colleges. There is a range of differences among the subcohorts, particularly between students from the poorest and the wealthiest families.

[Figure: gao1 – Germanna Community College graduation-rate profile]

Even at the highest performing institution on graduation rates, one of the highest in the nation, there is still a range of difference. A full 10 percentage point difference between the poorest and wealthiest students.

[Figure: gao2-uva – UVA graduation-rate profile]

In the last two decades, CNU has more than doubled its graduation rates by transforming the institution and its admissions plans. The differences between subcohorts are much smaller, but this has come at the price of denying access to students who sought an open-enrollment institution.

[Figure: gao3-cnu – CNU graduation-rate profile]

Ferrum College has relatively low graduation rates and high cohort default rates. Using federal data, it does not look to be an effective institution. However, I will point out that it has the highest success rate with students requiring developmental coursework in the first two years. It apparently can serve some students well, and serve others better than other institutions do.

[Figure: gao4-fc – Ferrum College graduation-rate profile]

My point with these four examples is this. We need to drive improvements in student outcomes by focusing on differences within institutions, specifically subcohorts of students that are recipients of Title IV aid.