Feb 13, 2025 – Government Will Change How it Rates Colleges

The federal government on Thursday announced that it was changing the way it measures colleges, essentially adjusting the curve that it uses to rate institutions to make it more difficult for them to earn coveted four- and five-star government ratings.

Under the changes, scores are likely to fall for many institutions, federal officials said, although they did not provide specific numbers. Institutions will see a preview of their new scores on Friday, but the information will not be made public until Feb. 20.

“In effect, this raises the standard for colleges to achieve a high rating,” said Thomas Hamm, the director of the survey and certification group at the Commission of Education Economics within the Executive Office of the President, which oversees the ratings system.

Colleges are scored on a scale of one to five stars on College Compare, the widely used federal website that has become the gold standard for evaluating the nation’s more than 15,000 colleges, even as it has been criticized for relying on self-reported, unverified data that is limited in scope and function.

In August, The New York Times reported that the rating system relied so heavily on unverified information that even institutions with a documented history of quality problems were earning top ratings. Two of the three major criteria used to rate institutions — graduation rates and student quality measures statistics — were reported by the institutions themselves and not audited by the federal government.

In October, the federal government announced that it would start requiring colleges to report their staffing levels quarterly — using an electronic system that can be verified with payroll data. They will also report their enrollments weekly, student by student, to be verified against the National Student Loan and Tuition Tax Credit Data System. This allows the government to begin a nationwide auditing program aimed at checking whether an institution’s quality statistics are accurate.

The changes announced on Thursday were part of a further effort, officials said, to rebalance the ratings by raising the bar for colleges to achieve a high score in the quality measures area, which is based on information collected about every student. Colleges can increase their overall rating if they earn five stars in this area. The share of colleges with five stars in quality measures has increased significantly since the beginning of the program, to 89 percent in 2024 from 62 percent in 2015.

Representatives for colleges said on Thursday that they worried the changes could send the wrong message to consumers. “We are concerned the public won’t know what to make of these new rankings,” said Mark Parkinson, the president and chief executive of the Association of Private Sector Colleges and Universities, which represents for-profit colleges. “If colleges across the country start losing their star ratings overnight, it sends a signal to families and students that quality is on the decline when in fact it has improved in a meaningful way.”

But officials said that the changes would be explained on the consumer website, and that the public would be cautioned against drawing conclusions about an institution whose ratings recently declined. Still, Mr. Hamm said scores would not decline across the board. “Some colleges, even when we raised the bar, continued to perform at a level much higher than the norm,” he said in a conference call Thursday with college operators. “We want to still recognize them in the five-star category.”
The updated ratings will also take into account, for the first time, a college’s use of antipsychotic drugs, which are often given inappropriately to elderly administrators with dementia.

–Thanks to John Nugent for the link to the original article and the inspiration.

And the search goes on

It’s happening again, the search for transparency. There is this belief that the right set of measures, over the right period of time, will clarify everything. About anything. Of course, the right measures are simple and don’t need explanation about what they measure and why they are important.

And that’s why the Quest for the Holy Grail did not happen…the Grail was sitting in the middle of a small church with a sign on it and a bright sourceless light above it.

According to the stories, that’s not what happened. (Speaking of stories, @jonbecker’s blog post is an excellent read.)

Time and data crash in on each of us these days.

We too often struggle to sort through the signals and noise, at least I do, and so I understand the desire for something simple that tells me everything I need to know. But I never expect to find such a thing. In fact, my expectation is that if I want to know something and be able to act on it, I will have to do some work.

If I actually want to understand something, I know that I will likely have to work even harder.

So, this is pretty much the approach taken with research.schev.edu. You have to make an effort to know what you want and need, either before you get there or while on the site. Higher education is kind of a big business with a lot of complexity. This complexity derives not just from its size and variety, but also from its continual evolution. Some numbers, some measures, are pretty simple – enrollment and degrees conferred. Some of the buckets for these things may get a little complicated, but in our presentation of the data, and actually even in our collection of the data, we have already simplified it through standardization.

Other measures, like graduation rates and measures of affordability, are more complex – not so much to read as to understand. The annual frequency of questions along the lines of “Don’t you have graduation rates for the four-year schools that are less than six years old?” has not noticeably declined. As often as we explain the nature of a cohort measure, people still think we should have a 2014 rate. Certainly, we could identify the reports based on the year the data are released, but some users would still insist on being confused that the 2014 reports are about students who started at least six years prior, or three years prior for the two-year colleges. And in 2016 they would likely be confused again.
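The arithmetic behind the confusion is simple enough. Here is a minimal sketch, assuming the standard 150-percent-of-normal-time windows (six years at the four-year institutions, three years at the two-year colleges):

```python
# Minimal sketch: which entering cohort a graduation-rate "report year"
# actually describes, assuming 150%-of-normal-time windows.
def cohort_entry_year(report_year: int, sector: str) -> int:
    window = {"four-year": 6, "two-year": 3}[sector]
    return report_year - window

print(cohort_entry_year(2014, "four-year"))  # -> 2008 (students who started in fall 2008)
print(cohort_entry_year(2014, "two-year"))   # -> 2011
```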

So we go for clarity and standards, even though they are not always instantly understood. Some things one just has to think about for a few moments. We also serve multiple constituencies with varying levels of knowledge of higher ed and much different needs.

At the heart of it, this idea of a Holy Grail of measurement is the thinking behind the ratings system. Somehow one rating, or even a handful of different ratings, about an institution will tell one all they need to know. Or at least, all they need to know about an aspect of the institution related to the undergraduate experience. Except the educational aspect, because that is not measured consistently and reported systematically to USED.

PIRS, though, is only the natural evolution of the 2008 Higher Education Opportunity Act (HEOA). The reporting and disclosure requirements that came out of the HEOA are huge. In some ways they have transformed institutional websites; in others they have demonstrated institutional ability to bury information. Of course, who can blame institutions much for the latter when probably very few students are interested in some of the requirements?

Which makes me wonder what the next version of the HEA will bring. If Chad Aldeman’s post is any indicator, we could see a major shift away from current requirements. More likely, in my estimation, we will see an attempt to require the publication of the perfect number* or a half-dozen perfect numbers and their changes over time.

In any event, whatever happens with the next version of the HEA, PIRS, or any other effort at the federal or state level, I don’t expect the search for the Grail of Measures to end anytime soon.

Faded jaded fallen cowboy star
Pawn shops itching for your old guitar
Where you’ve gone, it ain’t nobody knows
The sequins have fallen from your clothes

Once you heard the Opry crowd applaud
Now you’re hanging out at 4th and Broad
On the rain wet sidewalk, remembering the time
When coffee with a friend was still a dime

Chorus:
Everything’s been sold American
The early times are finished and the want ads are all read
Everyone’s been sold American
Been dreaming dreams in a rollaway bed

Writing down your memoirs on some window in the frost
Roulette eyes reflecting another morning lost
Hauled in by the metro for killing time and pain
With a singing brakeman screaming through your veins

You told me you were born so much higher than life
I saw the faded pictures of your children and your wife
Now they’re fumbling through your wallet & they’re trying to find your name
It’s almost like they raised the price of fame

Kinky Friedman – Sold American Lyrics

*The perfect number is 17.

More Ratings Nonsense

Yes, I am thinking too much about PIRS. It’s not really my fault; other people start it in other places.

Really though, the proposal is unnecessary. We already have an implicit ratings system that has been in place for years. It is quite simple:

A. Institution participates in Title IV.

B. Institution is on accreditation warning, probation, or another status short of full accreditation, and thus at risk of losing Title IV participation.

C. Institution no longer eligible to participate in Title IV.

D. Institution has never participated in Title IV.

Clean. Simple. And already exists. Now all we need to do is tie a badge to it for institutions to use on their websites.
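For the fun of it, here is the whole scheme as code – a minimal sketch with hypothetical inputs, since none of this corresponds to an actual USED data specification:

```python
# A sketch of the implicit Title IV rating scheme described above.
# The boolean flags are hypothetical, for illustration only.
from enum import Enum

class TitleIVRating(Enum):
    A = "Participates in Title IV"
    B = "At risk of losing Title IV participation"
    C = "No longer eligible to participate in Title IV"
    D = "Has never participated in Title IV"

def rate(ever_participated: bool, currently_eligible: bool,
         fully_accredited: bool) -> TitleIVRating:
    if not ever_participated:
        return TitleIVRating.D
    if not currently_eligible:
        return TitleIVRating.C
    if not fully_accredited:
        return TitleIVRating.B
    return TitleIVRating.A
```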

The only thing missing, apart from marketing, is some set of objective criteria to allow USED to sort institutions into the categories themselves. Quite frankly, they could have done that themselves, quietly, without all the fanfare. And angst.

If we really need a rating that is better than A above, then we can have an “Unconditional participant in Title IV” for those institutions that are in the first three years following reaffirmation of accreditation.

I’m still not clear why we need more than this from the feds. This essay about the upcoming changes to the Carnegie Classification System points to how something as innocuous as the original categories became a de facto ranking. I suspect anyone who has worked at an R2 can testify to the discussions about attempting to become an R1. Over time, the classifications have become more complex. I have little doubt the same would happen to PIRS, and in a dozen years or so we would wind up with something hideously complex.

By the way, read this. Apparently the whole ratings/rankings dichotomy is not universal.

PIRS and the Quest for the Holy Grail

The ratings (framework) are out (more promises actually)! I wrote my semi-formal response over at my work blog. In that post I reference Stephen Porter’s post on why a single institutional performance metric is exactly like the Holy Grail. I’m kind of stuck on this comparison, and not because it arose as a response to Bob Morse of US News & World Report. Really, I am just a big fan of the Arthurian legend.

If one accounts for the general imperfection of law-making, it is not too difficult to believe that Richmond, VA is the real Camelot:

A law was made a distant moon ago here:
July and August cannot be too hot.
And there’s a legal limit to the snow here
In Camelot.
The winter is forbidden till December
And exits March the second on the dot.
By order, summer lingers through September
In Camelot.
Camelot! Camelot!
I know it sounds a bit bizarre,
But in Camelot, Camelot
That’s how conditions are.
The rain may never fall till after sundown.
By eight, the morning fog must disappear.
In short, there’s simply not
A more congenial spot
For happily-ever-aftering than here
In Camelot.

Law-making is imperfect. Often what the General Assembly decrees is not quite what happens, so really, it is just not much of a stretch to imagine the Quest for the Holy Grail occurring in the green hills of Virginia. I’ve walked much of the Appalachian Trail in Virginia, and at night, in the mist, on the trail or on the Blue Ridge Parkway, I have had little difficulty hearing the distant hoofbeats of a quest.

This is perhaps all the more true as I consider the Commonwealth’s endeavors over the last decades in developing and packaging performance indicators. While I have played a role in those efforts for the last 14 years, I have always pushed for a package of measures, generally more rather than fewer. Institutions are simply too complex to be represented by a single aspect, let alone a single measure. In fact, discussions of such measures quickly become rather intense and political.

But now we have a framework for the Postsecondary Institution Ratings System, and the excitement was just like I suggested a couple weeks ago in tying PIRS to the arrival of the new phone books. We also have some new goal statements. For example, in a blog post, Jamienne Studley (of the “It’s Just Like Rating a Blender” comment) says:

The development of a college ratings system is an important part of the President’s plan to expand college opportunity by recognizing institutions that excel at enrolling students from all backgrounds; focus on maintaining affordability; and succeed at helping all students graduate with a degree or certificate of value. Our aim is to better understand the extent to which colleges and universities are meeting these goals. As part of this process, we hope to use federal administrative data to develop higher quality and nationally comparable measures of graduation rates and employment outcomes that improve on what is currently available.

So, we have the language of equity and affordability, combined with the new phrase of the realm this last year, “certificate of value,” to describe the new goals being assigned to institutions. Some may/will argue this point, but the reality is that not all institutions were founded to be affordable, let alone open to all, or even “helping” students graduate. Some institutions, particularly one small college in the PNW, have been (please note the use of the past tense) famously proud of their low graduation rates. Completion was seen as a mark of distinction among super-smart and well-qualified students. But these are all worthy goals, and those footing the bill (or a large chunk of it through gifts and financing) get to make the rules. That is the Golden Rule: He who has the gold makes the rules. Also, sticking to our Arthurian theme, Might Makes Right.

“we hope to use federal administrative data to develop higher quality and nationally comparable measures of graduation rates and employment outcomes that improve on what is currently available.”

So, they are going to use the National Student Loan Data System (NSLDS) to create measures of graduation rates for Title IV students. This means they will build estimates that assume a student’s first appearance as a Title IV recipient marks his or her first enrollment in college. For many students this will work, but not all. Incorporating transfers into the mix will be a much greater challenge, particularly those from California community colleges, where so very few students use Title IV to attend. Some estimation will be possible using annual loan amounts, since maximum subsidized Stafford loans are different for third- and fourth-year students. However, the great many students transferring in fewer than two years from a community college will be damned hard to identify.
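To make the heuristic concrete, here is a minimal sketch, assuming the dependent-undergraduate subsidized Stafford annual limits of $3,500 (first year), $4,500 (second year), and $5,500 (third year and beyond). Note that it only tells you anything about students who borrow at the cap, which is part of why the quick transfers are so hard to spot:

```python
# Sketch: inferring class level from an annual subsidized Stafford amount,
# assuming the dependent-undergraduate limits noted above.
def inferred_class_level(subsidized_amount: float) -> str:
    if subsidized_amount > 4500:
        return "third year or beyond"
    if subsidized_amount > 3500:
        return "second year or beyond"
    return "indeterminate (any class level)"

print(inferred_class_level(5500))  # -> third year or beyond
print(inferred_class_level(3500))  # -> indeterminate (any class level)
```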

Of course, all this can be fixed going forward by making changes to the NSLDS collection.

Using NSLDS data to match to Social Security earnings has already been tested at the program level for Gainful Employment. It should not be a stretch to do that at the institution level. The interesting thing will be to see how these figures compare to what states like Virginia, Texas, and others are reporting using UI wage data. And Payscale data. I don’t know about my colleagues in the other states, but I am ready to assist.

To do this well though, they are really going to have to do more than ask for comments. They need to bring people together. (About 2 minutes in on the next clip.)

I appreciate what the president is trying to do. I just don’t think ratings are the way to go for a government. Save with this caveat: as long as the ratings are billed and described solely as Title IV Performance Ratings and not Institutional Ratings, then I am happy and fully supportive. I have said all along it is completely appropriate for the Department to evaluate institutions based on their performance under Title IV. Program evaluation is part and parcel of government programs. Or should be. Let’s just keep the focus where it belongs and not try to be all things to all people, especially when neither the data nor the legitimate bounds of authority warrant more than that.

In any event, the Student Right-to-Know Before You Go Act is a better solution to the goals Studley’s post articulates and the goals presented within the draft framework. Better data, better information, within an appropriate scope.

We can achieve a version of Camelot in the cult-world of higher ed data.

Just choose wisely.

Cults in Higher Ed

I was at a super-exclusive, informal meeting-type thing this week. I have to call it a meeting as there was no beer. There should have been beer.  At one point, I was explaining how cultish higher education is. Really.

And this phrase didn’t originate with me. More’s the pity.

Back in 2010, shortly after I returned to work following my adventure in neuroscience, there was a subcommittee meeting for Governor McDonnell’s higher education reform committee. As is often the case for these things (in Virginia, at least), it was standing room only for the audience. No matter how often we try to explain to the meeting host that there will be a crowd – there is huge interest in higher ed policy – we always need more seats for the audience than the normies think. As one legislative liaison pointed out, “It is a cult, it really is. We want to be here, even more than our institutions want us to be here. We need to be here.”

Part of the attraction is the desire to be involved and to avoid damage to one’s institution. It’s also fascinating. There is very little as intrinsically interesting and mind-consuming as higher ed policy. It’s powerful stuff, too often polluted with overly simple explanations or overly complex solutions. And the people are fun to watch.

The only thing that is clearly more interesting and drives even greater passion is higher ed data & data policy. If you don’t believe me, show up at an IPEDS Technical Review Panel (TRP) and just observe. The level of passionate discourse and argument over a minor change in definition can go on for hours. It is almost obscene. Hell, just read tweets from any of the IR people or the higher ed researchers, or follow #HiEdData. These are people deeply invested in what they do and what they want to know from data. And what they can know. And what they do know.

This is what Secretary Duncan and President Obama did not know, or failed to understand, when #PIRS was proposed.

There are hundreds, more like thousands, of people who are experts in IPEDS data. They know what can and can’t be done with IPEDS data. And what shouldn’t be done. In Ecclesiastes, the Preacher said, “There is nothing new under the sun.” That is how people felt about the prospect of using IPEDS data for a ratings system. What could be done that would be substantively different from what now exists? As big as it is, it is an exceedingly limited collection of data that was never intended for developing rankings or ratings.

Just to make this post kind of academic-like (undergraduate-style), let’s look at the definition of a cult:

cult
  1. a system of religious veneration and devotion directed toward a particular figure or object.
    “the cult of St. Olaf”
  2. a misplaced or excessive admiration for a particular person or thing.
    “a cult of personality surrounding the IPEDS directors”
    synonyms: obsession with, fixation on, mania for, passion for, idolization of, devotion to, worship of, veneration of
    “the cult of eternal youth in Hollywood”

The only thing really lacking is any type of charismatic leader(s). Or charisma, really. (Again, observe a TRP.) Kool-Aid generally comes in the form of caffeinated beverages. Everything else is there.

Of course, there are more than just these two higher ed cults. We have the new cult of Big Data, and it seems full of evangelists promising the world and beyond. Of course, this cult transcends higher education.

While I like the idea of Big Data, I like the idea of Big Information/Bigger Wisdom even more. That’s the cult I am waiting for.

I hope they have cookies. Without almonds.

IPEDS is not GRS

Say it with me, “IPEDS is not GRS. GRS is a part, a small part, of IPEDS.”

Matt Reed (@DeanDad) set me off a bit this morning with his Confessions piece over at InsideHigherEd, and I kind of piled on with my long-time colleague Vic Borden: IPEDS and GRS are not simply one and the same, with a focus (or fetish, if you prefer) on first-time, full-time undergraduates. It really ticks me off when I read something like this, since it takes my mind off the very good points he was trying to make. I could have written thousands of words of comments about how what we are doing in Virginia is so different, and so much better.

Every time someone says he or she wouldn’t be counted in IPEDS because they transferred, or took eight years (like yours truly), I cringe. It is just not true. It is false. It is wrong.

Yes, that person would not be in the GRS metric. However, she certainly would show up in the Completions survey if she finished a degree or certificate, whether it took one year or 20. Likewise, she would show up in the fall enrollment survey any time she was enrolled in a fall term.

As important as a graduation rate is, there is not much more important than the degree conferrals, the completions, themselves. That is something that folks should keep in mind.
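The point is easy to show in code. A minimal sketch (the flags are hypothetical, for illustration only) of which IPEDS surveys would capture that eight-year transfer student:

```python
# Sketch: where a student appears in IPEDS. The GRS cohort only captures
# first-time, full-time students; Completions and Fall Enrollment do not care.
def ipeds_appearances(first_time_full_time: bool, completed_award: bool,
                      enrolled_any_fall: bool) -> list:
    surveys = []
    if first_time_full_time:
        surveys.append("Graduation Rates (GRS)")
    if completed_award:
        surveys.append("Completions")
    if enrolled_any_fall:
        surveys.append("Fall Enrollment")
    return surveys

# A transfer student who took eight years but finished:
print(ipeds_appearances(False, True, True))
# -> ['Completions', 'Fall Enrollment']
```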

Now I could brag about some of the things we are doing at research.schev.edu, but instead I will simply highlight this tweet from the ACSFA PIRS Hearing:

I think Matt has the right ideas, and I would support them in a Technical Review Panel, although I would probably offer supportive amendments. The problem is getting to a TRP. The type of collection required to support these measures, if not a Student Unit Record system (or IPEDS-UR, although being a Thomas Covenant fan, I want to lean towards ur-IPEDS), would be so burdensome that the collection would never happen without Congressional action. And that’s the rub. USED only controls the details. Congress makes the ultimate determination, and that is where AACC and ACCT (and probably a bunch of groups representing four-year colleges) need to get involved.

The easiest thing at this point is to pile on to support the Student Right-to-Know Before You Go Act.


Just quit whining

Yesterday, at the Summer Hearing of the Advisory Committee on Student Financial Assistance about PIRS (the proposed rating system), there was plenty of talk about some kind of adjustment for inputs, or weighting based on the types of students enrolled. As I heard things, there are four positions on this topic as it relates to PIRS for use as a consumer information product and as an accountability tool.

1) Everything must be input-adjusted for fairness (both for consumer information and accountability).

2) Input-adjustments are only appropriate for accountability.

3) Consumers need to see non-adjusted numbers, particularly graduation rates, to know their likelihood of finishing.

4) Institutions that serve predominantly low-income, under-prepared students (or a disproportionate share of such – I guess they all feel they are entitled to a righteous share of smart, rich students) are doomed to fail with a significant number of these students.

The fourth point just makes my teeth ache. Part of me wants to scream out in public, “If you don’t feel you can be successful with these students, quit taking their money and giving them false hope. Get out of this business.” I know that is somewhat unfair. Also, I believe that a certain amount of failure should be allowed and expected, especially in the name of providing opportunity. Further, each student does have to do the work and make an effort – but I believe that most want to do so. To publicly state that at some point your institution just won’t be able to do any better (especially if that is short of 100 percent) just strikes me as conceding the battle before fully engaging.

There is so much ongoing effort and research focused on improving student outcomes that it is not hard for me to believe that someday every student who wants to succeed will be able to do so.

As you might surmise, I disagree with point one. I can live with the concept of input-adjustment for accountability, especially given differences in public support and student/family wealth. But providing students input-adjusted scores that attempt to level the comparison between VSU and UVa doesn’t make sense to me. They are radically different institutions with different mixes of students, faculty, and programs. And costs.

I’m also not a big fan of comparisons in general. They are overly simple for big decisions and so easily misleading. At SCHEV, our institution profiles are designed to avoid the comparison trap and ignore the concepts of input-adjustment. We do provide the graduation rate data (a variety of measures on the “Grad Rates” tab) on a scale anchored by the sector’s lowest and highest values in the state.

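That anchoring is just a min-max rescaling within sector. A minimal sketch, with illustrative numbers rather than actual SCHEV data:

```python
# Sketch: position a rate on a scale anchored by the sector's lowest
# and highest values in the state (numbers are illustrative only).
def anchored_position(value: float, sector_min: float, sector_max: float) -> float:
    return (value - sector_min) / (sector_max - sector_min)

# A 62% grad rate in a sector whose rates run from 40% to 95%:
print(round(anchored_position(62.0, 40.0, 95.0), 2))  # -> 0.4
```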

Likewise, when we released the mid-career wage reports this week, we created them only at the state level. While there might have been more interest in comparing institutions, we think policy discussions deserve something more.

However, the US News & World Report Best Colleges rankings get 2,500 (or more!) page views for every page view these reports get*. The PayScale Mid-Career Rankings have also gotten far more coverage. I think this is a pretty strong values statement from the higher ed community: despite what the faculty/faculty-researchers say and teach, the great bulk of the community wants rankings and comparisons.

*What, you think I don’t know that non-higher-ed people look at the rankings? Of course they do. Given the number of colleges and universities ranked, the number of administrators at each, and the number of journalists writing stories about rankings, it doesn’t take long to get to a half-million page views in a day.

So, quit whining about input-adjustments and focus on becoming exceptional at teaching and graduating students. Quit whining about government ratings if you are going to keep feeding the economic engine that saved US News & World Report.

We are going to fail with some students. We don’t have to fail with most, which some institutions manage to do.