Defining and Disclosing Financial Health

George Cornelius blogged this morning over at Finding My College about the desirability and need for private colleges to disclose their financial health.

No matter how you look at it, we, the U.S. taxpayers, pay dearly to support our higher ed system. Yet, when it comes to the so-called private institutions (the quasi-public colleges), there isn’t much shared with us or with prospective students about the spending and financial health (or lack thereof) of the institutions.

While I can quibble with the notion of quasi-public, it is a familiar argument. Bob Morse at US News & World Report tried making a similar argument years ago that private institutions should be subject to FOIA based on the large amounts of public money they receive. However, the money technically goes to the students, though it does seem to be a bit of a shell game. The money (student financial aid) can only be used at a qualifying provider, and that provider determines eligibility, makes the award, and controls disbursement.

The U.S. Department of Education, and any state that subsidizes quasi-public colleges, should compel the recipients of this largess to disclose conspicuously on their websites their financial statements for the past five years as well as data and information about student learning and outcomes. In other words, prospective students and their families should be given information with which to distinguish the performing institutions from the underperforming ones, and the ones with a future from the ones that are likely to find themselves in the junkyard of failed institutions before the current restructuring of higher ed has run its course.

The Department, and Congress, already require oodles of disclosures, but both the conspicuousness and the usability of these disclosures tend to leave much to be desired. However, when I read George’s post this morning I was intrigued by the thought of what this might look like. In Virginia, when it comes to student-oriented data, the public and nonprofit institutions have no place to hide at the undergraduate level. We publish an awful lot of very detailed data, with student outcomes out to 10 years. However, it touches neither the student learning issue nor the financial stability issue.

I’ve mentioned before that there are two criteria that put a private institution on my at-risk list: a first-to-second-year retention rate of less than 60%, and fewer than 1,500 students at an undergraduate-only institution (or 2,000 at a predominantly undergraduate one). A significant endowment can compensate for these risks, but most institutions with these issues have little to no endowment.
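That screen can be sketched as a small function. The 60% retention and 1,500/2,000 headcount thresholds come from the text above; the function name, the endowment threshold, and my reading of the two criteria as independent flags are all assumptions for illustration, not an official metric.

```python
# Hypothetical sketch of the two-part at-risk screen described above.
def at_risk(retention_rate: float, headcount: int,
            undergrad_only: bool, endowment_per_student: float = 0.0) -> bool:
    """Flag a private institution as at-risk, per the rule of thumb above."""
    low_retention = retention_rate < 0.60           # first-to-second-year retention
    size_floor = 1500 if undergrad_only else 2000   # headcount floor by type
    too_small = headcount < size_floor
    # A significant endowment can compensate; this threshold is my assumption.
    well_endowed = endowment_per_student > 100_000
    return (low_retention or too_small) and not well_endowed
```

A usage note: an institution with 55% retention and 1,200 undergraduates would be flagged unless it carried a substantial endowment per student.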

What can’t compensate is an inability to pay the bills when the big checks, such as those from the federal government, are late. This is what happened to Virginia Intermont, despite the president personally loaning nearly a half-million dollars to the college. We could require some kind of disclosure of the percentage of an institution’s total revenues represented by state and federal sources, including student financial aid. This is similar in nature to the 90/10 rule the Department has established for the for-profit institutions.

Even with five years of such numbers, that is only a minimal warning. It seems to me that a “cash-on-hand” warning trend added to this might be appropriate. Every 90 days the institution reports on its website how many days it can operate with current expenditure commitments and cash available. While this may not be directly meaningful to most families and students, it would certainly tell agencies and accreditors something important about the viability and sustainability of an institution.
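The arithmetic behind such a disclosure is simple; this minimal sketch (hypothetical function and invented figures) shows what the quarterly number would be:

```python
# A minimal sketch of the quarterly "cash-on-hand" disclosure suggested above:
# how many days the institution could operate on available cash, given its
# current expenditure commitments. All figures here are invented.
def days_cash_on_hand(cash_available: float, annual_operating_expense: float) -> float:
    """Days of operation covered by available cash, a common liquidity ratio."""
    daily_burn = annual_operating_expense / 365
    return cash_available / daily_burn

# Example: $4.5M in cash against a $30M annual operating budget.
print(round(days_cash_on_hand(4_500_000, 30_000_000)))  # 55 days
```

A college reporting 55 days every quarter is a very different proposition from one trending from 120 days down to 30, which is exactly the kind of trend agencies and accreditors could act on.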

I would also add a measure that explains how much of tuition revenue is used to fund institutional aid and, on average, how much students who must borrow to pay for their attendance, and who receive no gift aid themselves, contribute in debt toward gift aid for other students.
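A hedged sketch of what those two disclosures might compute. The simple pro-rata funding model, the function names, and the figures are all my assumptions for illustration, not an established formula:

```python
# Sketch of the two disclosures proposed above: the tuition discount rate
# (share of gross tuition recycled into institutional gift aid), and a rough
# figure for what a no-gift-aid borrower contributes, via debt, toward aid
# for other students. Assumes aid is funded pro rata from tuition dollars.
def discount_rate(institutional_gift_aid: float, gross_tuition: float) -> float:
    """Share of gross tuition revenue recycled into institutional aid."""
    return institutional_gift_aid / gross_tuition

def debt_toward_others_aid(amount_borrowed: float, disc_rate: float) -> float:
    """Of each borrowed tuition dollar, the discount-rate share effectively
    funds institutional aid for other students (pro-rata assumption)."""
    return amount_borrowed * disc_rate

# Example: a 40% discount rate and $20,000 borrowed for tuition.
r = discount_rate(12_000_000, 30_000_000)   # 0.4
print(debt_toward_others_aid(20_000, r))    # 8000.0
```

Under this model, a full-pay student borrowing $20,000 at a 40%-discount institution is taking on roughly $8,000 of debt to fund other students' gift aid, which is precisely the figure families never see.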

Why?

This excellent article from Forbes describes many of the associated problems.

If the whole idea of jacking up a price and then selectively discounting seems a bit nefarious, Crockett takes issue: “Students on campuses pay all kinds of different price points, just like people sleeping in a hotel or flying on airplanes pay all kinds of different prices.”

Does that make it right? Or ethical for a nonprofit?

Given the distorted model in place, perhaps this kind of distorted solution has merit. So long as obtaining student loans is easy and universities continue to chase rankings by leveraging aid and beefing up campus amenities, published prices will continue to rise along with tuition discounts. Thousands of schools will continue to struggle, and enrollment consultants like Noel-Levitz will be more than happy to lend a helping hand.

Yep. So perhaps the president’s proposed rating system (#PIRS) should focus only on financial stability.

By the way, speaking of #PIRS, since no one else has picked up on this, Valerie Strauss published this tidbit on her blog entry regarding 50 Virginia presidents signing on to a letter opposing PIRS:

Education Department spokeswoman Dorie Nolt issued this comment about the letter:

I noticed you wrote about the letter from the Virginia college presidents. Here is a statement from me (Dorie Nolt, no Turner necessary) on it:

“We have received the letter and look forward to responding. As a nation, we have to make college more accessible and affordable and assure that students graduate with an education of real value, which is the goal of the College Rating System. In an effort to build this system thoughtfully and wisely, we are listening actively to recommendations and concerns, which includes national listening tour of 80-plus meetings with 4,000 participants. We hear over and over — from students and families, college presidents and high school counselors, low-income students, business people and researchers – that, done right, a ratings system will push innovations and systems changes that will benefit students and we look forward to delivering a proposal that will help more Americans attain a college education.”

She also said there was more information about the development of the rating system on the Education Department website here.

“College Rating System” as opposed to “Postsecondary Institution Ratings System” – this looks like two changes to me: a single rating system, and one covering only colleges and universities, not the thousands of other postsecondary institutions.

 

What I said to the GAO

Should the Federal Government Rate Colleges (and Universities)?

President Obama, Secretary Duncan, Deputy Under-Secretary Studley, and others, have called “foul” on those of us opposing the proposed ratings system, since it does not yet exist. Their position is that our response should be constructive and supportive until there is something to criticize.

They don’t understand why many of us, the data experts, are opposed.

It’s not that a ratings system can’t be developed. It can. At issue is the quality and appropriateness of the available data. The existing data are simply inadequate for the proposed task. IPEDS was not designed for this and organizations like US News & World Report have taken IPEDS as far as it can go and added additional data for their rankings. Also at issue is the appropriateness of the federal government providing an overall rating for an institution over aspects for which it has no authority.

I think it is entirely right for the Department and Administration to rate colleges on their performance with Title IV financial aid funding and required reporting. Ratings based on graduation rates, net price, and other outcomes of students with Title IV aid would be very useful and appropriate. Adding factors for compliance and accuracy of required reporting under Title IV would add meaning to a system designed to determine continued eligibility for Title IV programs.

After all, if continued eligibility and amount of available aid under Title IV are the ultimate goals, aren’t these the things to measure?

This is decidedly less politically exciting than saying Institution A has a higher overall rating than Institution B, but it makes clear the relationship between what is being measured and what matters. It also has the advantage of being able to use existing student-level data on Title IV recipients in the same manner as is being done for Gainful Employment. From where I sit, PIRS is simply Gainful Employment at the institution level as opposed to the program level.

And that is appropriate.

Using ratings to develop performance expectations to participate in the federal largesse that is Title IV would be a good thing. Regional accreditation and state approval to operate are clearly no longer adequate for gate-keeping, if, indeed, they ever were.

The difficulty is determining what those expectations should be. It is quite reasonable to subdivide institutional sectors in some manner, calculate graduation rate quartiles or quintiles for each group based on students in Title IV programs, and require institutions in the bottom tier to submit a five-year improvement plan with annual benchmarks. Any institution failing to meet its annual benchmarks two years running could then be eliminated from Title IV. Using multiple measures of success, including wage outcomes from records matched to IRS or SSA, we can reduce any tendency toward lowering standards to survive.
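The quartile mechanism can be sketched in a few lines. The sector data and the "at least one institution" rule below are invented for illustration:

```python
# Sketch of the bottom-quartile flag described above: within a sector, rank
# institutions by their Title IV graduation rate and flag the bottom 25%
# for a five-year improvement plan. All data here is hypothetical.
def bottom_quartile(rates_by_institution: dict[str, float]) -> set[str]:
    """Return the institutions in the bottom quartile of graduation rates."""
    ordered = sorted(rates_by_institution, key=rates_by_institution.get)
    cut = max(1, len(ordered) // 4)  # bottom 25%, at least one institution
    return set(ordered[:cut])

sector = {"A": 0.62, "B": 0.41, "C": 0.55, "D": 0.73,
          "E": 0.48, "F": 0.66, "G": 0.39, "H": 0.58}
print(bottom_quartile(sector))  # {'B', 'G'}
```

In practice the hard questions are upstream of this code: how sectors are drawn, and whether the rate is the IPEDS GRS rate or something built from Title IV student-level records.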

In an ideal world, with “complete” data, a ratings system would be focused on intra-institutional improvement. In fact, this is the language Secretary Duncan is beginning to use, as he did in a recent interview:

“Are you increasing your six-year graduation rate, or are you not?” he said. “Are you taking more Pell Grant recipients [than you used to] or are you not?” Both of those metrics, if they were to end up as part of the rating system, would hold institutions responsible for improving their performance, not for meeting some minimum standard that would require the government to compare institutions that admit very different types of students.

The problem is that simple year-to-year improvement measures tend not to be very simple to implement. We have substantial experience with this in Virginia, especially this week, as we work through institutional review of performance measures in preparation for next week’s meeting of the State Council. On any measure, annual variance should be expected. This is especially true for measures with multiple-year horizons for evaluation. It is even truer when institutions are taking action to improve performance, as sometimes such actions fail.

A better approach is to focus on student subgroups within an institution. For example, is there any reason to accept that Pell-eligible students should have a lower graduation rate than students from families with incomes greater than $150,000? We generally understand why that is currently the case, but there is no reason to accept that it must be so. I would argue, vociferously, that if the Department’s goal is to improve access and success, this is where the focus belongs. Rate colleges on their efforts and success at ensuring all students admitted to the same institution have the same, or very similar, opportunity for success. Provide additional levers to increase the access of certain student groups to college. To do this would require IPEDS Unit Record – a national student record system – perhaps as envisioned in the Wyden-Rubio bill, The Student Right-to-Know Before You Go Act.
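The core of such a rating is a within-institution comparison rather than a cross-institution one. A minimal sketch, with subgroup labels and rates invented for illustration:

```python
# Illustrative sketch of the within-institution equity measure argued for
# above: compare graduation rates across income subgroups at the SAME
# institution, rather than comparing institutions to each other.
def subgroup_gap(rates_by_group: dict[str, float]) -> float:
    """Percentage-point gap between the best- and worst-served subgroups."""
    return max(rates_by_group.values()) - min(rates_by_group.values())

# Hypothetical subgroup rates for one institution.
example = {"Pell-eligible": 0.48, "other aid, no Pell": 0.55, "no aid": 0.63}
print(round(subgroup_gap(example), 2))  # 0.15, a 15-point gap
```

An institution would then be rated on shrinking its own gap over time, which sidesteps the problem of comparing institutions that admit very different students.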

This means overturning the current ban on a student record system. It also means taking a step that brings USED into a place where most of the states already are. From my perspective, it is hard to accept an overall rating system of my colleges from the federal government when I have far, far more data about those colleges and choose not to rate them. Instead, we focus on improvement through transparency and goal attainment.

I think few reasonable people will disagree with the idea of rating institutions on performance within the goals and participation agreement of Title IV. It is when the federal government chooses winners and losers beyond Title IV that disagreement settles in.

We will face disagreement over what standards to put in place, if we go down this path. That is part of the rough and tumble of policy, politics, and negotiated rulemaking. You know – the fun stuff.

Let’s take a quick look at four very different institutions. These images come from our publicly available institution profiles at http://research.schev.edu/iprofile.asp

 

Germanna Community College does not have high graduation rates (note these are not IPEDS GRS rates, as they include students starting in the spring as well as part-time students). All of these are toward the lower end of the range of Virginia public two-year colleges. There is a range of differences among the subcohorts, particularly between students from the poorest and the wealthiest families.

gao1

Even at the highest-performing institution on graduation rates, one of the highest in the nation, there is still a range of difference: a full 10-percentage-point gap between the poorest and wealthiest students.

gao2-uva

In the last two decades, CNU has more than doubled its graduation rates by transforming the institution and its admission plans. The differences between subcohorts are much smaller, but this has come at the price of denying access to students who sought an open-enrollment institution.

gao3-cnu

Ferrum College has relatively low graduation rates and high cohort default rates. Using federal data alone, it does not look to be an effective institution. However, I will point out that it has the highest success rate with students requiring developmental coursework in the first two years. It apparently can serve some students well, and some better than other institutions do.

gao4-fc

My point with these four examples is this: we need to drive improvements in student outcomes by focusing on differences within institutions, specifically subcohorts of students who are recipients of Title IV aid.

 

 

Duncan doesn’t understand the opposition to PIRS

Duncan doesn’t get it. Apparently the president does not get it either if this is all coming from the top.

Duncan did ask the ratings system’s many critics to reserve judgment until they actually knew what it would look like.

“This system that people are reacting against doesn’t exist yet,” he said. “Tell us what you like; help us do this.… Let’s not be against something that has not been born yet.”

Read more: http://www.insidehighered.com/news/2014/07/03/arne-duncan-talks-about-ratings-and-student-debt-expansive-interview#ixzz36VOw7Wlh 
Inside Higher Ed 

One of the many things the Administration does not get is that some of us are objecting to the ratings system because the currently available data are inadequate. Completely inadequate. Bloody embarrassingly inadequate. These data, for the most part, are the same data that brought us to the current state of affairs.

Asked by Leonhardt to respond to the criticism that the government can’t rate colleges intelligently and effectively, Duncan reiterated that department officials know they have a hard job ahead. “We’re going into this with a huge sense of humility,” he said, and recognize that “intellectually it is difficult.”

Bullshit. Humility is not based on ignoring the advice you receive and doing it anyway.

I say to the Department this: “We need better data. Let me rephrase that. YOU need better data. This should be the Department’s first priority.”

-Tod Massa (me), PIRS Technical Symposium, Feb 6, 2014.

The Department called together 19 experts in this arena to advise on the proposal. Perhaps it was just a show to support the pretense of listening and engagement. That’s fine, we all do that (every time we nod when our spouse is talking to feign involvement, for example). However, since the lack of humility irritates me, I am choosing to assume they wanted advice, but are unwilling to accept it. (It is too much like a smug, snotty adolescent saying “Uh-huh, I know” when you are trying to convey important information to him. I already have one of those in the house and another en route.) By the way, push-back from those you intend to rate does not prove the rightness of your cause.

At least the interview demonstrates that the talking points, and perhaps the structure of the ratings system itself, are changing.

“Are you increasing your six-year graduation rate, or are you not?” he said. “Are you taking more Pell Grant recipients [than you used to] or are you not?” Both of those metrics, if they were to end up as part of the rating system, would hold institutions responsible for improving their performance, not for meeting some minimum standard that would require the government to compare institutions that admit very different types of students.

This is a step in the right direction, perhaps the second best approach. Encouraging a constant cycle of improvement is a good thing. Perhaps they will look to states that have experience in accountability measures of this type, you know, like Virginia. One of the things we know on this topic is that there are annual variations that are little more than statistical noise, especially with smaller institutions. When such accountability measures are first introduced, it takes a number of years before institutional policy and operational decisions catch up. Thus, improvement on all measures every year is unlikely – unless the measures are weak to begin with. Remember, the key decisions about the graduation rates of the entering cohort of 2014 have already been made – who was admitted, how much aid they received, the institutional budget, and the student support programs that will be in place. The decisions impacting the next available graduation rates of the 2008-09 cohort have long been audited and stashed away in a dusty mausoleum.
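A quick worked example of that noise, using the binomial standard error of an observed graduation rate (the cohort sizes here are made up):

```python
# Why year-to-year movement at small institutions is often just noise:
# the standard error of an observed rate p on a cohort of n students.
import math

def grad_rate_se(p: float, n: int) -> float:
    """Binomial standard error of an observed rate p for a cohort of size n."""
    return math.sqrt(p * (1 - p) / n)

# A 50% rate on a 300-student cohort can swing roughly ±6 points (2 SEs)
# by chance alone; on a 5,000-student cohort, only about ±1.4 points.
print(round(grad_rate_se(0.5, 300), 3))   # 0.029
print(round(grad_rate_se(0.5, 5000), 3))  # 0.007
```

So a small college "improving" from 50% to 54% one year and falling back the next may have changed nothing at all, which is exactly why single-year improvement targets misfire.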

Another issue that arises is “the pool of possibles.” Is it possible to expect every institution to increase its proportion of Pell-eligible students? Probably not, at least not after some critical share of overall enrollment is reached, without dramatically expanding the definition of Pell-eligibility. Over time, a lot of students will wind up being shifted among institutions. Probably not before the next re-authorization of the Higher Ed Act, but I don’t know. It will depend on institutional reaction to the ratings and Congressional action to tie them to student financial aid. Anyhow, at some point one ends up creating a floor value to say, “If an institution is at or above this point, improvement on this measure is not required.”

When we start talking about improvement on multiple measures, we get this argument: “If we enroll more [under-represented students, students with Pell], our graduation rates will go down.” This is a complete and utter bullshit argument that I have heard over and over again. We go back to 2008 on this, when I pushed the board to adopt a performance measure comparing the graduation rates of Pell students, students without Pell but with other aid, and students without any aid.

There is absolutely no necessity for institutional graduation rates to decrease. “But Tod,” I heard, “we know students who are Pell-eligible are less academically qualified most of the time.”

“What do you do about that?”

“Well, nothing.”

“Therein lies the problem.”

That was a fun conference call. These experiences, and more like them, are why I believe a ratings system is best built around comparisons of internal rates at each institution. There is no legitimate, nor moral, reason why family income should be a primary predictor of student success in college. We know that disadvantaged students have different needs than wealthy students, so let’s meet those needs.

I am glad to see an evolution in thinking on the ratings system. As long as the Department plans to release a draft based on existing data this fall (a question-begging time frame to begin with – fall semester or seasonal fall? I have seen both “in the fall” and “by fall,” the latter implying before), I will continue to oppose and agitate against the current plan. I continue to be willing to help and engage with the Department, but my phone lines and multiple email accounts remain strangely silent.

 

 

Rating the Ratings Game

What a big week for PIRS – the President’s proposed Postsecondary Institution Ratings System!

Well, at least in my little world.

On Monday, I was back at Ferrum College for the first time since my son’s graduation a year ago, for SCHEV’s meeting with the Private College Advisory Board (the private college presidents) and our regular May meeting of Council. At the very end of the meeting I was asked to give an update on my activities with the wage and debt reports. During this briefest of updates I also volunteered supportive responses to issues raised during the meeting. (Of course, “supportive responses” were really along the lines of “If you would look at the damn website you would see that these things you are asking for already exist in great detail.”) When I asked for questions, I received one: “Can you tell us about your involvement with the ratings system?”

It was kind of a set-up, in that I had spoken with that president just prior to the meeting, so he knew something of my involvement. I gave about a four-sentence response summarizing my presentation at the symposium, which was greeted with the only applause of the day. The reception following the meeting contained a number of side conversations about the topic and requests for materials.

Parallel to all this, thanks to the marvels of mobile email, there was an exchange on this topic with my boss and a public college president, and the sharing of my presentation with that president.

Tuesday night was the time for a pair of separate email conversations with a public and a private president, both of whom have become highly involved in the topic and have been offering alternatives to USED and members of Congress. I’m really glad to see that they are engaged and offering some thoughtful, and good, suggestions.

Wednesday brought the blog post from Jamienne Studley announcing that the draft ratings system release would be delayed until fall. Many of us were not surprised by this news. This is a big project with high stakes – the initial release sets the tone as to whether Congress might actually tie Title IV eligibility to the ratings.

One of Tuesday night’s conversations led to a call from a congressional staffer. During the course of the discussion I learned there was a proposed budget amendment to prevent the Department from spending any money on PIRS. This is a reaction to Duncan’s commitment last month to continue the project even if Congress does not provide the $10 million requested for PIRS.

So, now we have a bit of a horse race to watch.

Will PIRS make it out the door before October 1? (The federal fiscal year ends September 30.)

Will Congress pass a budget before PIRS is released?

Will the budget have an amendment killing PIRS?

If PIRS hits the street before next year’s budget is passed, and is a good product, then it has a chance. If delayed too much in the fall, it is quite likely dead. (One might wonder how much intention is in this delay….)

Today’s letter opposing Gainful Employment, signed by 34 members of Congress from both parties, might be an indicator of where this is going. PIRS is merely GE at the institution level. If the initial draft places large numbers of for-profits in the lowest ratings, I suspect we will see a very similar letter from members.

So, all told, I give PIRS three stars out of 10. It would have had a better chance if the Department had been able to keep to its announced schedule. It seems to me that an August release of a good product is necessary for its survival. I understand the need for a delay; it is a big project, and I have delayed quite a few myself. Unfortunately, political realities can get in the way.

Non-urgent update, basically a late, “See? I told you!”

Inside Higher Ed confirmed the existence of the amendment last week with a copy of the email.