About the Undeclared Major as a Dead Horse

…I have not finished beating it.

Quick recap: FSA is requiring all Title IV awardees submitted by institutions to have a valid CIP Code (Classification of Instructional Programs), including students in undeclared or undecided status. Virginia institutions have been running into difficulty because they have been reporting undeclared students to us for over 20 years using code 90.0000, which is, alas, a Virginia-specific code. It is thus not found in the official CIP 2010 tables, and students submitted with this code have been rejected. USED is advising institutions to use 24.0102 – General Studies.

Set aside my argument that students who are, by definition, not enrolled in a program should not have a valid CIP code at all – unless that code’s definition is undeclared, which is not the case for 24.0102. While I grant the Illustrative Examples of “Undeclared” and “Undecided” might seem to make it a logical choice, I think a step was missed.

What step? How about checking to see if degrees are awarded under 24.0102? After all, a BA awarded in General Studies would seem to be a pretty declarative statement, wouldn’t it? Wouldn’t it also seem that someone had decided that was an appropriate major?

In 2012, there were 84,118 degrees at various levels awarded nationwide under CIP 24.0102, based on the IPEDS Completions report.


In 1992, there were 24,357 degrees at various levels awarded with CIP 24.0102.

Oops. It is a growing problem.

In the grand scope of things, this is a relatively minor problem. Unfortunately, it demonstrates how little thought is given to future data collections and future uses of data. Any analysis of these data will be inconclusive, misleading, or wrong, because researchers won’t be able to separate out students who were actually enrolled in a General Studies program vs. those who were undeclared.

It would be so very easy to add 00.0000 or 99.9999 or some other code to the valid-value table based on CIP codes to avoid this problem. If the logic is that all degree-seeking (and thus potentially aid-eligible) students are enrolled in a program, that is just wrong. Some schools, including one of my previous employers, did not (and possibly still do not) allow students to select a major until after the third semester. This allows them to experience more of the liberal arts and sciences before settling down to focus on just one.
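It really would be a small change to the edit logic. A minimal sketch in Python (the table contents and the choice of 00.0000 as the sentinel are illustrative, not FSA’s actual validation code):

```python
# Minimal sketch of CIP code validation with a sentinel for undeclared
# students. The codes and table here are illustrative, not FSA's actual
# edit logic.

UNDECLARED = "00.0000"  # proposed sentinel value, not in CIP 2010

# A tiny stand-in for the official CIP 2010 valid-value table.
CIP_2010 = {
    "24.0102",  # General Studies
    "27.0101",  # Mathematics, General
    "45.1001",  # Political Science and Government, General
}

def is_valid_cip(code: str) -> bool:
    """Accept any official CIP 2010 code, plus the undeclared sentinel."""
    return code == UNDECLARED or code in CIP_2010

# Virginia's legacy state-specific code is still rejected, but undeclared
# students no longer have to be disguised as General Studies majors.
print(is_valid_cip("90.0000"))   # Virginia-specific legacy code
print(is_valid_cip("00.0000"))   # sentinel for undeclared
print(is_valid_cip("24.0102"))   # actual General Studies majors
```

The point of the sketch: accepting a sentinel is one extra condition, and it keeps genuinely undeclared students distinguishable from real General Studies enrollments in every downstream analysis.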

I am curious to know if this was a conscious, intentional decision by the department, or a development choice by the contractor that was subsequently (and thoughtlessly) approved by the person running the contract.


Duncan doesn’t understand the opposition to PIRS

Duncan doesn’t get it. Apparently the president does not get it either if this is all coming from the top.

Duncan did ask the ratings system’s many critics to reserve judgment until they actually knew what it would look like.

“This system that people are reacting against doesn’t exist yet,” he said. “Tell us what you like; help us do this.… Let’s not be against something that has not been born yet.”

Read more: http://www.insidehighered.com/news/2014/07/03/arne-duncan-talks-about-ratings-and-student-debt-expansive-interview#ixzz36VOw7Wlh 
Inside Higher Ed 

One of the many things that the Administration does not get is that some of us are objecting to the ratings system because the currently available data are inadequate. Completely inadequate. Bloody embarrassingly inadequate. These data, for the most part, are the same data that brought us to the current state of affairs.

Asked by Leonhardt to respond to the criticism that the government can’t rate colleges intelligently and effectively, Duncan reiterated that department officials know they have a hard job ahead. “We’re going into this with a huge sense of humility,” he said, and recognize that “intellectually it is difficult.”

Bullshit. Humility is not based on ignoring the advice you receive and doing it anyway.

“I say to the Department this: ‘We need better data. Let me rephrase that. YOU need better data. This should be the Department’s first priority.’”

-Tod Massa (me), PIRS Technical Symposium, Feb 6, 2014.

The Department called together 19 experts in this arena to advise on the proposal. Perhaps it was just a show to support the pretense of listening and engagement. That’s fine, we all do that (every time we nod when our spouse is talking to feign involvement, for example). However, since the lack of humility irritates me, I am choosing to assume they wanted advice, but are unwilling to accept it. (It is too much like a smug, snotty adolescent saying “Uh-huh, I know” when you are trying to convey important information to him. I already have one of those in the house and another en route.) By the way, push-back from those you intend to rate does not prove the rightness of your cause.

At least the interview demonstrates that the talking points, and perhaps the structure of the ratings system itself, are changing.

“Are you increasing your six-year graduation rate, or are you not?” he said. “Are you taking more Pell Grant recipients [than you used to] or are you not?” Both of those metrics, if they were to end up as part of the rating system, would hold institutions responsible for improving their performance, not for meeting some minimum standard that would require the government to compare institutions that admit very different types of students.

This is a step in the right direction, perhaps the second-best approach. Encouraging a constant cycle of improvement is a good thing. Perhaps they will look to states that have experience in accountability measures of this type, you know, like Virginia. One of the things we know on this topic is that there are annual variations that are little more than statistical noise, especially with smaller institutions. When such accountability measures are first introduced, it takes a number of years before institutional policy and operational decisions catch up. Thus, improvement on all measures every year is unlikely – unless the measures are weak to begin with. Remember, the key decisions about the graduation rates of the entering cohort of 2014 have already been made – who was admitted, how much aid they received, the institutional budget, and the student support programs that will be in place. The decisions impacting the next available graduation rates, those of the 2008-09 cohort, have long been audited and stashed away in a dusty mausoleum.
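The statistical-noise point is easy to make concrete: treat each student in a cohort as an independent trial with the institution’s “true” graduation probability, and the year-to-year wobble falls out of the binomial standard error. A quick sketch (the rates and cohort sizes here are invented for illustration):

```python
# Why year-to-year swings at small institutions are mostly noise:
# model each student as a coin flip with the institution's "true"
# graduation probability and look at the spread of the observed rate.

import math

def rate_std_error(true_rate, cohort_size):
    """Standard deviation of an observed graduation rate (binomial model)."""
    return math.sqrt(true_rate * (1 - true_rate) / cohort_size)

# Hypothetical institution with a true 60% graduation rate.
for n in (200, 1000, 5000):
    se = rate_std_error(0.60, n)
    print(f"cohort of {n}: observed rate swings ~±{2 * se:.1%} (2 std errors)")
```

For a cohort of 200, the observed rate routinely swings several percentage points with no change whatsoever in institutional performance, which is exactly why rewarding year-over-year “improvement” at small institutions mostly rewards luck.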

Another issue that arises is “the pool of possibles.” Is it possible to expect every institution to increase its proportion of Pell-eligible students? Probably not, at least not after some critical proportion of overall enrollment is reached, without dramatically expanding the definition of Pell-eligibility. Over time a lot of students will wind up being shifted among institutions. Probably not before the next reauthorization of the Higher Education Act, but I don’t know. It will depend on institutional reaction to the ratings and Congressional action to tie them to student financial aid. Anyhow, at some point one ends up creating a floor value to say, “If an institution is at or above this point, improvement on this measure is not required.”

When we start talking about improvement on multiple measures, we get this argument: “If we enroll more [under-represented students, students with Pell], our graduation rates will go down.” This is a complete and utter bullshit argument that I have heard over and over again. We go back to 2008 on this, when I pushed the board to adopt, as a performance measure, comparisons of graduation rates among Pell recipients, students without Pell but with other aid, and students without any aid.

There is absolutely no necessity for institutional graduation rates to decrease. “But Tod,” I heard, “we know students who are Pell-eligible are less academically qualified most of the time.”

“What do you do about that?”

“Well, nothing.”

“Therein lies the problem.”

That was a fun conference call. This experience, and more like it, are why I believe a ratings system is best built around comparisons of internal rates at each institution. There is no legitimate or moral reason why family income should be a primary predictor of student success in college. We know that disadvantaged students have different needs than wealthy students, so let’s meet those needs.

I am glad to see an evolution in thinking on the ratings system. As long as the Department plans to release a draft based on existing data this fall (which is a question-begging time frame to begin with – fall semester or seasonal fall? and I have seen both “in the fall” and “by fall” – implying before), I will continue to oppose and agitate against the current plan. I continue to be willing to help and engage with the Department, but my phone lines and multiple email accounts remain strangely silent.



Describe a Rainbow in Seven Words

Describe a rainbow in seven words to someone blind since birth.

This is the fundamental problem with PIRS or the current craze toward non-dashing dashboards (if the data change only once a year, it is only a dashboard for a glacier). Institutions are complex, with many things going on that simply don’t reduce to seven metrics, let alone four or five as in the White House College Scorecard. Spending yesterday and this morning at an NGA-hosted Higher Education Effectiveness and Efficiency Metrics Learning Lab reinforces my (probably curmudgeonly) belief that knowledge sometimes has to be earned through effort and study – not a 60-second review of a web page or PowerPoint slide.

I want to believe in the power of data to transform systems, to transform lives. I worry, though, that over-simplification of the presentation of performance data leads to under-recognition of the lives affected.

Speaking of over-simplification, I was part of an expert panel on Tuesday about PIRS and community colleges. Deputy Under Secretary Jamienne Studley was present. She was clear that PIRS is going forward based on existing data. Data that are completely inadequate to the task, in my considered opinion. However, she does seem open to some ideas that others in the Department and the White House find anathema. I won’t share those at this time, but I was kind of, umm, vocal in my suggestions. I know she heard me.

A timely example of over-simplification is this: “Starbucks Offers Free College to Employees.” Robert Kelchen provides a more in-depth understanding here. Matt Reed does a mea culpa from his original position and acknowledges the efforts of others who read the fine print and went beyond the metric of “Free College.”

While I am not sure that either of these books was covered in Reading Rainbow, if one compares two of my favorites, The Sun Also Rises and The Stand, one can easily see a difference in the prose styles. Hemingway is much tighter and sparer than King (I suspect) ever dreamt of being. Despite that, neither can be meaningfully reduced to seven words or any other metric. Any critical rating is meaningless to people who eschew one genre over another. And books are static. They don’t change over time. Our interpretations may change, their placements in a rating system may change, but the books themselves don’t change (except when King added 100-plus previously cut pages to a revised edition).

Institutions change. Measurement can cause change in institutions. Bad measurement, bad incentive structures are likely to cause bad changes. Let’s really be clear what we are doing and why, while recognizing that not everything can be as simple as we might like.



Why Ratings Seem to be Necessary to Outsiders

Right here.

Rebecca Schuman reacts to the MLA Report of the Task Force on Doctoral Study in Modern Language and Literature. I am not going to try summarizing it or even highlighting it, as such contempt and wrath need to be read in the original language.

If a higher education insider can react this strongly to what is clearly a well-intentioned and highly focused effort to improve an alleged profession, then what are the normies outside academe to think?

From the InsideHigherEd story “We are faced with an unsustainable reality: a median time to degree of around nine years for language and literature doctoral recipients and a long-term academic job market that provides tenure-track employment for only around [60] percent of doctorate recipients.”

How ’bout that? This is from the same large group of faculty-types that objects when we in government start talking about credits-to-degree, time-to-degree, placement rates, and job market outcomes for undergraduates. Schuman says this:

So as I talk about this report, please keep in mind that my issue isn’t with the MLA’s leadership—it’s with the MLA’s membership, which consists almost entirely of people who can both afford to pay the dues, and haven’t been so traumatized by the convention that they drop out for their psychological health (I am in the second group).

So she is holding the large group of faculty responsible. Many of them, at least in Virginia, have bemoaned my work with wage and debt outcomes. I guess when it comes to a continuing stream of sacrificial lambs to fund one’s salary, it is a different story.

In all fairness, this is a healthy debate for an academic community to have, especially given the apparent over-production of PhDs compared to the full-time, tenure-track jobs available – which may be the result of people like me (and above) pushing for cost constraints. It is certainly a result of decreased funding (which I have not advocated). The problem is that the path the MLA suggests doesn’t seem to make a lot of sense. It seems contradictory and under-informed. To me it looks like they are suggesting producing more of what cannot currently be consumed, without a complete re-funding or restructuring of higher education. (But probably not the restructuring suggested here.)

Reports and debates like this suggest to people like President Obama that higher education has no clue. The proposal of a rating system is a way to enforce a simple message, “Get a clue!” Unfortunately, as currently proposed, and because USED’s focus has historically been on undergraduate access, measurement of graduate and professional programs has not been talked about – save by me. I was the lone voice at the Technical Symposium making that argument. I don’t know that a rating system will help the academe get a clue in a changing world, but I don’t know that it won’t. I do know, as I have said before, that the current data available to the Department are inadequate.

I really enjoyed reading Schuman’s post. As I read it, I wondered, “Is this really much different than current accreditation practices in terms of the resultant nonsensical solution?”



A Response to Schuman and Warner

I love all the coverage that the proposed Postsecondary Institutional Ratings System (PIRS, #PIRS) is getting these days. Rebecca Schuman over at Slate has written a nuanced support of the plan here. John Warner, over at Inside Higher Ed, has written an opposing viewpoint to Schuman’s. Warner wrote the previous day about Jamienne Studley’s unfortunate comparison of colleges and blenders.

Both are well worth the read.

I have neither the following nor the writing skills of either Schuman or Warner, but that has never stopped me from expressing my opinion. Nor will it now.

Both authors are right and wrong.

First off, while I am glad everyone is having such fun with the blender comment, where were you months ago when the comment was made and reported in Politico and elsewhere? Those of us in the higher ed data world have been shuddering for months about her use of Cook’s Illustrated as a model for PIRS. The resurgence of the comment and the announcement of the first delay in the ratings have been amusing to watch.

Warner thinks the ratings will empower the already powerful on campus by giving presidents even greater leverage for their policies. Absolutely. With the data currently available to USED, any thought of nuanced, targeted approaches to improving student outcomes will go right out the window. There will be more sledgehammer approaches to institutional policies, especially as institutions try to ensure they are in the same rating as their peers.

Warner also suggests that new deanlets will be created to collect and manage all the new data required. Maybe eventually, but that depends on what happens with reauthorization of the Higher Education Act (HEA). If the unit record ban is lifted, and something like the Student Right-to-Know Before You Go Act is passed, most institutions could experience a reduction in burden. In the near term, USED still has to get OMB clearance to expand collections, which is subject to burden review. Unfortunately, reporting burden is going to increase anyway, with or without the ratings system, because, well, just because. There is always more data to collect, and lots of organizations asking USED to collect more, and at some point, with enough increases, the institutions will demand to report student-level data because it will be easier and less burdensome. (Something like 45 states have unit record collections, with about 90 different collectors. SC public institutions report student-level data to the state. Sending a similar file to USED would cost less than the current IPEDS submissions.)

Schuman suggests that it “is time to come at higher education with a sledgehammer.” I tend to agree, but it depends on who is swinging the sledgehammer. The problem with the way higher education and USED have traditionally approached ratings, rankings, benchmarking, and the like, is through the use of direct institutional comparisons and peer comparisons. This is the kind of madness that has led us to today. Driving the bus by looking at the other buses is just silly. As I argued in my presentation at the PIRS Technical Symposium, the proper comparisons are intra-institutional. Rather than worry about how institution A compares to B on graduation rates, let’s focus instead on the difference in graduation rates between Pell recipients and non-recipients, and encourage policies to bring those numbers in line with each other, thus increasing graduation rates across the board.
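A sketch of what that intra-institutional comparison looks like as a metric (all figures invented for illustration):

```python
# Sketch of an intra-institutional rating signal: the gap between
# Pell and non-Pell graduation rates at the same institution.
# All figures here are invented for illustration.

def pell_gap(pell_grads, pell_cohort, other_grads, other_cohort):
    """Return (pell_rate, other_rate, gap) as fractions of each cohort."""
    pell_rate = pell_grads / pell_cohort
    other_rate = other_grads / other_cohort
    return pell_rate, other_rate, other_rate - pell_rate

# Hypothetical institution: 290 of 500 Pell students graduate (58%),
# 700 of 1,000 non-Pell students graduate (70%).
pell_rate, other_rate, gap = pell_gap(290, 500, 700, 1000)
print(f"Pell: {pell_rate:.0%}, non-Pell: {other_rate:.0%}, gap: {gap:.0%}")
```

The appeal of the gap as a metric is that it never asks how institution A compares to institution B; it asks only whether an institution serves its own Pell students as well as its own non-Pell students, which is a question every institution can act on.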

As for Schuman’s suggestion about ratings including values such as the percentage of courses taught by full-time, tenure-track instructors: fine. Just be warned that, using the existing data for community colleges, a number of research projects have found no direct correlation between community college graduation rates and either the numbers or ratios of full-time tenure-track faculty. So, depending on the biases of the folks in the department, such a rating component might do more to support the status quo. Further, state lawmakers may well push back against ratings that use such components because of the inherent cost drivers to funding higher education.

Which is also part of the reason we are here. Not everyone wants to pay what it costs to support higher education.

I am glad that people are talking a lot more about PIRS. I think a good ratings system can be built, just not with the existing data nor the traditional mindset toward evaluation of higher education. We in Virginia know far more about outcomes of students in Title IV aid programs than USED does – and that is only an off-shoot of our other work. If PIRS is done badly, it will empower presidents to have more of their way on campus and perhaps further damage the concept of shared governance.

The most important thing to keep in mind is that all of this is taking place under the umbrella of reauthorization of the HEA, with the added context of Gainful Employment. Whatever happens will be with us for years and, if historical trends hold true, federal control on campus will be more intrusive. This may not be bad, but it will not be easy.





Rating the Ratings Game

What a big week for PIRS – the President’s proposed Postsecondary Institution Ratings System!

Well, at least in my little world.

On Monday, I was back at Ferrum College for the first time since my son’s graduation a year ago for SCHEV’s meeting with the Private College Advisory Board (the private college presidents) and our regular May meeting of Council. At the very end of the meeting I was asked to give an update of my activities with the wage and debt reports. During this briefest of updates I also volunteered supportive responses to issues raised during the meeting.  (Of course, “supportive responses” were really along the lines of “If you would look at the damn website you would see that these things you are asking for already exist in great detail.”)  When I asked for questions, I received one, “Can you tell us about your involvement with the ratings system?”

It was kind of a set-up, in that I had spoken with that president just prior to the meeting and so he knew something of my involvement. I gave about a four sentence response summarizing my presentation at the symposium, which was greeted with the only applause of the day. The reception following the meeting contained a number of side conversations about the topic and requests for materials.

Parallel to all this, due to the marvels of mobile email, there was an exchange on this topic with my boss and a public college president, and the sharing of my presentation with that president.

Tuesday night was the time for a pair of separate email conversations with a public and a private president, both of whom have become highly involved in the topic and have been offering alternatives to USED and members of Congress. I’m really glad to see that they have been engaged and are offering some thoughtful, and good, suggestions.

Wednesday we saw the blog post from Jamienne Studley announcing that the draft ratings system release would be delayed until fall. Many of us were not surprised by this news. This is a big project with high stakes – the initial release sets the tone as to whether Congress might actually tie Title IV eligibility to the ratings.

One of Tuesday night’s conversations led to a call from a congressional staffer. During the course of the discussion I learned there was a proposed budget amendment to prevent the department from spending any money on PIRS. This is a reaction to Duncan’s commitment last month to continue the project even if Congress does not provide the $10 million requested for PIRS.

So, now we have a bit of a horse race to watch.

Will PIRS make it out the door before October 1? (The federal fiscal year ends September 30.)

Will Congress pass a budget before PIRS is released?

Will the budget have an amendment killing PIRS?

If PIRS hits the street before next year’s budget is passed, and is a good product, then it has a chance. If delayed too much in the fall, it is quite likely dead. (One might wonder how much intention is in this delay….)

Today’s letter opposing Gainful Employment from 34 members from both parties might be an indicator of where this might go. PIRS is merely GE at the institution level. If the initial draft places large numbers of for-profits in the lowest ratings, I suspect we will see a very similar letter from members.

So, all told, I give PIRS three stars out of 10. It would have had a better chance if the Department had been able to keep to its announced schedule. It seems to me that an August release of a good product is necessary for its survival. I understand the need for a delay – it is a big project, and I have delayed quite a few myself. Unfortunately, political realities can get in the way.

Non-urgent update, basically a late, “See? I told you!”

InsideHigherEd confirmed the idea of the amendment last week with a copy of the email.


College Decision Day and PIRS

It is May Day and many thousands of high school students are committing to colleges today. My friends Robert Kelchen and KC Deane have each blogged about it today. I am not going to bother trying to say what they have said, please read both pieces yourself – they are worth it.

Also worth reading is George Cornelius over at FindingMyCollege.com.

So, what happens next?

  • As Robert points out, there will be some melt and not all students will live up to their commitment.
  • In Virginia, sometime in the coming weeks before students enroll for classes, their names and other information will be sent to Virginia State Police to run against various criminal databases to ensure sex offenders are not enrolling, or if they are, they are properly reporting such (and any change of address).
  • Target and other stores will be flooded by students and parents to buy dorm supplies and furnishings. (Much of this will be trashed early in the semester or left in dumpsters at the end of the year).
  • Thrifty students will order new and used textbooks from Amazon and elsewhere as soon as they know what they need.
  • The rest will gripe much more loudly about the cost of textbooks… for multiple years.
  • According to the National Student Clearinghouse data for the 2007 cohort:
    • Of the students starting at public four-year institutions, only 51% will graduate from the same institution, about 13% will graduate from another institution, and 15% will still be enrolled. (Fyi, the numbers are much better in Virginia.)
    • For the students starting at private four-year institutions, only 59% will graduate from the same institution, about 14% will graduate from another institution, and 10% will still be enrolled. (Fyi, the completion numbers are a bit worse in Virginia.)
    • Only about 26% of students starting at a two-year public college will complete at that institution, with about 17% of all students (including those who completed a two-year degree) finishing a four-year degree and 19% still enrolled.
  • Of course, students who enroll exclusively full-time have much higher graduation rates across the board.
  • According to the Project on Student Debt, 71% of the 2012 graduates (four-year degrees) had an average of $29,400 debt, representing a 6% increase per year for the last five years.
  • If that 6% is accurate and continues forward, then at least three-quarters of those students graduating within four years will owe on average around $41k, and $44k for those in five years, and $47k in six years.  (I would really try to graduate within four years).
  • Even if debt only grows annually by 3%, we range between $35k and $37k.
  • Many of those graduates will likely start out making less money than they, their families, and policymakers have typically expected. This may have more to do with unrealistic expectations than anything else as the more I look at the earnings data, the more it seems to make sense when we look at the stories behind it.
  • These students, and ultimately graduates (many, but not all), will advise future students on how to choose a college and what college to choose. How many will suggest using a federal rating system?

“Dude, don’t pick a college that Uncle Sam hasn’t rated at least a three!”

Right after saying, “Don’t go borrowing private loans!”

Yeah, right.
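The debt projections in the list above are just the 2012 average compounded forward; a quick sketch reproduces the numbers:

```python
# Reproduce the back-of-the-envelope debt projections above: a 2012
# average debt of $29,400, compounded annually, for students entering
# in 2014 and graduating in 4, 5, or 6 years.

BASE_DEBT_2012 = 29_400

def projected_debt(grad_year, rate, base=BASE_DEBT_2012, base_year=2012):
    """Average debt projected to grad_year at a constant annual growth rate."""
    return base * (1 + rate) ** (grad_year - base_year)

for years in (4, 5, 6):
    grad_year = 2014 + years
    print(f"{years}-year grad at 6%: ~${projected_debt(grad_year, 0.06):,.0f}")
# At 6% growth: ~$41,700 / ~$44,200 / ~$46,900 -- the ~$41k/$44k/$47k above.
# At 3% growth, the same span runs roughly $35,100 to $37,200.
```

Note that the four-year figure already reflects six years of growth from the 2012 baseline, since the 2014 entering cohort graduates in 2018 at the earliest, which is another argument for graduating within four years.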

The first article I read a few minutes after six a.m. today was about Secretary Duncan’s statement that the Postsecondary Institution Ratings System will go forward – even without the requested 10 million dollars.

I still don’t understand how the Department can legitimately spend $10 million on ratings, other than on a significantly enhanced and expanded data collection. It might get more support from the community if the Secretary were open about that.

Further, according to the Chronicle:

On Tuesday, Mr. Duncan testified before the House education committee about the department’s budget and policy priorities for the coming fiscal year. During that hearing, Rep. Virginia Foxx, a Republican from North Carolina, said the department collects “mounds and mounds of data, but from that we get very little information.”

“We like transparency, and we don’t think we’re getting a lot of transparency from the department,” Ms. Foxx said. As an alternative to the college-rating system, she asked why the department did not just “put out useful information and let the public make decisions.”

What Representative Foxx fails to understand is that the Department has already published everything useful it has… plus a lot of other stuff that is less useful. Anything really useful will require new collections… and money. Perhaps at least $10 million. For another $10 million, I am pretty sure the Department could implement an IPEDS unit record collection (if it weren’t outlawed).

So, here’s the compromise. In exchange for no ratings, give the Department unit record collection via the Student Right-to-Know Before You Go Act.

As a matter of fact, you do need a badge

Today, APLU hosted a forum on Alternatives to Ratings. Listening to it stream online got me thinking dangerous thoughts.

All through this ratings system conversation I have been thinking about the essential uselessness of creating yet another government website for students. I don’t believe most students think of the federal government as being the authority on schools and colleges. If the ratings are supposed to be consumer information, how do you put them in front of the targeted consumers?

I have also been thinking about the ratings as being rectangular tiles, color-coded to David Bergeron’s proposal – lead, bronze, silver, gold, platinum.

And I thought about the US News & World Report America’s Best College badge. You can see an example here.

But that medal you wore on your chest always got in the way 
Like a little girl with a trophy so soft to buy her way 

So, the feds could simply add another piece to Title IV eligibility requiring institutions to display the appropriately earned PIRS badge on their websites. Plural. It is not enough to display it on the admissions and/or financial aid page. Instead, it should be required in the footer of every single page. Perhaps even the header.

Of course, this falls apart for the Lead institutions once they lose their Title IV eligibility. Which means any institution without a badge would be suspect. Unfortunately, this would be unfair to certain institutions that have chosen not to participate in federal aid programs. For those institutions, USED could issue a badge of non-participation.

I think this is an elegant solution to make the ratings meaningful to consumers. And make sure they see them.