It’s nice to be special, or, I’m glad I am an alpha

Peter Greene takes Kevin Carey to task over at Curmudgucation. Read it. My first thought upon reading it is that most everyone bemoaning the failure of American higher education is a product of American higher education. So how would they know? Is this a case of the blonde leading the blonde?

Or is it that they feel such failure in understanding the world around them that it must be the fault of the education they received?

Must be it. Can’t be their own damn fault.

Or perhaps their expectations have changed over time. Or beliefs.

I don’t know Peter, but I like his writing and thinking. I do know Kevin and I like his writing and thinking. I don’t always agree with either of them, and both of those statements apply to a number of people in the education research, commentary, and punditry industry (RCPI).

Unfortunately, I know that a lot of the RCPI simply are not as smart and well-educated as they think they are, particularly those I have been in meetings or dialogue with over the years. I too often say things that should not go unchallenged, but do, and the statements I make that are challenged are usually those contrary to the Crisis of the Day or Cherished Myth. The things that go unchallenged, or fail to cause a “wait a moment, what did you just say?”, are intended to see who is paying attention – or is as educated as they think they are.

I know I am not as well-educated as a lot of these folks. I went to unimpressive schools and have a degree in studio art (painting nekkid women and making jewelry) of all things. I’ve read far more Stephen King than the classics of western literature (unless those classics are the golden age of science fiction and later). For that matter, I am pretty much enjoying King’s new book, Mr. Mercedes. And I have no shame or embarrassment about these things – I like what I like. Despite the source of my education, I did make what I could of it and I am pretty well aware of my biases. I also pay a lot of attention to what people say…sometimes more attention than I think the speaker or writer is paying.

During the last presidential election, we heard a lot about “makers and takers,” and on my last road-trip I saw a billboard using that phrase between Joplin and Kansas City. I have a strong bias when it comes to “information makers” and “information takers” in the RCPI class. Too many are in the latter group and have never been in the former group. They really have no idea, in my not-so-very-humble-at-all opinion, what is going on in the data, only what they have read about it or created as tertiary users of the data. Some have never really had a real job and simply write about the work of others, applying their own biases and prejudices. Others are part of a revolving door of policy positions, think-tanks, and foundations – generally going places that support and reinforce their own belief systems. This is probably a completely unfair bias of mine, but I admit to it. I appreciate the fact that the RCPI has every bit as much societal utility and value as music, movie, and theater critics.

Peter draws a link between the funding sources and the agenda of the New America Foundation, along the lines of one of my favorite reader commenters at CHE, IHE, and elsewhere (wherever higher ed stories appear), Unemployed Northeasterner. While there is a certain conspiratorial appeal to following the money and thinking that the think-tanks and researchers receiving money from foundations are being controlled in their work, or that foundations like Lumina are led in such a manner as to direct their grant activities to fatten the wallets and coffers of SallieMae, it is far easier for me to believe in the essential laziness of human nature. People are attracted to organizations that reinforce what they choose to believe, and organizations attract people that reinforce the beliefs and agenda of the organization. The same is true in making grants to individual researchers and teams of researchers.

Of course, this essential laziness is key to understanding the pronouncements of the RCPI – they read, research, and write in a way that supports their chosen belief system. It’s a lot of work to constantly challenge your own thinking and belief system. It’s also wearying to be wrong all the time – or at least face a reality that says things are not quite as simple and clear-cut as you wish to make them out to be.

I had actually planned to write about the existential crisis that came to a head this week about whether or not there is a student loan crisis. Student loan posts get lots more views. This post might get twenty. I’ve written before about my difficulty in knowing if student debt is a crisis or not, but I am going to reinforce these basic facts:

Fewer than 5% of Virginia’s baccalaureate graduates in 2011-12 graduated with debt greater than $50,000. That means fewer than 2,300. Is this a large enough group on which to base public policy? In this day of Big Data and a seeming lack of any sense of a collective good, we can imagine personalized public policy. In fact, I touch on that in a brilliant post over here. A more appropriate concern about growing debt is that, in Virginia, the median debt of graduates who borrow has increased from $15,253 to $25,000. That is a large increase. The percentage of borrowers has only increased from 54% to 61%, but the number of graduates has increased a bit more dramatically.
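Just to put those two changes side by side, here is a minimal back-of-the-envelope sketch using the rounded figures above (nothing official, just percent-change arithmetic):

```python
# Percent-change arithmetic for the Virginia figures cited above.
# Rounded numbers from the post, not official SCHEV output.

median_debt_then, median_debt_now = 15_253, 25_000
share_borrowing_then, share_borrowing_now = 0.54, 0.61

debt_growth = (median_debt_now - median_debt_then) / median_debt_then
borrower_growth = (share_borrowing_now - share_borrowing_then) / share_borrowing_then

print(f"Median debt of borrowers grew {debt_growth:.0%}")          # ~64%
print(f"Share of graduates who borrow grew {borrower_growth:.0%}")  # ~13%
```

A roughly 64% jump in median debt against a roughly 13% bump in the share of borrowers is why the former number worries me more than the latter.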

However, the very best analysis on student debt, crisis or not, that I have seen thus far is from Libby Nelson over at Vox, who does a very nice job laying out the issues.

Defining it as a crisis is the easiest way to force change and perhaps attract additional money. I’ve read so many bloggers, activists, reformers, and the like saying “college is not worth the cost” and I still don’t understand why they think so. When I try to unpack their position, it seems very clear that they think students and families should bear little or no cost. That’s a different issue. Sometimes they seem to be saying that employment outcomes of recent graduates bear out their argument – college graduates aren’t all getting good jobs. Is that the fault of college or the economy? So much of this is about expectations that it is hard to have legitimate conversations, particularly in the face of underemployed college graduates with significant student debt. That is a real issue, especially as we discuss the future of IBR, ICR, PAYE, and other “solutions” to the debt problem.

The fact that we have to have income-based repayment programs may be the biggest indicator of a problem.

On the flip-side, maybe it is a bigger issue that our solution to rising costs in higher education has relied predominantly on the use of part-time adjuncts while allowing other costs (athletics and other experiential opportunities) to rise for students.

What if the single, unalterable, and perhaps unutterable, truth is this: higher education is an ungodly expensive enterprise because it requires large numbers of expensively educated individuals engaged in high-touch practice? (Which is what was in place before the US started falling behind other countries on these measurements.)

What if the belief that higher education can be made cheap or essentially free is just plain wrong?

Will we look back in a few decades at all this policy churn and say,

“Gosh, that was a huge waste of effort, but I am sure some of the lessons were important.”


Describe a Rainbow in Seven Words

Describe a rainbow in seven words to someone blind since birth.

This is the fundamental problem with PIRS or the current craze for non-dashing dashboards (if the data change only once a year, it is only a dashboard for a glacier). Institutions are complex, with many things going on that simply don’t reduce to seven metrics, let alone the four or five in the White House College Scorecard. Spending yesterday and this morning at an NGA-hosted Higher Education Effectiveness and Efficiency Metrics Learning Lab reinforces my (probably curmudgeonly) belief that knowledge sometimes has to be earned through effort and study – not a 60-second review of a web page or PowerPoint slide.

I want to believe in the power of data to transform systems, to transform lives. I worry, though, that over-simplification of the presentation of performance data leads to under-recognition of the lives affected.

Speaking of over-simplification, I was part of an expert panel on Tuesday about PIRS and community colleges. Deputy Under Secretary Jamienne Studley was present. She was clear that PIRS is going forward based on existing data. Data that are completely inadequate to the task, in my considered opinion. However, she does seem open to some ideas that others in the Department and the White House find anathema. I won’t share those at this time, but I was kind of, umm, vocal in my suggestions. I know she heard me.

A timely example of over-simplification is this: “Starbucks Offers Free College to Employees.” Robert Kelchen provides a more in-depth understanding here. Matt Reed does a mea culpa from his original position and acknowledges the efforts of others who read the fine print and went beyond the metric of “Free College.”

While I am not sure that either of these books was covered in Reading Rainbow, if one compares two of my favorites, The Sun Also Rises and The Stand, one can easily see a difference in the prose styles. Hemingway is much tighter and sparer than King (I suspect) ever dreamt of being. Despite that, neither can be meaningfully reduced to seven words or any other metric. Any critical rating is meaningless to people who eschew one genre in favor of another. And books are static. They don’t change over time. Our interpretations may change, their placements in a rating system may change, but the books themselves don’t change (except when King added 100-plus previously cut pages to a revised edition).

Institutions change. Measurement can cause change in institutions. Bad measurement and bad incentive structures are likely to cause bad changes. Let’s really be clear about what we are doing and why, while recognizing that not everything can be as simple as we might like.


Five Stages of Education Policy

Anger. Denial. Bargaining. Depression. Acceptance.

Yeah, I know. It is overdone. It is almost trite. But golly, it works so well for so many things besides death and dying. Marriage is a good example, at least according to M. Scott Peck in the “Golf of Your Dreams.”

So, does it apply to education policy? Yep.

First, someone gets angry about something. “This metric doesn’t mean anything to me. Compare it to someone else. Yep, it’s low. Shameful. We have to fix this.”

“Nope. No way. You got it wrong. Your metric is in error. You are comparing us to the wrong thing.” 

“Tell you what. Let’s measure it this way. We deserve to get credit for these students. Sure, I understand it is against the intent of the measure, but this is really only a fraction of our students.” “You know, everyone else is doing it this way.”

Sigh. Heavy Sigh. Bitch. Gripe. Sigh. Bitch. Gripe.

“Fine. We’ll do it this way. It’s just that the measures aren’t really good enough and the financial rewards aren’t really large enough. This is really an ineffective way to fix education, but it is a good compromise that won’t really hurt anything.”

Of course, that last bit is what is most frustrating to me. Consensus policy development rarely seems to lead to something useful. Even when it appears that it might, the leadership of the institutions/organizations generally spends a lot of energy to neutralize it in back channels or through gaming the metrics.

With yesterday’s announcement by Senator Lamar Alexander (R-TN) that he will push an amendment to block PIRS, added to the effort of Representative Bob Goodlatte (R-VA), it seems possible PIRS will soon be a dead issue. If Congress can actually come together and pass something, you know, like a budget.

PIRS could be a useful tool. The President and the Department blew it, though, by promising a draft based on existing data. You can boil my very long presentation on PIRS down to a single sentence: “How dare you rate institutions when the data you have do not compare at all to what we have in Virginia, and we don’t rate institutions.”

If the Department had presented a well-crafted model representing a theory of what institutional performance should be, along with a plan on how to develop the necessary data, I suspect the outcome would have been much different. Sure, I and others would have criticized both aspects, but we also would have been more likely to offer more positive criticisms to improve it. Endeavors such as this need, and deserve, a model to inform development – not a bunch of data forced to fit into a model that seems to look right. And the latter is what the Department is doing.

It is kind of like the difference between “All the news that is fit to print” and “All the news that fits.”

There might be time for the President and Department to save the ratings system – if they are willing to say “We took the wrong approach. Let’s start over.”

No blenders this time.


Who let the doggerel out?

Clearly, it is late and I owe apologies to Nick Lowe, Elvis Costello, and any who read further. (Of course, if you read beyond the title, you probably deserve what you get.)

As I walk through IPEDS data
Searching for light in the darkness of insanity
I asked myself as all looked lost
Is there only GRS and CDR?

And each time I feel like this inside
There’s one thing I want to know:
What’s so funny ’bout peace, love, and unit record? oh, oh
What’s so funny ’bout peace, love, and unit record?

And as I think on ratings systems
My spirit gets so down-hearted sometimes
So where are the leaders with trusted data?
And where is the harmony in trusted scores?

‘Cuz each time I feel it slipping away
Just makes me want to cry
What’s so funny ’bout peace, love, and unit record? oh, oh
What’s so funny ’bout peace, love, and unit record?

I feel like I should say more (though no further apologies) to reward you for getting this far. IPEDS-UR is not the solution to everything that ails higher education. By itself, it won’t fix anything. What it can do is improve the institutional-level data that are available for the universe of IPEDS institutions. Just that bit, by itself, is really not much of an improvement. After all, if we are going to use the same broken and inadequate models of decision-making based on peer comparisons and policy goals set by consultants (who seem to do an awful lot of work for the big foundations providing money for higher ed), then I am not sure things are really going to improve.

There is a lot of potential value in being able to calculate the same measures for all institutions – simply because not all institutions have invested in institutional research. Some are unable to do so because they are simply too small. Doing IPEDS is what counts as institutional research in these places.

IPEDS is NOT institutional research.

So, IPEDS-UR can provide an institutional research service and help institutions clean their data through the submission process. Indeed, this has been very much the case with SCHEV’s student-level collections. For years, the smaller institutions relied on our extensive data edits to clean up their data and used our reports, both those publicly available and those used in the submission process, to meet their campus reporting needs. It is not a bad model, and for that reason alone, IPEDS-UR may be worth it.
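For readers who have never lived through a submission cycle, a “data edit” is nothing fancier than an automated check that flags impossible or inconsistent records before a file is accepted. A minimal sketch of the idea, with invented field names and rules (not SCHEV’s actual edits):

```python
# Illustrative data edits of the kind a state-level submission process might run.
# Field names and thresholds are invented for the example, not SCHEV's real checks.

VALID_LEVELS = {"UG", "GR", "FP"}  # undergraduate, graduate, first-professional

def edit_record(rec):
    """Return a list of edit failures for one student-level record."""
    errors = []
    if rec.get("student_level") not in VALID_LEVELS:
        errors.append("unknown student_level code")
    if not 0 <= rec.get("credit_hours", -1) <= 30:
        errors.append("credit_hours out of range for a single term")
    if rec.get("term_year", 0) - rec.get("birth_year", 0) < 14:
        errors.append("implausibly young student")
    return errors

# Institutions get the failure list back, correct their source systems, and
# resubmit; that feedback loop is the institutional research service described above.
```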

‘Cuz each time I feel it slipping away
Just makes me want to cry
What’s so funny ’bout peace, love, and unit record? oh, oh
What’s so funny ’bout peace, love, and unit record?

The hammer of choice

When all you have is a hammer, everything looks like a nail.

In reading today’s piece “Disunited Front” in InsideHigherEd, I can’t help but be amused.

David Warren, the president of the National Association of Independent Colleges and Universities, said that his group opposes such a shift in the distribution of federal aid because it would harm student choice. Private colleges firmly believe that federal student aid should be, in effect, a voucher for students to use at the college that is best for them, he said.

Over at the Association of Private Sector Colleges and Universities, in a statement opposing Gainful Employment regulations:

“At a time when America is facing a growing skills gap, and many Americans are facing an opportunity gap, the department should be working with all sectors of higher education to promote access, simplification, and accountability. Instead, the department is continuing down the path of eliminating opportunity and choice for many new traditional students who are simply not interested in attending a four-year university. We will not idly standby and allow the department to limit access to critical postsecondary education programs that address the skills gaps and capacity gaps that exist in this country.”

Absent context, I’m not sure most people would be able to distinguish the origins of the two statements, even graduates of the finest institutions in the country. When the ratings proposal hit the streets, I predicted that NAICU and APSCU would eventually have to embrace each other and enter into an uneasy alliance, especially after Virginia Foxx’s comments at the NAICU conference in DC a year ago. As I have written multiple times, there is little substantive difference between PIRS and GE. To do either or both (as USED would seem to prefer) effectively requires an IPEDS-UR (unit record) collection. NAICU opposes this on privacy grounds, although some question whether it is student privacy or institution privacy they are trying to protect. (While I have been writing this post, Robert Kelchen has written and posted his response here to Bernie Fryshman’s doom and gloom piece on unit records.)

As the two private sectors begin to sound more like each other, it feeds into the questions that some ask about why public money is funneled to these institutions in the first place – especially in the absence of good measures of institutional effectiveness.

APSCU’s fight against GE includes pushing the notion that if GE is the right thing to do, then it should apply to all institutions and programs. NAICU opposes this position, of course. Thus the two associations can partner on insisting on freedom of choice in the use of public money (vouchers and loans) for higher education, while maintaining clear opposition on other issues. All of which seems to do little more than confuse the perception of private higher education.

To me, it seems the choice argument is the only tool either NAICU or APSCU has that is widely accepted, so they attack all problems with it.

Eventually, IPEDS-UR will become a reality and we will have access to very discrete measures of student access, success, debt, and outcomes. At that point, some institutions will shut down, through market behavior or lack of access to public money. That’s not a bad thing. There is no institutional right to existence. For any institution, public or private.


College affordability requires sobriety

It is June 4th in the Commonwealth, and we still have a stalemate on the budget.

The Richmond Times-Dispatch is reporting today that a resolution on the budget will likely come in the 11th hour. Without a budget, the state government will probably shut down on July 1st. In the meantime, the boards of visitors of the public colleges and universities in the Commonwealth have been meeting to set tuition and fee levels for next year – without sure knowledge of the budget.

Further, it was reported last week that tax revenues are projected to be short by $350M for this fiscal year. Yes, the one that ends in 26 days. New revenue forecasts will likely reduce budget projections by $500M for each of the next two years.

All at a time when the two legislative chambers are at a stalemate over Medicaid expansion.

I don’t think it is unreasonable to disagree over political issues, like Medicaid expansion or the Affordable Care Act. I am concerned that few people are thinking about the possible long-term impacts of disruptions like this.

Higher education costs money to operate. When state funds disappear, they are replaced with non-general funds – a technical, painless way to say “tuition and fees.” The one thing that seems clear from the last two decades or so of Virginia higher education policy is the need for a stable, predictable level of funding for public institutions and student financial aid (including TAG for students attending Virginia’s private institutions). I will not suggest that our institutions are as efficient as they might be, or that structurally they are the right model. Any of them can stand improvement for operating in the 21st century. On the other hand, undergraduate graduation rates for the public fours are second only to Delaware (which has only a fraction of the institutions and students). You can read more here.

I am sure we can do better than lurch like a drunken sailor from crisis to crisis, from initiative to initiative.

Of course, this goes well beyond the responsibility of the Commonwealth. The federal government has a role to play, as do families and students. Higher education, in fact, all of education, is a shared responsibility for sober and serious people.


Why Ratings Seem to be Necessary to Outsiders

Right here.

Rebecca Schuman reacts to the MLA Report of the Task Force on Doctoral Study in Modern Language and Literature. I am not going to try summarizing it or even highlighting it, as such contempt and wrath need to be read in their original language.

If a higher education insider can react this strongly to what is clearly a well-intentioned and highly focused effort to improve an alleged profession, then what are the normies outside academe to think?

From the InsideHigherEd story: “We are faced with an unsustainable reality: a median time to degree of around nine years for language and literature doctoral recipients and a long-term academic job market that provides tenure-track employment for only around [60] percent of doctorate recipients.”

How ’bout that? This is from the same large group of faculty-types that objects when we in government start talking about credits-to-degree, time-to-degree, placement rates, and job market outcomes for undergraduates. Schuman says this:

So as I talk about this report, please keep in mind that my issue isn’t with the MLA’s leadership—it’s with the MLA’s membership, which consists almost entirely of people who can both afford to pay the dues, and haven’t been so traumatized by the convention that they drop out for their psychological health (I am in the second group).

So she is holding the large group of faculty responsible – many of whom, at least in Virginia, have bemoaned my work with wage and debt outcomes. I guess when it comes to maintaining a continuing stream of sacrificial lambs to fund one’s salary, it is a different story.

In all fairness, this is a healthy debate for an academic community to have, especially given the apparent over-production of PhDs compared to the full-time, tenure-track jobs available – which may be the result of people like me (and above) pushing for cost-constraints. It is certainly a result of decreased funding (which I have not advocated). The problem is that the path the MLA suggests doesn’t seem to make a lot of sense. It seems contradictory and under-informed. To me it looks like they are suggesting producing more of what cannot currently be consumed, without a complete re-funding or restructuring of higher education. (But probably not the restructuring suggested here.)

Reports and debates like this suggest to people like President Obama that higher education has no clue. The proposal of a ratings system is a way to enforce a simple message: “Get a clue!” Unfortunately, as currently proposed, and because USED’s focus has historically been on undergraduate access, measurement of graduate and professional programs has not been talked about – save by me. I was the lone voice at the Technical Symposium making that argument. I don’t know that a ratings system will help academe get a clue in a changing world, but I don’t know that it won’t. I do know, as I have said before, that the current data available to the Department are inadequate.

I really enjoyed reading Schuman’s post. As I read it, I wondered, “Is this really much different than current accreditation practices in terms of the resultant nonsensical solution?”


A Response to Schuman and Warner

I love all the coverage that the proposed Postsecondary Institutional Ratings System (PIRS, #PIRS) is getting these days. Rebecca Schuman over at Slate has written a nuanced support of the plan here. John Warner, over at Inside Higher Ed, has written an opposing viewpoint to Schuman’s. Warner wrote the previous day about Jamienne Studley’s unfortunate comparison of colleges and blenders.

Both are well worth the read.

I have neither the following nor the writing skills of either Schuman or Warner, but that has never stopped me from expressing my opinion. Nor will it now.

Both authors are right and wrong.

First off, while I am glad everyone is having such fun with the blender comment, where were you months ago when the comment was made and reported in Politico and elsewhere? Those of us in the higher ed data world have been shuddering for months about her use of Cook’s Illustrated as a model for PIRS. The resurgence of the comment and the announcement of the first delay in the ratings have been amusing to watch.

Warner thinks the ratings will empower the already powerful on campus by giving presidents even greater leverage for their policies. Absolutely. With the data currently available to USED, any thought of nuanced, targeted approaches to improving student outcomes will go right out the window. There will be more sledgehammer approaches to institutional policies, especially as institutions try to ensure they are in the same rating as their peers.

Warner also suggests that new deanlets will be created to collect and manage all the new data required. Maybe eventually, but that depends on what happens with reauthorization of the Higher Education Act (HEA). If the unit record ban is lifted, and something like the Student Right to Know Before You Go Act is passed, most institutions could experience a reduction in burden. In the near term, USED still has to get OMB clearance to expand collections, which is subject to burden review. Unfortunately, reporting burden is going to increase anyway, with or without the ratings system, because, well, just because. There is always more data to collect, and lots of organizations asking USED to collect more, and at some point, with enough increases, the institutions will demand to report student-level data because it will be easier and less burdensome. (Something like 45 states have unit record collections, with about 90 different collectors. SC public institutions report student-level data to the state. Sending a similar file to USED would cost less than the current IPEDS submissions.)

Schuman suggests that it “is time to come at higher education with a sledgehammer.” I tend to agree, but it depends on who is swinging the sledgehammer. The problem with the way higher education and USED have traditionally approached ratings, rankings, benchmarking, and the like is the reliance on direct institutional comparisons and peer comparisons. This is the kind of madness that has led us to today. Driving the bus by looking at the other buses is just silly. As I argued in my presentation at the PIRS Technical Symposium, the proper comparisons are intra-institutional. Rather than worry about how institution A compares to institution B on graduation rates, let’s focus instead on the difference in graduation rates between Pell recipients and non-recipients, and encourage policies to bring those numbers in line with each other, thus increasing graduation rates across the board.
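A minimal sketch of what I mean by an intra-institutional comparison, assuming an institution has its own cohort records with a Pell flag and a completion flag (the field names here are hypothetical placeholders, not anything in IPEDS):

```python
# Intra-institutional gap metric: graduation rate of Pell recipients vs.
# non-recipients within a single institution. Field names are hypothetical.

def grad_rate(students):
    """Share of a group that graduated."""
    if not students:
        return 0.0
    return sum(1 for s in students if s["graduated"]) / len(students)

def pell_gap(cohort):
    """Gap between non-Pell and Pell graduation rates for one cohort."""
    pell = [s for s in cohort if s["pell"]]
    non_pell = [s for s in cohort if not s["pell"]]
    return grad_rate(non_pell) - grad_rate(pell)

# The number worth watching is whether this gap shrinks over time,
# not how the institution ranks against its self-defined peers.
```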

As for Schuman’s suggestion about ratings including values such as the percentage of courses taught by full-time, tenure-track instructors: fine. Just be warned that, using the existing data for community colleges, a number of research projects have found no direct correlation between community college graduation rates and either the numbers or ratios of full-time TT faculty. So, depending on the biases of the folks in the Department, such a rating component might do more to support the status quo. Further, state law-makers may well push back against ratings that use such components because of the inherent cost-drivers to funding higher education.

Which is also part of the reason we are here. Not everyone wants to pay what it costs to support higher education.

I am glad that people are talking a lot more about PIRS. I think a good ratings system can be built, just not with the existing data nor the traditional mindset towards evaluation of higher education. We in Virginia know far more about outcomes of students in Title IV aid programs than USED does – and that is only an off-shoot of our other work. If PIRS is done badly, it will empower presidents to have more of their way on campus and perhaps further damage the concept of shared governance.

The most important thing to keep in mind is that all of this is taking place under the umbrella of reauthorization of the HEA, with the added context of Gainful Employment. Whatever happens will be with us for years and, if historical trends hold true, the federal control on campus will be more intrusive. This may not be bad, but it will not be easy.


The Gainful Employment Ratings System?

On my solo road-trip to Orlando today for AIR Forum 2014, it occurred to me that people are still not thinking about the ratings system (PIRS) and gainful employment (GE) the way I am.

I am not convinced that PIRS is a real thing. Certainly the Department has spent a lot of time and effort to create something, including the illusion that PIRS is real.

Why? Because both GE and PIRS attempt to do the same thing – eliminate bad actors and low-performing programs/institutions from Title IV eligibility. Since we don’t have details on PIRS, the only difference we can really point to is that GE is about disqualifying programs and PIRS is about disqualifying institutions.

So, if you are a member of Congress or congressional staff working on the reauthorization of the Higher Education Act, are you going to pick one or both? Given the nature of the lobbying and letters of support and opposition you might receive, would you consider combining the two? Especially in light of the argument that GE should apply to all programs.

If you were a lobbyist and became convinced that something would be done, would you grudgingly support program-level effects v. institution-level? Especially if student preparation is factored in to the mix with other measures than earnings and debt repayment?

Do policy-makers and law-makers want two different consumer ratings systems that accomplish essentially the same thing? To me, two ratings seem to add more confusion than help for students and families.

The cynic in me wonders if PIRS is simply a way to justify GE for all programs as a reasonable compromise.


The damage of relying on federal data

The IPEDS graduation rates are based on the following formula:

Number of Graduates from CohortA within x years / (Number in CohortA – Approved exclusions [Peace Corps, AmeriCorps, military service, religious mission, death, etc.])

Where CohortA is all first-time students in the fall of a given year (including those whose enrollment began in the summer) who initially enrolled as full-time.

The calculation is performed where “x” is four, five, and six years post-entry for students pursuing four-year degrees, and two and three years for students at two-year colleges.

There is no magic here, simply a set of assumptions about normal expectations: full-time students are likely to remain full-time, their initial enrollment indicates their academic plans, and they should thus complete in four years, with additional years “allowed” for incidental delays along the way.
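Stated as code, the calculation is about as simple as it sounds. A minimal sketch of the rate as described above (the counts in the example are made up for illustration, not real IPEDS data):

```python
# IPEDS-style graduation rate as described above: completers within x years,
# divided by the initial full-time, first-time cohort minus approved exclusions
# (Peace Corps, AmeriCorps, military service, religious mission, death, etc.).

def grs_rate(graduates_within_x_years, cohort_size, approved_exclusions):
    adjusted_cohort = cohort_size - approved_exclusions
    if adjusted_cohort <= 0:
        return 0.0
    return graduates_within_x_years / adjusted_cohort

# Example: a 1,000-student fall cohort, 15 approved exclusions,
# 620 graduates within six years.
print(f"{grs_rate(620, 1000, 15):.1%}")  # ~62.9%
```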

Institutions, especially community colleges, complain frequently that this measure is imperfect.

It seems that most fail to realize that just because the law requires Title IV-participating institutions to report these figures to USED each year, THEY DON’T HAVE TO STOP THERE.

If an institution wants to report graduation rates for part-time students, or transfer students, or star-bellied Sneetches and Sneetches without stars, they are completely free to do so.

And they should.

Most don’t, I guess because they can’t compare to someone else, such as their self-defined peer groups. Isn’t this insane? Does it really matter how one institution performs against another when an increase in the metric over time is clearly the desired change? Institutions should be focused on improving student success and understanding who doesn’t graduate on time, and why. From there they can work to better support students and address structures to improve their success.

It simply isn’t that difficult. Comparisons and benchmarking have stopped institutional development, or at least slowed it, for institutions that rely on federal data. The only way to change that is to change the data available – with something like the Student Right to Know Before You Go Act. States can also get involved, like we do in Virginia.