IPEDS is not GRS

Say it with me, “IPEDS is not GRS. GRS is a part, a small part, of IPEDS.”

Matt Reed (@DeanDad) set me off a bit this morning with his Confessions piece over at InsideHigherEd, and I kind of piled on with my long-time colleague Vic Borden that IPEDS and GRS are not simply one and the same with a focus (or fetish, if you prefer) on first-time, full-time undergraduates. It really ticks me off when I read something like this, since it takes my mind off the very good points he was trying to make. I could have written thousands of words of comments about how what we are doing in Virginia is so different, and so much better.

Every time someone says he or she wouldn’t be counted in IPEDS because of transferring, or taking eight years (like yours truly), I cringe. It is just not true. It is false. It is wrong.

Yes, that person would not be in the GRS metric. However, she certainly would show up in the Completions survey if she finished a degree or certificate, whether it took one year or 20. Likewise, she would show up in the Fall Enrollment survey any time she was enrolled in a fall term.

As important as a graduation rate is, not much is more important than the degree conferrals, the completions, themselves. That is something that folks should keep in mind.

Now I could brag about some of the things we are doing at research.schev.edu, but instead I will simply highlight this tweet from the ACSFA PIRS hearing:

I think Matt has the right ideas, and I would support them in a Technical Review Panel, although I would probably offer supportive amendments. The problem is getting to a TRP. The type of collection required to support these measures, if not Student Unit Record (or IPEDS-UR, although being a Thomas Covenant fan, I want to lean towards ur-IPEDS), would be so burdensome that the collection would never happen without Congressional action. And that’s the rub. USED only controls the details. Congress makes the ultimate determination, and that is where AACC and ACCT (and probably a bunch of groups representing four-year colleges) need to get involved.

The easiest thing at this point is to pile on to support the Student Right-to-Know Before Go Act.

Accreditation is not what it used to be

Accreditation is not what it used to be, at least in terms of being recognized as a standard of academic quality. Last week, Belle S. Wheelan and Mark A. Elgart published an essay at InsideHigherEd arguing that we should “Say No to ‘Checklist’ Accountability” as an argument against the White House Scorecard, PIRS, and other initiatives to differentiate institutions based on performance. In a nutshell, they argue that the peer-review process accomplishes so much more than any single measurement could, and that we should trust accreditation.

In many ways I agree with their arguments. However, at this point, I am a bit tired of explaining to state legislators that we should trust SACS and other accreditors, regional and national, when the only time an institution is shut down, it is over financial issues, not academic issues. If academic quality issues are involved, we never hear about them.

As they put it:

“Attacked at times by policy makers as an irrelevant anachronism and by institutions as a series of bureaucratic hoops through which they must jump, the regional accreditors’ approach to quality control has rather become increasingly more cost-effective, transparent, and data- and outcomes-oriented.”

If this is true, where exactly is the transparency? I look at the SACS Commission on Colleges website and I see no data tools about outcomes. Under “Accreditation Actions & Disclosure Statements,” I see only two documents. Apparently nothing happened prior to 2013. Where can I download or review the site visit reports?

I can’t.

All I can do is trust that accreditation works, even though I have no evidence that a college has ever lost standing for academic quality reasons. Colleges rarely, if ever, share their site visit reports. In Virginia, we frequently have to beat back legislation that would require those reports be published or at least shared with the General Assembly. I’ve seen enough site visits and assisted in enough decennial self-studies to recognize that there is a great deal of value in the process, but to suggest it is transparent or outcomes-oriented goes too far, in my opinion.

I don’t think ratings or scorecards solve the problem of differentiating between institutions based on quality, but clearly accreditation currently does not either, or else the proposals would not be out there. These things occur when a need is not being met. Legislators and policymakers want a better answer than “trust us.” So do a lot of other people. The accreditors ultimately determine whether an institution is eligible to participate in Title IV aid programs. If the accreditors do not become much more transparent and provide explicit data and information on institutional quality, these proposals will become more than proposals, and they will make accreditors irrelevant.

After all, if a single score, like a Cohort Default Rate, can end eligibility, accreditors are only relevant for initial participation. At that point maybe we don’t need them as a gatekeeper.