Check Your Myths at the Door

“Golf is a game of inches.”

“Drive for show, putt for dough.”

Drivel. Complete and utter bullshit. The wisdom of the ages is wrong and it is statistically wrong. And the truly bloody thing is that it was so easy to check with a simple thought exercise.

Would you rather compete one-on-one against a professional golfer for 10 drives for total distance in the fairway or for 10 putts from 10 feet away?

In August 2010, Michael Agger wrote in Slate about why most golf statistics whiff, and about Mark Broadie’s research into golf statistics that demonstrates the relative contributions of each part of the game: driving, approach (to the green), short game (pitching and chipping), and putting. Further, the book Lowest Score Wins by Barzeski and Wedzik takes these concepts and turns them into practical advice and guidance about the game.

The simple facts are these:

  1. The farther you hit the ball on your first shot on a par four (a hole a “scratch,” or near-perfect, golfer plays in an average of four strokes), the shorter your second shot is.
  2. The shorter your second shot needs to be, the shorter the club you can use. Players tend to be increasingly accurate with shorter clubs.
  3. The closer to the hole your second shot lands, the easier (shorter) your third shot, often a putt, is.
  4. And so on.

In other words, hit it as far as possible (and keep it safe) so that your next shot is as short as possible. Repeat.

Why did people believe the myth for so long, that putting mattered more than driving?

Here’s a new truth: The last thing that happened is what you remember best.

Yep, we tend to forget how we started out. We most remember the end. And why does this matter?

Because it is probably not the only arena in which we act this way. I think this is especially true of higher ed. Too often we focus on things like high school GPA, SAT/ACT scores, status at entry, and other characteristics reflecting 17 or 18 years or more of personal history. With the creation of state longitudinal data systems over the last (almost) decade, we are getting closer to at least understanding the impact of other aspects of those 18 years of experience. As we get further along, I suspect we are going to come to a very clear conclusion: wealth and poverty are pretty much all that matters. Without addressing the negative impacts of poverty, nothing else will matter. This is pretty much the conclusion I have come to after looking at so much data on student outcomes.

This is why the work of scholars like Sara Goldrick-Rab and Tressie McMillan-Cottom is so important.

The myth of merit is a great myth. It gives us comfort and allows us to feel special about our own accomplishments. Somehow we earned our way. This despite the fact that we pretty much always end up close to where we start out. Exceptions allow us to reinforce this belief. “See? She did it, so all the others can.”

This is not to say that merit and hard work don’t have a role to play. They do. Just like putting. Once you are on the green, they help you get all the way to the hole. And the prize. It’s getting to the green that counts.

Any fool can putt. It is the simplest stroke in golf. Driving a ball 250 or 300 yards down the middle of the fairway is much, much harder, especially multiple times.

Any fool can putt. I will happily compete head-to-head in a putting contest from 25 feet with any professional golfer. After all, they are only expected to make that putt one time out of 10. I can do that. So can you, almost on luck alone.

In other words, how close you are born to the green makes a difference.

Check what you think you know. Is it real or a myth?

 

Blood for College – A Cautionary Tale

It wasn’t a good way to start the day.

Every Thursday, Derek started with the feeling of being a quart low. Sunday nights through Wednesday nights were spent working the overnight shift at the bus station. He knew from family members, not just older, but old, family members that such a shift used to be called the “graveyard shift,” but since only the old and poor traveled by bus, that was just too eerie. Too many riders looked like the walking dead.

Unfortunately, the daytime was not much better. Derek was a “Gerontological Blood-flow Assistant,” meaning that he spent his days in the gerontology center massaging the extremities of people well over a hundred years old. Medical science (and law) could keep them from dying, but it couldn’t give them any kind of normal life. Unless they had wealth. So Derek, and millions of young people around the world lacking anything beyond a high school education, spent hours each day twiddling toes and fingers, massaging legs and arms while maintaining a constant stream of chatter. The wages were not quite the lowest of the low, but it was the cleanest of the low-wage jobs.

Monday through Thursday the routine was harsh. Spend the night working baggage and customer service at the bus station. Back to the four-room house he shared with four other gerontology assistants and three college students. One of these was Sherrie.

Sherrie was whom Derek and the others wanted to be. She had busted her ass for years in the gerontology center and elsewhere (worse places, they all suspected) to buy her way to college. Sherrie had made it in, at the age of 27, without indenturing herself, bankrupting her parents, or becoming part of a menagerie. As long as she kept her expenses low and studied continually, she would graduate in just two more years and receive her art degree. After two months of competition and testing, she would earn her license as a painter of landscapes and portraits. She would have options then. Unlike Derek, who could not be an artist until he went back to school. The Foundation had been so successful in its credential efforts, begun decades ago, that now all the creative class and the useful class (engineers, software designers) had to be credentialed and licensed or face stiff penalties.

The New Indenture began in the early part of the 21st century, when the policy elites became convinced the higher education bubble was about to burst, such that young people and most families would never be able to afford college, particularly as student debt rose and rose. At the same time, nongovernmental entities pushing a college completion agenda convinced the same policy elites that the nation’s economy (and thus the world’s) could only be saved by greater and greater numbers of citizens with college degrees. Clearly a crisis was coming and a response must be made!

As such things often go, all good intentions became little more than paving stones with a strong odor of sulfur and brimstone. Ideas that seemed reasonable and harmless to many were adopted against the warnings of the few who saw the risks (based on lessons of the past). Instead of borrowing for college or paying outright, students committed a share of their future earnings to the government or to human-venture capitalists. For a while, this approach seemed to work well. But as had always happened in the past, colleges and universities lost any sense of constraint in spending, and the income share required to repay a student’s college costs grew from six percent for tuition plus four percent for living expenses to 25% and 10%, leaving less and less to live on. Graduates became increasingly creative in finding ways to hide income or to duck out of the original agreements. This led to penalties for noncompliance.

Penalties based on those damn mice.

See, sometime in 2014, researchers had discovered they could extend a mouse’s life with new blood. Fresh blood. It was seemingly right out of Robert A. Heinlein’s novella, “Methuselah’s Children.” Periodically, one had only to replace all the blood in one’s body with fresh blood, and life could be extended another 100 years. With the development of synthetically produced blood, the promise of longer lives would be available to everyone.

Except the promise was never realized. Seventy-three years later, we were no closer to synthetic blood. But the lobbyists of the wealthy (also known as the “elected class”) were successful in passing laws allowing not only blood donations through private entities for life extension, but contracts with groups of donors. A merely wealthy person might have a menagerie of two or three young people in college, or waiting to get into a college. A super-wealthy person might be supporting two dozen donors for each member of their family. Typically, support was college scholarships, dietary supplements, medical care, and a small stipend while in college. In exchange, each donor would commit to 20 years of bimonthly donations and agree to keep up a healthy lifestyle. Anyhow, once these agreements were legalized, they also became the model for penalties for noncompliance with the previously mentioned income-share agreements. Only no stipends and precious little gentleness during collection.

Defaulting resulted in pretty horrific penalties, mainly forced organ donation.

 

Integrity

The percent of graduates who complete a degree in four years from start to finish is not the same thing as a four-year graduation rate.

The former is a backward-looking measure. It looks at all the graduates in a given year and then checks back to see what percentage started at the institution within the previous four years.

The latter measure, a four-year graduation rate, is a forward-looking measure in that it starts with a group or cohort of similar students (in this case, first-time in college, full-time at first enrollment) and checks to see what percentage of these students completed their degree within four years.
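The gap between the two measures is easy to demonstrate with a toy cohort. A minimal sketch in Python (the data are invented for illustration, not any institution’s actual figures):

```python
# Toy illustration of the two measures. Each student record is
# (year_started, year_graduated_or_None); all figures are made up.
students = [
    (2008, 2012), (2008, 2012),  # two graduate on time (four years)
    (2008, 2013),                # one graduates in five years
    (2008, None), (2008, None),  # two never graduate
]

# Forward-looking: four-year graduation rate for the 2008 entering cohort.
cohort = [s for s in students if s[0] == 2008]
four_year_rate = sum(1 for start, grad in cohort
                     if grad is not None and grad - start <= 4) / len(cohort)

# Backward-looking: of those who graduated, share who finished in four years.
grads = [s for s in students if s[1] is not None]
share_of_grads_on_time = sum(1 for start, grad in grads
                             if grad - start <= 4) / len(grads)

print(four_year_rate)                     # 0.4 -- 2 of 5 entering students
print(round(share_of_grads_on_time, 2))   # 0.67 -- 2 of 3 graduates
```

The same five students yield a 40% graduation rate but a 67% “of our graduates” figure; the backward-looking measure simply never sees the students who left.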

Why does this matter?

Four years ago, a Small Liberal Arts College (a private, nonprofit SLAC) announced a new commitment to (certain) students: The college announced this week it will waive tuition costs for any additional courses needed to complete a degree if a student isn’t able to graduate in four years, provided certain requirements are met. The article goes on to report, based on the announcement on the college’s website, that 95% of graduates complete their degree within four years and that this is a much higher rate than the four-year completion rate for all graduates at private colleges (almost 80%) and for public institutions (below 50%). The source of the data is the National Association of Independent Colleges and Universities (NAICU).

In fact, one of the financial aid pages on the College’s website goes so far as to say this:

Many consider state colleges as having a low sticker price. In reality, once we factor in aid, the final cost isn’t significantly different than a state college. It is also important to note that 90% of our students graduate in four years saving the cost of a 5th year.

The College and NAICU should know better. The comparisons they are making are not appropriate. They are, in fact, deceitful. Note: We should probably be fair to NAICU, as we do not know that they were complicit in this deceit.

So, what is the real story here?

First, we assume that nearly all students graduating from the College probably do graduate within four years. It is simply too expensive not to do so.

The National Center for Education Statistics (NCES) maintains an excellent website, College Navigator, that is quite useful for finding and comparing colleges. Its data is based on annual surveys, the Integrated Postsecondary Education Data System (IPEDS), required of all institutions participating in Title IV (federal student aid) programs. It reveals that only 49% of all the students at the College starting in the Fall of 2008 graduated in four years or less. Only 62% graduated within six years.

This is a far cry from 95%.

If the statements on the College’s website are representative of the critical thinking and analysis skills taught at the college, we have a serious problem. I rather hope these are cynical, deceitful statements by an institution trying to stay afloat. That can be dealt with more easily than an inability to engage in critical thinking, because in that case I just don’t think they understand the problem.

The College seems to be intent on putting the slack into SLAC.

Just as a reminder, here is the Principle of Integrity from Southern Association of Colleges and Schools Commission on Colleges:

Integrity, essential to the purpose of higher education, functions as the basic contract defining the relationship between the Commission and each of its member and candidate institutions. It is a relationship in which all parties agree to deal honestly and openly with their constituencies and with one another. Without this commitment, no relationship can exist or be sustained between the Commission and its accredited and candidate institutions.

Integrity in the accreditation process is best understood in the context of peer review, professional judgment by peers of commonly accepted sound academic practice, and the conscientious application of the Principles of Accreditation as mutually agreed upon standards for accreditation. The Commission’s requirements, policies, processes, procedures, and decisions are predicated on integrity.

The Commission on Colleges expects integrity to govern the operation of institutions and for institutions to make reasonable and responsible decisions consistent with the spirit of integrity in all matters. Therefore, evidence of withholding information, providing inaccurate information to the public, failing to provide timely and accurate information to the Commission, or failing to conduct a candid self-assessment of compliance with the Principles of Accreditation and to submit this assessment to the Commission, and other similar practices will be seen as the lack of a full commitment to integrity. The Commission’s policy statement “Integrity and Accuracy in Institutional Representation” gives examples of the application of the principle of integrity in accreditation activities. The policy is not all-encompassing nor does it address all possible situations. (See Commission policy “Integrity and Accuracy in Institutional Representation.”) Failure of an institution to adhere to the integrity principle may result in a loss of accreditation or candidacy.

1.1 The institution operates with integrity in all matters. (Integrity)

(Note: This principle is not addressed by the institution in its Compliance Certification.)

Just Don’t Do It

Last night, my wife stabbed me in the chest with a pair of scissors. I spent the rest of the night in the TMC (Troop Medical Clinic). Don’t piss me off today. Just don’t.

This was Drill Sergeant Stringer, in June of 1982 at Ft. Benning, GA. It was a Sunday morning near the very end of Basic Training and Advanced Infantry Training. Somewhere along the way, one of the 30-plus young men of the second platoon did something to piss off Sergeant Stringer. In short order, he had us in formation in front of the office window, where he sat and directed us in grass drills (a form of physical training designed to wear you out).

Drill sergeants are supposed to be tough. Part of their job is to instill discipline and weed out the weakest. And those least able to show discipline. I spent a brief time as an acting drill sergeant while in the Army Reserves. It was an honor and a challenge. And not a career for me.

But, “Don’t piss me off,” still holds. Call me a hard-ass all you want (and apparently some have recently), but do it to my face. Quit making excuses about how the actions of others affect you and deal with your mission. I am as much of a hard-ass as I need to be to fulfill my mission.

And I am not even talking about what I consider ethical lapses in providing bad data or making misleading statements on your website. That’s actually a whole other story.

It’s quite simple though. If your institution has a 62% graduation rate, 90% of students do not graduate in four years. Saying they do is wrong. The correct phrase is, “of our students who graduate, 90% do so in four years.” If your liberal arts college cannot be honest about that, then perhaps there are other problems that are deeper, more disturbing.
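The arithmetic behind that correction is simple: the four-year graduation rate is the share of entering students who graduate at all, times the share of graduates who finish in four years. A back-of-the-envelope sketch using the IPEDS figures cited above (the combination of the two numbers is my own check, not the College’s):

```python
# What a "90% of graduates finish in four years" claim implies
# when only 62% of entering students ever graduate.
six_year_grad_rate = 0.62      # share of entering students who graduate at all
share_of_grads_in_four = 0.90  # the claim, which is about *graduates* only

implied_four_year_rate = six_year_grad_rate * share_of_grads_in_four
print(round(implied_four_year_rate, 2))  # 0.56 -- nowhere near 90%
```

Even granting the College its 90% figure, the implied four-year graduation rate is about 56%, in the neighborhood of the 49% IPEDS actually reports and nothing like the number on the website.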

A colleague asked me about this on Friday. Is it likely that one of the best early indicators of a failing college, one on the brink of closure, is the intentional misuse of data? Intentionally misleading prospective students about the quality and cost of an institution is something I find to be offensive, at the very least. So I wonder if we can develop an index of such things on the top pages of an institution’s web site and tie them to what we know about admissions, retention, and discount rates. I’ve written before that any private institution of fewer than 2,000 students without a large graduate program (master’s programs are often profit centers) and a first-year retention rate of less than 60% is one that should be on deathwatch unless it has a lot of unrestricted endowment.
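That deathwatch rule of thumb can be written down as a simple predicate. A sketch only: the function name and field names are my own, and since the text says just “a lot of unrestricted endowment,” the endowment threshold here is an arbitrary placeholder.

```python
# A sketch of the "deathwatch" rule of thumb described above.
# The endowment_floor value is a placeholder, not a figure from the text.
def on_deathwatch(enrollment, first_year_retention, has_large_grad_program,
                  unrestricted_endowment, endowment_floor=50_000_000):
    """Flag a private institution as at-risk: small enrollment, weak
    first-year retention, no master's-program profit center, and no
    large cushion of unrestricted endowment."""
    return (enrollment < 2000
            and not has_large_grad_program
            and first_year_retention < 0.60
            and unrestricted_endowment < endowment_floor)

print(on_deathwatch(1500, 0.55, False, 10_000_000))   # True: all flags trip
print(on_deathwatch(1500, 0.75, False, 10_000_000))   # False: retention is fine
```

An index like the one proposed would presumably combine flags of this sort with the website-disclosure checks mentioned below.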

Likewise, all institutions participating in Title IV financial aid programs, and all Virginia institutions requesting or receiving general fund support (including the tuition assistance grant), must make certain disclosures on their websites. The ability to identify and locate those disclosures would possibly be another good metric.

In other words, just don’t.

A work in progress

(A presentation at VLDS Insights 2015.)

Work (and thoughts) in progress: When is research and information enough to warrant change in policy?

 

This is really a working paper that came about from a response I gave to a reporter a few months ago.

“What policy changes has the state or institutions made since you published the wage outcomes of graduates?”

“Well, none, I hope. It is far too early in our understanding of the quality and value of these data to make any kind of sweeping policy changes based on them.”

The great promise of VLDS (Virginia’s Longitudinal Data System) is the ability to use research based on individual data from administrative datasets to make policy recommendations that improve citizen outcomes. For too long, education policy was based on a series of one-off studies using data from single-use collections, national surveys, or work done in other states. I’m not suggesting these studies have no value, only that they may have less relevance than data on Virginia students. Good research, using test and control groups and enough randomly selected students, creates important models for understanding interactions and relationships. By itself, though, it is not enough.

First, even an exceptionally good study done in another state may not really be relevant to Virginia students. State policies and funding, school and institutional policies and funding, and specific curricula may simply add too many confounding factors. However, even at worst, these studies can guide us in our own research.

Second, these studies are expensive. They require skilled researchers with advanced training. The data collection itself is expensive and has to be repeated for each new study. And our constituents for the results, policymakers and their staff, have little interest in paying again and again for such studies. They also have little interest in waiting.

VLDS provides opportunities to create datasets for researchers that can look across years of students’ experience in schools, divisions, education levels, and even into the workforce with relatively little cost. Even more importantly, VLDS offers the ability not only to recreate the data for a specific study a year or five years later, but also to develop regular, annualized reporting from the same data elements, allowing our constituents and ourselves to track progress.

So we can do research and make policy recommendations we couldn’t before. But how quickly should we do the latter?

In 1959, Charles E. Lindblom wrote about “The Science of ‘Muddling Through’” in Public Administration Review, where he posited two views of policymaking: Rational-Comprehensive and Successive Limited Comparisons. Lindblom also describes these models, respectively, as the “root” method, which starts from the fundamentals of the problem and is grounded in theory, and the “branch” method, which always builds out from the current situation. I won’t go into too much detail about these other than to enumerate their steps and attempt to draw a comparison that I think adds value.

Rational-Comprehensive (Root)

  1. Clarification of values or objectives distinct from and usually prerequisite to empirical analysis of alternative policies.
  2. Policy-formulation is therefore approached through means-end analysis: First the ends are isolated, then the means to achieve them are sought.
  3. The test of a “good” policy is that it can be shown to be the most appropriate means to the desired ends.
  4. Analysis is comprehensive; every important relevant factor is taken into account.
  5. Theory is often heavily relied upon.

Successive Limited Comparisons (Branch)

  1. Selection of value goals and empirical analysis of the needed action are not distinct from one another but are closely intertwined.
  2. Since means and ends are not distinct, means-end analysis is often inappropriate or limited.
  3. The test of a “good” policy is typically that various analysts find themselves directly agreeing on a policy (without their agreeing that it is the most appropriate means to an agreed objective).
  4. Analysis is drastically limited:
    1. Important possible outcomes are neglected.
    2. Important alternative potential policies are neglected.
    3. Important affected values are neglected.
  5. A succession of comparisons greatly reduces or eliminates reliance on theory.

Lindblom discusses each step of the Branch method in detail, for those who wish to do the reading. The points I wish to make begin by drawing a comparison of the Root method with the research support that VLDS provides. Without a doubt, very few agencies of Virginia state government are staffed and funded to perform the in-depth, long-term, theory-based research that university faculty perform each year. It is also, unfortunately, true that few agency staff have the time to stay current in all the published research related to the data for which they are stewards.

The way Lindblom describes the Root method, it is an impossible method for all but the simplest problems: “It assumes intellectual capacities and sources of information that men simply do not possess, and it is even more absurd as an approach to policy when time and money are limited, as is always the case.” Of course, he bases this conclusion on his initial premise that the analyst in question would perform incredible amounts of due diligence in values identification, data collection, and comparison of potentially relevant policies. I think we can put that aside and take a more reasoned, doable view of the Root method that assumes an appropriately thorough conduct of due diligence grounded in theory from prior research. It’s possible, just expensive. Even completely thorough research has to make some assumptions.

The Branch method is what we do every day. “This is what we know now. This is where we want to be, and these are the resources we have to make our analysis.” We find agreement without necessarily debating whether a policy is the best, only whether it, as Herbert Simon put it, “satisfices.” We use data trends and comparable measures to confirm our agreement on policy.

Relying on either method alone is sub-optimal. The Root method is too expensive and takes too long. Sometimes policy turns on a dime (a 20-minute phone call while one or two queries are performed “live”) and a “researched” decision has to be reached very quickly. The Branch method alone is shallow and may ignore existing research on similar data studying the same or a closely similar question.

Uniting the two methods makes far more sense.

A message that I frequently push, perhaps to the annoyance of some of my colleagues, is that elected officials and their staff have little patience with the in-depth reports from the Root method. They are typically too long, too nuanced, and too detailed for their needs. Further, and this is the most important thing, if the findings of the research are adopted, they want to know that in each succeeding year data will be available to judge the results. Paying for another in-depth study is rarely a considered option.  Thus, the reporting that supports and justifies the Branch method plays a critically important role.

If you accept this model, our next question is the title of the session – When is research and information enough to warrant a change in policy?

I participate occasionally in a forum for people with a certain kind of brain tumor. New members facing treatment decisions frequently struggle with understanding how to decide what to do. The challenge is particularly acute for smaller tumors, where there are more options: watch and wait, micro-surgery, and radio-surgery (radiation). This is compounded further by the fact that each doctor tends to favor his or her own specialty, and thus a patient doing due diligence and seeking a second, or multiple, opinions may become confused. In fact, one of their first posts following “Oh my God, I have been diagnosed with a brain tumor!” is “How do I decide what to do? My surgeon says surgery, but the radiation oncologist says radiation. I don’t have serious symptoms, do I have to do anything at all?”

Even after a decision is made about surgery or radiation, there may be questions about which surgical approach or which form of radiation therapy. To some degree, the answers to these questions depend on the experience and preferences of the selected surgeon or the availability of specific forms of radio-treatment at the selected facility. Selecting the facility and treatment team is another decision tree once a course of action is chosen.

In my own experience, I had a large tumor with very limited time to decide. I spoke to two surgical teams and both said very similar things. That made the decision very easy for me, especially within the framework for decision-making I had already made for myself. For example, it was important to me, if possible, to have the surgery done at a university hospital. Closer to home was better for my family than clear across country. Further, all the research I had done about how to make the decision to fully understand the context of my situation made it possible to recognize that hearing the same messages from surgical teams 3,000 miles apart gave a true clarity to the situation.

In other words, when you get the same response multiple times, you are probably on to something. Assuming that you have also done the research to ensure you understand both the question asked and the answer received.

Consistency of results from multiple tests seems a very good place to start. Of course, this implies multiple tests, multiple research projects. It implies, I hope, that good research supported by well-defined theory should be a required feature of these tests. It seems to me that policy recommendations based on one study, on one result set, are a poor thing on which to risk the lives of citizens. Our goal should always be some form of improving the lives of Virginians. A single set of results seems inadequate to that goal. To my mind, the stakes are too high. And this is why agency staff tend toward the conservative, and why the “best” policy is found in the agreement of multiple analysts, perhaps using the mantra, “Yeah, we can live with this.”

Lindblom states: “If agreement directly on policy as a test for ‘best’ policy seems a poor substitute for testing the policy against its objectives, it ought to be remembered that objectives themselves have no ultimate validity other than that they are agreed upon. Hence agreement is the test of ‘best’ policy in both methods.”

So, when multiple analysts agree, we have another marker as to when to make a policy change.

Distilling these thoughts into a simple list, I see the following as key indicators of when to make policy recommendations:

  • When replicable/replicated research confirms theory.
  • When measures developed from research are reproducible and readily produced from administrative datasets, such as those exposed through VLDS.
  • When multiple analysts agree.

This makes sense to me. It is reasonable and allows time for consideration of the theory, data, and alternatives. I think this is also how our constituents wish us to make policy recommendations. Unfortunately, a lot of policy is not made this way. Sometimes we are given a matter of weeks to formulate a response to a policy question. Clearly this is not much time. Worse, there are calls that come during the legislative session giving us 20 minutes to develop a query against student-level data over multiple years and provide an answer that sets policy, or rather law. It’s not pretty, but that is the nature of law and sausage.

The purpose of VLDS is to support an environment that allows the three indicators above to take place. A mix of sound research, readily produced data and information, and analytic concurrence.

 

Hiding behind the sheetrock

 


A few weeks ago, without effort, I broke an upstairs bathroom water line and flooded the house about midnight. When I explained the situation to the plumber, at 1:00 am and $165/hour, I am sure he thought I was crazy and hiding the fact that I had done something completely idiotic. Seriously, the pipe just snapped inside the wall while I was moving stuff around inside the vanity.

About an hour later, my wife and I were downstairs, and she was watching me mop up the living room and pick up the sections of sheetrock that had fallen. (The largest piece falling on my head is what actually woke her.) The plumber was upstairs capping the lines in the vanity, and we both saw a section of pipe just fall to the floor. The plumber came downstairs holding the hot water valve in his hand.

“I’ve never seen that happen before. I grabbed the valve to cut it off and it just snapped.”

And it happened several more times.

So, while much of the ceiling is missing, we’ve had the plumber back a couple of times to completely re-plumb the upstairs bathrooms. (Ultimately we will have all the water lines replaced this year.) He brought an assistant with him this week so the job would go a bit faster. I asked him if the situation had been explained to him. “Sure, I’ve just never heard of that happening before.”

A little while later I got to watch as it happened to him.

“You thought I was crazy, didn’t you?”

So, these pipes have me freaked out a bit. They will all be replaced. It has also been pointed out that these same pipes were connected directly to the water heater, instead of to 18-24″ of copper piping between the CPVC and the water heater. This is a code violation that never should have passed inspection.

What’s behind the sheetrock is something we rarely see. In fact, when we buy a house, or choose a college, we do it largely on faith in the processes and adopted standards. We assume that any relatively new house (and ours was built in 1999) is built to the established building codes. We also assume (hope) that quality materials were used and used correctly.

Right now, that seems like an awful lot of (misplaced) faith.

For the record, I can do a lot of things. I started looking at the water lines thinking, “You know, I could replace those myself.” But I’ve read enough posts on the plumbers’ fora to understand that “it takes more than a can of glue and a buttload of CPVC to be a plumber.” It does indeed, and I am okay with that. I would rather appreciate the fact that someone else has taken the time to master plumbing skills than attempt to do so myself. I look at the new lines thinking, “It is a shame to have to cover this up. It’s doubly a shame that no one else will appreciate this to the degree that I do.”

 


 

Below is what crap pipe looks like. Manufactured in January 1999, Flowguard by Charlotte Pipe. Maybe it was a bad manufacturing run, maybe it was mishandled by the original plumber, maybe the glue was flawed, or maybe it had spent too much time in sunlight. Maybe if Charlotte Pipe had at least acknowledged receipt of my email, I would not show the details. But this is the beauty of the open Internet. Someone may see this and tell me of a related case. Or maybe the manufacturer will find this and respond. Google Flowguard CPVC and browse the results. Opinions on Flowguard are all over the map.

I saw this evening on Twitter that there is a bill, or at least a proposal, to allow states to create their own USED-recognized accrediting body. Historically that topic has come up in Virginia from time to time. It is an interesting idea, but not one I am interested in pursuing. While I could create a hell of an accreditation function with our data resources, it still would not tell us what is behind the sheetrock. I suppose we could peek under the sinks and grab the valves and give them a good shake from time to time, but we have been here 10 years and nothing like this has happened before.

In the end, I don’t really have a good answer beyond trusting the process and knowing that sometimes things break and require fixing. All the metrics in the world won’t show me what’s behind the wall.

 


You can do everything well and still fail

Subtitle: Some of you are focusing on the wrong things.

Subtitle: Life isn’t fair.

The board of Sweet Briar College announced today that it would cease operations at the end of the semester. It seems that they are taking a principled stand to go out on their own terms, with adequate resources to properly teach out the term and close down decently and in order. It is striking that SBC has a $94 million endowment, but over half of that is restricted in its use, significantly reducing the college’s ability to use endowment returns to fund operations.

Sweet Briar is a fine institution that really does seem to do most everything well. Young women generally seem to thrive there, especially those that stay. Retention into the second year is not the greatest, but those that make it into the second year are pretty much going to graduate. From what I have observed the last 14 years from a distance, and on campus, it is a pretty special place.

The policy wonks have been excitedly discussing the increased applications and the declining yield rates. The fact is that few young women seem to be really interested in attending a single-gender institution sitting on a mountain ridge an hour or four away from excitement and activity. I suspect there are far more interested students out there than the approximately 200 that have enrolled each year, but finding them is not easy. As for the increased applications, those are easy enough to come by with a little work and a little more mailing. A difference of 400 applications is not that big a deal to achieve with the available tools and consultants. It is much, much harder to increase the number of quality applications – applications from students really interested in what Sweet Briar offers, at a price that the students and the college can both afford.

The fact that the entering students of 2013-14 had an 84% acceptance rate (Admissions tab) is pretty strong evidence that increasing applications alone may change very little.

A critical problem, to my mind, is that getting students in the door is only one part of the problem. A 63% graduation rate is a respectable rate, but for a small college with lots of one-on-one experience with faculty, I suspect most people believe it should be much higher. If not, what is the value of the small college experience? Again, as I said earlier, the “problem” (if it is a problem and not a feature of college) is in the first-year retention. The entering class of 2008 had a 75% retention rate (148 students) into the second year. Six years later, 124 of those had graduated from Sweet Briar, or about 84 percent. A handful of others graduated from elsewhere in the Commonwealth. By prowling these data one can get a sense as to where some of the struggles are. It is a simple fact that any institution that must constantly replace students who don’t persist to completion faces an uphill battle for its existence.
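The arithmetic behind those figures is worth making explicit. A quick sketch (the entering-class size is my inference from the stated 75% retention equaling 148 students; the other numbers are from the paragraph above):

```python
# Back-of-the-envelope check of the Sweet Briar cohort figures cited above.
retained = 148                       # students retained into the second year (75%)
entering = round(retained / 0.75)    # inferred entering class, about 197 students
graduates = 124                      # retained students who graduated within six years

overall_rate = graduates / entering      # the published six-year graduation rate
conditional_rate = graduates / retained  # graduation rate among those retained

print(f"entering class ≈ {entering}")                                  # ≈ 197
print(f"overall six-year graduation rate ≈ {overall_rate:.0%}")        # ≈ 63%
print(f"graduation rate among retained students ≈ {conditional_rate:.0%}")  # ≈ 84%
```

The gap between the 63% overall rate and the 84% conditional rate is the whole point: the loss is concentrated in the first year, not spread across the college experience.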

But the fact remains that Sweet Briar is a gem tucked away in the mountains. A gem like Bridgewater, Ferrum, Hollins, Lynchburg, Roanoke, and a host of other small liberal arts colleges that deserve more attention for all that they do well. Unfortunately the world is changing and it has never been fair.

I worry that we don’t wish to pay the costs to keep such experiences available. That we refuse to acknowledge that some things are not only somewhat expensive, they are also worth every penny.

On the other hand, I admit to shopping at Walmart and Amazon. Not because of the price, but because of convenience. They are open when I have time. (But if it is a stop on the way, during “normal hours,” to just about anywhere else, I will avoid Walmart like the plague if I can get in and out quicker elsewhere.) Small mom & pop stores are rarely open when I have time to shop. If I have to leave work early or go in late, the cost and inconvenience of the shopping trip has generally increased more than I wish to consider.

And this is the conundrum of the small rural college.

Feb 13, 2025 – Government Will Change How it Rates Colleges

The federal government on Thursday announced that it was changing the way it measures colleges, essentially adjusting the curve that it uses to rate institutions to make it more difficult for them to earn coveted four- and five-star government ratings.

Under the changes, scores are likely to fall for many institutions, federal officials said, although they did not provide specific numbers. Institutions will see a preview of their new scores on Friday, but the information will not be made public until Feb. 20.

“In effect, this raises the standard for colleges to achieve a high rating,” said Thomas Hamm, the director of the survey and certification group at the Commission of Education Economics within the Executive Office of the President, which oversees the ratings system.

Colleges are scored on a scale of one to five stars on College Compare, the widely used federal website that has become the gold standard for evaluating the nation’s more than 15,000 colleges even as it has been criticized for relying on self-reported, unverified data that are limited in scope and function.

In August, The New York Times reported that the rating system relied so heavily on unverified information that even institutions with a documented history of quality problems were earning top ratings. Two of the three major criteria used to rate institutions — graduation rates and student input quality measures — were reported by the institutions and not audited by the federal government.

In October, the federal government announced that it would start requiring colleges to report their staffing levels quarterly — using an electronic system that can be verified with payroll data. Colleges will also report their enrollments weekly, by individual student, to be verified against the National Student Loan and Tuition Tax Credit Data System. This allows the government to begin a nationwide auditing program aimed at checking whether an institution’s quality statistics are accurate.

The changes announced on Thursday were part of a further effort, officials said, to rebalance the ratings by raising the bar for colleges to achieve a high score in the quality measures area, which is based on information collected about every student. Colleges can increase their overall rating if they earn five stars in this area. The number of colleges with five stars in quality measures has increased significantly since the beginning of the program, to 89 percent in 2024 from 62 percent in 2015.

Representatives for colleges said on Thursday that they worried the changes could send the wrong message to consumers. “We are concerned the public won’t know what to make of these new rankings,” said Mark Parkinson, the president and chief executive of the Association of Private Sector Colleges and Universities, which represents for-profit colleges. “If colleges across the country start losing their star ratings overnight, it sends a signal to families and students that quality is on the decline when in fact it has improved in a meaningful way.”

But officials said that the changes would be explained on the consumer website, and that the public would be cautioned against drawing conclusions about an institution whose ratings recently declined. Still, Mr. Hamm said scores would not decline across the board. “Some colleges, even when we raised the bar, continued to perform at a level much higher than the norm,” he said in a conference call Thursday with college operators. “We want to still recognize them in the five-star category.”
The updated ratings will also take into account, for the first time, a college’s use of antipsychotic drugs, which are often given inappropriately to elderly administrators with dementia.

–Thanks to John Nugent for the link to the original article and the inspiration.

And the search goes on

It’s happening again, the search for transparency. There is this belief that the right set of measures, over the right period of time, will clarify everything. About anything. Of course, the right measures are simple and don’t need explanation about what they measure and why they are important.

And that’s why the Quest for the Holy Grail did not happen…the Grail was sitting in the middle of a small church with a sign on it and a bright sourceless light above it.

According to the stories, that’s not what happened. (Speaking of stories, @jonbecker’s blog post is an excellent read.)

Time and data crashes in on each of us these days.

We too often struggle to sort through the signals and noise, at least I do, and so I understand the desire for something simple that tells me everything I need to know. But I never expect to find such a thing. In fact, my expectation is that if I want to know something and be able to act on it, I will have to do some work.

If I actually want to understand something, I know that I will likely have to work even harder.

So, this is pretty much the approach taken with research.schev.edu. You have to make an effort to know what you want and need, either before you get there or while on the site. Higher education is kind of a big business with a lot of complexity. This complexity derives not just from its size and variety, but also from its continual evolution. Some numbers, some measures are pretty simple – enrollment and degrees conferred. Some of the buckets for these things may get a little complicated, but in our presentation of the data – actually, even in our collection of the data – we have already simplified it through standardization.

Other measures, like graduation rates and measures of affordability, are more complex – if not to read, then to understand. The annual frequency of questions along the lines of “Don’t you have graduation rates for the four-year schools that are less than six years old?” has not noticeably declined. As often as we explain the nature of a cohort measure, people still think we should have a 2014 rate. Certainly, we could identify the reports based on the year the data are released, but some users will insist on being confused that the 2014 reports are about students that started at least six years prior, or three years for the two-year colleges. And in 2016 they would likely be confused again.

So we go for clarity and standards. Even so, they are not instantly understood; some things one just has to think about for a few moments. We also serve multiple constituencies with varying levels of knowledge of higher ed and much different needs.

At the heart of it, this idea of a Holy Grail of measurement is the thinking behind the ratings system. Somehow one rating, or even a handful of different ratings, will tell one all one needs to know about an institution. Or at least all one needs to know about the aspects of the institution related to the undergraduate experience. Except the educational aspect, because that is not measured consistently and reported systematically to USED.

PIRS, though, is only the natural evolution of the 2008 Higher Education Opportunity Act (HEOA). The reporting and disclosure requirements that came out of the HEOA are huge. In some ways they have transformed institutional websites; in others they have demonstrated institutional ability to bury information. Of course, who can blame institutions much for the latter when probably very few students are interested in some of the requirements?

Which makes me wonder what the next version of the HEA will bring. If Chad Aldeman’s post is any indicator, we could see a major shift away from current requirements. More likely, in my estimation, we will see an attempt to require the publication of the perfect number* – or a half-dozen perfect numbers – and their changes over time.

In any event, whatever happens with the next version of the HEA, PIRS, or any other effort at the federal or state level, I don’t expect the search for the Grail of Measures to end anytime soon.

Faded jaded fallen cowboy star
Pawn shops itching for your old guitar
Where you’ve gone, it ain’t nobody knows
The sequins have fallen from your clothes

Once you heard the Opry crowd applaud
Now you’re hanging out at 4th and Broad
On the rain wet sidewalk, remembering the time
When coffee with a friend was still a dime

Chorus:
Everything’s been sold American
The early times are finished and the want ads are all read
Everyone’s been sold American
Been dreaming dreams in a rollaway bed

Writing down your memoirs on some window in the frost
Roulette eyes reflecting another morning lost
Hauled in by the metro for killing time and pain
With a singing brakeman screaming through your veins

You told me you were born so much higher than life
I saw the faded pictures of your children and your wife
Now they’re fumbling through your wallet & they’re trying to find your name
It’s almost like they raised the price of fame

Kinky Friedman – Sold American Lyrics

*The perfect number is 17.

Cults in Higher Ed

I was at a super-exclusive, informal meeting-type thing this week. I have to call it a meeting as there was no beer. There should have been beer. At one point, I was explaining how cultish higher education is. Really.

And this phrase didn’t originate with me. More’s the pity.

Back in 2010, shortly after I returned to work following my adventure in neuroscience, there was a subcommittee meeting for Governor McDonnell’s higher education reform committee. As is often the case for these things (in Virginia, at least), it was standing room only for the audience. No matter how often we explain to the meeting host that there will be a crowd, there is huge interest in higher ed policy and we always need more seats for the audience than the normies think. As one legislative liaison pointed out, “It is a cult, it really is. We want to be here, even more than our institutions want us to be here. We need to be here.”

Part of the attraction is the desire to be involved and to avoid damage to one’s institution. It’s also fascinating. There is very little as intrinsically interesting and mind-consuming as higher ed policy. It’s powerful stuff, too often polluted with overly simple explanations or overly complex solutions. And the people are fun to watch.

The only thing that is clearly more interesting and drives even greater passion is higher ed data & data policy. If you don’t believe me, show up at an IPEDS Technical Review Panel (TRP) and just observe. The level of passionate discourse and argument over a minor change in definition can go on for hours. It is almost obscene. Hell, just read tweets from any of the IR people or the higher ed researchers, or follow #HiEdData. These are people deeply invested in what they do and what they want to know from data. And what they can know. And what they do know.

This is what Secretary Duncan and President Obama did not know, or failed to understand, when #PIRS was proposed.

There are hundreds, more like thousands, of people who are experts in IPEDS data. They know what can and can’t be done with IPEDS data. And what shouldn’t be done. In Ecclesiastes, the Preacher said, “There is nothing new under the sun.” That is how people felt about the prospect of using IPEDS data for a ratings system. What could be done that would be substantively different from what now exists? As big as it is, it is an exceedingly limited collection of data that was never intended for developing rankings or ratings.

Just to make this post kind of academic-like (undergraduate-style), let’s look at the definition of a cult:

cult
  1. a system of religious veneration and devotion directed toward a particular figure or object.
    “the cult of St. Olaf”
  2. a misplaced or excessive admiration for a particular person or thing.
    “a cult of personality surrounding the IPEDS directors”
    synonyms: obsession with, fixation on, mania for, passion for, idolization of, devotion to, worship of, veneration of

    “the cult of eternal youth in Hollywood”

The only thing really lacking is any type of charismatic leader(s). Or charisma, really. (Again, observe a TRP.) Kool-aid generally comes in the form of caffeinated beverages. Everything else is there.

Of course, there are more than just these two higher ed cults. We have the new cult of Big Data, and it seems full of evangelists promising the world and beyond. Of course, this cult transcends higher education.

While I like the idea of Big Data, I like the idea of Big Information/Bigger Wisdom even more. That’s the cult I am waiting for.

I hope they have cookies. Without almonds.