Mourning the loss of nuance and complexity

The opening of the song is nearly two minutes of piano and guitar. No lyric arrives until just about the two-minute mark, at a time when pop songs on American FM radio were expected to be no longer than three minutes and to follow a simple structure of verse, chorus, bridge, hook, and refrain. At eight minutes and ten seconds, the song “Bat out of Hell” follows none of these conventions.

It is one of the greatest rock songs of all time.

Its greatness lies in its power and complexity. It has nuance and depth. It can be sung with anger and triumph, or it can be screamed down the highway with an unshakable sense of loss.

Cutting it down to fit FM airplay rules is a hack job.

The same is true for “Paradise by the Dashboard Light.” Any trimming simply diminishes the narrative and its basic conflict. It would turn it into pablum. Ten minutes is just about the perfect length for this song.

These songs are great, and the album on which they appear, “Bat out of Hell,” is currently the fifth best-selling of all time, according to the Wiki-god.

Reducing them would be absurd.

At the other end of the spectrum, “Escape,” aka “Escape (The Piña Colada Song),” is much shorter. It’s kind of cute and maudlin, and you really only need the chorus to feel happy. The narrative exists only to justify the chorus. On the other hand, the chorus is unnecessary without the narrative.

So, let’s also look at “Hooked on a Feeling,” which is just under three minutes and about as close as one can get to lyrically-based musical wallpaper. It’s a nice little song, but it is not one that washes away or recalls teenage angst.

Let’s just think about how this is like data or policy information.

“Just give me what I need to make a decision.”

“That’s what I am trying to do. You need to know these things to both make your decision, understand the risks, and more importantly, understand why you are making this decision.”

“No. I don’t have that kind of time. Neither does our audience.”

“Right, you want ‘Hooked on a Feeling’ and I am trying to give you ‘Bat out of Hell.’”

One of these will stand the test of time.

We are in a time when nuance and complexity seem to be rarely appreciated. Consultants shout, “Spare change! Spare change!” and all the executives hear is “Change! Change!” (credit to Scott Adams for this). Consultants tell us fewer measures are better, simpler measures are better. We end up with a college scorecard that demands understanding and nuance, but provides little of either.

I believe understanding comes with effort, with work. Good decision-making comes from understanding; otherwise it is luck or privilege. I am a violently cynical idealist who keeps hoping for the tide to turn, but expecting to wait a long damn time.

And when you say Dylan, he thinks you are talking about Dylan Thomas, whoever he was.

And he was:

Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.

Though wise men at their end know dark is right,
Because their words had forked no lightning they
Do not go gentle into that good night.

Good men, the last wave by, crying how bright
Their frail deeds might have danced in a green bay,
Rage, rage against the dying of the light.

Wild men who caught and sang the sun in flight,
And learn, too late, they grieved it on its way,
Do not go gentle into that good night.

Grave men, near death, who see with blinding sight
Blind eyes could blaze like meteors and be gay,
Rage, rage against the dying of the light.

And you, my father, there on the sad height,
Curse, bless me now with your fierce tears, I pray.
Do not go gentle into that good night.
Rage, rage against the dying of the light.

–Dylan Thomas

Don’t Think Twice, It’s All Right (maybe)

So, I am kind of moody about the latest BSO (Bright Shiny Object) to hit the ether this week. While I think the new report from EdTrust about Pell Graduation Rates is a good piece of work, I am again frustrated that the attention is on having a national database to play with…especially since it merely documents what some of us knew already.

Pell students tend to have lower graduation rates than the institution average and lower than non-Pell recipients.

We’ve been publishing such data since 2008, here in THE Commonwealth. Further, we have it by gender, race/ethnicity, part-time students, transfer students, and those taking remedial courses in the first year. Oh, we also do the same for all other Title IV programs and Virginia aid programs.

Of course, I am getting kind of oldish and dated. It is rare that I get excited by, let alone chase, each new BSO, even if it is a data BSO. I have more data than people are able to use. I also spend a lot of time thinking about reshaping published data, and doing so. So any new data must add value beyond what I have already. And that is a pretty high bar.

My relationship with data goes back a long way. Long enough to explain why my database is still relational. Data, she’s a harsh mistress, but she doesn’t have to worry about me leaving her for the newest data. In fact, the older she gets, the more I care about her.

But I do get jealous that she doesn’t get the attention she deserves. Too often there is a bias for two things: national data and easy-peasy comparisons. I keep wanting the story to be along the lines of, “Very nice, you caught up with Virginia. Now what are you going to do with it? What’s your goal?”

You can read about our goals here.

I also want the occasional story to be, “Oh, look at what Virginia has done. Nice. Now what are you going to do with it?” “Well, let me tell you…”

Dude, it is tabular. The data are tabular. Abide.

It ain’t no use in callin’ out my name, gal
Like you never did before
And it ain’t no use in callin’ out my name, gal
I can’t hear you anymore
I’m a-thinkin’ and a-wond’rin’ walkin’ all the way down the road
I once loved a woman, a child I’m told
I gave her my heart but she wanted my soul
Don’t think twice, it’s all right

And this is where the Scorecard fails

Clarity.

A colleague sent me a link to a local blog post that took data from the College Scorecard and plotted wages against estimated median SAT. He wanted to know if we could do this.

“Can we do crap analysis, inattentive to definition and data source, and blog about it? We could, but we won’t.”

Apart from the fact that the author of the blog post in question is kind of clueless about this type of analysis in the first place, the Scorecard has an admirable lack of clarity to it. Admirable, that is, if you are trying to create confusion and noise. I know I take some heat for trying to publish too much data and text, but I need people to know what they are looking at. Understanding will eventually come with such knowledge, but almost never in its absence. The Scorecard does not do that.

If a user, like me, understands where the data are drawn from and what that means for their scope, then the Scorecard is fine. But few people in higher education are actually well-informed about Title IV, and fewer still have a clue about the National Student Loan Data System (NSLDS). So confusion is not a surprise. But it is irritating, as it creates local brushfires to extinguish. Lord knows what our board is going to have to say about it over the next two days.

The College Scorecard needs more: much more text than is currently there; explication about what things are and are not; recognition that some words, such as “alumni,” are commonly misused and misunderstood.

There needs to be a guide for the casual, oh-so casual user (and abuser) of data that lays out the limits in simple English. You know, “executive-speak.”

Utility

I am a huge fan of utility. I am also ultimately a pragmatist. Some have called me an unreasoning pragmatist. Or something like that.

However, when the Kentucky Community and Technical College System announced its approach to measuring the social good of degree programs that do not result in large earnings, by creating a “social-utility index,” my response was, “Well, okay.”

There is nothing wrong with the methodology or the conceptual underpinnings. Christina Whitfield, the author, does very good work, so I wasn’t finding fault; it just generated kind of null feelings. Not empty, no content, just null. And I didn’t understand why.

Now I do. When I reviewed the Storify that Dave Mazella had assembled of our conversations last week about assessment, I was sitting in the trailer positioning the new cabinets I had built. As I began fastening them into place, I realized that my problem is that I don’t think that everything needs to be measured, or assigned a value.

There, I said it. I don’t think that everything needs to be measured, or assigned a value.

I really think we need to make peace with a couple of concepts.

  1. A little inefficiency, like a little nonsense now and then, is a good thing.
  2. It’s okay for something to not have an immediate economic return, or a large one.

I suppose I could explain or justify these, but I don’t think I will. This is what I think, and these are principles that often guide my analysis and recommendations. And they are not new to me.

I don’t think we really need to justify that a well-educated child-care provider is probably a good thing. If you disagree, let me choose your next daycare.

 

The burning hand, and Frost

In last night’s discussion about assessment, I said this:

We know that the burned hand teaches that the stove is hot. (Actually, it teaches us that a very unpleasant feeling is experienced by touching the stove in a way that we later learn is inappropriate, and that this unpleasant experience is called “burning.”) The experience of the burnt hand can be observed and measured according to its severity and the quickness of the individual’s response. But can learning be measured?

If the “student” can articulate what happened and also articulate that she does not want to experience this unpleasant feeling again, then it seems there is pretty clear evidence of learning. That seems like some kind of proxy measure to me, but a very good one.

The real test of learning, it seems to me, is to observe over time whether or not the experience is repeated.

If not, it seems safe to say that the lesson is learned.

Of course, all this assumes a physically and mentally normal subject. Someone with nerve damage or Hansen’s disease may not be able to detect the heat and burning.

All of this occurred to me while driving into work this morning. It also led me, again, to consider measurement. I think about measurement a lot, I really do.

Last night I also said this:

My two years spent running the frame shop at the university museum at SIUE, a number of Habitat for Humanity builds, and years of Scouting have left me pretty cynical about people’s ability to measure even simple things consistently. Even given that criticism, I was reminded of Roger Zelazny’s novella “For a Breath I Tarry,” the story of a computer named Frost that, in a far-distant future where mankind has become extinct, develops a quest: first to understand Man, and then ultimately to become Man.

   “Regard this piece of ice, mighty Frost. You can tell me its composition, dimensions, weight, temperature. A Man could not look at it and do that. A Man could make tools which would tell Him these things, but He still would not know measurement as you know it. What He would know of it, though, is a thing that you cannot know.”
   “What is that?”
   “That it is cold.”

I think this is the heart of the issue. We can know learning, or we can know measurement. We may not be able to know both.

What the Pell?!!

I am irritated. Very irritated. All of these folks are sharing the recent Hechinger report on Pell graduation rates and getting so upset that there are no nationally available data. Fine, we knew this. And those of us paying even the least attention since the 2008 HEOA knew that institutions either weren’t publishing Pell and Stafford graduation rates or were making them so difficult to find on their websites that it simply doesn’t matter.

Not one of these people is pointing to Virginia and saying, “See how easy it is?” UNLESS I REMIND THEM.

For Pell graduation rates (and Stafford, and 200+ other groups) go here.

See that we publish Pell graduation rates and others on each Institutional Profile by clicking on the Grad. Rates tab.

And we also have a Student Success, Persistence, and Completion Scorecard.

If your data are not as complete, and good, as Virginia’s, you are slacking.

Learning to Count

I don’t know much about assessment anymore. I am also too lazy at the moment to Google “assessing student learning” and am just going to assume that either this hasn’t been modeled before, or that it has and the 17 readers of this blog have not seen the original, or that they have and will tell me so.

For simplicity’s sake, let’s break learning into four categories:

  1. Learning to know.
  2. Learning to do.
  3. Learning to learn.
  4. Learning to be.

So, as long as a student can demonstrate one of these accomplishments (they know what was taught, can do what was taught, can learn to do more, and incorporate all of these), we can measure it and say learning has occurred. Right?

Hmmmm.

Dave Mazella takes off from When is a waffle not?

There are so many possible ideas to parse out of this Twitter essay.

I think I am going to limit myself, at this time, to a defense.

I know that what I/we count is so abstract as to have little to no relationship to what actually takes place on any Virginia college or university campus.

On the other hand, this is no less true of any well-researched history of an event. Only so much can be conveyed through any narrative, whether in prose or video. With prose you are limited to words (which have an agreed-upon definition, or collection of such), arranged in a way to convey the author’s message and intent. Video differs only in that more visual information is conveyed than can be expressed through the written word, but there is still a point of view and intent.

This is all necessary abstraction.

And it is no different than what we do.

Dave makes two points especially worthy of note. The first, “since concept of curriculum must assume equivalence of “same” class at diff times, places,” is the idea that we do standardize (Jeff’s translation regime again comes into play here), and standardization is both an abstraction and an assumption of equivalence. I suspect most of us in the profession think of it as “approximately equal,” since “equal” in any strict quantitative sense does not allow for variance (unless specified as such).

This takes us to items 8 & 9 – assuming we are counting things and not processes, and that standardization encourages us to think about education as discrete and thing-like. Exactly. Counting is about things. Even in the midst of a process, it still comes down to counting things. For example, the speedometer on my car measures the fluid process of movement by reporting an estimate of my speed. This is done by counting the revolutions of the driveshaft within the transmission and converting that to velocity based on the gearing and the known size of the tires and wheels. Of course, if the tire/wheel is of an unexpected (non-stock or non-programmed) size, the estimate is in error. This value is reported to the driver continuously and looks to be a continuous measure when it is only a collection of values for tiny snapshots in time.

If everything in education moved at 88 feet per second, we could fake a process measurement quite well.

Now for the purists who call me out saying that automotive speed is not a process: that’s fine. Show me any other process measurement that is not a collection of tiny discrete measures.
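The speedometer arithmetic above can be sketched out in a few lines. The gear ratio and tire sizes here are made-up illustrative values, not any particular car’s:

```python
import math

def estimated_speed_mph(driveshaft_rpm, final_drive_ratio, tire_diameter_in):
    """Convert counted driveshaft revolutions into an estimated road speed."""
    wheel_rpm = driveshaft_rpm / final_drive_ratio           # revolutions reaching the wheels
    tire_circumference_ft = math.pi * tire_diameter_in / 12  # distance covered per revolution
    feet_per_minute = wheel_rpm * tire_circumference_ft
    return feet_per_minute * 60 / 5280                       # feet/minute -> miles/hour

# The gauge assumes a programmed tire size. A larger-than-programmed tire
# covers more ground per counted revolution, so the reported speed reads low.
reported = estimated_speed_mph(3000, 3.0, tire_diameter_in=25.0)
actual = estimated_speed_mph(3000, 3.0, tire_diameter_in=27.0)
```

The point stands: the "continuous" speed readout is nothing but discrete counts of revolutions, converted through assumed constants.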

To return to our four categories of learning, the first two are pretty easy. We can test knowledge gain within reasonable timespans. We can wait years and test knowledge retention. We can do the same with doing. If someone can learn to do something and demonstrate that something, they have clearly learned. If they can still do it two years later, then we should probably consider ourselves successful.

Likewise, if over time, someone demonstrates the ability to keep learning in a specific area, that seems to be a successful outcome.

If someone can do all these things within a specific domain, that also seems a success.

These look and feel like things that can be measured, but doing so seems to require a pretty narrow domain of knowledge – narrower than what would be expected in a complete college degree (from the associate’s on up). Perhaps less, even, than what is in a single course.

Further, it seems that knowing and doing are prerequisites to further learning, and, by our definition, all three are prerequisites for being.

We haven’t even discussed the individual. Or the concept of attribution and who was responsible for what. And so it seems to me that despite decades of work and research, we are not really ready to count learning.

When is a waffle not?

It started with this.

And led to this.

And so the discussion kind of turned to the question of what a waffle is. Or rather, how is an Eggo not a waffle?

Some people feel I have a one-track mind, or a little bit of an obsessive-compulsive disorder. Sometimes I just can’t let things go, like Eggos. I understand where Jeff is coming from in his comment. It is about quality. It implies that something mass-produced and sold frozen is not the same thing as one made from scratch.

It may not be as good, but it is the same thing. It says so on the package.

They look like waffles. They are shaped like waffles. They have square indentations to hold syrup, melted butter, sugar, and frosting. Or sausage gravy. They taste like waffles and are almost as good as those served at Waffle House. (That may just be a function of cleanliness.)

Ergo, they are waffles.

Why this matters is again all about counting to one. Language tells us that these are waffles. They have all the obvious characteristics of waffles. It is taste, preference, or some other subjective criterion that keeps some from calling an Eggo a waffle.

This is the inherent problem of bias in data systems, and the threat of big data: the automatic application of bias based on unarticulated subjective criteria. I don’t accuse Jeff of anything; I know full well this was a light-hearted Twitter conversation about food that I inserted myself into. It is a great example, though, of how we need to think about data decisions and bias. Especially as matters of belief creep in.

At some point in the future, we will have synthetic/artificial persons, a la Heinlein’s Friday or Dick’s Do Androids Dream of Electric Sheep. We will have to fight to ensure fair treatment and honest counting. Belief can be a poisonous thing in counting.

Even waffles.

A simple devolutionary what if

The last two posts have been about when (not) to count something and how to count to one. I feel the need to go a little further and ask you to consider the possibility that most questions about counting to one have been addressed in the arts and literature. Way back when, we were taught in school that every story could be reduced to one of four major conflicts in literature:

  1. Man vs. man.
  2. Man vs. society.
  3. Man vs. nature.
  4. Man vs. self.

Let’s revise these and replace “man” with “person.” This is not to be politically correct. (Although I was accused of that when explaining to the Commonwealth’s version of Zaphod Beeblebrox why we used the term “First-time in College” instead of “Freshman.” FTIC is simply more accurate in at least three dimensions.) So then we have:

  1. Person vs. person.
  2. Person vs. society.
  3. Person vs. nature.
  4. Person vs. self.

Of course, this is perhaps a bit specific for those of us who are fans of anthropomorphic literature and science fiction/fantasy, in which even “personhood” may be questionable.

So, perhaps:

  1. Entity vs. entity.
  2. Entity vs. society.
  3. Entity vs. nature.
  4. Entity vs. self.

And now we have a way to talk about conflict and interaction for every data pursuit, big or little. We can think about the interactions of one data entity with another, or with others. We can think about how externalities impact an entity, as well as the internalities of the entity itself. (This is especially useful if we are talking about entities within object-oriented programming models, with their methods, properties, and so on.)
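As a loose sketch of that mapping (the class, attribute, and method names here are hypothetical, invented purely to illustrate), an object-oriented entity carries its own internal state and has defined interactions with other entities and with its environment:

```python
class Entity:
    """A hypothetical data entity: internal properties plus interactions."""

    def __init__(self, name, resilience):
        self.name = name              # identity
        self.resilience = resilience  # internal state ("entity vs. self")

    def interact(self, other):
        """Entity vs. entity: one entity acting on another."""
        return f"{self.name} meets {other.name}"

    def weather(self, external_shock):
        """Entity vs. nature/society: an externality acting on the entity."""
        self.resilience -= external_shock
        return self.resilience
```

The four conflicts fall out of the structure: methods taking another entity are entity-vs-entity, methods taking external inputs are entity-vs-nature or entity-vs-society, and the entity’s own mutable state is entity-vs-self.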

By considering language and meaning in defining One, and how literary and artistic history affects our understanding of the language, we can develop new insights into the data and apply existing solutions to our problems of comprehension. The deep, rich history of cautionary tales throughout ancient and contemporary arts and literature provides myriad guideposts.

The arts also help us understand the nature of One.

So few people have read A Tenure Line that I am not surprised this has gone without comment. At the end of the synopsis I left out any mention of the closing number, “One.” In “A Chorus Line,” after all the personal stories, and after the audience has become invested in seeing each character as an individual, all that goes away. Individuals are placed in identical, uniform costumes, singing the same words, dancing the same choreography. Striving for uniform range of motion. Sameness. Eight individuals are now one chorus.

(My old boss at the SIUE University Museum was fond of pointing out, “The sign outside says ‘university’ – ‘uni’ means one!”)

Almost everything you need to know about learning to count to one and the translation regime can be learned through the study of “A Chorus Line.”

After all, the most glorious words in the English language are “musical comedy.”

On Counting to One

In the last post, I mentioned that I describe my job as teaching people how to count to one. This was well-received by some, though I suspect more don’t quite get its significance.

Counting, just counting, is easy. (Kinda.)

We are familiar with the basic number line. Going from 0 to 1 is easy, right? We just move one space. But what does that mean?

“Tod, how many students do we have here?”

“11,729.”

“Really? That seems low. I thought I heard the president give a larger number.”

“Well, we have 323 students in study abroad programs in Spain.”

“Okay, that’s better, but I was thinking the total was around 15,000.”

“Yes, if you include the students at the off-campus sites, the total enrollment is 15,892.”

“Right! Why didn’t you tell me that before?”

“You asked how many students we have here.”

“Oh. So, are these headcount or FTE?”

And so it goes.

But even more importantly, before we could even get to the first number, we were operating under the assumption that we both had the same definition of “student.” That was never verified in the exchange above, but let’s assume it is true: that the inquisitor and I both knew we meant that a student is an individual with a specific kind of relationship to the institution. In fact, we knew that “student” is really a general term for a class of groups of individuals with similar, but differing, relationships to the institution. These differences may be the level of the degree sought, whether or not they are even seeking a degree, whether tuition is paid and, if so, under what policies, and so on.
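The earlier enrollment exchange can be sketched as nothing more than filtering under different scope definitions. The records and site labels below are invented for illustration; only the totals come from the dialogue:

```python
# Hypothetical enrollment records. Each answer to "how many students?" is
# just a different filter -- that is, a different definition of what counts.
enrollment = (
    [{"site": "main"}] * 11729
    + [{"site": "abroad"}] * 323
    + [{"site": "off-campus"}] * 3840
)

def count_students(records, sites):
    """Count records whose site falls within the chosen scope."""
    return sum(1 for r in records if r["site"] in sites)

here = count_students(enrollment, {"main"})                           # "here"
with_abroad = count_students(enrollment, {"main", "abroad"})
total = count_students(enrollment, {"main", "abroad", "off-campus"})  # "total"
```

Every one of those counts is correct; they simply answer different questions. That is what counting to one means: agreeing on the filter before anyone says a number.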

Getting to one requires definition. And as Jeff points out, making choices about who, what, and how someone or something is counted is fraught with peril.

How much fruit do we have?

About seven pounds.

No, no, no, how many pieces?

Gee, that depends on how you cut it.

Dammit, Tod! You know what I mean!

Okay, eight plus 100-150.

What?!

Look, there is a big bag of grapes, I’m not going to count them. There are also two bananas, three apples, an orange, and two tomatoes.

The distance between 1 and 2 is easy, as it is between 2 and 3, and so on. But it is only easy because we have done the hard work of defining the distance between 0 and 1.