One of the things I did this week was re-read Larry Niven’s novel Protector. Toward the end there are several passages dealing with a battle in space. Unlike Star Wars and Star Trek (and every other adventuresome space opera), the battle is painfully slow. It is slow enough to be thematically related to Theodore Sturgeon’s Slow Sculpture in that movements of the enemy craft are observed from such distance that weeks and months might go by before the result of a maneuver can be noted.
Reading this put me in mind of Tom Clancy’s The Hunt for Red October. When I first saw the movie, I learned something about submarine navigation. Being in the infantry, I never worried about how the Navy got from place to place. We had our maps, and as long as we were somewhere on the ground that was represented on a map in our possession, and we could see something to triangulate on, we could figure out where we were with great confidence. I know I could, and still can.
Out on the water, away from land, it is another thing. GPS makes things much easier today, but it doesn’t really work underwater. Navigating a submerged vessel is tricky business. Movement is in three dimensions, unlike ground-based life. (Yes, one can argue that we move in three dimensions when on land, but only while there is a plane underneath our feet to counteract gravity – our vertical movement.) Submarines can’t see where they are, Voyage to the Bottom of the Sea aside. A submarine moves from a known point to another point by tracking its vectors of travel (velocity, direction, and duration) against maps and sonar readings. It is an effective use of imperfect information.
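That style of navigation is usually called dead reckoning: start from a known point and accumulate each leg of travel from its speed, heading, and duration. Here is a minimal sketch in Python with made-up numbers, just to show the bookkeeping; real submarine navigation also corrects the estimate against charts, sonar, and inertial systems.

```python
import math

def dead_reckon(start, legs):
    """Estimate a position by dead reckoning.

    start: (x, y) starting position in nautical miles.
    legs:  list of (speed_knots, heading_degrees, hours) tuples.
    Each leg's velocity, direction, and duration is applied in turn;
    no external fix is taken, so error accumulates with every leg.
    """
    x, y = start
    for speed, heading_deg, hours in legs:
        # Compass convention: 0 degrees = north (+y), 90 degrees = east (+x).
        rad = math.radians(heading_deg)
        x += speed * hours * math.sin(rad)
        y += speed * hours * math.cos(rad)
    return x, y

# Two legs: 10 knots due east for 2 hours, then 5 knots due north for 3 hours.
position = dead_reckon((0.0, 0.0), [(10, 90, 2), (5, 0, 3)])
# position is approximately (20.0, 15.0) nautical miles from the start
```

The point of the sketch is the weakness as much as the method: the estimate is only as good as the measured vectors, and without an occasional external fix the errors compound quietly.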
If only navigating higher education were as easy and effective. Too many institutional leaders and policymakers think they can effectively reach a “place” by relying on benchmarks that tell them where they have been and where some select group of institutions has been. Any time comparative data are used, the lag time makes them irrelevant to now and just another marker of where the institution has been. The solution would seem to be to ignore what other institutions are doing and not worry about comparisons. Unfortunately, those in charge seem incapable of thinking this way. Gee whiz, it seems simple enough to select achievable goals and the desired profile of an institution and navigate to those.
Institutions rely way too much on IPEDS data and data from national projects. This forces them to always live in the past. Comparable data across institutions provide a nice map, but unlike a real map, the landscape keeps changing. The flow of students and the choices they make don’t map the same way as mountains, valleys, rivers, and lakes. Roads and buildings have some stability, but they change enough to require new maps on a regular basis. People just don’t map well.
Until we start thinking about measuring institutions in meaningful terms of velocity, direction, and duration, and how they relate to a specific goal, I just don’t see things changing very much for higher education…at least not in terms of the changes higher education itself desires. The change we will see will be that forced upon the industry by others.
For the record, student unit-record data at the national level will not solve this problem. It can improve the map through greater detail and precision, but it will likely create more dependency on comparable metrics instead of freeing decision-makers from the chains of comparative benchmarking.
This post has actually gone in a different direction than I had intended. Originally I had planned to draw the link between the long-term nature of the space battle described in Protector and how long it takes to see the results of a policy change at an institution. Instead, I got hung up on the difficulty of measurement and the fixations of decision-makers on benchmarking. I’m not sure which is the more important issue.