A work in progress

(A presentation at VLDS Insights 2015.)

Work (and thoughts) in progress: When are research and information enough to warrant a change in policy?


This is really a working paper that came about from a response I gave to a reporter a few months ago.

“What policy changes has the state or institutions made since you published the wage outcomes of graduates?”

“Well, none, I hope. It is far too early in our understanding of the quality and value of these data to make any kind of sweeping policy changes based on them.”

The great promise of VLDS (Virginia’s Longitudinal Data System) is the ability to use research based on individual data from administrative datasets to make policy recommendations that improve citizen outcomes. For too long, education policy was based on a series of one-off studies using data from single-use collections, national surveys, or work done in other states. I’m not suggesting these studies have no value, only that they may have less relevance than data on Virginia students. Good research, using test and control groups and enough randomly selected students, creates important models for understanding interactions and relationships. By itself, though, it is not enough.

First, even an exceptionally good study done in another state may not really be relevant to Virginia students. State policies and funding, school and institutional policies and funding, and specific curricula may simply add too many confounding factors. However, even at worst, these studies can guide us in our own research.

Second, these studies are expensive. They require skilled researchers with advanced training. The data collection itself is expensive and has to be repeated for each new study. And our constituents for the results, policymakers and their staff, have little interest in paying again and again for such studies. They also have little interest in waiting.

VLDS provides opportunities to create datasets for researchers that span years of students’ experience in schools, divisions, and education levels, and even into the workforce, at relatively little cost. Even more importantly, VLDS offers the ability not only to recreate the data for a specific study a year or five years later, but also to develop regular, annualized reporting from the same data elements, allowing our constituents and ourselves to track progress.

So we can do research and make policy recommendations we couldn’t before. But how quickly should we do the latter?

In 1959, Charles E. Lindblom wrote about “The Science of ‘Muddling Through’” in Public Administration Review, where he posited two views of policymaking: Rational-Comprehensive and Successive Limited Comparisons. Lindblom also describes these models, respectively, as the “root” method, which starts from the fundamentals of the problem and is grounded in theory, and the “branch” method, which always builds out from the current situation. I won’t go into too much detail about these other than to enumerate their steps and attempt to draw a comparison that I think adds value.

Rational-Comprehensive (Root)

  1. Clarification of values or objectives distinct from and usually prerequisite to empirical analysis of alternative policies.
  2. Policy-formulation is therefore approached through means-end analysis: First the ends are isolated, then the means to achieve are sought.
  3. The test of a “good” policy is that it can be shown to be the most appropriate means to the desired ends.
  4. Analysis is comprehensive; every important relevant factor is taken into account.
  5. Theory is often heavily relied upon.

Successive Limited Comparisons (Branch)

  1. Selection of value goals and empirical analysis of the needed action are not distinct from one another but are closely intertwined.
  2. Since means and ends are not distinct, means-end analysis is often inappropriate or limited.
  3. The test of a “good” policy is typically that various analysts find themselves directly agreeing on a policy (without their agreeing that it is the most appropriate means to an agreed objective).
  4. Analysis is drastically limited:
    1. Important possible outcomes are neglected.
    2. Important alternative potential policies are neglected.
    3. Important affected values are neglected.
  5. A succession of comparisons greatly reduces or eliminates reliance on theory.

Lindblom discusses each step of the Branch method in detail, for those who wish to do the reading. The points I wish to make begin by drawing a comparison of the Root method with the research support that VLDS provides. There is no doubt that very few agencies of Virginia state government are staffed and funded to perform the in-depth, long-term, theory-based research that university faculty perform each year. It is also, unfortunately, true that few agency staff have the time to stay current in all the published research related to the data for which they are stewards.

The way Lindblom describes the Root method, it is an impossible method for all but the simplest problems: “It assumes intellectual capacities and sources of information that men simply do not possess, and it is even more absurd as an approach to policy when time and money is limited, as is always the case.” Of course, he bases this conclusion on his initial premise that the analyst in question would perform incredible amounts of due diligence in values identification, data collection, and comparison of potentially relevant policies. I think we can put that aside and take a more reasoned, doable view of the Root method, one that assumes an appropriately thorough conduct of due diligence grounded in theory from prior research. It’s possible, just expensive. Even completely thorough research has to make some assumptions.

The Branch method is what we do every day. “This is what we know now. This is where we want to be, and these are the resources we have to make our analysis.” We find agreement without necessarily debating whether a policy is the best, only whether it, as Herbert Simon put it, “satisfices.” We use data trends and comparable measures to confirm our agreement on policy.

Relying on either method alone is sub-optimal. The Root method is too expensive, and it takes too long. Sometimes policy turns on a dime (a 20-minute phone call while one or two queries are performed “live”), and a “researched” decision has to be reached very quickly. The Branch method alone is shallow and may ignore an existing body of research on similar data addressing the same or a closely related question.

Uniting the two methods makes far more sense.

A message that I frequently push, perhaps to the annoyance of some of my colleagues, is that elected officials and their staff have little patience with the in-depth reports from the Root method. They are typically too long, too nuanced, and too detailed for their needs. Further, and this is the most important thing, if the findings of the research are adopted, they want to know that in each succeeding year data will be available to judge the results. Paying for another in-depth study is rarely a considered option.  Thus, the reporting that supports and justifies the Branch method plays a critically important role.

If you accept this model, our next question is the title of the session: When are research and information enough to warrant a change in policy?

I participate occasionally in a forum for people with a certain kind of brain tumor. New members facing treatment decisions frequently struggle with understanding how to decide what to do. The challenge is particularly acute for smaller tumors, where there are more options: watch and wait, micro-surgery, and radio-surgery (radiation). This is compounded further by the fact that each doctor tends to favor his or her own specialty, and thus a patient doing due diligence and seeking a second, or multiple, opinions may become confused. In fact, one of their first posts following “Oh my God, I have been diagnosed with a brain tumor!” is “How do I decide what to do? My surgeon says surgery, but the radiation oncologist says radiation. I don’t have serious symptoms; do I have to do anything at all?”

Even after that point, once a decision is made about surgery or radiation, there may be questions about which surgical approach or which form of radiation therapy. To some degree, the answers to these questions depend on the experience and preferences of the selected surgeon or the availability of specific forms of radio-treatment at the selected facility. Selecting the facility and treatment team is another decision tree once a course of action is chosen.

In my own experience, I had a large tumor and very limited time to decide. I spoke to two surgical teams, and both said very similar things. That made the decision very easy for me, especially within the framework for decision-making I had already established for myself. For example, it was important to me, if possible, to have the surgery done at a university hospital. Closer to home was better for my family than clear across the country. Further, all the research I had done on how to make the decision, and to fully understand the context of my situation, made it possible to recognize that hearing the same message from surgical teams 3,000 miles apart brought true clarity to the situation.

In other words, when you get the same response multiple times, you are probably on to something, assuming you have also done the research to ensure you understand both the question asked and the answer received.

Consistency of results from multiple tests seems a very good place to start. Of course, this implies multiple tests, multiple research projects. It implies, I hope, that good research supported by well-defined theory is a required feature of these tests. It seems to me that policy recommendations based on one study, on one result set, are a poor thing on which to risk the lives of citizens. Our goal should always represent some form of improving the lives of Virginians, and relying on a single set of results runs counter to that goal. To my mind the stakes are too high. This is why agency staff tend toward the conservative, and why the “best” policy is found in the agreement of multiple analysts, perhaps using the mantra, “yeah, we can live with this.”

Lindblom states: “If agreement directly on policy as a test for ‘best’ policy seems a poor substitute for testing the policy against its objectives, it ought to be remembered that objectives themselves have no ultimate validity other than they are agreed upon. Hence agreement is the test of ‘best’ policy in both methods.”

So, when multiple analysts agree, we have another marker as to when to make a policy change.

Distilling these thoughts into a simple list, I see the following as key indicators of when to make policy recommendations:

  • When replicable/replicated research confirms theory.
  • When measures developed from research are reproducible and readily derived from administrative datasets, such as those exposed through VLDS.
  • When multiple analysts agree.

This makes sense to me. It is reasonable and allows time for consideration of the theory, data, and alternatives. I also think this is how our constituents wish us to make policy recommendations. Unfortunately, a lot of policy is not made this way. Sometimes we are given a matter of weeks to formulate a response to a policy question. Clearly this is not much time. Worse, there are calls that come during the legislative session giving us 20 minutes to develop a query against student-level data over multiple years and provide an answer that sets policy, or rather law. It’s not pretty, but that is the nature of law and sausage.

The purpose of VLDS is to support an environment that allows the three indicators above to take place: a mix of sound research, readily produced data and information, and analytic concurrence.