# You can create a program design discipline that enables better research and development
There is a lot of discourse about how to manage research, but not about the discipline of *planning* research. Now, the idea of planning research may have made some of you just throw up in your mouth a little bit. [^5]
I’m going to put on my Wittgenstein hat here ([[Wittgenstein: Reality is Shaped by the Words we Use]]) and assert that I suspect what people are actually objecting to is *scheduling* research around *deliverables*, which I see as very different from planning research around goals.
Planning on some scope and timescale is essential to *all* research. Even “I’m going to put this liquid in this other liquid and see what happens!” is a plan. Similarly, your goal is to see and report on what happens when you mix those two liquids. These examples are arguably so trivial as to be meaningless, but they provide both an existence proof and a place to build on top of. The chemist mixing the chemicals probably (maybe not!) has a bigger plan that involves mixing many combinations of chemicals in order to chase a bigger goal of e.g. understanding which combination produces the most heat or why.
By contrast, this chemist doesn’t necessarily have or need a specific deliverable to do good work. It might literally be impossible for her to e.g. find a mixture that produces energy above a specific threshold. Even if she could, it might involve testing an order of magnitude more mixtures or even inventing an entirely new synthesis technique to do it. Requiring her to work on a specific schedule would be inane. [[For a certain class of research, deliverables can be detrimental]][^1]. I suspect that deliverables and schedules are byproducts of the commoditization process that happened to research during the 20th century (see [[odlyzkoDeclineUnfetteredResearch1995]] and [[Analogizing the commoditization of research to the agricultural revolution]]).
So, we’ve established that planning and goal setting (separate from schedules and deliverables) is necessary for any kind of R+D work. If that’s the case, there are a ton of follow-on questions. What kind of goals should be set and what timescales should be planned for what sort of work? Should you parallelize work or serialize it? When? What are the key constraints keeping you from a breakthrough? What level of fidelity will stifle creativity and what level will provide the focus to make breakthroughs? Which tools will maximize the latter and minimize the former, and which frameworks can make the difference between good work and bad? I would posit that these questions and their ilk, what I might refer to as “program design,” are woefully under-theorized.
In addition to the big-headed questions, the objective truth seems to be that culturally, we kind of suck at designing research-y programs. [[Most roadmaps suck]]. Whether or not we’ve actually become *worse* at program design is an open question. For the sake of argument, I would say yes, due to reasonable incentives - there are fewer areas outside of research that need it. [[Instead of planning not working as well as people in the past thought as a society we’ve become worse at technological planning]].
The question of whether we’ve become worse at program design is an important preliminary for the more important question - “could we do program design better?”[^2] If we’ve gotten worse, that’s strong evidence that we could do better. The hypothesis that we could do program design better isn’t dead if we’ve always sucked at it, but it certainly faces more of an uphill battle.
One reason program design is under-theorized is that (like so many disciplines which are meant to supplement practice) the people who are good at it are too busy applying it to spend time and effort making it legible. There are no research coaches. Program design also suffers acutely from siloization - each organization (or individual!) seems to reinvent the wheel. This isn’t to say that we should aim for standardization - program design will always be very context-dependent. However, it seems possible to shoot for a baseline of best practices and frameworks for how to build on top of them.
There are many places that the discipline might be able to draw on. A big part of it will be trying to make legible the process that skilled practitioners already use to create things like [[Adam Marblestone]]’s [[Climate Technology Primer]] or [[José Luis Rincon]]’s [[Longevity FAQ]] (José does a good job unpacking his process in [[Longevity FAQ Making Of]] - one could imagine a line of research that focuses on understanding why some people are good at this process and some are not.) Adam and [[Ed Boyden]] hinted at the possibility of more structured discovery in [[boydenArchitectingDiscoveryModel2019]].

History is another place to dig - how did people manage research-adjacent projects that had [shockingly fast results](https://patrickcollison.com/fast)? Some of it was just the individuals involved, but I can’t shake the hunch that there is transferable knowledge in the tools developed to manage early space programs [[The Secrets of Apollo]] or the [[PERT guide for Management]]. How did the creators of [[deanFusionPowerMagnetic1976]] think about the different possibilities and tradeoffs? Of course, the value of historical study depends on the aforementioned question of whether we’ve become worse or not.

There has been a lot of work on how to do better project management, which is incredibly relevant. One could even argue that researchy program design is just part of project management. I would argue that standard project management techniques ([[Six Sigma is a process improvement framework created at Motorola]], [[Matrix Management - Wikipedia]], etc.) need so much modification to deal with the uncertainty, timescales, and fractured expertise inherent in research programs that we’re talking about something beyond a simple extension of project management. There is a whole range of methodologies like [[TRIZ]], [[Wigmore Guide - Paper]], or [[Wardley Maps]] that feel like they have nuggets of truth but are for the most part post-hoc explanations of success used to get consulting gigs in areas with loose feedback loops like big companies or law schools. So, like any good discipline, program design would start strongly adjacent to crackpots and mysticism![^3]
There are several areas that I suspect are important for good program design where I haven’t found much written prior art unless you squint very hard: systematic ways to think about technological constraints and tradeoffs; ways to visually represent constraints and possibilities in a way that can actually aid decision making and reveal new possibilities; and the tactical psychology of creating programs. Each of these is something that people already do, but in an ad-hoc way. Any time someone says something along the lines of “oh, you can’t do long-range battery-powered commercial flight - it’s too heavy” they’re implicitly bundling a ton of constraints and assumptions: What kind of batteries are we talking about? What is their power density? What components contribute what fraction to their weight? How much power do they need to put out for how long to be useful? What are we assuming about the propulsion system for the plane? What are the degrees of freedom for all of these components? What are the dependencies between them? How hard would it be to improve any one of those components by how much, and what effect would that have on the whole system? The same sort of questions are important for any technology - I just used battery-powered planes to de-abstract. It’s also impossible to hold all of this in your head at once, let alone transfer it into someone else’s head - thus the need for thinking frameworks and representation tools.
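To make the constraint-bundling point concrete, here is a minimal back-of-envelope sketch in Python. Every number in it is an illustrative assumption I made up for the example (not a sourced figure), and the structure is the point: the “it’s too heavy” conclusion is just a handful of separate assumptions multiplied together.

```python
# Back-of-envelope sketch of the constraint bundle behind
# "long-range battery-powered flight is too heavy".
# Every number here is an illustrative assumption, not a sourced figure.

battery_specific_energy_wh_per_kg = 250   # assumed pack-level Li-ion specific energy
cruise_power_kw = 20_000                  # assumed cruise power for a mid-size airliner
flight_hours = 6                          # assumed "long-range" flight duration
electric_efficiency_factor = 2.5          # assumed efficiency gain of electric
                                          # propulsion over combustion

# Energy the batteries must deliver over the flight, discounted by the
# assumed efficiency advantage of electric propulsion.
required_energy_kwh = cruise_power_kw * flight_hours / electric_efficiency_factor

# Battery mass implied by that energy requirement.
battery_mass_kg = required_energy_kwh * 1_000 / battery_specific_energy_wh_per_kg

print(f"Required energy: {required_energy_kwh:,.0f} kWh")
print(f"Implied battery mass: {battery_mass_kg / 1_000:,.0f} tonnes")

# With these assumptions the batteries alone come out to roughly 200 tonnes,
# comparable to an entire airliner's maximum takeoff weight. Change any single
# assumption (specific energy, cruise power, efficiency factor) and the
# conclusion shifts, which is exactly why "it's too heavy" is really a bundle
# of separate constraints, each with its own degrees of freedom.
```

Even a toy model like this forces the hidden assumptions out into the open, which is most of what I mean by thinking frameworks and representation tools.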
And then there’s the shockingly human and under-discussed process of mining that knowledge in the first place. Perhaps controversially, I’m convinced that [[Most human knowledge is not encoded]], especially on the knowledge frontier. As a consequence, a lot of program design is actually applied psychology. As the program designer, you need to not only nudge people to talk about their area of expertise in a way that they probably haven’t done before and generate excitement about the idea of a bigger program, but also figure out *who* the right people are to talk to and get them to talk to you in the first place! This piece of program design resembles some combination of sales and user research. Just as important as (or perhaps more important than!) the question of “what work needs to be done?” is “who is best suited to do the work?” It’s quite funny actually - while there seems to be a strong cultural consensus that good outcomes in high-uncertainty endeavors depend heavily on the specific individuals involved[^4], answers to the question “who is best suited to do this work?” are almost entirely absent when people lay out a research program.
It’s important to acknowledge that the hypothesis that you can create a program design discipline could easily be false. Perhaps guiding research is more like creating art - one can talk about specific techniques (**art**: stenciling, mixing paints, shadows, etc. **program design**: talking to experts, budgeting, evaluating claims) but the process itself is too context-dependent and holistic to improve through the tools and abstractions that a discipline would provide. Perhaps more dangerous is the possibility that any formalization ventures out of the realm of planning+goals and into the realm of schedules+deliverables, hamstringing the work you intend to enable.
Another strong argument against the formalization of program design is that the real issues are unknown unknowns that are undiscoverable or unthinkable before you run into them in the process of doing the work. Paradigm shifts are, almost by definition, unimaginable before they happen. The presumption that we could do a better job than practitioner intuition could be a foolish waste of brain-cycles at best, or could actively steer people away from actually useful work at worst. “How do you draw a roadmap off the edge of the map?” is a valid question.
I suspect if you go in explicitly acknowledging the potential failure modes, the potential upsides are worth it. [[The possible upsides of a healthy program design discipline seem like they could be huge]].
The trick is that to do risky experiments (in the Popperian sense - see [[Science as Falsification]]) program design as a discipline needs to be embedded in an organization running programs. [[There are a number of experiments that seem like they can only be done in the context of an organization]]. Without an organizational home, program design would suffer the same fate as other academic or consultant-driven disciplines of practice. These disciplines tend to sound great on paper, so someone wants to implement them, but the implementation often has so many restrictions, so much imperfect information, and such strung-out feedback loops that it’s unclear how much of the success or failure of the project is attributable to the discipline. Developing better program design is one of the reasons why, despite the fact that [[Running multiple organizational experiments flies in the face of common wisdom]], [[A private DARPA riff both can and must do multiple experiments at once]].
### Related
* [[Person targeted questions for roadmapping]]
* [[Design for roadmapping system]]
* [[Program design could be done as a residence program]]
* [[TRIZ is a framework for creating inventions]]
* [[What are the eight steps of Wigmore analysis translated to science?]]
* [[§Program Design]]
* [[Program design is like creating the lego instruction book]]
### Some specific research questions that the discipline may want to go after
* [[Is there room for an expanded TRL framework?]]
[^1]: Of course, [[For a certain class of research, deliverables can be helpful]]
[^2]: Yes, “better” is a vague word. In the context of program design it means some combination of “faster, cheaper, more successfully, enabling programs that wouldn’t exist otherwise, and generating more knowledge in both successes and failures.”
[^3]: See: alchemy, astrology, and evolutionary psychology.
[^4]:[[The things that make “great talent” in high variance industries boils down to the ability to successfully make things happen under a lot of uncertainty]]
[^5]:[[riconFundPeopleNot2020]]
<!-- #project/spectech/roadmapping #evergreen -->