ARPA-E did a poor job riffing on DARPA

ARPA-E: ARPA-Energy

ARPA-E started operation in 2009. It currently has 20 program managers and a budget of ~$180M as of 2020. That makes it approximately the size of one DARPA office. On the surface it has all the pieces of the ARPA model: empowered program managers coordinating high-risk, high-reward external research.

Deltas

ARPA-E is on the opposite end of the transfer spectrum from IARPA: while IARPA only hands off technology to its funding departments, ARPA-E almost purely targets commercialization as its transfer mechanism (DARPA does both). Transfer via commercialization makes sense given that the Department of Energy doesn’t actually build or deploy energy technology. ARPA-E adapted the ARPA model to a commercialization focus in several ways: it has a dedicated commercialization team and on-staff lawyers, and it explicitly considers how the energy industry will implement a technology before embarking on a program. While these adaptations make ‘sense’, they may be shooting the ARPA model in the foot. The additional apparatus around commercialization makes ARPA-E less flat than DARPA - there are more non-PM staff than PMs, which I suspect removes many of the benefits described in DARPA is relatively tiny and flat.

The DOE funds ARPA-E programs directly, in contrast to DARPA, which is funded by Congress as part of the DoD budget. This subtle difference in funding sources makes ARPA-E less independent from the DOE and may invalidate the conclusion that The government’s influence on DARPA is buffered by opacity and the director. As noted above, ARPA-E is targeting transfer to non-DOE entities, so the increased DOE oversight may lead to one of those situations where an entity with no actual skin in the game has a large say in risky activities.

ARPA-E is explicitly metrics-driven. While this approach certainly jibes with modern sensibilities, my hunch is that metrics can hamstring embryonic technology. Metrics are great when you know what you’re optimizing, but they tend to lead to the Streetlight Effect, where people optimize for the things that can be measured. Do you know what you’re optimizing when you’re still figuring out how a technology works and what it is good for? What would J.C.R. Licklider’s metrics have been for a personal computing program? It is possible that energy has so few relevant metrics that a metrics-driven approach doesn’t cut down on trying weird things, but I’m skeptical. (DARPA funds wacky things that go nowhere)

ARPA-E is fairly process-heavy. I have firsthand accounts of inflexibility around hiring (“despite most of the work being in different parts of the country, you must be full time and in Washington or you will not be invited to all the meetings”) and tools (“you must use the government-issued laptop for everything”). This inflexibility goes against the conclusions of Pay attention to DARPA’s informal process and ignore formal process and DARPA is incredibly flexible with who it hires to be program managers.


Results

Most reports about ARPA-E’s impact focus on the amount of money it has spent and are light on concrete outcomes. It feels like a “dog that didn’t bark” type of situation. The closest thing to an outlier result was funding the research that led to a solar-cell company named 1366 hitting 19.6% efficiency.

ARPA-E is also just operating in a brutal area: the energy industry is notoriously conservative, partially because it is beholden to many stakeholders - various kinds of investors, governments, customers, and non-customer residents in the areas where energy companies operate.
