# ROI is tricky when you’re talking about advancing humanity

Anybody who’s taken a science course knows that you can only compare two quantities when they are measured in the same unit. So if you want to compare two processes based on a single number, you need to convert some combination of their inputs and outputs into the same unit.

In the realm of choosing how to deploy money, the instinct to compare possible projects on the basis of ROI makes a lot of sense. Funding the highest-ROI projects maximizes the efficiency of the deployed capital and removes inevitably biased human judgement from the final decision. ROI makes sense for investments where the inputs and outputs are actually denominated in a scalar (dollar) value.

There are two major reasons ROI is a flawed way to think about evaluating projects that aim to advance humanity.

First, comparing things based on ROI requires converting everything into a single metric. There are three reasons why you would want to reduce everything down to a single metric:

1. Debiasing
2. External justification
3. The ability to prioritize

Second, any concept of ROI needs to account for the fact that there are conceivably infinite upsides to be had and infinite downsides to be prevented. General-purpose superintelligent AI is effectively infinite upside or infinite downside. The toy sketch at the end of this note illustrates how an unbounded outcome breaks an ROI ranking.

* [[Futuristic goals are not orthogonal]]
* [[What are the dimensions that people care about in project selection]]

<!-- #evergreen -->

[Web URL for this note](http://notes.benjaminreinhardt.com/ROI+is+tricky+when+you’re+talking+about+advancing+humanity)

[Comment on this note](http://via.hypothes.is/http://notes.benjaminreinhardt.com/ROI+is+tricky+when+you’re+talking+about+advancing+humanity)
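A minimal sketch of the point above, with hypothetical project names and numbers (none of these appear in the note itself): ROI ranks projects cleanly when gains and costs are both finite dollar figures, but the moment one candidate's modeled upside (or downside) is unbounded, it dominates every comparison and the ranking stops carrying information.

```python
# Toy sketch: ROI ranking works for dollar-denominated projects and
# degenerates when a project's modeled payoff is unbounded.
# All names and figures are hypothetical.

def roi(gain: float, cost: float) -> float:
    """Classic ROI: net return per dollar invested (both in dollars)."""
    return (gain - cost) / cost

projects = {
    "widget factory": roi(gain=1_500_000, cost=1_000_000),  # ROI = 0.5
    "SaaS startup": roi(gain=5_000_000, cost=1_000_000),    # ROI = 4.0
    # An "advance humanity" project whose upside isn't a finite dollar
    # figure: forcing it into the formula yields an infinite ROI.
    "superintelligent AI": roi(gain=float("inf"), cost=1_000_000),
}

# The first two compare sensibly; the third sits at +inf (or -inf if the
# downside scenario is modeled instead), swamping every finite alternative.
for name, r in sorted(projects.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: ROI = {r}")
```

The same degeneracy shows up with any sufficiently large finite estimate: once one payoff sits orders of magnitude above the rest, the ranking is determined entirely by that single speculative number.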