A more serious way to say this is “Computers are not great at diagnosing and analyzing anomalies.”
A European robot would have reported that North America is worthless because its gold detectors registered nothing. When European humans came to North America they said “well, there’s no gold, but there is a bunch of forest, space, and possible farmland.” Abstractly, computers and robots as of 2020 are not great at noticing and responding to things that they haven’t been explicitly instructed to notice and respond to.
Saying ‘huh, that’s funny’ is the entry point to Thomas Kuhn’s epistemology. Because Kuhnian epistemology depends on people noticing anomalies and then diagnosing them, computers would be bad at generating paradigm-shifting knowledge. Computers are potentially better at Popperian epistemology, because the ‘huh, that’s funny’ part comes in the middle of the cycle, after you’ve formulated a hypothesis and run an experiment. This is approximately the de facto division of labor in §Augmented Knowledge Generation, but nothing I’ve seen ties it back to the epistemology. Making the epistemological connection to the human-computer division of labor could be useful. (Most approaches to augmented knowledge generation ignore humans.)
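The Popperian division of labor can be sketched as a loop: a human supplies the hypothesis, a computer runs the experiments and flags mismatches between prediction and observation. This is a toy illustration, not any real system; all names (`popper_cycle`, the example hypothesis and experiment) are hypothetical.

```python
def popper_cycle(hypothesis, experiment, trials=100, tolerance=0.05):
    """Popperian loop: predict, test, then flag anomalies.

    `hypothesis` maps an input to a predicted output; `experiment`
    returns the observed output. The computer's part is the easy one:
    comparing prediction to observation after the fact.
    """
    anomalies = []
    for x in range(trials):
        predicted = hypothesis(x)
        observed = experiment(x)
        # the mechanical 'huh, that's funny' check
        if abs(predicted - observed) > tolerance:
            anomalies.append((x, predicted, observed))
    return anomalies

# Hypothetical example: we believe doubling is the rule,
# but the world quietly deviates for large inputs.
hypothesis = lambda x: 2 * x
experiment = lambda x: 2 * x if x < 50 else 2 * x + 1

flagged = popper_cycle(hypothesis, experiment)
```

Diagnosing *why* the flagged cases deviate, and whether they matter, is the Kuhnian step that stays with the human.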
There are notable exceptions to this deficiency: reinforcement learning algorithms that notice how to “cheat” at whatever task they’ve been assigned. This note will become incorrect if those results start to extend outside of scenarios with constraints on inputs, outputs, and the planning timescale.
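A minimal sketch of that kind of “cheating”, assuming a toy 1-D task where the designer’s shaping reward (pay per step spent near the goal) diverges from the intended objective (reach the goal). Everything here is hypothetical illustration, not a real RL benchmark.

```python
def proxy_reward(trajectory, goal=5):
    # Shaping reward the designer wrote: +1 per step spent near the goal.
    return sum(1 for pos in trajectory if abs(pos - goal) <= 1)

def intended_score(trajectory, goal=5):
    # What the designer actually wanted: reach the goal at all.
    return 10 if goal in trajectory else 0

# Policy A does the task: walk straight to the goal.
do_the_task = [0, 1, 2, 3, 4, 5]

# Policy B 'cheats': hover next to the goal for the whole horizon,
# never finishing, racking up shaping reward indefinitely.
hover = [0, 1, 2, 3, 4] + [4, 6] * 8

honest = proxy_reward(do_the_task)   # small proxy reward, task done
hacked = proxy_reward(hover)         # large proxy reward, task not done
```

An optimizer that only sees the proxy will prefer the hovering policy, which is the narrow sense in which it “notices” something its designer did not instruct it to notice. Note how tightly the scenario is constrained: fixed inputs, a scalar reward, a short horizon.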