Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Could a superintelligence deduce general relativity from a falling apple? An investigation, published by titotal on April 23, 2023 on LessWrong.
Introduction:
In the article/short story “That Alien Message”, Yudkowsky writes the following passage, as part of a general point about how powerful superintelligences could be:
Riemann invented his geometries before Einstein had a use for them; the physics of our universe is not that complicated in an absolute sense. A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass.
As a computational physicist, I was struck by this passage. I think I can prove that it is wrong, or at least misleading. In this post I will cover a wide range of arguments for why I don't think it holds up.
Before continuing, I want to state my interpretation of the passage, which is what I’ll be arguing against.
1. Upon seeing three frames of a falling apple and with no other information, a superintelligence would assign a high probability to Newtonian mechanics, including Newtonian gravity. So if it were ranking potential laws of physics by likelihood, “Objects are attracted to each other by their masses in the form F = Gm₁m₂/r²” would be near the top of the list. (A sketch of what three frames actually pin down follows this list.)
2. Upon seeing only one frame of a falling apple and one frame of a single blade of grass, a superintelligence would assign a decently high likelihood to the theory of general relativity as put forward by Einstein.
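To make interpretation 1 concrete, here is a minimal sketch (my own illustration, not something from the original post) of what three equally spaced frames of a falling apple actually pin down: the second finite difference of the apple's position gives a constant acceleration, which Newton's F = Gm₁m₂/r² reproduces for any falling mass. The frame interval, the pixel-to-meter scale, and the extracted heights are all assumed here; a real image supplies none of them for free.

```python
# Toy illustration: recover an acceleration from three assumed frames of a
# falling apple and compare it to the Newtonian prediction. The heights and
# frame rate are made up for this sketch; a superintelligence would have to
# infer both from raw pixels.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # radius of Earth, m

dt = 1.0 / 30.0                        # assumed time between frames, s
heights = [2.00000, 1.99456, 1.97822]  # assumed apple heights, m

# Three equally spaced positions determine exactly one constant
# acceleration, via the second finite difference:
a_observed = (heights[2] - 2 * heights[1] + heights[0]) / dt**2

# F = G*m1*m2/r^2 predicts the same acceleration for any apple mass:
a_newton = -G * M_EARTH / R_EARTH**2

print(f"acceleration from the frames: {a_observed:.2f} m/s^2")
print(f"Newtonian prediction:         {a_newton:.2f} m/s^2")
```

Note that any theory predicting roughly uniform acceleration over those two frame intervals fits the same three numbers equally well, which already hints at the underdetermination problem discussed below.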
This is not the only interpretation of the passage. It could just be saying that an AI would “invent” general relativity in the sense of inventing the equations by idly playing around, like mathematicians playing with 57-dimensional geometries or whatever. However, the phrases “under direct consideration” and “dominant hypothesis” imply that these aren’t just inventions; they are deductions. A monkey on a typewriter could write down the equations for Newtonian gravity, but that doesn’t mean it deduced them. What matters is whether the knowledge gained from the frames could be used to achieve things.
I’m not interested in playing semantics or trying to read minds here. My interpretation above seems to be what the commenters took the passage to mean. I’m mainly using the passage as a jumping-off point for a discussion of the limits of first-principles computation.
Here is a frame of an apple, and a frame of a field of grass.
I encourage you to ponder these images in detail. Try to work out for yourself the most plausible method for a superintelligence to deduce general relativity from one apple image and one grass image.
I’m going to assume that the AI can somehow figure out the encoding of the images and actually “see” images like the ones above. This is by no means guaranteed; see the comments of this post for a debate on the matter.
What can be deduced:
It’s true that the laws of physics are not that complicated to write down. Doing the math properly would not be an issue for anything we would consider a “superintelligence”.
It’s also true that going from 0 images of the world to 1 image imparts a gigantic amount of information. Before seeing the image, the number of potential worlds it could be in is infinite and unconstrained, apart from cogito ergo sum. After seeing 1 image, all potential worlds that are incompatible with the image can be ruled out, and many more can be deemed unlikely. This still leaves behind an infinite number of plausible worlds, but there are significantly more constraints on those worlds.
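To make this updating step concrete, here is a toy sketch (my illustration; the candidate worlds and likelihoods are invented) of the Bayesian move just described: worlds that are incompatible with the observed image get likelihood 0 and are ruled out, and the survivors are reweighted by Bayes' rule.

```python
def update(priors, likelihoods):
    """One Bayesian update over a discrete set of candidate worlds.
    A likelihood of 0 means the world could not have produced the image."""
    unnormalized = {w: priors[w] * likelihoods[w] for w in priors}
    total = sum(unnormalized.values())
    return {w: p / total for w, p in unnormalized.items()}

# A uniform prior over four stand-in worlds (the real space is infinite).
priors = {"world_A": 0.25, "world_B": 0.25, "world_C": 0.25, "world_D": 0.25}

# Seeing one image: world_D is flatly incompatible and drops to probability
# 0; the others are merely reweighted, and all remain in play.
posterior = update(priors, {"world_A": 0.6, "world_B": 0.5,
                            "world_C": 0.1, "world_D": 0.0})
print(posterior)  # world_A: 0.50, world_B: ~0.42, world_C: ~0.08, world_D: 0.0
```

The toy version hides the hard part: with a continuum of candidate worlds, everything hinges on the prior and on how sharply the likelihoods actually separate the hypotheses.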
With 2 images, the constra...