LW - The Brain is Not Close to Thermodynamic Limits on Computation by DaemonicSigil

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Brain is Not Close to Thermodynamic Limits on Computation, published by DaemonicSigil on April 24, 2023 on LessWrong.
Introduction
This post is written as a response to jacob_cannell's recent post Contra Yudkowsky on AI Doom. He writes:
EY correctly recognizes that thermodynamic efficiency is a key metric for computation/intelligence, and he confidently, brazenly claims (as of late 2021), that the brain is about 6 OOM from thermodynamic efficiency limits
EY is just completely out of his depth here: he doesn't seem to understand how the Landauer limit actually works, doesn't seem to understand that synapses are analog MACs which minimally require OOMs more energy than simple binary switches, doesn't seem to understand that interconnect dominates energy usage regardless, etc.
Most of Jacob's analysis for brain efficiency is contained in this post: Brain Efficiency: Much More than You Wanted to Know. I believe this analysis is flawed with respect to the thermodynamic energy efficiency of the brain. That's the scope of this post: I will respond to Jacob's claims about thermodynamic limits on brain energy efficiency. Other constraints are out of scope, as is a discussion of the rest of the analysis in Brain Efficiency.
The Landauer limit
Just to review quickly, the Landauer limit says that erasing 1 bit of information has an energy cost of kT ln 2. This energy must be dissipated as heat into the environment. Here k is Boltzmann's constant, while T is the temperature of the environment. At room temperature, this is about 0.02 eV.
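To make that number concrete, here's a minimal sketch in Python (assuming "room temperature" means T = 300 K) that evaluates kT ln 2:

```python
# Landauer limit at an assumed room temperature of T = 300 K.
import math

k_B = 1.380649e-23    # Boltzmann's constant, J/K
T = 300.0             # assumed room temperature, K
J_PER_EV = 1.602176634e-19

landauer_J = k_B * T * math.log(2)   # minimum heat dissipated per erased bit
print(f"{landauer_J:.2e} J = {landauer_J / J_PER_EV:.4f} eV")
# -> 2.87e-21 J = 0.0179 eV, i.e. roughly 0.02 eV
```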
Erasing a bit is something that you have to do quite often in many types of computations, and the more bit erasures your computation needs, the more energy it costs to do that computation. (To give a general sense of how many erasures are needed to do a given amount of computation: if we add n-bit numbers a and b to get (a + b) mod 2^n, and then throw away the original values of a and b, that costs n bit erasures, i.e. an energy cost of n · kT ln 2.)
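As a sanity check on that count, here's a small sketch (n = 4 is an arbitrary choice for brevity) that counts how many input pairs (a, b) map to each sum. Every sum has exactly 2^n preimages, so discarding the inputs erases log2(2^n) = n bits:

```python
# Count preimages of (a + b) mod 2**n over all n-bit input pairs.
import math
from collections import Counter

n = 4
counts = Counter((a + b) % 2**n for a in range(2**n) for b in range(2**n))

# Every output value is hit by exactly 2**n input pairs...
assert set(counts.values()) == {2**n}
# ...so throwing away (a, b) after keeping the sum erases n bits.
print(f"bits erased: {math.log2(2**n):.0f}")  # -> 4
```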
Extra reliability costs?
Brain Efficiency claims that the energy dissipation required to erase a bit becomes many times larger when we try to erase the bit reliably.
The key transition error probability α is constrained by the bit energy: α = e^(−E_b/(k_B T)). Here's a range of bit energies (in electronvolts) and the corresponding minimal room-temperature switch error rates:
α = 0.49 at E_b = 0.02 eV
α = 0.01 at E_b = 0.1 eV
α = 10^−25 at E_b = 1 eV
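For reference, here's a quick evaluation of that formula (a sketch assuming T = 300 K, so k_B T ≈ 0.0259 eV). It reproduces the exponential falloff of error rate with bit energy, though the exact values come out somewhat different from the quoted table (e.g. about 10^−17 rather than 10^−25 at 1 eV), so the table presumably assumes a somewhat different temperature or error model:

```python
# alpha = exp(-E_b / (k_B * T)) at an assumed T = 300 K.
import math

kT_eV = 0.0259  # k_B * T in eV at 300 K (assumption)

for E_b in (0.02, 0.1, 1.0):
    alpha = math.exp(-E_b / kT_eV)
    print(f"E_b = {E_b:4} eV -> alpha = {alpha:.2e}")
# -> 4.62e-01, 2.10e-02, 1.71e-17
```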
This adds a factor of about 50 to the energy cost of erasing a bit, so this would be quite significant if true. To back up this claim, Jacob cites this paper by Michael P. Frank. The relevant equation is pulled from section 2. However, in that entire section, Frank is temporarily assuming that the energy used to represent the bit internally is entirely dissipated when it comes time for the bit to be erased. Dissipating that entire energy is not required by the laws of physics, however. Frank himself explicitly mentions this in the paper (see section 3): The energy used to represent the bit can be partially recovered when erasing it. Only kT ln 2 must actually be dissipated when erasing a bit, even if we ask for very high reliability.
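For reference, the factor of about 50 is just the ratio of the table's 1 eV bit energy to the Landauer minimum; a quick sketch (again assuming T = 300 K):

```python
# Ratio of the 1 eV "reliable" bit energy to the Landauer minimum at 300 K.
import math

bit_energy_eV = 1.0                 # bit energy quoted for alpha = 10**-25
landauer_eV = 0.0259 * math.log(2)  # kT * ln(2) at an assumed T = 300 K

print(f"overhead factor: {bit_energy_eV / landauer_eV:.0f}")  # -> 56
```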
(I originally became suspicious of Jacob's numbers here based on a direct calculation. Details in this comment for those interested.)
Analog signals?
Quoting Brain Efficiency:
Analog operations are implemented by a large number of quantal/binary carrier units; with the binary precision equivalent to the signal to noise ratio where the noise follows a binomial distribution.
Because of this analog representation, Jacob estimates about 6000 eV required to do the equivalent of an 8-bit multiplication. However, the laws of physics don't require us to do our floating point operations in analog. "are implemented" does not imply "have to be implemented". Digital multiplication of two 8-bit ...