A charge often laid at the door of large language models (LLMs) is that they rely on probabilistic generation, the assumption being that this is somehow a bad thing, and that more deterministic behaviour would be preferable in a future artificial general intelligence (AGI). Before the advent of LLMs, almost all practical computer systems followed deterministic logic encoded in their software, but as I will argue in this episode, the future AGI will be neither deterministic nor deductively logical.
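To make the contrast concrete, here is a minimal sketch (my own illustration, not taken from Jaynes or from any particular model) of the two decoding styles at the heart of the complaint: a deterministic decoder always takes the highest-scoring next token, while a probabilistic decoder samples from the model's output distribution. The token names and scores are hypothetical.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_decode(tokens, logits):
    """Deterministic: always pick the single highest-scoring token."""
    return tokens[max(range(len(logits)), key=logits.__getitem__)]

def sample_decode(tokens, logits, temperature=1.0):
    """Probabilistic: sample tokens in proportion to their probabilities."""
    probs = softmax(logits, temperature)
    return random.choices(tokens, weights=probs, k=1)[0]

# Hypothetical next-token candidates and scores, for illustration only.
tokens = ["cat", "dog", "fish"]
logits = [2.0, 1.5, 0.1]

print(greedy_decode(tokens, logits))             # always "cat"
print(sample_decode(tokens, logits, temperature=0.8))  # usually "cat", sometimes not
```

Run repeatedly, the greedy decoder never varies, while the sampler occasionally produces the lower-probability tokens; that variability is exactly the behaviour the charge objects to.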
This episode is broadly based on a chapter from E. T. Jaynes (2003), Probability Theory: The Logic of Science. Cambridge, UK: Cambridge University Press.