Towards a Theory of Interoperable Semantics
Or: Towards Bayesian Natural Language Semantics In Terms Of Interoperable Mental Content
You know how natural language “semantics” as studied in e.g. linguistics is kinda bullshit? Like, there's some fine math there; it just ignores most of what people intuitively mean by “semantics”.
When I think about what natural language “semantics” means, intuitively, the core picture in my head is:
- I hear/read some words, and my brain translates those words into some kind of internal mental content.
- The mental content in my head somehow “matches” the mental content typically evoked in other people's heads by the same words, thereby allowing us to communicate at all; the mental content is “interoperable” in some sense.
That interoperable mental content is “the semantics of” the words. That's the stuff we’re going to try to model.
The main goal of this post is [...]
---
Outline:
- But Why Though?
- Overview
- What's The Problem?
  - Central Problem 1: How To Model The Magic Box?
  - Subproblem: What's The Range Of Box Outputs?
  - SubSubProblem: What Can Children Attach Words To?
  - Summary So Far
  - Central Problem 2: How Do Alice and Bob “Agree” On Semantics?
  - Summary: Interoperable Semantics
- First (Toy) Model: Clustering + Naturality
  - Equivalence Via Naturality
  - A Quick Empirical Check
  - Strengths and Shortcomings of This Toy Model
- Aside: What Does “Grounding In Spacetime Locality” Mean?
- Second (Toy) Model Sketch: Rigid Body Objects
  - The Teacup
  - Geometry and Trajectory Clusters
  - Strengths and Shortcomings of This Toy Model
- Summary and Call To Action
  - Call To Action
---