
"GPT-3 was trained on is so large that the model contains a certain fraction of the actual complexity of the world. But how much is actually inside these models, implicitly embedded within these neural networks?
I decided to test this and see if I could examine the GPT-3 model of the world through the use of counterfactuals. Specifically, I wanted to see if GPT-3 could productively unspool histories of the world if things were slightly different, such as if the outcome of a war were different or a historical figure hadn’t been born. I wanted to see how well it could write alternate histories." - Samuel Arbesman
From the Cabinet of Wonders newsletter by Samuel Arbesman
Great tweet thread summarizing his post
"Securities" podcast is produced and edited by Chris Gates
"GPT-3 was trained on is so large that the model contains a certain fraction of the actual complexity of the world. But how much is actually inside these models, implicitly embedded within these neural networks?
I decided to test this and see if I could examine the GPT-3 model of the world through the use of counterfactuals. Specifically, I wanted to see if GPT-3 could productively unspool histories of the world if things were slightly different, such as if the outcome of a war were different or a historical figure hadn’t been born. I wanted to see how well it could write alternate histories." - Samuel Arbesman
From Cabinet of Wonders newsletter by Samuel Arbesman
Great tweet thread summarizing his post
"Securities" podcast is produced and edited by Chris Gates