#034 Eray Özkural- AGI, Simulations & Safety

12.20.2020 - By Machine Learning Street Talk (MLST)


Dr. Eray Özkural is an AGI researcher from Turkey and the founder of Celestial Intellect Cybernetics. Eray is extremely critical of Max Tegmark, Nick Bostrom, and MIRI founder Eliezer Yudkowsky and their views on AI safety. Eray thinks these views represent a form of neo-Luddism, that their proponents are capturing valuable research budgets with doomsday fear-mongering, and that they effectively want to prevent AI from being developed by those they don't agree with. Eray is also sceptical of the intelligence explosion hypothesis and the argument from simulation.

Panel -- Dr. Keith Duggar, Dr. Tim Scarfe, Yannic Kilcher

00:00:00 Show teaser intro with added nuggets and commentary
00:48:39 Main Show Introduction 
00:53:14 Doomsaying to Control  
00:56:39 Fear the Basilisk!  
01:08:00 Intelligence Explosion Ethics  
01:09:45 Fear the Autonomous Drone! ... or spam
01:11:25 Infinity Point Hypothesis  
01:15:26 Meat Level Intelligence 
01:21:25 Defining Intelligence ... Yet Again  
01:27:34 We'll make brains and then shoot them 
01:31:00 The Universe likes deep learning 
01:33:16 NNs are glorified hash tables 
01:38:44 Radical behaviorists  
01:41:29 Omega Architecture, possible AGI?  
01:53:33 Simulation hypothesis 
02:09:44 No one cometh unto Simulation, but by Jesus Christ  
02:16:47 Agendas, Motivations, and Mind Projections  
02:23:38 A computable Universe of Bulk Automata 
02:30:31 Self-Organized Post-Show Coda 
02:31:29 Investigating Intelligent Agency is Science 
02:36:56 Goodbye and cheers!  
