Let's Learn About A.I.
By Nick Elsey
The podcast currently has 10 episodes available.
We discuss graphs first: a data structure used to store relationships between objects. You’ve probably heard the term “social graph” used to describe the friendships in a social network like Facebook or Twitter. These structures pop up frequently in computer science, the natural sciences, and other fields. I give an example of when a graph might be useful, and try to clarify when it may not be.
https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
https://en.wikipedia.org/wiki/Graph_(abstract_data_type)
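To make the social-graph idea concrete, here is a minimal sketch in Python of a graph stored as an adjacency list; the names and friendships below are invented for illustration:

```python
# A minimal sketch of a graph as an adjacency list: each person maps to
# the set of their friends. (Example data is made up.)
social_graph = {
    "alice": {"bob", "carol"},
    "bob": {"alice"},
    "carol": {"alice", "dave"},
    "dave": {"carol"},
}

def are_friends(graph, a, b):
    """Edge lookup: are a and b directly connected?"""
    return b in graph.get(a, set())

def mutual_friends(graph, a, b):
    """Intersect the two neighbor sets to find shared friends."""
    return graph.get(a, set()) & graph.get(b, set())
```

With a structure like this, questions such as "are these two people friends?" or "who do they both know?" become cheap set operations.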
After graphs, we go to a specific subset of graphs, called trees. Whereas a graph can look like a spider’s web or a grid of streets, all trees share the same hierarchical shape, much like a family tree or the org chart of a large company. I discuss when and why trees are appropriate.
https://en.wikipedia.org/wiki/Tree_(data_structure)
https://medium.freecodecamp.org/all-you-need-to-know-about-tree-data-structures-bceacb85490c
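The org-chart analogy can be sketched as a recursive node structure in a few lines of Python; the class name and the example org chart are invented for illustration:

```python
# A minimal sketch of a tree node, modeled on an org chart.
class TreeNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def count_reports(node):
    """Count everyone below this node: direct plus indirect reports."""
    return len(node.children) + sum(count_reports(c) for c in node.children)

# Hypothetical org chart: a CEO with two VPs, one of whom has two engineers.
ceo = TreeNode("CEO", [
    TreeNode("VP Eng", [TreeNode("Engineer A"), TreeNode("Engineer B")]),
    TreeNode("VP Sales"),
])
```

The recursion in `count_reports` works precisely because a tree has no cycles: every child is itself the root of a smaller tree, so the function always terminates.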
Finally, we discuss computational complexity and algorithm analysis. These topics come up frequently when writing code to solve interesting problems. Computational complexity theory is the study of classifying problems by their complexity (in terms of run time, memory used, etc.). Algorithm analysis is concerned with finding the amount of time, memory, or other resources an algorithm needs, usually as a function of the size of the input. Together, these tools allow structured reasoning about the complexity of problems, and about which problems are feasible or infeasible given our current understanding and current hardware. Interestingly, “hard” math problems that cannot be solved with our current computing resources are actually the foundation of secure internet communication: cryptographers rely on what are called trapdoor functions to create secure encryption algorithms.
https://en.wikipedia.org/wiki/Computational_complexity_theory
https://en.wikipedia.org/wiki/Analysis_of_algorithms
https://en.wikipedia.org/wiki/Trapdoor_function
https://en.wikipedia.org/wiki/Discrete_logarithm
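One way to make algorithm analysis concrete is to count the basic operations an algorithm performs as the input grows. This sketch (not from the episode) compares linear search, whose step count grows proportionally with n, against binary search over sorted data, whose step count grows roughly like log2(n):

```python
def linear_search_steps(items, target):
    """Scan left to right, counting comparisons until target is found."""
    steps = 0
    for x in items:
        steps += 1
        if x == target:
            break
    return steps

def binary_search_steps(items, target):
    """Halve a sorted range each iteration, counting iterations."""
    steps, lo, hi = 0, 0, len(items) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1024))  # 1024 sorted items, so log2(n) = 10
```

Searching for the last element takes linear search 1024 comparisons but binary search only about 11 iterations, and that gap widens dramatically as n grows. This kind of step counting, abstracted away from any particular machine, is exactly what "run time as a function of input size" means.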
I hope you all enjoy the episode, and I will talk to you soon! Nick
Next episode we start learning the more technical stuff, so buckle up! Nick
Wikipedia page that describes the categories discussed in this episode:
https://en.wikipedia.org/wiki/Intelligent_agent
We’re back after a long hiatus! In this episode, we start to introduce the more technical definition of an agent and how it interacts with its environment. I also discuss how to grade an agent (rather abstractly), and why the ability to learn is important for something to be considered autonomous or intelligent. Next episode, we will talk about some paradigms for how an agent can be implemented, and how it can learn from its environment. Hopefully these two will be the most boring episodes in the show :)
Nick
Resources:
Russell & Norvig - Artificial Intelligence: A Modern Approach
https://en.wikipedia.org/wiki/Intelligent_agent
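The percept-action cycle described in the episode can be sketched in a few lines; the thermostat agent and its temperature thresholds here are invented for illustration, not taken from the episode or from Russell & Norvig:

```python
# A minimal sketch of a simple reflex agent: a function from percept to action.
# The thermostat example and the 18/24 degree thresholds are made up.
def thermostat_agent(percept):
    """Map a percept (current temperature in C) directly to an action."""
    if percept < 18:
        return "heat"
    elif percept > 24:
        return "cool"
    return "idle"

def run_episode(agent, percepts):
    """The environment feeds the agent a sequence of percepts;
    the agent responds with a sequence of actions."""
    return [agent(p) for p in percepts]
```

An agent this simple has no memory and cannot learn; grading it amounts to checking how well its fixed rules keep the temperature in range, which hints at why learning matters for anything we would call intelligent.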
I hope you enjoy the episode!
Resources:
The book:
Russell & Norvig, Artificial Intelligence: A Modern Approach (2008) http://aima.cs.berkeley.edu/
Turing's original paper on the imitation game:
https://www.csee.umbc.edu/courses/471/papers/turing.pdf
Topics that we covered:
cognitive biases: https://en.wikipedia.org/wiki/List_of_cognitive_biases
logical forms: https://en.wikipedia.org/wiki/Logical_form
complexity theory: https://en.wikipedia.org/wiki/Computational_complexity_theory
websites:
https://en.wikipedia.org/wiki/Artificial_intelligence
https://en.wikipedia.org/wiki/History_of_artificial_intelligence
essays:
https://rodneybrooks.com/forai-the-origins-of-artificial-intelligence/
http://people.csail.mit.edu/brooks/idocs/DartmouthProposal.pdf
books:
Russell & Norvig, Artificial Intelligence: A Modern Approach (2008) http://aima.cs.berkeley.edu/
Lucci & Kopec, Artificial Intelligence in the 21st Century (2012)