
(The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2022 Retraice, Inc.)
AIMA4e Annotations
A companion to the great white brick.
As of November 23, 2022
(Start date: November 21, 2022.)
[1]retraice.com
PREFACE
* The phenomenon: intelligent agents.
* The discipline: artificial intelligence, "the study of agents that receive percepts from the environment and perform actions." (vii)
* Aspects of the phenomenon:
  + Agent function: "Each ...agent implements a function that maps percept sequences to actions" (vii) (a minimal sketch follows this list)
    o Ways to represent agent functions include: "reactive agents, real-time planners, decision-theoretic systems, and deep learning systems." (vii)
  + Learning:
    o "a construction method for competent systems" (viii)
    o "a way of extending the reach of the designer into unknown environments." (viii)
  + Goals:
    o Robotics and vision:
      # "not ...independently defined problems"
      # "[things] in the service of achieving goals."
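To make the agent-function idea concrete, here is a minimal Python sketch of a table-driven agent, in the spirit of AIMA's TABLE-DRIVEN-AGENT pseudocode; the percepts, actions, and table entries are hypothetical illustrations, not the book's.

```python
# A minimal sketch of an agent function: a table maps whole percept
# sequences to actions. The vacuum-world-style percepts, actions, and
# table entries below are hypothetical illustrations.

def make_table_driven_agent(table):
    """Return an agent program that looks up the whole percept sequence."""
    percept_sequence = []

    def agent_program(percept):
        percept_sequence.append(percept)
        # Default to a 'NoOp' action if the sequence isn't in the table.
        return table.get(tuple(percept_sequence), "NoOp")

    return agent_program

# Hypothetical table: (location, status) percept sequence -> action.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))  # -> 'Right'
print(agent(("B", "Dirty")))  # -> 'Suck'
```

Writing the table out also shows why such an agent is impractical on its own: the table needs an entry for every possible percept sequence, which is why other ways of representing the agent function (reactive, planning, decision-theoretic, learning) matter.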
I INTELLIGENCE --"Artificial Intelligence"
1 Intro:
definitions, foundations, history, philosophy, state of the art, risks and benefits
2 Agents:
environments, `good' behavior, agent structure and types
II SOLVING--"Problem-solving"
3 Searching: Looking ahead to find a sequence.
Algorithms, strategies, informed/heuristic strategies.
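As a concrete instance of looking ahead to find a sequence of actions, here is a minimal breadth-first search sketch over a hypothetical graph; the states, edges, and goal are made up for illustration.

```python
from collections import deque

def breadth_first_search(start, goal, neighbors):
    """Return a list of states from start to goal, or None if unreachable.
    neighbors: dict mapping a state to the states reachable in one action."""
    frontier = deque([start])
    parent = {start: None}          # also serves as the 'reached' set
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []
            while state is not None:    # walk parents back to the start
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in neighbors.get(state, []):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None

# Hypothetical map: each edge is one action.
graph = {"Start": ["A", "B"], "A": ["Goal"], "B": ["A"]}
print(breadth_first_search("Start", "Goal", graph))  # -> ['Start', 'A', 'Goal']
```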
4 Complex Environments: More realistic environments.
Local search, optimization, continuous spaces, nondeterministic actions, partially observable environments, online search and unknown environments.
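For the local-search idea, a minimal hill-climbing sketch; the objective function and neighborhood below are hypothetical illustrations, not from the book.

```python
def hill_climb(state, objective, neighbors, max_steps=1000):
    """Greedy local search: move to the best neighbor until none improves."""
    for _ in range(max_steps):
        best = max(neighbors(state), key=objective, default=state)
        if objective(best) <= objective(state):
            return state            # local maximum (possibly not global)
        state = best
    return state

# Hypothetical 1-D objective with a single peak at x = 3.
def objective(x):
    return -(x - 3) ** 2

def neighbors(x):
    return [x - 1, x + 1]

print(hill_climb(0, objective, neighbors))  # -> 3
```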
5 Adversarial Games: Other agents competing against us.
Theory, optimal decisions, alpha-beta tree search, Monte Carlo tree search, stochastic games, partially observable games, limitations.
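A minimal minimax-with-alpha-beta sketch over a hypothetical depth-2 game tree; the tree shape and terminal utilities are illustrations.

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax value of a game tree with alpha-beta pruning.
    A node is either a number (terminal utility) or a list of child nodes."""
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break               # prune the remaining children
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Hypothetical depth-2 tree: MAX chooses a move, MIN replies.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, maximizing=True))  # -> 3
```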
6 Constraint Satisfaction Problems: States as domains, solutions as allowable combinations of states.
Constraint propagation, inference, backtracking search, local search, structure of problems.
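A minimal backtracking-search sketch for a CSP, using a hypothetical three-region map-coloring instance; the variables, domains, and adjacency constraint are illustrations.

```python
def backtracking_search(variables, domains, consistent, assignment=None):
    """Depth-first assignment of values to variables, backtracking on conflict."""
    assignment = dict(assignment or {})
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment):
            result = backtracking_search(
                variables, domains, consistent, {**assignment, var: value})
            if result is not None:
                return result
    return None   # no value works for this variable; backtrack

# Hypothetical map coloring: adjacent regions must get different colors.
adjacent = {("A", "B"), ("B", "C"), ("A", "C")}

def consistent(var, value, assignment):
    return all(value != val
               for other, val in assignment.items()
               if (var, other) in adjacent or (other, var) in adjacent)

variables = ["A", "B", "C"]
domains = {v: ["red", "green", "blue"] for v in variables}
print(backtracking_search(variables, domains, consistent))
# -> {'A': 'red', 'B': 'green', 'C': 'blue'}
```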
III THINKING--"Knowledge, reasoning, and planning"
7 Logical Agents: Forming representations and reasoning before acting.
Knowledge-based agents; representing worlds; logic, world models and `possible worlds'; logic without objects.
8 First-Order Logic: A formal language for objects and their relations.
`Ontological commitment' (what is assumed about reality); syntax, semantics; knowledge engineering (building formal representations of important objects and relations in a domain).
9 First-Order Inference: Reasoning about objects and their relations.
Algorithms to answer any first-order logic question.
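Unification is the core subroutine of those first-order inference algorithms; a minimal sketch follows (no occurs check, and the term representation with '?'-prefixed variable names is a made-up convention for illustration).

```python
def unify(x, y, subst=None):
    """Unify two terms; variables are strings starting with '?'.
    Compound terms are tuples like ('Knows', 'John', '?x')."""
    if subst is None:
        subst = {}
    if subst is False:
        return False
    if x == y:
        return subst
    if isinstance(x, str) and x.startswith("?"):
        return unify_var(x, y, subst)
    if isinstance(y, str) and y.startswith("?"):
        return unify_var(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is False:
                return False
        return subst
    return False

def unify_var(var, value, subst):
    if var in subst:
        return unify(subst[var], value, subst)
    subst = dict(subst)
    subst[var] = value
    return subst

print(unify(("Knows", "John", "?x"), ("Knows", "?y", "Mary")))
# -> {'?y': 'John', '?x': 'Mary'}
```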
10 Knowledge Representation: Representing the real world for problem solving.
What content to put into a knowledge base.
Knowledge representation languages and their uses (315):
* First-order logic: reasoning about a world of objects and relations;
* Hierarchical task networks: for reasoning about plans (chpt. 11);
* Bayesian networks: for reasoning with uncertainty (chpt. 13);
* Markov models: for reasoning over time (chpt. 17);
* Deep neural networks: for reasoning about images, sounds, other data (chpt. 21).
11 Automated Planning: Hierarchical task networks.
Planning for spacecraft, factories, military campaigns; representing actions and states; efficient algorithms and heuristics.
IV UNCERTAINTY--"Uncertain knowledge and reasoning"
12 Quantifying Uncertainty: An answer to the laziness and ignorance that kill formal logic.
Causes of uncertainty are environment types (partially observable, nondeterministic, adversarial); belief state grows big and unlikely fast (384); agents still need a way to act; absolute certainty is impossible; it comes down to importance, likelihood and degree of success (385-386).
Logic fails because of laziness and ignorance; probability theory solves the qualification problem by summarizing the uncertainty (a small numeric sketch follows).
* Laziness: too much work to list everything, or to use such a list;
* Ignorance: (theoretical) there are no complete theories; (practical) we can never run all the tests.
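As a small numeric sketch of summarizing uncertainty rather than enumerating qualifications, here is a Bayes'-rule calculation; the probabilities are hypothetical illustrations, not the book's numbers.

```python
# Bayes' rule: P(cause | effect) = P(effect | cause) * P(cause) / P(effect).
# Hypothetical numbers for a toothache/cavity-style diagnosis problem.
p_cavity = 0.2                      # prior:       P(cavity)
p_toothache_given_cavity = 0.6      # likelihood:  P(toothache | cavity)
p_toothache_given_no_cavity = 0.1   # likelihood:  P(toothache | ~cavity)

# Total probability of the evidence.
p_toothache = (p_toothache_given_cavity * p_cavity
               + p_toothache_given_no_cavity * (1 - p_cavity))

p_cavity_given_toothache = p_toothache_given_cavity * p_cavity / p_toothache
print(round(p_cavity_given_toothache, 3))  # -> 0.6
```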
13 Probabilistic Reasoning [big]: Bayesian networks.
For reasoning with uncertainty by representing causal independence (398) and conditional independence (401) relationships to simplify probabilistic representations of the world.
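A minimal sketch of how those independence relationships simplify the representation: instead of one number per joint assignment, a Bayesian network stores a small conditional table per variable given its parents. The burglary/alarm-style structure and numbers below are hypothetical illustrations.

```python
# Hypothetical network: Burglary -> Alarm <- Earthquake.
# A full joint over 3 binary variables needs 2**3 - 1 = 7 numbers; the
# network stores 1 + 1 + 4 = 6, and the savings grow with more variables.
p_b = 0.01                                   # P(Burglary)
p_e = 0.02                                   # P(Earthquake)
p_a = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(Alarm | B, E)

def joint(b, e, a):
    """P(B=b, E=e, A=a) via the chain rule plus the network's independencies."""
    pb = p_b if b else 1 - p_b
    pe = p_e if e else 1 - p_e
    pa = p_a[(b, e)] if a else 1 - p_a[(b, e)]
    return pb * pe * pa

# Probability the alarm sounds with no burglary and no earthquake.
print(joint(False, False, True))  # -> ~0.00097
```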
14 Probabilistic Reasoning Over Time: Comprehending the uncertain past, present and future.
Belief state plus transition model yields prediction (chpts. 4, 7, 11); percepts and sensor model yield an updated belief state; add probability theory to switch from possible states to probable states.
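A minimal sketch of one predict-then-update filtering step, with a hypothetical two-state rain/umbrella-style model; the probabilities are illustrations.

```python
def forward_step(belief, transition, sensor, evidence):
    """One filtering step: predict with the transition model, then weight by
    the sensor model for the observed evidence and normalize.
    belief[s]         = P(state = s | evidence so far)
    transition[s][s2] = P(next = s2 | current = s)
    sensor[s][e]      = P(evidence = e | state = s)"""
    states = belief.keys()
    predicted = {s2: sum(belief[s] * transition[s][s2] for s in states)
                 for s2 in states}
    updated = {s: sensor[s][evidence] * predicted[s] for s in states}
    norm = sum(updated.values())
    return {s: p / norm for s, p in updated.items()}

# Hypothetical rain/umbrella-style model.
belief = {"rain": 0.5, "dry": 0.5}
transition = {"rain": {"rain": 0.7, "dry": 0.3},
              "dry":  {"rain": 0.3, "dry": 0.7}}
sensor = {"rain": {"umbrella": 0.9, "no_umbrella": 0.1},
          "dry":  {"umbrella": 0.2, "no_umbrella": 0.8}}
print(forward_step(belief, transition, sensor, "umbrella"))
# -> {'rain': ~0.818, 'dry': ~0.182}
```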
15 Probabilistic Programming: Universal formal languages that can represent any computable probability model and come with accompanying algorithms.
Using formal logic and traditional programming languages to represent probabilistic information.
16 Making Simple Decisions: Agents getting what they want in an uncertain world--as much as possible, on average.
Beliefs, desires; utility theory; utility functions; decision networks; the value of information (547); this chapter is concerned with one-shot or episodic decision problems (as opposed to sequential) (cf. 562).
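A minimal sketch of choosing a one-shot action by maximum expected utility; the actions, outcome probabilities, and utilities are hypothetical illustrations.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """Pick the action with maximum expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical decision: take an umbrella or not, given a 30% chance of rain.
actions = {
    "take_umbrella":  [(0.3, 70), (0.7, 80)],   # (P(outcome), utility)
    "leave_umbrella": [(0.3, 0),  (0.7, 100)],
}
print(best_action(actions))  # -> 'take_umbrella' (EU 77 vs 70)
```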
17 Making Complex Decisions: What to do today given decisions to be made tomorrow.
Sequential decision problems (as opposed to one-shot episodic): the agent's utility depends on a sequence of decisions in stochastic (explicitly probabilistic (45)) and partially observable environments. Markov models (563; cf. 463) for reasoning over time.
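A minimal value-iteration sketch for a sequential decision problem (a Markov decision process); the two-state MDP below, with its transition probabilities, rewards, and discount, is a hypothetical illustration.

```python
def value_iteration(mdp, gamma=0.9, eps=1e-6):
    """Iterate the Bellman update U(s) <- R(s) + gamma * max_a sum_s' P(s'|s,a) U(s')
    until the values stop changing (within eps). In this sketch, mdp['transitions']
    maps a state to {action: [(prob, next_state), ...]}, or to None for terminal
    states, and mdp['rewards'] gives R(s)."""
    transitions, rewards = mdp["transitions"], mdp["rewards"]
    U = {s: 0.0 for s in transitions}
    while True:
        U_new = {}
        for s, acts in transitions.items():
            if acts is None:                    # terminal state: value is its reward
                U_new[s] = rewards[s]
            else:
                best = max(sum(p * U[s2] for p, s2 in outcomes)
                           for outcomes in acts.values())
                U_new[s] = rewards[s] + gamma * best
        if max(abs(U_new[s] - U[s]) for s in U) < eps:
            return U_new
        U = U_new

# Hypothetical two-state problem: from 'start', action 'go' reaches the terminal
# 'goal' with probability 0.8 and stays in 'start' otherwise.
mdp = {
    "transitions": {"start": {"go": [(0.8, "goal"), (0.2, "start")]},
                    "goal": None},
    "rewards": {"start": -0.04, "goal": 1.0},
}
print({s: round(u, 3) for s, u in value_iteration(mdp).items()})
# -> roughly {'start': 0.829, 'goal': 1.0}
```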
18 Multiagent Decision Making [big]: When there's more than one agent in the environment.
The nature of such environments and the strategies for problem-solving depend on the relationships between agents: non-cooperative and cooperative game theory; collective decision-making.
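A minimal sketch of non-cooperative game-theoretic reasoning: checking which pure-strategy profiles are Nash equilibria in a hypothetical prisoner's-dilemma-style payoff matrix; the payoffs are illustrations.

```python
import itertools

# Hypothetical 2-player payoff matrix:
# payoffs[(row_action, col_action)] = (row_payoff, col_payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_nash(profile):
    """A profile is a pure-strategy Nash equilibrium if neither player can
    gain by unilaterally switching actions."""
    for player in (0, 1):
        for alt in actions:
            deviated = list(profile)
            deviated[player] = alt
            if payoffs[tuple(deviated)][player] > payoffs[profile][player]:
                return False
    return True

print([p for p in itertools.product(actions, repeat=2) if is_nash(p)])
# -> [('defect', 'defect')]
```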
V LEARNING--"Machine learning"
19 Learning from Examples:
20 Learning Probabilistic Models:
21 Deep Learning: Deep neural networks, for reasoning about images, sounds, other data.
22 Reinforcement Learning:
VI INTERACTING--"Communicating, perceiving, and acting"
23 Natural Language Processing:
24 Deep Learning for Natural Language Processing:
25 Computer Vision:
26 Robotics:
VII CONCLUSIONS--"Conclusions"
27 Philosophy, Ethics, and Safety of AI:
28 The Future of AI:
__
References
1. https://retraice.com/
By Retraice, Inc.