CSEP 573 Applications of Artificial Intelligence (AI) : Rajesh Rao (Instructor) Abe Friesen (TA)
Applications of Artificial Intelligence (AI)
http://www.cs.washington.edu/csep573
© UW CSE AI faculty
Our 2-course meal for this evening
• Part I
Goals
Logistics
What is AI?
Examples
Challenges
• Part II
Agents and environments
Rationality
PEAS specification
Environment types
Agent types
CSEP 573 Goals
• To introduce you to a set of key concepts & techniques in AI
CSEP 573 Topics
CSEP 573 Logistics
• E-mail:
Rajesh Rao rao@cs
Abe Friesen afriesen@cs
• Required Textbook
Russell & Norvig’s “AIMA3” (2009)
• Recommended Textbook
Witten & Frank’s “Data Mining” (2005)
CSEP 573 Logistics
• Grading:
4 homework assignments, each 25% of course grade,
containing a mix of written and programming problems
• Software tool:
Some homeworks will use the data mining and machine
learning software package Weka:
http://www.cs.waikato.ac.nz/~ml/weka/index.html
Documentation online and in the recommended textbook
by Witten and Frank (see previous slide)
CSEP 573 Logistics
• 2 University Holidays:
January 18 and February 15 – No class
• Make-up class:
Thursday, February 18 6:30-9:20 pm
Does this work for everyone?
Enough logistics,
let’s begin!
AI as Science
Physics: Where did the physical universe come
from and what laws guide its dynamics?
AI: ?????
AI as Engineering
• How can we make software and robotic devices
more powerful, adaptive, and easier to use?
• Examples:
Speech recognition
Natural language understanding
Computer vision and image understanding
Intelligent user interfaces
Data mining
Mobile robots, softbots, humanoids
Medical expert systems…
Hardware
10^11 neurons
10^14 synapses
cycle time: 10^-3 sec
Computer vs. Brain
human-like: systems that think like humans
rational: systems that think rationally
History of AI: Foundations
• Probability & Game Theory
Cardano (1501-1576) – probabilities (Liber de Ludo Aleae)
Bernoulli (1654-1705) – random variables
Bayes (1702-1761) – belief update
von Neumann (1944) – game theory
Richard Bellman (1957) – Markov decision processes
Early AI
• Neural networks
McCulloch & Pitts (1943) – simple neural nets
Rosenblatt (1962) – perceptron learning
• Symbolic processing
Dartmouth AI conference (1956)
Newell & Simon – logic theorist
John McCarthy – symbolic knowledge representation
Arthur Samuel – Checkers program
Battle for the Soul of AI
• Minsky & Papert (1969) – Perceptrons
Single-layer networks cannot learn XOR
Argued against neural nets in general
• Backpropagation
Invented in 1969 and again in 1974
Hardware too slow, until rediscovered in 1985
• Research funding for neural nets disappears
• Rise of rule-based expert systems
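Minsky & Papert's XOR observation can be illustrated (though of course not proved) by brute force: no single linear threshold unit reproduces XOR, because XOR is not linearly separable. A quick sketch, scanning a coarse weight grid of our own choosing:

```python
# Illustrative check: no single-layer perceptron
# step(w1*x1 + w2*x2 + b) computes XOR on {0,1}^2.
import itertools

def xor_separable():
    # Scan a coarse grid of weights and biases (-4.0 .. 4.0, step 0.5).
    grid = [i / 2 for i in range(-8, 9)]
    xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    for w1, w2, b in itertools.product(grid, repeat=3):
        if all((w1 * x1 + w2 * x2 + b > 0) == bool(y)
               for (x1, x2), y in xor.items()):
            return (w1, w2, b)  # a separating unit was found
    return None                 # no grid point works

print(xor_separable())  # None
```

The search always comes back empty: (0,1) and (1,0) force w1 + w2 + 2b > 0, which contradicts the constraints from (0,0) and (1,1).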
Knowledge is Power
• Expert systems (1969-1980)
Dendral – molecular chemistry
Mycin – infectious disease
R1 – computer configuration
• AI Boom (1975-1985)
LISP machines – single user workstations
Japan’s 5th Generation Project – massively parallel computing
AI Winter
• Expert systems oversold
Fragile
Hard to build, maintain
• AI Winter (1985-1990)
• Science went on... looking for
Principles for robust reasoning
Principles for learning
AI Now
• Probabilistic graphical models
Pearl (1988) – Bayesian networks
• Machine learning
Quinlan (1993) – decision trees (C4.5)
Vapnik (1992) – Support vector machines (SVMs)
Schapire (1996) – Boosting
Neal (1996) – Gaussian processes
• Recent progress:
Probabilistic relational models, deep networks,
active learning, structured prediction, etc.
AI Now: Applications
• Countless AI systems in day-to-day use
Industrial robotics
Data mining on the web
Speech recognition
Security: Face & Iris recognition
Stock market prediction
Space exploration
Computational biology
Hardware verification
Credit card fraud detection
Surveillance and threat assessment
Military applications (bomb-defusing robots, drones)
Etc.
Notable Examples: Chess (Deep Blue, 1997)
Deep Blue wins 2-1-3 (wins-losses-draws)
Speech Recognition
Automated call centers
Navigation systems
Natural Language Understanding
• Speech Recognition
“word spotting” feasible today
continuous speech – inching closer
• WWW Information Extraction
E.g., KnowItAll project
• Machine Translation / Understanding
The spirit is willing but the flesh is weak. (English)
The vodka is good but the meat is rotten. (Russian)
(i.e., very much a work in progress…)
Museum Tour-Guide Robots
Mars Rovers (2003-now)
Europa Mission ~ 2018?
Humanoid Robots
Robots that Learn
Before Learning
Robots that Learn
After Learning
Chess Playing vs. Robots
• Static → Dynamic
• Deterministic → Stochastic
• Turn-based → Real-time
Robotic Prosthetics
Brain-Computer Interfaces
Limitations of AI Systems Today
• Today’s successful AI systems
operate in well-defined domains
employ narrow, specialized hard-wired knowledge
• Needed: Ability to
Operate in complex, open-ended dynamic worlds
• E.g., Your kitchen vs. GM factory floor
Adapt to unforeseen circumstances
Learn from new experiences
• In this class, we will explore some potentially
useful techniques for tackling these problems
5 Minute Break…
Next:
Agents & Environments (Chapter 2 in AIMA)
Outline
• Agents and environments
• Rationality
• PEAS specification
• Environment types
• Agent types
Agents
• An agent is any entity that can perceive its
environment through sensors and act upon
that environment through actuators
• Human agent:
Sensors: Eyes, ears, and other organs
Actuators: Hands, legs, mouth, etc.
• Robotic agent:
Sensors: Cameras, laser range finders, etc.
Actuators: Motorized limbs, wheels, etc.
Types of Agents
• Immobots (Immobile Robots)
Intelligent buildings
Intelligent forests
• Softbots
Jango (early softbot for shopping)
Microsoft Clippy
Askjeeves.com (now Ask.com)
• Expert Systems
Cardiologist
Intelligent Agents
• Have sensors and actuators (effectors)
• Implement a mapping from percept sequences to actions
[Diagram: the agent receives percepts from the environment and returns actions to it]
Rational Agent
“For each possible percept sequence, does
whatever action is expected to maximize its
performance measure on the basis of evidence
perceived so far and built-in knowledge."
PEAS
• PEAS for Automated taxi driver
• Performance measure:
Safe, fast, legal, comfortable trip, maximize profits
• Environment:
Roads, other traffic, pedestrians, customers
• Actuators:
Steering wheel, accelerator, brake, signal, horn
• Sensors:
Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
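A PEAS description is just structured data, so it can be written down directly. A minimal sketch for the taxi example (this encoding is our own illustration, not anything from the textbook):

```python
# PEAS specification as a plain record; field contents are taken
# from the automated-taxi example above.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list  # Performance measure
    environment: list  # Environment
    actuators: list    # Actuators
    sensors: list      # Sensors

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip",
                 "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)
print(taxi.performance[0])  # safe
```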
PEAS
• PEAS for Medical diagnosis system
• Performance measure:
Healthy patient, minimize costs, lawsuits
• Environment:
Patient, hospital, staff
• Actuators:
Screen display (questions, tests, diagnoses, treatments,
referrals)
• Sensors:
Keyboard (entry of symptoms, findings, patient's answers)
Properties of Environments
• Observability: full vs. partial
Sensors detect all aspects of state of environment
relevant to choice of action?
• Deterministic vs. stochastic
Next state completely determined by current state and
action?
• Episodic vs. sequential
Current action independent of previous actions?
• Static vs. dynamic
Can environment change over time?
• Discrete vs. continuous
State of environment, time, percepts, and actions
discrete or continuous-valued?
• Single vs. multiagent
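These six dimensions can be captured as a simple record. The sketch below is a hypothetical encoding; the chess entry follows the usual textbook-style classification (chess treated as untimed and therefore static), and the coffee-robot entry reflects a typical real-world mobile robot:

```python
# Environment properties along the six dimensions above.
from dataclasses import dataclass

@dataclass
class EnvProfile:
    observable: str  # "full" or "partial"
    dynamics: str    # "deterministic" or "stochastic"
    episodes: str    # "episodic" or "sequential"
    change: str      # "static" or "dynamic"
    values: str      # "discrete" or "continuous"
    agents: str      # "single" or "multi"

chess = EnvProfile("full", "deterministic", "sequential",
                   "static", "discrete", "multi")
coffee_robot = EnvProfile("partial", "stochastic", "sequential",
                          "dynamic", "continuous", "multi")
```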
Properties of Environments
How would you classify each of the following along the dimensions
above (observability, deterministic vs. stochastic, episodic vs.
sequential, static vs. dynamic, discrete vs. continuous, single vs.
multiagent)?
• Crossword puzzle
• Chess
• Poker
• Coffee delivery mobile robot
Agent Functions and Agent Programs
• An agent’s behavior can be described by an
agent function mapping percept sequences to
actions taken by the agent
• An implementation of an agent function
running on the agent architecture (e.g., a
robot) is called an agent program
• Our goal: Develop concise agent programs for
implementing rational agents
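The gap between agent function and agent program shows up immediately in the most naive implementation: a literal lookup table needs one entry for every percept sequence the agent could ever see. A quick count (illustrative numbers of our own choosing):

```python
# Number of entries in a lookup table implementing an agent function
# directly: one entry per percept sequence of length 1..T, where P is
# the number of distinct percepts and T the agent's lifetime.
def table_entries(num_percepts, lifetime):
    return sum(num_percepts ** t for t in range(1, lifetime + 1))

# Even a toy agent with 10 percepts and 20 time steps is hopeless:
print(table_entries(10, 20))  # ~1.1e20 entries
```

Hence the goal of concise agent programs: generate the right behavior without tabulating the whole function.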
Example: Vacuum-Cleaner World
How should the agent be designed if…
• It has location and dirt sensors, but no internal state?
• It has no sensors, but knows the starting state?
• It has no sensors, and does not know the starting state?
Implementing Rational Agents
• Table lookup based on percept sequences
Infeasible
• Agent programs:
Simple reflex agents
Agents with memory
• Reflex agent with internal state
• Goal-based agents
• Utility-based agents
Simple Reflex Agents
[Diagram: sensors deliver the current percept from the environment; condition-action rules answer "what action should I do now?"; effectors carry out the chosen action]
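The condition-action loop of a simple reflex agent fits in a few lines. This sketch uses the two-square vacuum world from AIMA Chapter 2 (square names "A"/"B" and the action names follow the textbook's example):

```python
# Simple reflex agent for the two-square vacuum world:
# the action depends only on the CURRENT percept, nothing else.
def reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```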
Reflex Agent with Internal State
[Diagram: as above, but the agent also keeps internal state, updated using knowledge of how the world evolves and what its actions do]
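Giving the vacuum agent internal state lets it stop once its job is done instead of oscillating forever. A sketch (the state representation and stopping rule here are our own illustration, not from the slides):

```python
# Reflex agent with internal state for the two-square vacuum world:
# it remembers which squares it has seen clean.
class StatefulVacuumAgent:
    def __init__(self):
        self.known_clean = set()        # internal state

    def act(self, percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        self.known_clean.add(location)
        if self.known_clean >= {"A", "B"}:
            return "NoOp"               # both squares known clean: stop
        return "Right" if location == "A" else "Left"

agent = StatefulVacuumAgent()
print(agent.act(("A", "Clean")))  # Right
print(agent.act(("B", "Clean")))  # NoOp
```

A pure reflex agent, given the same two percepts, would keep shuttling left and right; the one-set memory is what makes "NoOp" possible.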
Goal-Based (Planning) Agents
[Diagram: the agent tracks "what the world is like now," predicts "what it'll be like if I do action A" using models of how the world evolves and what its actions do, and picks actions that achieve its goals]
Utility-Based Agents
[Diagram: like the goal-based agent, but predicted states are scored by a utility function and the highest-utility action is chosen]
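At its core, a utility-based agent performs one maximization over predicted outcomes. A minimal sketch, where the transition model and utility function are stand-ins of our own:

```python
# Utility-based action selection: among candidate actions, pick the
# one whose predicted next state has the highest utility.
def choose_action(state, actions, predict, utility):
    return max(actions, key=lambda a: utility(predict(state, a)))

# Toy example: state is a number, actions nudge it, utility prefers 10.
actions = [-1, 0, +1]
predict = lambda s, a: s + a
utility = lambda s: -abs(10 - s)
print(choose_action(8, actions, predict, utility))  # 1
```

A goal-based agent would only ask "does this reach the goal?"; replacing that yes/no test with a numeric score is exactly what lets the agent trade off competing objectives.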
Learning Agents
[Diagram: a critic compares sensor feedback against a performance standard; the learning element uses this feedback to change the performance element's knowledge, and a problem generator proposes exploratory actions to drive further learning]
While driving, what’s the best policy?
• Always stop at a stop sign
• Never stop at a stop sign
• Look around for other cars and stop only if you
see one approaching
• Look around for a cop and stop only if you see
one
(http://www.gonomad.com/traveltalesfromindia/archives/2007_09_01_archive.html)
For You To Do
• Browse CSEP 573 course web page