
Artificial Intelligence Introduction

Dr. Poovarasan Selvaraj


Assistant Professor
Department of CS(AI & DS)
Sri Ramakrishna College of Arts & Science
Coimbatore- 641 006.
Why Study AI?
◦ AI makes computers more useful.
◦ An intelligent computer would have a huge impact on civilization.
◦ AI is cited as "the field I would most like to be in" by scientists in all fields.
◦ The computer is a good vehicle for talking and thinking about intelligence.
◦ Turning theory into working programs forces us to work out the details.
◦ AI produces good results for computer science.
◦ AI produces good results for other fields.
◦ Computers make good experimental subjects.
◦ Personal motivation: the sheer mystery of intelligence.
Goals of AI
• To Create Expert Systems
• Systems that demonstrate intelligent behavior and can learn, explain, and advise their users.
• To Implement Human Intelligence in Machines
• Creating systems that understand, think, learn, and behave like humans.
What is the definition of AI?
• Artificial intelligence (AI) refers to software technologies that make a robot or computer act and think like a human.
• Some software engineers say that it is only artificial intelligence if it performs as well as, or better than, a human.
• In this context, performance means human computational accuracy, speed, and capacity.
• Definitions of AI are commonly grouped along two dimensions (thinking vs. acting, human-like vs. rational):
• Systems that think like humans
• Systems that think rationally
• Systems that act like humans
• Systems that act rationally
Cont.
◦ An AI program will demonstrate a high level of intelligence to a degree that equals or exceeds the intelligence required of a human in performing some task.
◦ AI is unique, sharing borders with Mathematics, Computer Science, Philosophy, Psychology, Biology, Cognitive Science and many others.
◦ Although there is no clear definition of AI, or even of intelligence, it can be described as an attempt to build machines that, like humans, can think and act, and that are able to learn and use knowledge to solve problems on their own.

Types of Artificial Intelligence

◦ Reactive Machines
◦ Limited Memory
◦ Theory of Mind
◦ Self-Aware
Applications of AI
◦ Gaming
◦ Natural Language Processing
◦ Expert Systems
◦ Vision Systems
◦ Speech Recognition
◦ Handwriting Recognition
◦ Intelligent Robots
Foundations of Artificial Intelligence
Different fields have contributed to AI in the form of ideas, viewpoints and techniques:
◦ Philosophy
e.g., foundational issues (can a machine think?), issues of knowledge and belief, mutual knowledge
◦ Psychology and Cognitive Science
e.g., problem-solving skills
◦ Neuroscience
e.g., brain architecture
◦ Computer Science and Engineering
e.g., complexity theory, algorithms, logic and inference, programming languages, and system building
◦ Mathematics and Physics
e.g., statistical modeling, continuous mathematics, statistical physics, and complex systems
History of Artificial Intelligence
◦ In 1931, Gödel laid the foundation of theoretical computer science: he published the first universal formal language and showed that mathematics itself is either flawed or allows for unprovable but true statements.
◦ In 1936, Turing reformulated Gödel's result and Church's extension thereof.
◦ In 1956, John McCarthy coined the term "Artificial Intelligence" as the topic of the Dartmouth Conference, the first conference devoted to the subject.
◦ In 1957, the General Problem Solver (GPS) was demonstrated by Newell, Shaw & Simon.
◦ In 1958, John McCarthy (MIT) invented the Lisp language (the second-oldest high-level programming language).
◦ In 1959, Arthur Samuel (IBM) wrote the first game-playing program, for checkers, to achieve sufficient skill to challenge a world champion.
◦ In 1963, Ivan Sutherland's MIT dissertation on Sketchpad introduced the idea of interactive graphics into computing.
◦ In 1966, Ross Quillian (PhD dissertation, Carnegie Inst. of Technology; now CMU) demonstrated semantic nets.
◦ In 1967, the Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, and Georgia Sutherland at Stanford) was demonstrated interpreting mass spectra of organic chemical compounds. It was the first successful knowledge-based program for scientific reasoning.
Cont.
◦ In 1967, Doug Engelbart invented the mouse at SRI.
◦ In 1968, Marvin Minsky & Seymour Papert published Perceptrons, demonstrating the limits of simple neural nets.
◦ In 1972, Prolog was developed by Alain Colmerauer.
◦ In the mid-80s, neural networks became widely used with the backpropagation algorithm (first described by Werbos in 1974).
◦ In the 1990s, there were major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics.
◦ In 1997, Deep Blue beat the world chess champion Kasparov.
◦ In 2002, iRobot, founded by researchers at the MIT Artificial Intelligence Lab, introduced Roomba, a vacuum-cleaning robot. By 2006, two million had been sold.
◦ In 2011, Apple released Siri, a virtual assistant on the Apple iOS operating system. Siri uses a natural-language user interface to infer, observe, answer, and recommend things to its human user. It adapts to voice commands and projects an "individualized experience" per user.
◦ In 2012, Google launched the Android app feature "Google Now", which was able to provide information to the user as a prediction.
◦ In 2014, the chatbot "Eugene Goostman" won a competition based on the famous Turing test, a game proposed by the computer scientist and mathematical logician Alan Turing in 1950.
◦ In 2014, Amazon created Amazon Alexa, a home assistant that developed into smart speakers that function as personal assistants.
◦ In 2016, Google released Google Home, a smart speaker that uses AI to act as a "personal assistant" to help users remember tasks, create appointments, and search for information by voice.
◦ In 2016, Hanson Robotics, a Hong Kong-based company, introduced Sophia, a social humanoid robot.
◦ In 2018, IBM's "Project Debater" debated complex topics with two master debaters and performed extremely well.
Risks and Benefits of Artificial Intelligence
Machines can take over tasks where human safety is at stake. Whenever we need to explore the deepest parts of the ocean or study space, scientists use AI-enabled machines in risky situations where human survival would be difficult. AI can reach places that humans cannot.
• RISKS
◦ Unsustainability
◦ Unemployment
◦ Misuse leading to threats
◦ Data discrimination
◦ Making humans lazy
◦ No emotions
◦ Lack of out-of-the-box thinking
◦ A future threat to humanity
Benefits
◦ Reduction in Human Error
◦ Reduced Risk in Hazardous Tasks
◦ 24/7 Support
◦ Performing Repetitive Jobs
◦ Faster Decisions
◦ New Inventions
◦ Daily Applications
◦ Digital Assistance
What is an Agent?
◦ An agent is anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting (sketched in code below). An agent can be:
• Human agent: A human agent has eyes, ears, and other organs that work as sensors, while hands, legs, and the vocal tract work as actuators.
• Robotic agent: A robotic agent can have cameras and infrared range finders as sensors and various motors as actuators.
• Software agent: A software agent can take keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.
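A minimal sketch of the perceive-think-act cycle in Python. The class and method names (VacuumEnvironment, current_percept, apply) are illustrative assumptions, not part of any standard library:

```python
class VacuumEnvironment:
    """Toy environment with a single cell that starts dirty (illustrative)."""
    def __init__(self):
        self.dirty = True

    def current_percept(self):
        return "dirty" if self.dirty else "clean"

    def apply(self, action):
        if action == "suck":
            self.dirty = False

class Agent:
    """One pass through the perceive -> think -> act cycle."""
    def think(self, percept):
        # Decision logic: map the percept to an action.
        return "suck" if percept == "dirty" else "wait"

    def step(self, env):
        percept = env.current_percept()  # perceive via "sensors"
        action = self.think(percept)     # think
        env.apply(action)                # act via "actuators"
        return action

env = VacuumEnvironment()
print(Agent().step(env))  # -> "suck"
```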
Intelligent Agents
◦ An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators to achieve goals. An intelligent agent may learn from the environment to achieve those goals. A thermostat is an example of an intelligent agent.
◦ The following are the four main rules for an AI agent:
• Rule 1: An AI agent must have the ability to perceive the environment.
• Rule 2: The observations must be used to make decisions.
• Rule 3: Decisions should result in an action.
• Rule 4: The action taken by an AI agent must be a rational action.


Rational Agent:
◦ A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.
◦ A rational agent is said to perform the right things. AI is about creating rational agents, which are used in game theory and decision theory for various real-world scenarios.
◦ For an AI agent, rational action is most important because in reinforcement learning algorithms the agent gets a positive reward for each best possible action and a negative reward for each wrong action (see the toy sketch below).
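As a toy illustration of that idea, a rational agent can be seen as picking the action with the highest expected performance measure. The actions and reward values below are made-up placeholders:

```python
# Hypothetical expected rewards: positive for good actions, negative for
# wrong ones, as in reinforcement learning.
expected_reward = {"move_left": -1.0, "move_right": 0.5, "clean": 1.0}

# A rational agent selects the action that maximizes its performance measure.
best_action = max(expected_reward, key=expected_reward.get)
print(best_action)  # -> "clean"
```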
Rationality:
◦ The rationality of an agent is measured by its performance measure. Rationality can be judged on the basis of the following points:
• The performance measure, which defines the success criterion.
• The agent's prior knowledge of its environment.
• The best possible actions that the agent can perform.
• The sequence of percepts.


Structure of an AI Agent
◦ The task of AI is to design an agent program which implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program. It can be viewed as:
Agent = Architecture + Agent Program
◦ The following are the three main terms involved in the structure of an AI agent:
◦ Architecture: The machinery that the AI agent executes on.
◦ Agent function: A mapping from a percept to an action, f: P → A (see the sketch below).
◦ Agent program: An implementation of the agent function. The agent program executes on the physical architecture to produce the function f.
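A hedged sketch of the agent function f: P → A for a toy two-cell vacuum world; the locations, percepts, and actions here are invented for illustration:

```python
# The agent function f: P -> A written out as an explicit
# percept-to-action table for a two-cell vacuum world.
agent_function = {
    ("A", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "dirty"): "suck",
    ("B", "clean"): "move_left",
}

def agent_program(percept):
    """Agent program: the concrete implementation of the agent function."""
    return agent_function[percept]

print(agent_program(("A", "dirty")))  # -> "suck"
```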


PEAS Representation
◦ PEAS is a type of model on which an AI agent works. To define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four terms:
• P: Performance measure
• E: Environment
• A: Actuators
• S: Sensors
PEAS for self-driving cars:
◦ For a self-driving car, the PEAS representation is (also recorded in the code sketch below):
◦ Performance: Safety, time, legal driving, comfort
◦ Environment: Roads, other vehicles, road signs, pedestrians
◦ Actuators: Steering, accelerator, brake, signal, horn
◦ Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar
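One simple way to record a PEAS description in code is a small dataclass; the PEAS class below is an assumption made for illustration, and its fields mirror the self-driving-car example above:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

self_driving_car = PEAS(
    performance=["safety", "time", "legal driving", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer",
             "accelerometer", "sonar"],
)
print(self_driving_car.sensors)
```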


Example of Agents with their PEAS representation

Agent: Medical Diagnosis
• Performance measure: Healthy patient, minimized cost
• Environment: Patient, hospital staff
• Actuators: Tests, treatments
• Sensors: Keyboard (entry of symptoms)

Agent: Vacuum Cleaner
• Performance measure: Cleanliness, efficiency, battery life, security
• Environment: Room, table, wood floor, carpet, various obstacles
• Actuators: Wheels, brushes, vacuum extractor
• Sensors: Camera, dirt-detection sensor, cliff sensor, bump sensor, infrared wall sensor

Agent: Part-Picking Robot
• Performance measure: Percentage of parts in correct bins
• Environment: Conveyor belt with parts, bins
• Actuators: Jointed arms, hand
• Sensors: Camera, joint angle sensors
Types of AI Agents
◦ Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All of these agents can improve their performance and generate better actions over time. They are:
• Simple reflex agent
• Model-based reflex agent
• Goal-based agent
• Utility-based agent
• Learning agent
Simple Reflex Agent
• Simple reflex agents are the simplest agents. They take decisions on the basis of the current percept and ignore the rest of the percept history.
• These agents only succeed in a fully observable environment.
• The simple reflex agent does not consider any part of the percept history during its decision and action process.
• The simple reflex agent works on the condition-action rule, which means it maps the current state directly to an action, as in the sketch below. For example, a room-cleaner agent acts only if there is dirt in the room.
• Problems with the simple reflex agent design approach:
• They have very limited intelligence.
• They have no knowledge of non-perceptual parts of the current state.
• The rule tables are mostly too big to generate and store.
• They are not adaptive to changes in the environment.
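A minimal sketch of a simple reflex room cleaner; it consults only the current percept through condition-action rules (the locations and action names are illustrative):

```python
def simple_reflex_agent(percept):
    """Condition-action rules over the CURRENT percept only (no history)."""
    location, status = percept
    if status == "dirty":
        return "suck"          # condition: dirt -> action: suck
    if location == "A":
        return "move_right"    # otherwise patrol to the other cell
    return "move_left"

print(simple_reflex_agent(("A", "dirty")))  # -> "suck"
print(simple_reflex_agent(("B", "clean")))  # -> "move_left"
```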
Model-Based Reflex Agent
• The model-based agent can work in a partially observable environment and track the situation.
• A model-based agent has two important factors:
• Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.
• Internal state: a representation of the current state based on percept history.
• These agents have the model, "which is knowledge of the world", and they perform actions based on the model (see the sketch below).
• Updating the agent state requires information about:
• How the world evolves.
• How the agent's actions affect the world.
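A hedged sketch of a model-based reflex agent that keeps a simple internal state built from percept history; the world model here is deliberately trivial and all names are assumptions:

```python
class ModelBasedReflexAgent:
    """Tracks an internal state of a partially observable two-cell world."""

    def __init__(self):
        self.state = {}  # internal state built from percept history

    def update_state(self, percept):
        # A fuller model would also encode how the world evolves and how
        # the agent's actions affect it; here we just remember observations.
        location, status = percept
        self.state[location] = status

    def choose_action(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == "dirty":
            return "suck"
        # Head toward the cell not yet known to be clean.
        other = "B" if location == "A" else "A"
        if self.state.get(other) != "clean":
            return "move_right" if other == "B" else "move_left"
        return "wait"

agent = ModelBasedReflexAgent()
print(agent.choose_action(("A", "clean")))  # -> "move_right" (B unknown)
```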
Goal-Based Agents
• Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
• The agent needs to know its goal, which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by having this "goal" information.
• They choose an action so that they can achieve the goal.
• These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved. Such consideration of different scenarios is called searching and planning, and it makes an agent proactive (see the sketch below).
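One hedged way to illustrate searching for a goal is a breadth-first search over action sequences; the goal_test and successors helpers are assumptions supplied by the caller, not a real library API:

```python
from collections import deque

def goal_based_plan(start, goal_test, successors):
    """Breadth-first search for an action sequence reaching a goal state."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan  # sequence of actions that achieves the goal
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None  # no plan found

# Toy usage: walk along a number line from 0 to 3.
plan = goal_based_plan(
    0,
    goal_test=lambda s: s == 3,
    successors=lambda s: [("inc", s + 1), ("dec", s - 1)],
)
print(plan)  # -> ['inc', 'inc', 'inc']
```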
Utility-Based Agents
• These agents are similar to goal-based agents but add an extra component of utility measurement, which makes them different by providing a measure of success at a given state.
• A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
• The utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action among them.
• The utility function maps each state to a real number that indicates how efficiently each action achieves the goals (see the sketch below).
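A small sketch of a utility function and utility-maximizing action selection; the state encoding, the weights, and the result transition model are all made up for illustration:

```python
def utility(state):
    """Map a state to a real number (made-up weights for illustration)."""
    cleanliness, energy_used = state
    return cleanliness - 0.1 * energy_used

def choose_best_action(state, actions, result):
    """Pick the action whose predicted resulting state has highest utility."""
    return max(actions, key=lambda a: utility(result(state, a)))

def result(state, action):
    # Toy transition model: cleaning raises cleanliness but costs energy.
    cleanliness, energy = state
    if action == "clean":
        return (cleanliness + 1.0, energy + 2.0)
    return (cleanliness, energy + 0.5)

print(choose_best_action((0.0, 0.0), ["clean", "wait"], result))  # -> "clean"
```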
Learning Agents
• A learning agent in AI is the type of agent which can learn from its past experiences; that is, it has learning capabilities.
• It starts by acting with basic knowledge and is then able to act and adapt automatically through learning.
• A learning agent has four main conceptual components (sketched in code below):
• Learning element: responsible for making improvements by learning from the environment.
• Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
• Performance element: responsible for selecting external actions.
• Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
• Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it.
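A skeleton showing how the four components might fit together; everything here (the rule table, the reward convention, the method names) is an illustrative assumption:

```python
class LearningAgent:
    """Illustrative skeleton with the four conceptual components."""

    def __init__(self):
        self.rules = {}  # knowledge that the learning element improves

    def performance_element(self, percept):
        # Selects the external action using current knowledge.
        return self.rules.get(percept, self.problem_generator())

    def critic(self, reward):
        # Feedback versus a fixed performance standard (here: raw reward).
        return reward

    def learning_element(self, percept, action, feedback):
        # Improves the rules when the critic reports success.
        if feedback > 0:
            self.rules[percept] = action

    def problem_generator(self):
        # Suggests exploratory actions for new, informative experiences.
        return "explore"

agent = LearningAgent()
print(agent.performance_element("dirty"))  # -> "explore" before learning
agent.learning_element("dirty", "suck", agent.critic(+1))
print(agent.performance_element("dirty"))  # -> "suck" after learning
```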
Agents and Environments
◦ An environment is everything in the world that surrounds the agent, but it is not part of the agent itself. An environment can be described as the situation in which an agent is present.
◦ The environment is where the agent lives and operates, and it provides the agent with something to sense and act upon. An environment is often said to be non-deterministic.
◦ Features of Environment
◦ As per Russell and Norvig, an environment can have various features from the point of view of an agent:
1. Fully observable vs Partially observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs Sequential
7. Known vs Unknown
8. Accessible vs Inaccessible
Fully Observable vs Partially Observable
• When an agent's sensors can sense or access the complete state of the environment at each point in time, the environment is said to be fully observable; otherwise it is partially observable.
• Maintaining a fully observable environment is easy, as there is no need to keep track of the history of the surroundings.
• An environment is called unobservable when the agent has no sensors at all.
• Examples:
• Chess: the board is fully observable, and so are the opponent's moves.
• Driving: the environment is partially observable because what is around the corner is not known.
Deterministic vs Stochastic
• When the agent's current state and selected action completely determine the next state of the environment, the environment is said to be deterministic.
• A stochastic environment is random in nature: the next state is not unique and cannot be completely determined by the agent.
• Examples:
• Chess: there are only a limited number of possible moves for a piece in the current state, and these moves can be determined.
• Self-driving cars: the actions of a self-driving car are not unique; they vary from time to time.
Competitive vs Collaborative
• An agent is said to be in a competitive environment when it competes against another agent to optimize the output.
• The game of chess is competitive, as the agents compete with each other to win the game, which is the desired output.
• An agent is said to be in a collaborative environment when multiple agents cooperate to produce the desired output.
• When multiple self-driving cars are on the roads, they cooperate with each other to avoid collisions and reach their destinations, which is the desired output.
Single-agent vs Multi-agent
• An environment consisting of only one agent is said to be a single-agent environment.
• A person left alone in a maze is an example of the single-agent system.
• An environment involving more than one agent is a multi-agent environment.
• The game of football is multi-agent as it involves 11 players in each team.
Dynamic vs Static
• An environment that keeps constantly changing itself when the agent is up with some action is said to
be dynamic.
• A roller coaster ride is dynamic as it is set in motion and the environment keeps changing every instant.
• An idle environment with no change in its state is called a static environment.
• An empty house is static as there’s no change in the surroundings when an agent enters.
Discrete vs Continuous
• If an environment consists of a finite number of actions that can be performed in it to obtain the output, it is said to be a discrete environment.
• The game of chess is discrete, as it has only a finite number of moves. The number of moves might vary with every game, but it is still finite.
• An environment in which the possible actions cannot be enumerated, i.e. one that is not discrete, is said to be continuous.
• Self-driving cars are an example of continuous environments, as their actions (driving, parking, and so on) cannot be enumerated.
Episodic vs Sequential
• In an episodic task environment, each of the agent's actions is divided into atomic incidents or episodes. There is no dependency between current and previous incidents. In each episode, the agent receives input from the environment and then performs the corresponding action.
• Example: Consider a pick-and-place robot used to detect defective parts on conveyor belts. Each time, the robot (agent) makes a decision about the current part only; there is no dependency between current and previous decisions.
• In a sequential environment, previous decisions can affect all future decisions. The next action of the agent depends on what actions it has taken previously and what actions it is supposed to take in the future.
• Example:
• Checkers, where a previous move can affect all the following moves.
Known vs Unknown
◦ In a known environment, the outcomes of all probable actions are given. In an unknown environment, by contrast, the agent must gain knowledge about how the environment works before it can make good decisions.
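As a compact recap of these features, one could tag a concrete task environment with the classifications above; the dictionary below reflects the chess and self-driving-car examples used in these slides and is only an illustrative encoding:

```python
# Illustrative feature tags for two task environments discussed above.
environment_features = {
    "chess": {
        "observable": "fully",
        "deterministic": True,
        "episodic": False,   # sequential: earlier moves affect later ones
        "dynamic": False,    # static
        "discrete": True,
        "agents": "multi",   # competitive
    },
    "self_driving_car": {
        "observable": "partially",
        "deterministic": False,  # stochastic
        "episodic": False,
        "dynamic": True,
        "discrete": False,       # continuous
        "agents": "multi",       # collaborative
    },
}
print(environment_features["chess"]["discrete"])  # -> True
```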
