Introduction To AI
◦ Artificial Intelligence (AI) is the branch of computer science concerned with building machines that can think and act, and that are able to learn and use knowledge to solve problems.
Types of AI
◦ Reactive Machines
◦ Limited Memory
◦ Theory of Mind
◦ Self-Aware
Applications of AI
◦ Gaming
◦ Expert Systems
◦ Vision Systems
◦ Speech Recognition
◦ Handwriting Recognition
◦ Intelligent Robots
Foundations of Artificial Intelligence
Different fields have contributed to AI in the form of ideas, viewpoints, and techniques:
◦ Philosophy
e.g., foundational issues (can a machine think?),
issues of knowledge and belief, mutual knowledge
◦ Psychology and Cognitive Science
e.g., problem solving skills
◦ Neuroscience
e.g., brain architecture
◦ Computer Science and Engineering
e.g., complexity theory, algorithms, logic and
inference, programming languages, and system
building.
◦ Mathematics and Physics
e.g., statistical modeling, continuous mathematics, statistical physics, and complex systems.
History of Artificial Intelligence
◦ In 1931, Gödel laid the foundation of theoretical computer science.
He published the first universal formal language and showed that math itself is either flawed or
allows for unprovable but true statements.
◦ In 1936, Turing reformulated Gödel's result and Church's extension thereof.
◦ In 1956, John McCarthy coined the term "Artificial Intelligence" as the topic of the Dartmouth
Conference, the first conference devoted to the subject.
◦ In 1957, the General Problem Solver (GPS) was demonstrated by Newell, Shaw & Simon.
◦ In 1958, John McCarthy (MIT) invented the Lisp language (second-oldest high-level programming
language).
◦ In 1959, Arthur Samuel (IBM) wrote the first game-playing program, for checkers, to achieve sufficient
skill to challenge a world champion.
◦ In 1963, Ivan Sutherland's MIT dissertation on Sketchpad introduced the idea of interactive graphics
into computing.
◦ In 1966, Ross Quillian (PhD dissertation, Carnegie Inst. of Technology; now CMU) demonstrated
semantic nets
◦ In 1967, the Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, and Georgia Sutherland at Stanford) was demonstrated interpreting mass spectra of organic chemical compounds; it was the first successful knowledge-based program for scientific reasoning.
◦ In 1967, Doug Engelbart invented the mouse at SRI
◦ In 1968, Marvin Minsky & Seymour Papert published Perceptrons, demonstrating the limits of simple neural nets.
◦ In 1972, Prolog developed by Alain Colmerauer.
◦ In the mid-1980s, neural networks became widely used with the backpropagation algorithm (first described by Werbos in 1974).
◦ In the 1990s, major advances in all areas of AI, with significant demonstrations in machine learning,
intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning,
data mining, natural language understanding and translation, vision, virtual reality, games, and
other topics.
◦ In 1997, Deep Blue beat the world chess champion Garry Kasparov.
◦ In 2002, iRobot, founded by researchers at the MIT Artificial Intelligence Lab, introduced Roomba, a vacuum-cleaning robot. By 2006, two million had been sold.
◦ In 2011, Apple released Siri, a virtual assistant on the Apple iOS operating system. Siri uses a natural-language user interface to infer, observe, answer, and recommend things to its human user. It adapts to voice commands and provides an "individualized experience" per user.
◦ In 2012, Google launched "Google Now," an Android feature that was able to provide information to the user as a prediction.
◦ In 2014, the chatbot "Eugene Goostman" won a competition in the famous "Turing test." The Turing test is a game proposed by computer scientist and mathematical logician Alan Turing in 1950.
◦ In 2014, Amazon created Amazon Alexa, a home assistant that developed into smart speakers functioning as personal assistants.
◦ In 2016, Sophia, a social humanoid robot, was introduced by the Hong Kong-based company Hanson Robotics.
◦ In 2016, Google released Google Home, a smart speaker that uses AI to act as a "personal assistant," helping users remember tasks, create appointments, and search for information by voice.
◦ In 2018, IBM's "Project Debater" debated complex topics with two master debaters and performed extremely well.
Risks and Benefits of Artificial Intelligence
• BENEFITS
◦ AI-enabled machines can take on tasks where human safety is at risk. Whenever we need to explore the deepest parts of the ocean or study space, scientists use AI-enabled machines for risky situations where human survival would be difficult. AI can reach places that humans cannot.
• RISKS
◦ Unsustainability
◦ Unemployment
◦ Misuse leading to threats
◦ Data discrimination
◦ Making humans lazy
◦ No emotions
◦ Lack of out-of-the-box thinking
◦ A future threat to humanity
Agents in AI
• Human agent: A human agent has eyes, ears, and other organs that work as sensors, and hands, legs, and a vocal tract that work as actuators.
◦ A rational agent is an agent that does the right thing. AI is about creating rational agents for use in game theory and decision theory in various real-world scenarios.
◦ For an AI agent, rational action is most important: in reinforcement learning, the agent gets a positive reward for each best possible action and a negative reward for each wrong action (see the sketch below).
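As an illustration (not from the original slides), the reward idea can be written as a minimal Python sketch; the states and the best_action table below are assumed, hypothetical examples:

# Minimal sketch: reward the rational (best) action with +1 and any wrong action with -1.
# The states and the best_action lookup table are illustrative assumptions.
best_action = {
    "dirty_room": "clean",   # the "right thing" to do in a dirty room
    "clean_room": "move",    # the "right thing" to do in a clean room
}

def reward(state, action):
    """Return +1 for the best possible action in a state, -1 otherwise."""
    return 1 if action == best_action.get(state) else -1

print(reward("dirty_room", "clean"))  # 1
print(reward("dirty_room", "move"))   # -1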
Rationality:
◦ The rationality of an agent is measured by its performance measure. Rationality can be judged on the basis of the performance measure, the agent's prior knowledge of its environment, the actions the agent can perform, and the percept sequence to date.
Structure of an AI Agent
◦ The structure of an intelligent agent is a combination of architecture and agent program. It can be viewed as: Agent = Architecture + Agent program.
◦ Following are the three main terms involved in the structure of an AI agent:
• Architecture: the machinery that the agent executes on.
• Agent function: a map from the percept sequence to an action, f: P → A.
• Agent program: an implementation of the agent function that runs on the architecture.
PEAS Representation
◦ When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four words:
• P: Performance measure
• E: Environment
• A: Actuators
• S: Sensors
PEAS for a self-driving car:
◦ For a self-driving car, the PEAS representation will be:
• Performance measure: safety, time, legal driving, comfort
• Environment: roads, other vehicles, road signs, pedestrians
• Actuators: steering, accelerator, brake, signal, horn
• Sensors: camera, GPS, speedometer, odometer, sonar
PEAS for a part-picking robot:
• Performance measure: percentage of parts in correct bins
• Environment: conveyor belt with parts, bins
• Actuators: jointed arms, hand
• Sensors: camera, joint angle sensors
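As an illustrative sketch (not part of the original slides), a PEAS description can be written as a small data structure; the class and field names below are assumptions chosen to mirror the four PEAS words:

from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    # The four components of the PEAS task-environment description.
    performance_measure: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

# The part-picking robot example above, expressed as a PEAS record.
part_picking_robot = PEAS(
    performance_measure=["percentage of parts in correct bins"],
    environment=["conveyor belt with parts", "bins"],
    actuators=["jointed arms", "hand"],
    sensors=["camera", "joint angle sensors"],
)

print(part_picking_robot.sensors)  # ['camera', 'joint angle sensors']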
Types of AI Agents
◦ Agents can be grouped into five classes based on their degree of perceived intelligence and
capability. All these agents can improve their performance and generate better actions over time. These are given below:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agent
• Learning agent
Simple Reflex agent
• The Simple reflex agents are the simplest agents. These
agents take decisions on the basis of the current percepts
and ignore the rest of the percept history.
• These agents only succeed in the fully observable
environment.
• The Simple reflex agent does not consider any part of the percept history during its decision and action process.
• The Simple reflex agent works on the condition-action rule, which maps the current state to an action. An example is a room-cleaner agent that acts only if there is dirt in the room (see the sketch at the end of this section).
• Problems for the simple reflex agent design approach:
• They have very limited intelligence
• They do not have knowledge of non-perceptual parts of the
current state
• The condition-action rules are mostly too big to generate and to store.
• Not adaptive to changes in the environment.
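The sketch below is a minimal, assumed example of a simple reflex room-cleaner agent in a toy two-location vacuum world; the location names and the rules are illustrative, not from the slides:

# Simple reflex agent: it looks only at the current percept (location, dirty?)
# and applies condition-action rules; it keeps no percept history.
def simple_reflex_cleaner(percept):
    location, dirty = percept
    if dirty:                  # condition: dirt in the room -> action: suck
        return "Suck"
    elif location == "A":      # otherwise move to the other room
        return "Right"
    else:
        return "Left"

print(simple_reflex_cleaner(("A", True)))   # Suck
print(simple_reflex_cleaner(("A", False)))  # Right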
Model-based reflex agent
• The Model-based agent can work in a partially
observable environment, and track the situation.
• A model-based agent has two important factors:
• Model: It is knowledge about "how things
happen in the world," so it is called a
Model-based agent.
• Internal State: It is a representation of the
current state based on percept history.
• These agents have a model, i.e., knowledge of the world, and they perform actions based on that model.
• Updating the agent state requires information
about:
• How the world evolves
• How the agent's actions affect the world (see the sketch below)
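The sketch below is a minimal, assumed example of a model-based reflex agent for the same toy vacuum world; the internal state representation and the update rules are illustrative assumptions, not from the slides:

# Model-based reflex agent: it maintains an internal state (its best guess of
# each room's status) that is updated from the percept history and a simple
# model of how its own actions change the world.
class ModelBasedCleaner:
    def __init__(self):
        self.state = {"A": "unknown", "B": "unknown"}  # internal state

    def update_state(self, percept):
        # How percepts reflect the world: the current percept reveals
        # the status of the current room.
        location, dirty = percept
        self.state[location] = "dirty" if dirty else "clean"

    def act(self, percept):
        self.update_state(percept)
        location, dirty = percept
        if dirty:
            self.state[location] = "clean"  # model of the Suck action's effect
            return "Suck"
        other = "B" if location == "A" else "A"
        if self.state[other] != "clean":    # move toward a room not known clean
            return "Right" if location == "A" else "Left"
        return "NoOp"

agent = ModelBasedCleaner()
print(agent.act(("A", True)))   # Suck
print(agent.act(("A", False)))  # Right (room B's status is still unknown)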
Goal-based agents
• Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.