Week 3 Learning Journal

The document summarizes a student's daily activities observing talks about artificial intelligence. It discusses machine learning, predictions for the next decade of AI, whether AI can be creative, and debates about advanced AI capabilities. The student watched videos each day and reported on topics including types of AI systems, what machine learning is, and whether AI could write screenplays or become too intelligent.


STUDENT'S ON-THE-JOB TRAINING

DAILY REPORT OF ACTIVITIES

Name of Trainee: DOCOT, MARK ERICKSON V. Company: MAPUA UNIVERSITY

Class Schedule: Monday to Friday | 8am – 5pm Professor: Engr. Edward Ang

Note: The trainee should give a brief but clear report on the task performed, its purpose and how it is
accomplished. Also, indicate the machines, equipment, tools and materials used, if any. Practice Safety.

Date: March 14, 2022
Time In: 8am
Time Out: 5pm
Hours Spent: 8 hours
Certified Correct by: Engr. Edward Ang, Supervisor/Trainer

For the synchronous online seminar/talk this week, I watched the video "Will Self-Taught, AI-Powered Robots Be the End of Us?" When I saw the title, the first thing that sprang to mind was the movie Terminator, where robots, or artificial intelligence (A.I.), are humanity's most prominent adversary. And it is not just Terminator; there are many movies in which A.I. robots become the destroyers of humanity.

Figure 1. Introduction of A.I.

According to Professor Hod Lipson of Columbia University, in making an artificial intelligence you write rules into the program that it should obey, and the machine logically interprets these rules and follows them. In 1957, a machine could already challenge a master-level player in checkers, and in 1997 a machine beat the world champion in chess. There are two types of systems in A.I.

These are rule-based systems and machine learning systems. In a rule-based system, the programmer writes the rules explicitly; in a machine learning system, the machine improves as more data is collected, as sketched below.
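To make this distinction concrete for myself, here is a minimal sketch in Python (my own illustration, not from the talk); the spam-filter scenario, the exclamation-mark feature, and the numbers are all made up for the example.

# Illustrative sketch only (not from the talk): the same task solved by
# a rule-based system and by a very simple machine learning system.

# 1) Rule-based: the programmer writes the rules by hand.
def is_spam_rule_based(message: str) -> bool:
    banned = ["free money", "winner", "click here"]
    return any(phrase in message.lower() for phrase in banned)

# 2) Machine learning: the rule (a threshold) is learned from labeled data,
#    so the system improves as more examples are collected.
def learn_threshold(examples):
    # Each example is (number of exclamation marks, is_spam).
    spam = [n for n, label in examples if label]
    ham = [n for n, label in examples if not label]
    # Place the decision boundary halfway between the two class averages.
    return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2

examples = [(5, True), (7, True), (0, False), (1, False)]
threshold = learn_threshold(examples)

print(is_spam_rule_based("You are a WINNER, click here!"))  # True
print(6 > threshold)  # the learned rule flags a message with 6 "!" as spam: True

Feeding the learner more labeled examples would move the threshold, which is the sense in which such a system "improves as more data is collected."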
Figure 2. The first A.I. to win a game show, called Watson

The talk's participants are Susan Schneider, a philosopher and cognitive scientist; Yann LeCun, a computer scientist; Peter Ulric Tse, a neuroscientist; and Max Tegmark, a physicist and A.I. researcher.

Figure 3. The participants in this talk are Susan Schneider, Yann LeCun, Peter Ulric Tse, and Max Tegmark.

Their first topic is what machine learning is. If you show a computer a picture of a car and it does not say "car," you tell it, "Actually, you got it wrong; this is a car." The computer then adjusts its internal settings or functions so that the next time you show it the same picture, the result is closer to what you want. Machines can be trained to do a variety of tasks; for example, you can teach them to categorize text. These systems are much like brain neurons, and machines learn by modifying the strength of the connections between those neurons, which act like coefficients. There are two sorts of learning: trial and error, and feedback, in which the computer attempts something and you tell it whether it performed well or poorly. If you used this approach to teach a machine to drive a car, it would have to drive for millions of hours before figuring out how to avoid colliding.
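To check my own understanding of that feedback loop, here is a minimal sketch in Python (my own illustration, not from the talk): a tiny perceptron-style learner whose coefficients are nudged whenever it is told its guess was wrong. The feature vectors, labels, and learning rate are made up for the example.

# Illustrative sketch only: a learner that adjusts its internal coefficients
# whenever it receives the feedback "you got it wrong."

def predict(weights, features):
    # Weighted sum of the input features; a positive score means "car".
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

def train(examples, learning_rate=0.1, epochs=20):
    weights = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        for features, label in examples:
            guess = predict(weights, features)
            error = label - guess  # the "feedback" signal: 0 means the guess was right
            # Nudge each coefficient so the next guess is closer to the label.
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, features)]
    return weights

# Made-up feature vectors [has_wheels, has_wings]; label 1 = car, 0 = not a car.
data = [([1, 0], 1), ([0, 1], 0), ([1, 1], 0), ([1, 0], 1)]
w = train(data)
print(predict(w, [1, 0]))  # expected: 1 ("car")

The same loop, scaled up to millions of coefficients and examples, is the idea behind the "millions of hours of driving" remark: the machine only improves through repeated feedback on its own attempts.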

Source:

Will Self-Taught, A.I. Powered Robots Be the End of Us? (2019, March 9). YouTube. https://www.youtube.com/watch?v=IHc5Zt7qT6o&list=PLKy-B3Qf_RDXxyutMp4RNo-4qrIrx1lGj&index=1

Date: March 15, 2022
Time In: 8am
Time Out: 5pm
Hours Spent: 8 hours
Certified Correct by: Engr. Edward Ang, Supervisor/Trainer

I continued to watch the talk on the second day, and this time the topic was what the next ten years in A.I. will look like. Now it is Peter Ulric Tse, a neuroscientist, who is speaking.

Figure 4. Discussion with Peter Ulric Tse

The next ten years, he believes, will be dominated by limited artificial intelligence becoming increasingly vital, with mental models posing the biggest challenge. For example, much of what we call vision represents what is unseen and difficult to categorize; it is impossible to see what is happening in other people's heads. Our conscious experience is the highest-level model of what is going on in the world that evolution has provided us: causation, the backs of objects, the forms of things. Short of full-fledged mental models, I believe it will be difficult to develop A.I. systems that perceive a lack of knowledge as instructive.

People will be afraid of A.I. because it has the potential to outperform humans. As the speaker demonstrated, the paintings that an A.I. can create are more stunning than what many people can produce, and if we think that A.I. will improve with time, we may conclude that A.I. will be more intelligent than humans in the future. However, as the participants stated, humans and A.I. may assist each other in improving. An artificial intelligence computer may soon be able to perform jazz, but will it be able to compete with Mozart, or will it inspire us to be more creative? Jazz is all about expressing emotions in real time, and I don't understand why a machine should do this, because there would be no communication. People's attitudes toward creation may take decades, if not millennia, to shift. Machines will progress, and one day they will be able to reproduce emotion to the point that we will be unable to distinguish between what a person makes and what a machine generates.

Figure 5. Can A.I. be creative?

Imitation is a crucial component of creativity. Therefore, we'll need to achieve something like the deep unsupervised learning that newborns and young children do. Part of it, I believe, is shifting from mental nouns like home, person, and face to thinking and speaking in a new way. You'll be astounded when you look at some of our species' earliest examples of inventiveness: someone put a lion's head on a human body 30,000 years ago and then went out and made it in the world. Orville Wright spent two years pondering how to fly before declaring, "Actually, we don't need to fly," and then building it and turning it into an airplane, transforming the world.

After finishing this topic, I think being creative is something that all living things can achieve. So, I believe that the word "creative" is subjective and always will be.

Source:

Will Self-Taught, A.I. Powered Robots Be the End of Us? (2019, March 9). YouTube. https://www.youtube.com/watch?v=IHc5Zt7qT6o&list=PLKy-B3Qf_RDXxyutMp4RNo-4qrIrx1lGj&index=1

Date: March 16, 2022
Time In: 8am
Time Out: 5pm
Hours Spent: 8 hours
Certified Correct by: Engr. Edward Ang, Supervisor/Trainer

For the third day, I continued to watch the video. Their first topic for this day was A.I. writing a screenplay for a movie.

Figure 6. A.I. writes a screenplay for a movie

Figure 7. The screenplay that the A.I. wrote, titled Sunspring (2016)

They like to think of this topic as an abstract task landscape. The height represents how difficult it is for A.I. to execute each task at a human level, while the sea level represents what A.I. can currently accomplish. As A.I. improves, the water level rises, a kind of "global warming" that eventually affects the entire landscape. The next topic is how far we are from developing artificial general intelligence.

Of course, we are more general than all of today's machines, but our minds are still narrow: humans excel at only a few things, and systems like AlphaGo have shown in recent years that, next to the machines, we are entirely useless at Go. We're not particularly adept at laying out a route from one city to the next, either. The next topic is whether advanced A.I. will turn into terminators and take over the world. In humans, the desire to take over is not connected with intelligence; if anything, it is the opposite, and we wish to prevent it. The urge to be in charge has nothing to do with intellect; it is undoubtedly tied to testosterone. It is worth underlining why artificial general intelligence will be so crucial if we ever get there. There is an evolutionary explanation for why you would want to be the chief even if you are foolish. If you were Google and possessed artificial general intelligence, you could replace your 40,000 engineers with 40,000 AIs that could work significantly faster and without taking breaks. In that sense, it is powerful. Do we want whoever happens to be operating the first AGI to be able to assume power over the globe? If we have machines that can accomplish all that we do, they may be used to create even better machines in the future. This might allow A.I. to bootstrap itself to become not just a little smarter than humans but significantly smarter, leading to the debate about the singularity and an intelligence explosion.

Figure 8. Human Consciousness

During your waking life and when you are dreaming, you are experiencing the world. It is unique in that it is a domain of highly precompiled representations that mental operators may manipulate. The important operator, in my opinion, is attention, particularly volitional attention. Consciousness has a function: it is to give these planned operations a sense of place. This domain is entirely under our control; we can invent anything and then build it in the actual world if we want to. In the first quarter to a third of a second, a lot of processing occurs, and then you're in a full-fledged universe.

Source:

Will Self-Taught, A.I. Powered Robots Be the End of Us? (2019, March 9). YouTube. https://www.youtube.com/watch?v=IHc5Zt7qT6o&list=PLKy-B3Qf_RDXxyutMp4RNo-4qrIrx1lGj&index=1

Date: March 17, 2022
Time In: 8am
Time Out: 5pm
Hours Spent: 8 hours
Certified Correct by: Engr. Edward Ang, Supervisor/Trainer

For the fourth day of my synchronous online seminar/talk, I continued the video, and the topic was human consciousness. Consciousness exists for a reason, and it takes time to develop. At time zero, all the photons in the world strike your retina, but you are not yet aware of them. "In the first quarter to a third of a second, there's a lot of processing going on, and then you experience a full-blown universe." The great majority of the information processing in the brain, including heartbeat control and most other things, never becomes conscious; what we experience is just the part of the calculation that gets sent up as the result. In most cases, if a great science question has lingered for hundreds of years, it is because people have simply disregarded it rather than performing the necessary research.
Figure 9. Human Consciousness

As I continued to watch the video, I became interested in knowing more about a consciousness detector that can tell physicians whether a patient is in a coma or has locked-in syndrome, since it has the potential to benefit many patients. Furthermore, I feel that the topic of consciousness is not well addressed: scientists in the 17th and 18th centuries were baffled by the fact that the image formed in the human eye is upside down. We now feel this question is illogical because we understand the nature of information processing, yet we still aren't asking the right questions regarding some dimensions of consciousness.

Figure 10. Separating intelligence from consciousness

But I don't see how a system that has never felt pain can understand what people mean when they talk about it. Perhaps awareness is a side effect of our low-capacity systems, and AGI was lucky that humans had low-capacity systems. According to artificial intelligence pioneer David Weinberger, if we want to understand machine consciousness, we need to investigate whether an A.I. design has conscious experience rather than assuming it does merely because it looks human. If we want to be decent individuals, we want a lot of positive experiences in the future; it is all about the negative aspects of current subjective experience. If machines are not conscious, shutting one down is the same as closing your laptop: that isn't a problem at all, at least not if you have a backup in place.

Source:

Will Self-Taught, A.I. Powered Robots Be the End of Us? (2019, March 9). YouTube. https://www.youtube.com/watch?v=IHc5Zt7qT6o&list=PLKy-B3Qf_RDXxyutMp4RNo-4qrIrx1lGj&index=1

Date: March 18, 2022
Time In: 8am
Time Out: 5pm
Hours Spent: 8 hours
Certified Correct by: Engr. Edward Ang, Supervisor/Trainer

For the last day of my synchronous online seminar/talk, I continued the videos, and today's topic was whether machines will ever have emotions. From my perspective, it is not impossible that they could have emotions someday, since if we want machines to think for themselves, they may need to have emotions first.

Figure 11. Will machines ever have emotions?

According to the participants, developing or building autonomous intelligent devices that don't have feelings isn't possible, because emotions are a part of the brain. "We will have self-driving vehicles that have no feelings because they're merely supposed to drive your automobile," says artificial intelligence pioneer Ray Kurzweil, but emotions will be vital in the creation of artificial general intelligence. The growth of animals can teach us about the origins of awareness. Emotions are conscious states, yet they are teleological states inside a consciousness that is often concerned with the unseen, and I feel that depicting the invisible has become increasingly crucial. We have evolved to behave in the environment even when we have no input. These teleological moods, which compel us to seek mates, food, and other necessities, are what allowed humans to forge routes across deserts. I believe that not just computers, but also evolution's other experiments, will be a highly intriguing area to explore for lessons on how to construct artificial intelligence.

Figure 12. Will artificial general superintelligence be good or bad for humans?

As a result, I believe that humans are still figuring out how to make computers execute our brain functions in this scenario. In the end, based on what the participants said, it will become an AGI superintelligence, so how do we make sure that its objectives are aligned with ours? We are spending billions of dollars to improve A.I., but we also need to invest in the knowledge required to keep it working as we intend. The ethical frameworks we inherited from our forebears, according to the participants, are insufficient to deal with this. "Okay, God said don't sleep with his wife and don't take his possessions," and so on, were on a list of horrible things you could do 2,000 years ago. Since we wonder what is good and bad for a superintelligence, or for A.I. in general, I would suggest starting with what is beneficial for life. That way, we may tackle various issues and consider not just what we can do but also what we should do.

Before we can achieve powerful AGIs, we must first build very weak AGIs. According to the talk's participants, the first AGI will have the autonomy and intelligence of a rat, if not less, so it will not be able to take over the world. We can experiment with it to see if we can program it to behave in society rather than destroy everything in its path. Some people believe that memories and information are stored in synaptic weights and cells, while Glanzmann believes it is due to DNA methylation patterns. MIT and UCLA have done some fantastic work that has convincingly shown that, while synaptic weights may be the mechanism used to retrieve information, the basic information may exist within the cells.

After finishing this video, I think that A.I.'s future will depend on us: if we work carefully and do not do anything dangerous that could disrupt the A.I., we can create a world where A.I. never becomes hostile to humans.

Source:

Will Self-Taught, A.I. Powered Robots Be the End of Us? (2019, March 9). YouTube. https://www.youtube.com/watch?v=IHc5Zt7qT6o&list=PLKy-B3Qf_RDXxyutMp4RNo-4qrIrx1lGj&index=1
