Unit 1 & 2
Bhopal
Department of Computer Science & Engineering
(1st year)
Subject: Basic Computer Engineering
Subject Code: BE-205
Faculty:
Vikas Kumar Tiwari
Assistant Professor
TIT
Unit 1
Introduction of Computer
Computer
A computer is an electronic machine that takes data and instructions as input from the user, processes the data, and provides useful information. A computer is a programmable machine. The two principal characteristics of a computer are: it responds to a specific set of instructions in a well-defined manner, and it can execute a prerecorded list of instructions (a program).
Classification of computer
According to Size and Power
Computers can be generally classified by size and power as follows, though there is considerable overlap:
personal computer: a small, single-user computer based on a microprocessor. In addition to the microprocessor, it has a keyboard for entering data, a monitor for displaying information, and a storage device for saving data.
microcomputer: a microcomputer is defined as a computer that has a microprocessor as its central processing unit.
workstation: a powerful, single-user computer. A workstation is like a personal computer, but it has a
more powerful microprocessor and a higher-quality monitor.
minicomputer: a multi-user computer capable of supporting from 10 to hundreds of users
simultaneously.
mainframe: a powerful multi-user computer capable of supporting many hundreds or thousands of
users simultaneously.
supercomputer: an extremely fast computer that can perform hundreds of millions of instructions per
second.
GENERATION OF COMPUTER
Computer Generations
A generation, in computer terminology, is a change in the technology with which a computer is or was being built. Initially, the term generation was used to distinguish between varying hardware technologies, but nowadays it covers both hardware and software, which together make up an entire computer system.
There are five computer generations known to date. Each generation is discussed below along with its time period and characteristics; the dates given against each generation are approximate but normally accepted.
Following are the main five generations of computers:
First Generation
The period of first generation was 1946-1959.
The first generation of computers used vacuum tubes as the basic components for memory and for the circuitry of the CPU (Central Processing Unit). These tubes, like electric bulbs, produced a lot of heat and were prone to frequent fusing of the installations; they were therefore very expensive and could be afforded only by very large organisations. In this generation, mainly batch processing operating systems were used, with punched cards, paper tape, and magnetic tape as input and output devices. Machine code and electric wired-board languages were used.
Examples of this generation's computers are ENIAC, EDVAC, UNIVAC, IBM-701, and IBM-650.
The main features of First Generation are:
Huge size
Need of A.C.
Non portable
Consumed lot of electricity
Second Generation
The period of second generation was 1959-1965.
Computers of this generation used transistors, which were cheaper, consumed less power, and were more compact, more reliable, and faster than the first generation machines made of vacuum tubes. In this generation, magnetic cores were used as primary memory, and magnetic tape and magnetic disks as secondary storage devices.
In this generation, assembly language and high-level programming languages like FORTRAN and COBOL were used. Batch processing and multiprogramming operating systems were used.
The main features of Second Generation are:
Use of transistors
Reliable as compared to First generation computers
Smaller size as compared to First generation computers
Generate less heat as compared to First generation computers
Consumed less electricity as compared to First generation computers
Faster than first generation computers
Still very costly
A.C. needed
Support machine and assembly languages
Examples: IBM 1620, IBM 7094, CDC 1604, CDC 3600.
Third Generation
The period of third generation was 1965-1971.
The third generation of computers is marked by the use of Integrated Circuits (ICs) in place of transistors. A single IC has many transistors, resistors, and capacitors along with the associated circuitry. The IC was invented by Jack Kilby. This development made computers smaller in size, more reliable, and more efficient. In this generation, remote processing, time-sharing, real-time, and multiprogramming operating systems were used.
High-level languages (FORTRAN-II to IV, COBOL, PASCAL, PL/1, BASIC, ALGOL-68, etc.) were used during this generation.
The main features of Third Generation are:
IC used
More reliable
Smaller size
Generate less heat
Faster
Lesser maintenance
Still costly
A.C needed
Consumed lesser electricity
Support high level language
Examples: IBM-360 series, Honeywell-6000 series, PDP (Programmed Data Processor), IBM-370/168, TDC-316.
Fourth Generation
The period of Fourth Generation was 1971-1980.
The fourth generation of computers is marked by the use of Very Large Scale Integrated (VLSI) circuits. VLSI circuits, having about 5000 transistors and other circuit elements with their associated circuits on a single chip, made the microcomputers of the fourth generation possible. Fourth generation computers became more powerful, compact, reliable, and affordable. As a result, they gave rise to the personal computer (PC) revolution.
In this generation, time-sharing, real-time, network, and distributed operating systems were used.
All the higher-level languages like C, C++, dBASE, etc. were used in this generation.
Fifth Generation
The period of Fifth Generation is 1980 to the present. In this generation, VLSI technology evolved into ULSI (Ultra Large Scale Integration) technology, and computing began to make use of parallel processing hardware and AI (Artificial Intelligence) software. AI applications include:
Game Playing
Development of expert systems to make decisions in real life situations.
Natural language understanding and generation.
The main features of Fifth Generation are:
ULSI technology
Development of true artificial intelligence
Development of Natural language processing
Advancement in Parallel Processing
Advancement in Superconductor technology
More user friendly interfaces with multimedia features
Availability of very powerful and compact computers at cheaper rates
Examples: Desktop, Laptop, NoteBook, UltraBook, ChromeBook
Registers
A register consists of a group of flip-flops with a common clock input. Registers are commonly used to store and
shift binary data. A counter is constructed from two or more flip-flops which change state in a prescribed sequence.
In computer architecture, a processor register is a small amount of storage available as part of a CPU or other
digital processor.
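The "store and shift binary data" behaviour described above can be sketched in a few lines. This is an illustrative Python model, not tied to any real hardware: a 4-bit serial-in shift register, where each list element stands for one flip-flop and the `clock` call stands for the common clock pulse.

```python
# Illustrative sketch of a 4-bit shift register built from flip-flop-like bits.
class ShiftRegister:
    def __init__(self, width=4):
        self.bits = [0] * width  # each element models one flip-flop

    def clock(self, serial_in):
        # On each clock pulse, every bit moves one place to the right;
        # the new bit enters at the left end.
        shifted_out = self.bits[-1]
        self.bits = [serial_in] + self.bits[:-1]
        return shifted_out  # bit that falls off the end

reg = ShiftRegister()
for b in [1, 0, 1, 1]:
    reg.clock(b)
print(reg.bits)  # [1, 1, 0, 1] - the last four inputs, newest first
```

After four pulses the register holds exactly the last four serial inputs, which is why registers of this kind are used to convert serial data to parallel form.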
User-accessible registers
The most common division of user-accessible registers is into data registers and address registers.
Data registers can hold numeric values such as integer and floating-point values, as well as characters, small bit
arrays and other data. In some older and low end CPUs, a special data register, known as the accumulator, is used
implicitly for many operations.
Address registers hold addresses and are used by instructions that indirectly access primary memory.
Some processors contain registers that may only be used to hold an address or only to hold numeric values (in some
cases used as an index register whose value is added as an offset from some address); others allow registers to hold
either kind of quantity. A wide variety of possible addressing modes, used to specify the effective address of an
operand, exist.
The stack pointer is used to manage the run-time stack. Rarely, other data stacks are addressed by dedicated address
registers, see stack machine.
Memory Buffer Register (MBR): It is used for storing data received from or sent to the CPU. The MBR is the register in a computer's processor (central processing unit, CPU) that stores the data being transferred to and from the immediate access store. It acts as a buffer, allowing the processor and memory units to act independently without being affected by minor differences in operation. A data item will be copied to the MBR ready for use at the next clock cycle, when it can be either used by the processor or stored in main memory.
The Memory Data Register : It is used for storing operands and data. MDR is the register of a computer's control
unit that contains the data to be stored in the computer storage (e.g. RAM), or the data after a fetch from the
computer storage.
The MDR is a two-way register. When data is fetched from memory and placed into the MDR, it is written to in one
direction. When there is a write instruction, the data to be written is placed into the MDR from another CPU register,
which then puts the data into memory.
In a computer, the Memory Address Register (MAR) is a CPU register that either stores the memory address from
which data will be fetched to the CPU or the address to which data will be sent and stored.
The program counter, PC, is a special-purpose register that is used by the processor to hold the address of the next
instruction to be executed.
Instruction Register - In computing, an instruction register is the part of a CPU's control unit that stores the instruction currently being executed or decoded.
Accumulator (ACC): For storing the results produced by the arithmetic and logic unit.
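The registers above (PC, MAR, MDR/MBR, IR, ACC) cooperate in the fetch-decode-execute cycle. The following is a minimal, hypothetical Python sketch of two such cycles; the two-field instruction format, the opcodes LOAD and ADD, and the memory contents are all invented for illustration.

```python
# Hypothetical fetch-decode-execute sketch using PC, MAR, MDR, IR, ACC.
# Memory holds instructions at addresses 0-1 and data at addresses 10-11.
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 10: 5, 11: 7}

pc = 0   # program counter: address of the next instruction
acc = 0  # accumulator: holds ALU results

for _ in range(2):
    mar = pc              # MAR receives the address to fetch from
    mdr = memory[mar]     # MDR buffers the word read from memory
    ir = mdr              # IR holds the instruction being decoded
    pc += 1               # PC advances to the next instruction
    op, operand_addr = ir
    if op == "LOAD":
        acc = memory[operand_addr]   # copy operand into the accumulator
    elif op == "ADD":
        acc += memory[operand_addr]  # ALU adds operand to the accumulator

print(acc)  # 5 + 7 = 12
```

The point of the sketch is the data flow: the address passes through the MAR, the fetched word through the MDR, the decoded instruction sits in the IR, and the result accumulates in the ACC.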
Bus Architecture
Memory bus
The memory bus (also called the system bus, since it interconnects the subsystems) interconnects the processor with the memory systems and also connects to the I/O bus.
Three sets of signals: address bus, data bus, and control bus.
System Bus
A system bus's characteristics are set according to the needs of the processor: its speed and the word length for instructions and data.
The processor's internal bus(es) differ in characteristics from the system's external bus(es).
Address Bus
Through the address bus, the processor issues the address of the instruction byte or word to the memory system.
Through the address bus, the processor's execution unit, when required, issues the address of the data (byte or word) to the memory system.
Data Bus
When the processor issues the address of an instruction, it gets back the instruction through the data bus.
When it issues the address of data, it loads the data through the data bus.
When it issues the address of data for a store, it writes the data to memory through the data bus.
A 32-bit data bus fetches, loads, or stores a 32-bit instruction or data word at one time.
Fig: Buses to interconnect the processor Functional units to memory and IO systems
Memory System
There are two kinds of computer memory: primary and secondary. Primary memory is accessible directly by the processing unit; RAM is an example of primary memory. As soon as the computer is switched off, the contents of primary memory are lost. You can store and retrieve data much faster with primary memory than with secondary memory. Secondary memory, such as floppy disks, magnetic disks, etc., is located outside the computer. Primary memory is more expensive than secondary memory; because of this, the size of primary memory is less than that of secondary memory.
Computer memory is used to store two things:
i) Instructions to execute a program .
ii) Data .
When the computer is doing any job, the data that have to be processed are stored in the primary memory. This data
may come from an input device like keyboard or from a secondary storage device like a floppy disk.
The following terms related to memory of a computer are discussed below :
The primary storage is referred to as random access memory (RAM) because it is possible to randomly select any location of the memory and directly store and retrieve data there. It takes the same time to access any address of the memory as the first address. It is also called read/write memory. The storage of data and instructions inside primary storage is temporary: it disappears from RAM as soon as power to the computer is switched off. Memories which lose their contents on failure of the power supply are known as volatile memories, so we can say that RAM is volatile memory.
Memory Hierarchy
Computer Ethics
Ethics is a set of moral principles that govern the behavior of a group or individual. Therefore, computer ethics is a set of moral principles that regulate the use of computers. Some common issues of computer ethics include intellectual property rights (such as copyrighted electronic content), privacy concerns, and how computers affect society.
For example, while it is easy to duplicate copyrighted electronic (or digital) content, computer ethics would suggest
that it is wrong to do so without the author's approval.
Computer Applications
e-Business
eBusiness (e-Business), or Electronic Business, is the administration of conducting business via the Internet. This would include the buying and selling of goods and services, along with providing technical or customer support through the Internet.
Bioinformatics
Bioinformatics has become an important part of many areas of biology. In experimental molecular biology,
bioinformatics techniques such as image and signal processing allow extraction of useful results from large amounts
of raw data. At a more integrative level, it helps analyze and catalogue the biological pathways and networks that are
an important part of systems biology. In structural biology, it aids in the simulation and modeling of DNA, RNA,
and protein structures as well as molecular interactions.
GIS and remote sensing
A geographic information system (GIS) is a computer-based tool for mapping and analyzing feature events on earth.
GIS technology integrates common database operations, such as query and statistical analysis, with maps. GIS
manages location-based information and provides tools for display and analysis of various statistics, including
population characteristics, economic development opportunities, and vegetation types.
Remote sensing
Remote sensing is the acquisition of information about an object or phenomenon without making physical contact
with the object. In modern usage, the term generally refers to the use of aerial sensor technologies to detect and
classify objects on Earth (both on the surface, and in the atmosphere and oceans) by means of propagated signals
(e.g. electromagnetic radiation emitted from aircraft or satellites).
Animation
'To animate' literally means to give life to. Animating is making something move that cannot move on its own. Animation adds to graphics the dimension of time, which tremendously increases the potential for transmitting the desired information. In order to animate something, the animator has to be able to specify, directly or indirectly, how the 'thing' has to move through time and space.
With time, the technique of animation has become more and more computer-assisted and computer-generated. All such techniques require a trade-off between the level of control that the animator has over the finer details of the motion and the amount of work that the computer does on its own. Broadly, computer animation falls into three basic categories: keyframing, motion capture, and simulation.
Multimedia
Many clients choose multimedia to explain technical issues. Multimedia uses a navigational approach to accessing
data, allowing one to display video, animation, graphics, drawings, documents, and still images as needed during a
presentation or testimony.
Meteorology
Meteorology is the interdisciplinary scientific study of the atmosphere. Studies in the field stretch back millennia,
though significant progress in meteorology did not occur until the 18th century. The 19th century saw breakthroughs
occur after observing networks developed across several countries.
Climatology
Climatology is the study of climate, scientifically defined as weather conditions averaged over a period of time. This
modern field of study is regarded as a branch of the atmospheric sciences and a subfield of physical geography,
which is one of the Earth sciences. Climatology now includes aspects of oceanography and biogeochemistry.
Instruction set: It is the set of instructions executed by a processor to perform different operations. Instruction sets are of two types, defined on the basis of the complexity and the number of instructions used:
1. Complex instruction set (CISC) and
2. Reduced instruction set (RISC).
CISC
CISC processors ran at clock speeds of about 32-50 MHz in 1992.
Unit 2
Operating System
Definition: An Operating System is a computer program that manages the resources of a computer. It accepts
keyboard or mouse inputs from users and displays the results of the actions and allows the user to run applications,
or communicate with other computers via networked connections.
Also known as an "OS," this is the software that communicates with computer hardware on the most basic level. Without an operating system, no software programs can run. The OS is what allocates memory, processes tasks, accesses disks and peripherals, and serves as the user interface.
It's important to differentiate between multi-user operating systems and single-user operating systems that support
networking. Windows 2000 and Novell Netware can each support hundreds or thousands of networked users, but the
operating systems themselves aren't true multi-user operating systems. The system administrator is the only "user"
for Windows 2000 or Netware. The network support and all of the remote user logins the network enables are, in the
overall plan of the operating system, a program being run by the administrative user.
File systems under Microsoft Windows
Windows makes use of the FAT and NTFS file systems.
FAT
The File Allocation Table (FAT) filing system, supported by all versions of Microsoft Windows, was an evolution
of that used in Microsoft's earlier operating system (MS-DOS which in turn was based on 86-DOS). FAT ultimately
traces its roots back to the short-lived M-DOS project and Standalone disk BASIC before it. FAT32 also addressed
many of the limits in FAT12 and FAT16, but remains limited compared to NTFS.
NTFS
NTFS, introduced with the Windows NT operating system, allowed ACL-based permission control. Hard links, multiple file streams, attribute indexing, quota tracking, sparse files, encryption, compression, and reparse points (directories working as mount points for other file systems, symlinks, junctions, remote storage links) are also supported.
Process
In computing, a process is an instance of a computer program that is being executed. It contains the program code
and its current activity. Depending on the operating system (OS), a process may be made up of multiple threads of
execution that execute instructions concurrently.
Multitasking is a method to allow multiple processes to share processors (CPUs) and other system resources. Each
CPU executes a single task at a time. However, multitasking allows each processor to switch between tasks that are
being executed without having to wait for each task to finish. Depending on the operating system implementation,
switches could be performed when tasks perform input/output operations, when a task indicates that it can be
switched, or on hardware interrupts.
Process states
The various process states, displayed in a state diagram, with arrows indicating possible transitions between
states.
An operating system kernel that allows multitasking needs processes to have certain states. Names for these states are not standardised, but they have similar functionality.
First, the process is "created" - it is loaded from a secondary storage device (hard disk or CD-ROM...) into
main memory. After that the process scheduler assigns it the state "waiting".
While the process is "waiting" it waits for the scheduler to do a so-called context switch and load the
process into the processor. The process state then becomes "running", and the processor executes the
process instructions.
If a process needs to wait for a resource (wait for user input or file to open ...), it is assigned the "blocked"
state. The process state is changed back to "waiting" when the process no longer needs to wait.
Once the process finishes execution, or is terminated by the operating system, it is no longer needed. The process is removed instantly or is moved to the "terminated" state; once in this state, it waits to be removed from main memory.
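The state transitions described above can be sketched as a small table of allowed moves. This is an illustrative Python model only; the state names follow the text ("created", "waiting", "running", "blocked", "terminated"), while real kernels use their own names and additional states.

```python
# Illustrative process-state machine with the transitions described above.
TRANSITIONS = {
    "created":    {"waiting"},                            # loaded into main memory
    "waiting":    {"running"},                            # scheduler does a context switch
    "running":    {"blocked", "waiting", "terminated"},   # preempted, blocked, or done
    "blocked":    {"waiting"},                            # awaited resource became available
    "terminated": set(),                                  # no further transitions
}

def move(state, new_state):
    """Return the new state, rejecting transitions the diagram does not allow."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = "created"
for nxt in ["waiting", "running", "blocked", "waiting", "running", "terminated"]:
    s = move(s, nxt)
print(s)  # terminated
```

Encoding the diagram as a table makes illegal moves (e.g. "blocked" straight to "running") detectable, which mirrors how a kernel only dispatches processes that are in the ready/waiting state.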
Process management
Process management is an integral part of any modern day operating system (OS). The OS must allocate resources
to processes, enable processes to share and exchange information, protect the resources of each process from other
processes and enable synchronisation among processes. To meet these requirements, the OS must maintain a data
structure for each process, which describes the state and resource ownership of that process, and which enables the
OS to exert control over each process.
Process creation
Operating systems need some ways to create processes. In a very simple system designed for running only a single
application (e.g., the controller in a microwave oven), it may be possible to have all the processes that will ever be
needed be present when the system comes up. In general-purpose systems, however, some way is needed to create
and terminate processes as needed during operation.
There are four principal events that cause a process to be created:
System initialization.
Execution of process creation system call by running a process.
A user request to create a new process.
Initiation of a batch job.
When an operating system is booted, typically several processes are created. Some of these are foreground processes, which interact with a (human) user and perform work for them. Others are background processes, which are not associated with particular users but instead have some specific function. For example, one background process may be designed to accept incoming e-mails, sleeping most of the day but springing to life when an incoming e-mail arrives. Another background process may be designed to accept incoming requests for web pages hosted on the machine, waking up when a request arrives to service it.
Process termination
There are many reasons for process termination:
Batch job issues halt instruction
User logs off
Process executes a service request to terminate
Error and fault conditions
Normal completion
Time limit exceeded
Memory unavailable
Bounds violation; for example: attempted access of (non-existent) 11th element of a 10-element array
Protection error; for example: attempted write to read-only file
Arithmetic error; for example: attempted division by zero
Time overrun; for example: process waited longer than a specified maximum for an event
I/O failure
Invalid instruction; for example: when a process tries to execute data (text)
Privileged instruction
Data misuse
Operating system intervention; for example: to resolve a deadlock
Parent terminates so child processes terminate (cascading termination)
Parent request
Memory Management
Memory management is the act of managing computer memory. In its simpler forms, this involves
providing ways to allocate portions of memory to programs at their request, and freeing it for reuse when
no longer needed.
Mainly two techniques are used in memory management
1. Contiguous Memory allocation
2. Non Contiguous Memory allocation
Contiguous Allocation
Contiguous memory allocation is a classical memory allocation model that assigns a process
consecutive memory blocks. The memory is usually divided into two partitions: one for the
resident operating system and one for the user processes.
Hole: a block of available memory; holes of various sizes are scattered throughout memory.
When a process arrives, it is allocated memory from a hole large enough to accommodate it.
The operating system maintains information about:
a) allocated partitions b) free partitions (holes)
Compaction
Shuffle memory contents to place all free memory together in one large block.
Compaction is possible only if relocation is dynamic, and is done at execution time.
I/O problem:
Latch the job in memory while it is involved in I/O, or
Do I/O only into OS buffers.
Non Contiguous Memory Allocation: Parts of a process can be allocated noncontiguous chunks of
memory
Paging
Divide physical memory into fixed-sized blocks called frames, and keep track of all free frames.
Divide logical memory into blocks of the same size called pages.
To run a program of size n pages, we need to find n free frames.
Set up a page table to translate logical to physical addresses.
Paging removes/reduces external fragmentation, but internal fragmentation still exists.
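The page-table translation set up above can be sketched in a few lines. This is an illustrative Python model; the page size and the page-table contents are invented for the example.

```python
# Logical-to-physical address translation with paging.
PAGE_SIZE = 1024                  # bytes per page (and per frame)

page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number (invented mapping)

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE     # which page the address falls in
    offset = logical_addr % PAGE_SIZE    # position inside that page
    frame = page_table[page]             # page table maps page -> frame
    return frame * PAGE_SIZE + offset    # frame base plus unchanged offset

print(translate(1050))  # page 1, offset 26 -> frame 2 -> 2*1024 + 26 = 2074
```

Only the page number is remapped; the offset within the page passes through unchanged, which is why frames and pages must be the same size.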
Address translation
The operating system is responsible for the following activities in connection with file management:
Use a directory to describe the location of all files plus their attributes.
File Operations
Create
Delete
Open
Close
Read
Write
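The file operations listed above map directly onto calls in most languages. A minimal Python sketch (the file name and its contents are arbitrary):

```python
# The six file operations above, using Python's standard library.
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "demo_file.txt")

f = open(path, "w")   # create/open the file for writing
f.write("hello")      # write
f.close()             # close

f = open(path, "r")   # open the existing file for reading
data = f.read()       # read
f.close()             # close

os.remove(path)       # delete
print(data)  # hello
```

Create and open share one call here because `open(path, "w")` creates the file if it does not exist; some operating systems expose them as distinct system calls.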
Device Management
Track status of each device (such as tape drives, disk drives, printers, plotters, and terminals).
Use preset policies to determine which process will get a device and for how long.
Deallocate devices, at the process level, when an I/O command has been executed and the device is temporarily released.
Characteristics of a Good Programming Language
Simplicity
A good programming language must be simple and easy to learn and use. It should provide a programmer with a clear, simple, and unified set of concepts that can be easily grasped. It should also be easy to develop and implement a compiler or an interpreter for it.
Naturalness
It should provide appropriate operators, data structures, control structures, and a natural syntax in order to facilitate
the users to code their problem easily and efficiently. FORTRAN and COBOL are good examples of scientific and
business languages respectively.
Abstraction
Abstraction means the ability to define and then use complicated structures or operations in ways that allow many of the details to be ignored. The degree of abstraction allowed by a programming language directly affects its writability. For example, object-oriented languages support a high degree of abstraction; hence, writing programs in them is easier.
Efficiency
Programs written in a good programming language are efficiently translated into machine code, are efficiently executed, and occupy as little space in memory as possible.
Structuredness
Structuredness means that the language should have necessary features to allow its users to write their
programs based on the concepts of structured programming.
Locality
A good programming language should be such that while writing a program, a programmer need not jump around visually as the text of the program is prepared. This allows the programmer to concentrate almost solely on the part of the program around the statements currently being worked with. COBOL lacks locality because data definitions are separated from processing statements, perhaps by many pages of code.
Extensibility
A good programming language should allow extension through simple, natural, and elegant mechanisms.
Concepts of OOP:
Objects
Classes
Data Abstraction and Encapsulation
Inheritance
Polymorphism
Objects
Objects are the basic run-time entities in an object-oriented system. A programming problem is analyzed in terms of objects and the nature of communication between them. When a program is executed, objects interact with each other by sending messages. Different objects can also interact with each other without knowing the details of each other's data or code.
Classes
A class is a collection of objects of similar type. Once a class is defined, any number of objects can be created which
belong to that class.
Data Abstraction and Encapsulation
Abstraction refers to the act of representing essential features without including the background details or explanations.
Classes use the concept of abstraction and are defined as a list of abstract attributes. Storing data and functions in a single
unit (class) is encapsulation. Data cannot be accessible to the outside world and only those functions which are stored in
the class can access it.
Inheritance
Inheritance is the process by which objects of one class can acquire the properties of objects of another class.
Polymorphism
Polymorphism means the ability to take more than one form. An operation may exhibit different behaviors in different
instances. The behavior depends on the data types used in the operation
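The concepts above can be sketched together in one short example. This is an illustrative Python sketch; the Shape, Rectangle, and Circle classes are invented for the example.

```python
# Classes, objects, encapsulation, inheritance, and polymorphism in one sketch.
class Shape:                         # class: a blueprint for similar objects
    def __init__(self, name):
        self._name = name            # underscore prefix: data kept internal (encapsulation)

    def area(self):                  # abstract behaviour; subclasses override it
        raise NotImplementedError

class Rectangle(Shape):              # inheritance: Rectangle acquires Shape's properties
    def __init__(self, w, h):
        super().__init__("rectangle")
        self._w, self._h = w, h

    def area(self):
        return self._w * self._h

class Circle(Shape):                 # another subclass of the same base
    def __init__(self, r):
        super().__init__("circle")
        self._r = r

    def area(self):
        return 3.14159 * self._r ** 2

# Polymorphism: the same call, s.area(), behaves differently per object.
shapes = [Rectangle(3, 4), Circle(1)]
areas = [s.area() for s in shapes]
print(areas)  # [12, 3.14159]
```

The loop never asks which class each object belongs to; each object responds to the same `area()` message in its own way, which is exactly the "one operation, many forms" described above.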
Data Access: In POP, most functions use global data for sharing, which can be accessed freely from function to function in the system.
Data Hiding: POP does not have any proper way of hiding data, so it is less secure.
Overloading: In POP, overloading is not possible.
Examples: Examples of POP are C, VB, FORTRAN, and Pascal.