BCA II Sem Operating Systems
Unit – I:
Operating system introduction: Operating systems objectives and functions, computer system architecture, OS structure, OS operations, evolution of operating systems – simple batch, multiprogrammed, time-shared, parallel, distributed systems, real-time systems, operating system services.
Unit – II:
Process and CPU scheduling - process concepts - the process, process state, process control block, threads, process scheduling - scheduling queues, schedulers, context switch, preemptive scheduling, dispatcher, scheduling criteria, scheduling algorithms, case studies: Linux, Windows.
Process coordination - process synchronization, the critical section problem, synchronization hardware, semaphores, and classic problems of synchronization, monitors, case studies: Linux, Windows.
Unit – III:
Memory management and virtual memory - logical & physical address space, swapping, contiguous allocation, paging, structure of page table, segmentation, segmentation with paging, virtual memory, demand paging, performance of demand paging, page replacement, page replacement algorithms, allocation of frames.
Unit – IV:
File system interface - the concept of a file, access methods, directory structure, file system mounting, file sharing, protection, file system structure. Mass storage structure - overview of mass storage structure, disk structure, disk attachment, disk scheduling.
Unit – V:
Deadlocks - system model, deadlock characterization, methods for handling deadlocks, deadlock prevention, deadlock avoidance, deadlock detection and recovery from deadlock.
Reference books:
1. Abraham Silberschatz, Peter B. Galvin, Greg Gagne, Operating System Principles, 8th Edition, Wiley Student Edition.
2. Naresh Chauhan, Principles of Operating Systems, Oxford University Press.
UNIT-I
An operating system can also be defined as:
Resource allocator – manages and allocates resources.
Control program – controls the execution of user programs and the operation of I/O devices.
Kernel – the one program running at all times (all else being application programs).
Process Management
Memory Management
File Management
I/O System Management
Secondary Storage Management
Security
Control over system performance
Job accounting
Error detecting aids
Coordination between other software and users
1. Process Management:
2. Memory Management:
3. File Management:
4. Secondary Storage Management:
The main purpose of a computer system is to execute programs. These programs, with the data they access, must be in main memory during execution. Because main memory is too small to accommodate all data and programs, and because the data it holds is lost when power is lost, the computer system must provide secondary storage to back up main memory. Most modern computer systems use disks as the storage medium to store data and programs. The operating system is responsible for the following activities of disk management:
Free space management.
Storage allocation.
Disk scheduling
Because secondary storage is used frequently, it must be used efficiently.
A computer system can be divided into four components: the hardware, the
operating system, the application programs, and the users.
1. User View
2. System View
User View: The user view of the computer refers to the interface being used. Such
systems are designed for one user to monopolize its resources, to maximize the work
that the user is performing. In these cases, the operating system is designed mostly
for ease of use, with some attention paid to performance, and none paid to resource
utilization.
2. Storage Structure
Computer programs must be in main memory (also called RAM) to be executed. Main
memory is the only large storage area that the processor can access directly. It forms
an array of memory words. Each word has its own address. Interaction is achieved
through a sequence of load or store instructions to specific memory addresses. The
load instruction moves a word from main memory to an internal register within the
CPU, whereas the store instruction moves the content of a register to main memory.
The instruction-execution cycle includes:
1) Fetch an instruction from memory, store it in the instruction register, and increment the PC register.
2) Decode the instruction, which may cause operands to be fetched from memory and stored in an internal register.
Note that main memory is a volatile storage device that loses its contents when power is turned off or otherwise lost.
3. I/O Structure
Storage is only one of many types of I/O devices within a computer. A computer
system consists of CPUs and multiple device controllers that are connected
through a common bus. The device controller is responsible for moving the data
between the peripheral devices that it controls and its local buffer storage. Typically,
operating systems have a device driver for each device controller.
To start an I/O operation, the device driver loads the appropriate registers within the
device controller.
The device controller examines the contents of these registers to determine
what action to take.
The controller starts the transfer of data from the device to its local buffer. Once
the transfer of data is complete, the device controller informs the device driver
via an interrupt that it has finished its operation.
The device driver then returns control to the operating system. For other
operations, the device driver returns status information.
This form of interrupt-driven I/O is fine for moving small amounts of data but can produce high overhead when used for bulk data movement such as disk I/O. For moving bulk data, direct memory access (DMA) is used.
After setting up buffers, pointers, and counters for the I/O device, the device
controller transfers an entire block of data directly to or from its own buffer
storage to memory, with no intervention by the CPU. Only one interrupt is
generated per block, to tell the device driver that the operation has completed.
A computer is an electronic machine that makes performing any task very easy. In a computer, the CPU executes each instruction provided to it in a series of steps; this series of steps is called the machine cycle and is repeated for each instruction. One machine cycle involves fetching the instruction, decoding the instruction, transferring the data, and executing the instruction.
1. Single-processor systems
In a single-processor system there may be multiple applications that need to be executed. However, the system contains a single processor, and only one process can be executed at a time.
2. Multiprocessor systems
Most computer systems are single-processor systems, i.e., they have only one processor. However, multiprocessor or parallel systems are increasing in importance nowadays. These systems have multiple processors working in parallel that share the computer clock, memory, bus, peripheral devices, etc. They are also known as parallel systems or tightly coupled systems.
Advantages of Multiprocessor Systems
Enhanced Throughput
Asymmetric Multiprocessing
In asymmetric systems, each processor is assigned a specific task. There is a master processor that gives instructions to all the other processors. An asymmetric multiprocessor system contains a master-slave relationship: one master processor controls the remaining slave processors. The master processor allots processes to the slave processors, or they may have some predefined task to perform. In case the master processor fails, one processor among the slave processors is made the master processor to continue the execution. In case a slave processor fails, the other slave processors take over its job. Asymmetric multiprocessing is simple, as there is only one processor controlling the data structures and all the activities in the system.
Symmetric Multiprocessing
Symmetric multiprocessing is one in which each processor performs all tasks within the operating system. All the processors are in a peer-to-peer relationship; there is no master-slave relationship as in asymmetric multiprocessing. All the processors communicate using the shared memory.
3. Clustered systems
Simple Structure
There are several commercial systems that don't have a well-defined structure. Such operating systems began as small, simple and limited systems and then grew beyond their original scope. MS-DOS is an example of such a system; it was not divided into modules carefully. Another example of limited structuring is the UNIX operating system.
Operating systems such as MS-DOS and the original UNIX did not have well-
defined structures.
There was no CPU Execution Mode (user and kernel), and so errors in
applications could cause the whole system to crash.
Monolithic Approach
Functionality of the OS is invoked with simple function calls within the kernel,
which is one large program.
Device drivers are loaded into the running kernel and become part of the kernel.
Layered Approach
Microkernels
This structures the operating system by removing all nonessential portions of the
kernel and implementing them as system and user level programs.
A Microkernel architecture
6. EXPLAIN OPERATING SYSTEM OPERATIONS?
Interrupt driven by hardware.
Software error or request creates exception or trap.
Division by zero, request for operating system service.
Other process problems include an infinite loop, or processes modifying each other or the operating system.
Dual-mode operation allows the OS to protect itself and other system components.
User mode and kernel mode.
Mode bit provided by hardware.
Provides ability to distinguish when system is running user code or kernel code.
System call changes mode to kernel, return from call resets it to user.
Dual-Mode Operation
In order to ensure the proper execution of the operating
system, we must be able to distinguish between the execution of operating-system
code and user defined code. The approach taken by most computer systems is to
provide hardware support that allows us to differentiate among various modes of
execution. At the very least, we need two separate modes of operation: user
mode and kernel mode (also called supervisor mode, system mode, or
privileged mode). A bit, called the mode bit, is added to the hardware of the
computer to indicate the current mode: kernel (0) or user (1). With the mode bit, we
are able to distinguish between a task that is executed on behalf of the operating
system and one that is executed on behalf of the user.
When the computer system is executing on behalf of a user application, the
system is in user mode.
However, when a user application requests a service from the operating system
(via a system call), it must transition from user to kernel mode to fulfill the
request.
At system boot time, the hardware starts in kernel mode.
The operating system is then loaded and starts user applications in user mode.
Whenever a trap or interrupt occurs, the hardware switches from user mode to
kernel mode (that is, changes the state of the mode bit to 0).
Thus, whenever the operating system gains control of the computer, it is
in kernel mode.
The system always switches to user mode (by setting the mode bit to 1)
before passing control to a user program.
The dual mode of operation provides us with the means for protecting the operating
system from errant users—and errant users from one another. We accomplish this
protection by designating some of the machine instructions that may cause harm as
privileged instructions. The hardware allows privileged instructions to be executed
only in kernel mode. If an attempt is made to execute a privileged instruction in user
mode, the hardware does not execute the instruction but rather treats it as illegal and
traps it to the operating system.
System calls provide the means for a user program to ask the operating system to
perform tasks reserved for the operating system on the user program's behalf. A
system call is invoked in a variety of ways, depending on the functionality provided by
the underlying processor.
When a system call is executed, it is treated by the hardware as a software interrupt.
Control passes through the interrupt vector to a service routine in the operating
system, and the mode bit is set to kernel mode. The system call service routine is a
part of the operating system. The kernel examines the interrupting instruction to
determine what system call has occurred; a parameter indicates what type of service
the user program is requesting. Additional information needed for the request may be
passed in registers, on the stack, or in memory (with pointers to the memory
locations passed in registers). The kernel verifies that the parameters are correct and
legal, executes the request, and returns control to the instruction following the
system call. The lack of a hardware-supported dual mode can cause serious
shortcomings in an operating system.
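As an illustration (not part of the original notes), here is a minimal C sketch of a program crossing the user/kernel boundary through a system call on a POSIX system; the write() call below traps into the kernel and returns in user mode:

    /* Minimal sketch, assuming a POSIX system: write() is a system call,
     * so the hardware switches the mode bit to kernel (0) on entry and
     * back to user (1) when the call returns. */
    #include <string.h>   /* strlen() */
    #include <unistd.h>   /* write() */

    int main(void)
    {
        const char *msg = "hello from user mode\n";
        /* File descriptor 1 is standard output. */
        ssize_t n = write(1, msg, strlen(msg));
        return (n < 0) ? 1 : 0;
    }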
When a program error occurs, the operating system must terminate the program
abnormally. This situation is handled by the same code as is a user-requested
abnormal termination. An appropriate error message is given, and the memory of the
program may be dumped. The memory dump is usually written to a file so that the
user or programmer can examine it and perhaps correct it and restart the program.
Timer
The operating system maintains control over the CPU. We must prevent a
user program from getting stuck in an infinite loop or not calling system services
and never returning control to the operating system. To accomplish this goal, we can
use a timer. A timer can be set to interrupt the computer after a specified period. The
period may be fixed (for example, 1/60 second) or variable (for example, from 1
millisecond to 1 second). A variable timer is generally implemented by a fixed-rate
clock and a counter.
The operating system sets the counter. Every time the clock ticks, the counter is
decremented. When the counter reaches 0, an interrupt occurs. For instance, a 10-bit
counter with a 1-millisecond clock allows interrupts at intervals from 1 millisecond to
1,024 milliseconds, in steps of 1 millisecond. Before turning over control to the user,
the operating system ensures that the timer is set to interrupt. If the timer interrupts,
control transfers automatically to the operating system, which may treat the interrupt
as a fatal error or may give the program more time. Thus, we can use the timer to
prevent a user program from running too long. A simple technique is to initialize a
counter with the amount of time that a program is allowed to run.
Every second, the timer interrupts and the counter is decremented by 1. As long as
the counter is positive, control is returned to the user program. When the counter
becomes negative, the operating system terminates the program for exceeding the
assigned time limit.
i) BATCH SYSTEMS
The users of a batch operating system do not interact with the computer directly. Each
user prepares his job on an off-line device like punch cards and submits it to the
computer operator. To speed up processing, jobs with similar needs are batched
together and run as a group. The programmers leave their programs with the operator
and the operator then sorts the programs with similar requirements into batches.
MULTIPROGRAMMING SYSTEM
In multiprogramming, CPU utilization is increased. A single user cannot keep the CPU and I/O devices busy at all times. The idea of multiprogramming is as follows: the operating system keeps several jobs in memory simultaneously.
Initially all the jobs are kept in job pool, a storage area in the disk. Some
jobs from job pool are placed in memory. This requires some type of job scheduling.
When the operating system selects one job from memory and begins to execute it, this job may require some input from an I/O device to complete. In a non-multiprogramming system, the CPU would sit idle during this time. In a multiprogramming system, the CPU does not sit idle; instead, it switches to another job. When that job needs to wait, the CPU switches to yet another job, and so on. When the first job finishes its I/O operation, the CPU comes back to execute it; the process continues and the CPU never sits idle.
TIME-SHARING SYSTEMS
Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently that the user can receive an immediate response. For example, in transaction processing, the processor executes each user program in a short burst or quantum of computation. That is, if n users are present, then each user can get a time quantum. When the user submits a command, the response time is a few seconds at most.
The operating system uses CPU scheduling and multiprogramming to provide each
user with a small portion of a time. Computer systems that were designed primarily as
batch systems have been modified to time-sharing systems.
Problem of reliability.
Question of security and integrity of user programs and data.
Problem of data communication.
Parallel Systems:
Many systems are single processor systems i.e., they have only one CPU. However,
Multiprocessor Systems (also known as Parallel Systems) have more than
one processor in close communication, sharing bus, the clock and sometimes memory
and peripheral devices. Hence, these systems can also be called “Tightly Coupled
Systems”. The advantages of parallel systems are
c) Increased reliability: If we have ten processors and one fails, then each of the remaining nine processors must pick up a share of the work of the failed processor. So the system works at a lower speed rather than failing altogether.
There are two types of multiprocessor systems. They are symmetric and
asymmetric.
Distributed Systems:
The processors communicate with one another through various communication lines (such as high-speed buses or telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in a distributed system may vary in size and function. These processors are referred to as sites, nodes, computers, and so on. The advantages of distributed systems are:
With resource sharing facility, a user at one site may be able to use the
resources available at another.
Speed up the exchange of data with one another via electronic mail.
If one site fails in a distributed system, the remaining sites can potentially
continue operating.
Better service to the customers.
Reduction of the load on the host computer.
Reduction of delays in data processing
A real-time system is defined as a data processing system in which the time interval
required to process and respond to inputs is so small that it controls the environment.
The time taken by the system to respond to an input and display the required updated information is termed the response time. In this method, the response time is very low compared to online processing.
Real-time systems are used when there are rigid time requirements on the operation
of a processor or the flow of data and real-time systems can be used as a control
device in a dedicated application. A real-time operating system must have well-
defined, fixed time constraints, otherwise the system will fail. For example, scientific
experiments, medical imaging systems, industrial control systems, weapon systems,
robots, air traffic control systems, etc.
Hard real-time systems guarantee that critical tasks complete on time. In hard real-
time systems, secondary storage is limited or missing and the data is stored in ROM.
In these systems, virtual memory is almost never found.
Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains the priority until it completes. Soft real-time systems have more limited utility than hard real-time systems. Examples include multimedia, virtual reality, and advanced scientific projects like undersea exploration and planetary rovers.
A Network Operating System runs on a server and provides the server the capability
to manage data, users, groups, security, applications, and other networking functions.
The primary purpose of the network operating system is to allow shared file and
printer access among multiple computers in a network, typically a local area network
(LAN), a private network or to other networks.
An Operating System provides services to both the users and to the programs.
Program execution
I/O operations
File System manipulation
Communication
Error Detection
Resource Allocation
Protection
Program execution: Operating systems handle many kinds of activities, from user programs to system programs such as the printer spooler, name servers, file server, etc. Each of these activities is encapsulated as a process.
I/O operation: An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.
I/O operation means read or write operation with any file or any specific I/O
device.
Operating system provides the access to the required I/O device when required.
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. Following are the major activities of an operating system with respect to file management −
The OS handles routing and connection strategies, and the problems of contention and
security. Following are the major activities of an operating system with respect to
communication −
Error handling: Errors can occur anytime and anywhere. An error may occur in the CPU, in I/O devices or in the memory hardware. Following are the major activities of an operating system with respect to error handling −
PROCESS
2Q) Write about Process Control Block (PCB) (OR) Task Control Block (TCB)
A) PCB: PCB stands for Process Control Block. It is also called a Task Control Block. In an O.S., the PCB contains the process ID, process state, program counter, memory information, accounting information, I/O status information and so on. The general structure of a Process Control Block is represented as follows:
PROCESS ID: In every O.S., each process has an identification number.
PROCESS STATE: Each process has a state. For example, the state may be newborn, ready, running, waiting or terminated.
PROGRAM COUNTER: The program counter holds the address of the next instruction to be executed for the process.
MEMORY: This field stores memory-management data and addresses; for memory protection, it contains the base register and limit register.
ACCOUNTING INFORMATION: This field represents data such as time limits, CPU utilization, resources of I/O and memory, and so on.
I/O STATUS INFORMATION: The I/O status field represents the status of all I/O devices.
PCB POINTER: The PCB pointer is used to point to the PCB of the next process.
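As a hedged illustration, a PCB could be declared in C roughly as follows; the field names and sizes are hypothetical, not taken from any real kernel:

    /* Illustrative PCB layout -- names and sizes are hypothetical. */
    struct pcb {
        int            pid;              /* process ID                      */
        int            state;            /* newborn, ready, running, ...    */
        unsigned long  program_counter;  /* address of the next instruction */
        unsigned long  registers[16];    /* saved CPU registers             */
        unsigned long  base, limit;      /* memory-protection registers     */
        long           cpu_time_used;    /* accounting information          */
        int            open_files[16];   /* I/O status information          */
        struct pcb    *next;             /* PCB pointer to the next process */
    };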
In this scheduling, the medium-term scheduler performs the swap-in and swap-out functions. It also performs context switching between the processes.
In scheduling, the system follows a header linked list. In the header linked list, the queue header contains two parts:
1. HEAD NODE
2. TAIL NODE
The head part contains the address of the first PCB, whereas the tail part contains the address of the last.
4) Explain about threads?
A)A thread is a flow of execution through the process code, with its own program counter that keeps track of
which instruction to execute next, system registers which hold its current working variables, and a stack
which contains the execution history.
A thread shares with its peer threads information such as the code segment, data segment and open files. When one thread alters a code-segment memory item, all other threads see that change.
A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism. Threads represent a software approach to improving operating-system performance by reducing the switching overhead; in its heaviest form, a thread is equivalent to a classical process.
Each thread belongs to exactly one process and no thread can exist outside a process.
Each thread represents a separate flow of control. Threads have been successfully used in implementing
network servers and web server. They also provide a suitable foundation for parallel execution of
applications on shared memory multiprocessors. The following figure shows the working of a single-threaded and a multithreaded process.
Advantages of Thread
Threads minimize the context switching time.
Use of threads provides concurrency within a process.
Efficient communication.
It is more economical to create and context switch threads.
Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
Types of Thread
Threads are implemented in the following two ways:
User Level Threads – user-managed threads.
Kernel Level Threads – operating-system-managed threads acting on the kernel, an operating system core.
User Level Threads: In this case, the thread management kernel is not aware of the existence of threads.
The thread library contains code for creating and destroying threads, for passing message and data between
threads, for scheduling thread execution and for saving and restoring thread contexts. The application starts
with a single thread.
Advantages
• Thread switching does not require Kernel mode privileges.
• User level thread can run on any operating system.
• Scheduling can be application specific in the user level thread.
• User level threads are fast to create and manage.
Disadvantages
• In a typical operating system, most system calls are blocking.
• A multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread management code in the
application area. Kernel threads are supported directly by the operating system. Any application can be
programmed to be multithreaded. All of the threads within an application are supported within a single
process.
The Kernel maintains context information for the process as a whole and for individuals threads within the
process. Scheduling by the Kernel is done on a thread basis. The Kernel performs thread creation, scheduling
and management in Kernel space. Kernel threads are generally slower to create and manage than the user
threads.
Advantages
• The Kernel can simultaneously schedule multiple threads from the same process on multiple processors.
• If one thread in a process is blocked, the Kernel can schedule another thread of the same process.
• Kernel routines themselves can be multithreaded.
Disadvantages
• Kernel threads are generally slower to create and manage than user threads.
• Transfer of control from one thread to another within the same process requires a mode switch to the
Kernel.
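A minimal sketch of kernel-supported threads using the POSIX pthreads library (an addition for illustration; compile with -lpthread):

    /* The main thread creates one peer thread; both share the process's
     * address space and open files. */
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        printf("worker thread says: %s\n", (const char *)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        /* Create a thread that the kernel can schedule independently. */
        if (pthread_create(&tid, NULL, worker, "hello") != 0)
            return 1;
        pthread_join(tid, NULL);   /* wait for the peer thread to finish */
        return 0;
    }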
5Q) Explain about CPU Scheduling algorithms ?
A) The functionality of scheduling is represented by the order of execution. It is a feature of the multiprogramming environment. There are various types of scheduling algorithms supported in the operating system. They are:
First Come First Serve (FCFS)
Shortest Job First (SJF)
Priority scheduling
Round Robin (RR)
Multilevel queue scheduling
Multilevel feedback queue scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so
that once a process enters the running state, it cannot be preempted until it completes its allotted time,
whereas the preemptive scheduling is based on priority where a scheduler may preempt a low priority
running process anytime when a high priority process enters into a ready state.
First Come, First Served (FCFS)
Jobs are executed on a first come, first served basis.
It is a non-preemptive scheduling algorithm.
Easy to understand and implement.
Its implementation is based on a FIFO queue.
Poor in performance, as average wait time is high.
Wait time of each process is as follows (wait time = service time - arrival time):
P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 8 - 2 = 6
P3: 16 - 3 = 13
Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
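The same arithmetic can be checked with a short C sketch; the arrival and burst values below are the ones implied by the worked example (P3's burst value is arbitrary, since it does not affect the average):

    /* FCFS waiting time: wait = start time - arrival time. */
    #include <stdio.h>

    int main(void)
    {
        int arrival[] = {0, 1, 2, 3};
        int burst[]   = {5, 3, 8, 6};
        int n = 4, start = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            total_wait += start - arrival[i];  /* waiting time of process i */
            start += burst[i];                 /* next process starts here  */
        }
        printf("Average wait: %.2f\n", (double)total_wait / n);  /* 5.75 */
        return 0;
    }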
Shortest Job Next (SJN)
This is also known as shortest job first, or SJF.
This is a non-preemptive scheduling algorithm.
Best approach to minimize waiting time.
Easy to implement in Batch systems where required CPU time is known in advance.
Impossible to implement in interactive systems where the required CPU time is not known.
The processor should know in advance how much time a process will take.
MULTILEVEL QUEUE SCHEDULING:
The processes are permanently assigned to one queue, and each queue executes its own scheduling algorithm.
MULTILEVEL FEEDBACK QUEUE SCHEDULING:
In a multilevel queue, the processes are permanently assigned to a queue; the processes do not move between the queues. In multilevel feedback queue scheduling, it is possible to move between the queues. In general, a multilevel feedback queue scheduler is defined by the following parameters:
* the number of queues
* the scheduling algorithm for each queue
* the method used to determine which queue a process will enter when that process needs service.
For example, it is represented as follows:
Advantages
The spooling operation uses a disk as a very large buffer.
Spooling is capable of overlapping I/O operation for one job with processor
operations for another job.
Context switches are computationally intensive, since register and memory state must be saved and restored. To reduce context-switching time, some hardware systems employ two or more sets of processor registers. When the process is switched, the following information is stored for later use:
Program Counter
Scheduling information
Base and limit register value
Currently used register
Changed State
I/O State information
Accounting information
1.PROCESS CONTROL:
The system calls for the process control are represented as follows;
Load, execute
Create process, terminate process
Wait for time
Wait event, signal event
Allocate and free memory
2. FILE MANIPULATION:
It is one of the functions of the operating system. The system calls are represented as follows:
Create file, delete file
Open, close
Read, write, re-position
Set the attributes
3. DEVICE MANIPULATION:
It is one of the functions of the O.S. The system calls are represented as follows:
Request device, release device
Read, write, re-position
Logically attached or detached devices
4. INFORMATION MAINTENANCE:
It is one of the functions of the O.S. It is used to store and retrieve data. The system calls are represented as follows:
Get time or date, set time or date
Get system data, set system data
Get process file, set process file
Get device attributes, set device attributes
5. COMMUNICATION:
The functionality of communication is used to share the data or transfer the data from one process to another.
The system calls are represented as follows;
Create, delete communication connection
Send, receive messages
Transfer information status
Attach or detach remote devices
For example, some of these calls are represented as follows:
fork( ) – creates a child process
fopen( ) – opens a file (a C library call)
fclose( ) – closes a file (a C library call)
creat( ) – creates a file with the specified format
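A hedged C sketch of fork() in use; the call returns 0 in the child and the child's PID in the parent, so both processes continue from the same point:

    #include <stdio.h>
    #include <sys/wait.h>   /* wait() */
    #include <unistd.h>     /* fork(), getpid() */

    int main(void)
    {
        pid_t pid = fork();          /* creates the child process */
        if (pid == 0) {
            printf("child:  pid=%d\n", getpid());
        } else if (pid > 0) {
            wait(NULL);              /* parent waits for the child */
            printf("parent: child was %d\n", pid);
        }
        return 0;
    }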
DIRECT COMMUNICATION:
In direct communication, each process that wants to communicate must explicitly name the sender or receiver. In this mechanism, the send & receive functions are represented as follows:
Send (A, message) – send a message to process A.
Receive (B, message) – receive a message from process B.
In this communication the link has following properties;
A link is associated with exactly two processes.
Between each pair of processes, there exists exactly one link.
The link may be unidirectional but it is usually bidirectional.
INDIRECT COMMUNICATION:
In this mechanism, the messages are sent and received through a mailbox. A mailbox is a shared object with a unique identification through which processes exchange messages.
Send (A, message) – the message is sent to mailbox A.
Receive (A, message) – receive a message from mailbox A.
In this communication, the link has the following properties:
It has a shared mailbox.
A link may be either unidirectional or bidirectional.
SYMMETRIC/ASYMMETRIC COMMUNICATION:
In symmetric communication, both processes name each other: the sender knows the receiver's name and the receiver knows the sender's name, i.e.:
Send (P, message)
Receive (Q, message)
In asymmetric communication, only the sender names the recipient; the receiver does not need the name of the sender, i.e.:
Send (P, message)
Receive (id, message) – receive from any process; id is set to the name of the sender.
BUFFERING:
It is used to represent the number of messages that can be stored temporarily in the link. There are 3 ways:
1. ZERO CAPACITY: The queue length is zero, i.e., the link cannot hold any messages.
2. BOUNDED CAPACITY: The queue has finite length, i.e., the link can hold at most n messages.
3. UNBOUNDED CAPACITY: The queue has infinite length, i.e., the link can hold any number of messages.
The producer and consumer can be implemented through interprocess communication (IPC) as follows:
PRODUCER:
repeat
    produce a message
    ...
    Send (consumer, message)
    ...
until false

CONSUMER:
repeat
    ...
    Receive (producer, message)
    consume the message
    ...
until false
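On UNIX, the Send/Receive pattern above can be realized with a pipe; a minimal sketch (the pipe plays the role of a bounded-capacity link, write() is Send and read() is Receive):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>   /* pipe(), fork(), read(), write() */

    int main(void)
    {
        int fd[2];
        char buf[64];

        if (pipe(fd) == -1) return 1;
        if (fork() == 0) {                       /* consumer process */
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("consumed: %s\n", buf); }
            return 0;
        }
        close(fd[0]);                            /* producer process */
        write(fd[1], "message", strlen("message"));
        close(fd[1]);
        return 0;
    }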
12Q) Define job queue, ready queue, waiting time, response-time, turn-around time, throughput?
A) Job queue: As jobs enter the system, they are kept in a queue called the job queue.
Ready queue: The jobs that are ready to execute are kept in a queue called the ready queue. The queue follows FIFO.
Waiting time: The time each job spends in the ready queue is called waiting time.
Response time: The time taken by a process to produce its first response after it starts executing is called response time.
Turnaround time: The delay between job submission and completion is called turnaround time.
Throughput: The number of processes that are executed per unit of time is called throughput.
Difference between process and thread:
1. Process: Process switching needs interaction with the operating system. Thread: Thread switching does not need to interact with the operating system.
2. Process: In multiple processing environments, each process executes the same code but has its own memory and file resources. Thread: All threads can share the same set of open files and child processes.
3. Process: If one process is blocked, then no other process can execute until the first process is unblocked. Thread: While one thread is blocked and waiting, a second thread in the same task can run.
4. Process: Multiple processes without using threads use more resources. Thread: Multiple threaded processes use fewer resources.
5. Process: Each process operates independently of the others. Thread: One thread can read, write or change another thread's data.
15Q) Comparison among Schedulers
Long-Term Scheduler: It is a job scheduler.
Short-Term Scheduler: It is a CPU scheduler.
Medium-Term Scheduler: It is a process-swapping scheduler.
UNIT III
MEMORY MANAGEMENT
Address binding
➔ To be executed, the program must be brought into memory and placed within a
process.
➔ Depending on the memory management in use, the process may be
moved between disk and memory during its execution
[Figure: hardware address protection – the address generated by the CPU is checked against the base and limit registers; if it falls outside the range, the hardware traps to the operating system with an addressing error, otherwise the access proceeds to memory.]
[Figure: dynamic relocation using a relocation register – the MMU adds the value in the relocation register (14000) to the logical address generated by the CPU (346) to form the physical address (14346) sent to memory.]
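The relocation check in the figure can be expressed as a short C sketch; the relocation value 14000 is from the figure, while the limit value is an assumption for illustration:

    #include <stdio.h>

    #define RELOCATION 14000u   /* relocation register, from the figure    */
    #define LIMIT       1000u   /* limit register, assumed for the example */

    /* Translate a logical address the way the MMU in the figure does. */
    unsigned translate(unsigned logical)
    {
        if (logical >= LIMIT) {
            printf("trap: addressing error\n");   /* trap to the OS */
            return 0;
        }
        return logical + RELOCATION;              /* physical address */
    }

    int main(void)
    {
        printf("%u\n", translate(346));   /* prints 14346 */
        return 0;
    }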
Dynamic Loading
➔ The entire program and all data of a process must be in physical memory for
the process to execute.
➔ To obtain better memory space utilization, we can use dynamic loading.
➔ With dynamic loading, a routine is not loaded until it is called.
➔ All routines are kept on disk in a relocatable load format.
➔ When a routine needs to call another routine, the calling routine first checks
to see whether the other routine has been loaded.
➔ If not, the relocatable linking loader is called to load the desired routine into
memory and to update the program’s address tables to reflect this change.
➔ The advantage of dynamic loading is that an unused routine is never loaded.
Swapping
Backing store : fast disk large enough to accommodate copies of all memory images
for all users; must provide direct access to these memory images.
Roll out, roll in: a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.
➔ Major part of swap time is transfer time; total transfer time is directly
proportional to the amount of memory swapped.
➔ Modified versions of swapping are found on many systems, i.e., UNIX,
Linux, and windows.
[Figure: swapping of two processes – 1. process P1 is swapped out of main memory to the backing store; 2. process P2 is swapped in. Main memory holds the operating system and the user space.]
➔ The main memory must accommodate both the operating system and
the various user processes.
➔ We therefore need to allocate different parts of the main memory in the
most efficient way possible.
➔ The memory is usually divided into two partitions : one for the resident
operating system and one for the user processes.
➔ We may place the operating system in either low memory or high memory.
➔ The operating system is usually placed in low memory.
➔ In contiguous memory allocation , each process is contained in a
single contiguous section memory.
Memory Allocation :
Fragmentation
➔ As processes are loaded and removed from memory, the free memory space is
broken into little pieces.
➔ It often happens after some time that processes cannot be allocated to memory blocks because the blocks are too small, and the memory blocks remain unused.
➔ This problem is known as fragmentation. There are two kinds:
1. External fragmentation
2. Internal fragmentation
Paging
➔ A computer can address more memory than the amount physically installed on
the system.
➔ This extra memory is actually called virtual memory, and it is a section of a hard disk that is set up to emulate the computer's RAM.
➔ Paging technique plays an important role in implementing virtual memory.
➔ Paging is a memory management technique in which process address space is
broken into blocks of the same size called pages (size is power of 2, between
512 bytes and 8192 bytes). The size of the process is measured in the number
of pages.
➔ Similarly, main memory is divided into small fixed-sized blocks of (physical)
memory called frames and the size of a frame is kept the same as that of a
page to have optimum utilization of the main memory and to avoid external
fragmentation.
Address Translation
➔ A page address is called a logical address and is represented by a page number and an offset. For a two-level paging scheme, the logical address is divided as:
| p1 | p2 | d |
➔ p1 is an index into the outer page table, p2 is the displacement within the page of the outer page table, and d is the offset within the page.
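As a hedged sketch, a 32-bit logical address can be split into p1, p2 and d with shifts and masks; the 10/10/12 split below is an assumption (4 KB pages, 1024-entry tables):

    #include <stdio.h>

    int main(void)
    {
        unsigned addr = 0x12345678u;          /* sample logical address */
        unsigned p1 = (addr >> 22) & 0x3FFu;  /* index into the outer page table */
        unsigned p2 = (addr >> 12) & 0x3FFu;  /* index into the inner page table */
        unsigned d  =  addr        & 0xFFFu;  /* offset within the page          */
        printf("p1=%u p2=%u d=%u\n", p1, p2, d);
        return 0;
    }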
Address-Translation Scheme:
Address-translation scheme for a two-level 32-bit paging architecture and is shown in
the following diagram:
Shared Pages:
Another advantage of paging is the possibility of sharing common code. This concept
is very useful in time sharing systems. Consider a system with 40 users and each
process executes a text editor contains 150 KB
of code and 50 KB of data i.e., each text editor requires 200 KB and we require 8000
KB for 40 users. We assume that each text editor has a common code i.e., each text
editor shares the common code.
The following diagram shows sharing of pages for each process, For simplicity, we
assume 3 processes, 3 page tables and 12 frames and each frame length is 50 KB.
Q)Write about Segmentation?
A) Segmentation is a memory-management scheme that supports user view of
memory. A program is a collection of segments. A segment is a logical unit such as
main program, procedure, function, method, object, local variables, global variables,
common block, stack, symbol table and arrays.
User’s View of a Program:
In the above diagram, the physical address is 32 bits and is formed as follows: initially the segment number and G bit are used to search the descriptor table. The base and limit information of the segment, along with the offset, forms a linear address. This address is later transformed into a physical address as in the paging scheme.
Here, an entire program is not loaded into memory at once. Instead, the program is divided into parts, and the required part is loaded into main memory for execution. This is not visible to the programmer. The programmer thinks that a large amount of main memory is available, but this is not true. Actually, the system contains a small main memory and a large amount of auxiliary (secondary) memory.
The address generated by CPU is known as logical address. The address seen by the
memory unit is known as physical address. The set of all logical addresses are known
as logical address space and set of physical addresses are known as physical address
space. Logical addresses are also known as virtual address. Virtual memory can be
implemented via:
a) Demand paging
b) Demand segmentation
Suppose we have 32 KB main memory and 1024 KB auxiliary memory. Then there are 2^15 words in main memory and 2^20 words in auxiliary memory. The physical address space is 2^15 and the logical address space is 2^20. The physical address contains 15 bits and the logical address contains 20 bits.
During address translation, if the valid–invalid bit in the page table entry is 0, a page fault occurs. But what happens if the process tries to use a page that was not brought into memory? Access to an invalid page causes a page fault.
“In demand paging memory management scheme, when a required page is not in the
main storage, the hardware raises a missing page interrupt, is popularly known as a
page fault”.
Page Table When Some Pages Are Not in Main Memory:
Steps in Handling a Page Fault:
The total number of page faults for the given reference string is 15.
The FIFO algorithm is easy to understand. However, its performance is not always good.
The FIFO algorithm suffers from Belady's anomaly, i.e., the page-fault rate may increase as the number of allocated frames increases. This is explained in the following graph:
2) Optimal Algorithm: An optimal page replacement algorithm is the lowest page
fault rate of all algorithms. This algorithm never suffers from Belady’s anomaly. In
optimal page replacement algorithm, we replace a page that will not be used for the
longest period of time.
Ex:
The total number of page faults for the given reference string is 9.
Unfortunately, the optimal page replacement algorithm is difficult to implement because it requires future knowledge of the reference string.
3) LRU (Least Recently Used) Algorithm: The LRU algorithm replaces the page that has not been used for the longest period of time. This algorithm looks backward in time rather than forward.
Ex:
The total number of page faults for the given reference string is 12.
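The fault counts quoted above (15 for FIFO, 9 for optimal, 12 for LRU) match the textbook reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 with 3 frames, so that string is assumed in the following C sketch of a FIFO fault counter:

    #include <stdio.h>

    int main(void)
    {
        int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
        int n     = sizeof ref / sizeof ref[0];
        int frames[3] = {-1, -1, -1};   /* empty frames */
        int next = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < 3; j++)
                if (frames[j] == ref[i]) hit = 1;
            if (!hit) {                        /* page fault */
                frames[next] = ref[i];         /* replace the oldest page */
                next = (next + 1) % 3;
                faults++;
            }
        }
        printf("FIFO page faults: %d\n", faults);   /* prints 15 */
        return 0;
    }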
UNIT-IV
FILE SYSTEM INTERFACE
1Q) Explain the concept of a file?
❖ File: “A file is a space on the disk where logically related information will be stored,” or “A file is a named collection of related information that is recorded on a secondary storage device.”
❖ File types:A file has a certain defined structure according to its type. Different
types of file are
a) Text File:A text file is a sequence of characters organized into lines (and possibly
pages).
b) Source File: A source file is a sequence of subroutines and functions, each of
which is further organized as declarations followed by executable statements.
c) Object File: An object file is a sequence of bytes organized into blocks
understandable by the system’s linker.
d) Executable File: An executable file is a series of code sections that the loader can
bring into memory
and execute.
➔ File Attributes:A file is named, for the convenience of its human users and is
referred to by its name.
➔ A name is usually a string of characters such as “example.c”.
➔ Some systems differentiate between lower and upper case characters.
➔ A file has certain attributes, which vary from one operating system to another. They are:
Name – The symbolic file name is the only information kept in human-readable form
Type – The information is needed for systems that support different types
Location – This information is a pointer to file location on device
Size – The current file size (in bytes, words or blocks)
Protection – Access-control information controls who can do reading, writing,
executing
➔ Time, date, and user identification – These can be useful data for protection, security, and usage monitoring. Information about files is kept in the directory structure, which is maintained on the disk.
File Operations: A file is an abstract data type.
➔ The operating system provides system calls to create, write, read, reposition,
delete and truncate files.
1. Create – Two steps are required to create a file. First, space for the file system
must be found for the file. Second, an entry for the new file must be made in
the directory.
2. Write – To write data into a file, a system call is made specifying both the name of the file and the information to be written. The system must keep a write pointer to the location in the file where the next write is to take place. The write pointer must be updated whenever a write occurs.
3. Read – To read from a file, we use a system call that specifies both name of
the file and where the next block of the file should be put. The system keeps a
read pointer to the location in the file where the next read is to take place
6. Open(Fi) – search the directory structure on disk for entry Fi, and move the
content of entry to memory
7. Close (Fi) – move the content of entry Fi in memory to directory structure on
disk
❖ Several pieces of data are needed to manage open files:
❖ File pointer: pointer to last read/write location, per process that has the file
open
❖ File-open count: a counter of the number of times a file is open – to allow removal of data from the open-file table when the last process closes it.
❖ Disk location of the file: a cache of data-access information.
❖ Access rights: per-process access-mode information.
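A hedged C sketch of the open/read/close sequence on a POSIX system; the file name is the example name used earlier in these notes:

    #include <fcntl.h>    /* open()          */
    #include <stdio.h>
    #include <unistd.h>   /* read(), close() */

    int main(void)
    {
        char buf[128];
        int fd = open("example.c", O_RDONLY);   /* example file name */
        if (fd < 0) return 1;
        ssize_t n = read(fd, buf, sizeof buf);  /* advances the read pointer */
        if (n > 0) printf("read %zd bytes\n", n);
        close(fd);                              /* decrements the open count */
        return 0;
    }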
❖ Common File Types:The following table explains common file types:
❖ File Access Methods: File access mechanism refers to the manner in which
the records of a file may be accessed.
❖ There are several ways to access files –
a. Sequential access
b. Direct/Random access
c. Indexed sequential access
a. Sequential access:
✓ A sequential access is that in which the records are accessed in some
sequence, i.e., the information in the file is processed in order, one record
after the other.
✓ This access method is the most primitive one.
✓ Example: Compilers usually access files in this fashion.
b. Direct/Random access:
o Random access file organization provides access to the records directly.
o Each record has its own address on the file, with the help of which it can be directly accessed for reading or writing.
o The records need not be in any sequence within the file and they need not
be in adjacent locations on the storage medium.
Two-Level Directory:
➔ In the two-level directory system, the system maintains a master block that
has one entry for each user.
➔ This master block contains the addresses of the directory of the users.
➔ There are still problems with two-level directory structure.
➔ This structure effectively isolates one user from another.
➔ This is an advantage when the users are completely independent, but a
disadvantage when the users want to cooperate on some task and access files
of other users.
Some systems simply do not allow local files to be accessed by other users.
Tree-Structured Directories:
➔ In the tree-structured directory, the directories themselves are files.
➔ This leads to the possibility of having sub-directories that can contain files and
sub-subdirectories.
➔ When a request is made to delete a directory, all of that directory's files and
subdirectories are also to be deleted.
Acyclic-Graph Directories:
➔ A tree structure prohibits the sharing of files or directories.
➔ An acyclic graph, which is a graph with no cycles, allows directories to have
shared subdirectories and files.
➔ This is explained in the following diagram:
b) Linked allocation:
➔ Linked allocation solves all problems of contiguous allocation.
➔ With linked allocation, each file is a linked list of disk blocks.
➔ The disk blocks may be scattered anywhere on the disk.
➢ The table has one entry for each disk block and is indexed by block number.
➢ The FAT is used much as is a linked list.
➢ The directory contains the block number of the first block of the file.
➢ The table entry indexed by that block number then contains the block number
of the next block in the file.
➢ This chain continues until the last block, which has a special end-of-file value as
the table entry.
➢ Unused blocks are indicated by a 0 (zero) table value.
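A small C sketch of following a FAT chain; the table contents are illustrative (the file starts at block 4 and occupies blocks 4, 7 and 2):

    #include <stdio.h>

    #define EOF_MARK (-1)   /* special end-of-file table value */

    int main(void)
    {
        /* fat[b] holds the number of the block after b; 0 marks a free block. */
        int fat[8] = {0, 0, EOF_MARK, 0, 7, 0, 0, 2};
        int block  = 4;      /* first block number, from the directory entry */

        while (block != EOF_MARK) {
            printf("block %d\n", block);   /* visit this data block     */
            block = fat[block];            /* table entry -> next block */
        }
        return 0;
    }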
c) Indexed Allocation:
✓ Linked allocation solves the external fragmentation problem and size declaration
problems of contiguous allocation.
✓ But in the absence of a FAT, linked allocation cannot support efficient direct access, because the pointers to the blocks are scattered with the blocks themselves all over the disk and need to be retrieved in order.
✓ Indexed allocation solves this problem of linked allocation by bringing all the pointers together into one location known as the index block.
✓ The following diagram shows the logical view of the index table:
✓ Each file has its own index block, which is an array of disk-block addresses.
✓ The i th entry in the index block points to the i th block of the file.
✓ The directory entry contains the address of the index block.
✓ This is explained in the following diagram:
In the MS-DOS operating system the access rights are applied through the “attrib” command, for example:
C:\> attrib +r <file name>
In the UNIX operating system the access rights are applied through the “chmod” command, for example:
chmod 751 <file name>
Security:
Security requires not only an adequate protection system, but also consideration of the external environment within which the system operates. The system must be protected from:
i) unauthorized access,
ii) malicious (harmful) modification or destruction, and
iii) accidental introduction of inconsistency.
Authentication:
➔ A major security problem for operating systems is authentication. Generally,
authentication is based on one or more of three items: user possession (a key
or card), user knowledge (a user_id and password) or user attributes (
fingerprint, retina pattern or signature).
➔ Authentication means verifying the identity of another entity:
– Computer authenticating to another computer
– Person authenticating to a local computer
– Person authenticating to a remote computer
Passwords:
➔ The most common authentication of a user’s identity is via a user password.
➔ In password authentication, users are authenticated by the password.
- User has a secret password.
- System checks it to authenticate the user.
- Vulnerable to eavesdropping when password is communicated from user
to system.
- The big problem with password-based authentication is eavesdropping.
Vulnerabilities :
1) External Disclosure – Password becomes known to an unauthorized person by a
means outside normal network or system operation. Includes storing passwords in
unprotected files or posting it in an unsecured place.
2) Password Guessing – Results from very short passwords, not changing passwords
often, allowing passwords which are common words or proper names, and using
default passwords.
3) Live Eavesdropping – Results from allowing passwords to transmit in an
unprotected manner.
4)Verifier Compromise – Occurs when the verifier’s password file is compromised via
an internal attack.
There are various Techniques to handle the passwords safely. They are
1) One-Time Passwords
2) Challenge-Response
3) On-Line Cryptographic Servers
4) Off-Line Cryptographic Servers
5) Use of Non-Repeating Values
6) Mutual Authentication Protocols
Linked List:
➔ In this representation ,to link together all the free disk blocks with the help of a
pointer the linked list is implemented.In the linked list,the first block contains a
pointer to the next n disk block and so on.
➔ For example,It can be representes as follows
Grouping:
➔ In this representation, the first free block stores the addresses of n free blocks.
➔ The first n-1 of these blocks are actually free; the last block contains the addresses of another n free blocks. The importance of this implementation is that the addresses of a large number of free blocks can be found quickly.
➢ The file organisation module defines the information about the files, logical blocks as well as the physical blocks. It also includes the free-space manager.
➢ The logical file system uses the directory structure to provide the file organisation module with the information it needs. It is also responsible for protection and security.
➢ Application programs are written by the user with the help of languages like C, C++, Java, etc. In the application programs, there are various types of files, for example text files, binary files, indexed files, random files, executable files, etc.
Thrashing:
✓ It is defined as “a situation in which a program causes page faults every few instructions.”
✓ In multiprogramming, CPU utilization increases with the degree of multiprogramming; but when thrashing sets in, CPU utilization drops sharply.
✓ It can be represented as follows:
1) The operating system is responsible for using hardware efficiently — for the disk
drives, this means having a fast access time and disk bandwidth.
2) Access time has two major components:
a) Seek time is the time for the disk arm to move the heads to the cylinder containing the desired sector.
b) Rotational latency is the additional time waiting for the disk to rotate the desired sector to the disk head.
3) Minimize seek time.
4) Seek time ≈ seek distance.
5) Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer.
1. FCFS Scheduling:
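A hedged C sketch of FCFS disk scheduling, using the classic request queue 98, 183, 37, 122, 14, 124, 65, 67 with the head starting at cylinder 53 (an assumed textbook example; it prints 640):

    #include <stdio.h>
    #include <stdlib.h>   /* abs() */

    int main(void)
    {
        int queue[] = {98, 183, 37, 122, 14, 124, 65, 67};
        int n = sizeof queue / sizeof queue[0];
        int head = 53, movement = 0;

        for (int i = 0; i < n; i++) {
            movement += abs(queue[i] - head);   /* seek distance to next request */
            head = queue[i];
        }
        printf("FCFS total head movement: %d cylinders\n", movement);
        return 0;
    }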
3) SCAN Scheduling:
a) The disk arm starts at one end of the disk and moves toward the other end, servicing requests as it reaches each cylinder, until it gets to the other end of the disk; there the head movement is reversed and servicing continues.
b) Sometimes this algorithm is also called the elevator algorithm, because the disk arm behaves just like an elevator in a building, first servicing all the requests going up and then reversing to service requests the other way.
Ex: This algorithm shows a total head movement of 208 cylinders.
4) C-SCAN Scheduling:
a) This scheduling algorithm provides a more uniform wait time than SCAN.
b) The head moves from one end of the disk to the other, servicing requests along
the way. When it reaches the other end, however, it immediately returns
to the beginning of the disk, without servicing any requests on the return trip.
c) The C-SCAN scheduling algorithm treats the cylinders as a circular list that wraps
around from the last cylinder to the first one.
5) C-LOOK Scheduling:
a) C-LOOK is a version of C-SCAN.
b) More commonly, the arm only goes as far as the last request in each direction,
then reverses direction immediately, without first going all the way to the end of the
disk.
Ex:
UNIT-V
DEADLOCKS
1Q)Explain about deadlock in detail?
Deadlock:
❖ When several processes compete for a finite number of resources, a situation may arise where a process requests a resource that is not available at that time; the process then enters a wait state.
❖ It may happen that waiting processes will never again change state because the
resources that they have requested are held by other waiting processes.
❖ This situation is called a deadlock.
❖ Deadlock can arise if four conditions hold simultaneously.
1) Mutual exclusion: only one process at a time can use a resource.
2) Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
3) No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.
4) Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
Resource-Allocation Graph:
❖ Deadlocks can be described in terms of a directed graph called a system resource-allocation graph.
❖ This graph consists of a set of vertices V and a set of edges E.
1) V is partitioned into two types:
a) P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
b) R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
2) Request edge – a directed edge Pi → Rj signifies that process Pi has requested an instance of resource type Rj.
3) Assignment edge – a directed edge Rj → Pi signifies that process Pi is holding an instance of resource type Rj.
2) Resource instances :
- One instance of resource type R1
- Two instances of resources type R2
- One instance of resource type R3
- Three instances of resource type R4
3) Process states :
- Process P1 is holding an instance of resource type R2 and is waiting for an instance
of resource type R1.
- Process P2 is holding an instance of R1 and R2 and is waiting for an instance of
resource type R3.
- Process P3 is holding an instance of R3.
If the graph contains no cycles, then no process in the system is deadlocked. If the graph contains a cycle, then a deadlock may exist.
✓ Deadlock prevention is a set of methods for ensuring that at least one of the
necessary conditions cannot hold.
a) Mutual Exclusion:
✓ The mutual exclusion condition must hold for non-sharable resources. For example, a printer cannot be accessed by more than one process at a time.
✓ Read-only files are a good example of sharable resources and cannot be involved in a deadlock.
✓ A process never needs to wait for a sharable resource.
✓ Hence we cannot prevent deadlocks by denying the mutual exclusion condition, because some resources are fundamentally non-sharable.
b) Hold and Wait:
✓ To ensure that the hold and wait condition never occurs in the system, there are two protocols to break hold and wait.
Protocol 1 – Each process requests and is allocated all its resources before it begins execution.
Protocol 2 – A process may request resources only when it has none. A process may request some resources and use them; before it can request any additional resources, it must release all the resources it currently holds.
Disadvantages:
These two protocols have two main disadvantages.
1) Resource utilization may be low – many of the resources may be allocated but unused for a long period. In the example, the process releases the tape drive and disk file, and then again requests the disk file and printer. If they are available at that time we can allocate them; otherwise the process must wait.
2) Starvation is possible – A process that needs several popular resources may have
to wait indefinitely because at least one of the resources that it needs is always
allocated to some other process.
c) No Preemption:
Protocol 1 –
✓ If a process is holding some resources and requests another resource that cannot be immediately allocated to it, then all the resources it is currently holding are preempted.
✓ The preempted resources are added to the list of resources for which the process is waiting.
✓ The process will be restarted only when it regains its old resources as well as the new ones it is requesting.
Protocol 2 –
✓ A process requests some resources, we first check whether they are available.
If they are available, we can allocate them.
✓ If they are not available, first we check whether they are allocated to some
other process that is waiting for additional resources.
✓ If so, we preempt them from the waiting process and allocate them to the requesting process.
✓ If they are neither available nor held by a waiting process, the requesting process must wait.
✓ While it is waiting, some of its existing resources may be preempted if another process requests them.
d) Circular Wait: The fourth and final condition for deadlocks is the circular wait. To ensure that this condition never holds in the system, each process must request resources in an increasing order of enumeration. Let R = { R1, R2, R3, …, Rm } be the set of resource types. We assign to each resource type a unique integer number, which allows us to compare two resources and determine whether one precedes another in our ordering.
Protocol 1-
✓ Each process can request resources in an increasing order of enumeration only: a process can request an instance of resource type Rj only if F(Rj) > F(Ri) for every resource type Ri it currently holds.
✓ If several instances of the same resource type are needed, a single request for all of them must be issued.
Protocol 2 –
✓ Whenever a process requests an instance of resource type Rj, it must have released any resources Ri such that F(Ri) ≥ F(Rj).
✓ If these two protocols are used, then the circular wait condition cannot be hold.
✓ Suppose the set of processes { P0, P1, …, Pn } is involved in a circular wait, where Pi is waiting for a resource Ri, which is held by Pi+1.
✓ Since process Pi+1 is holding resource Ri while requesting resource Ri+1, we must have F(Ri) < F(Ri+1) for all i.
✓ But this condition means that F(R0) < F(R1) < … < F(Rn) < F(R0). By transitivity, F(R0) < F(R0), which is impossible.
✓ Therefore, there can be no circular wait.
If we follow these protocols so that one of the four conditions for deadlock fails, we can easily prevent deadlock.
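The resource-ordering idea can be illustrated with a small pthreads sketch (an addition for illustration, not from the notes): every process locks resource[0] before resource[1], so a circular wait cannot form:

    #include <pthread.h>

    pthread_mutex_t resource[2] = {PTHREAD_MUTEX_INITIALIZER,
                                   PTHREAD_MUTEX_INITIALIZER};

    void use_both(void)
    {
        /* Always acquire in increasing index order: F(R0) < F(R1). */
        pthread_mutex_lock(&resource[0]);
        pthread_mutex_lock(&resource[1]);
        /* ... use both resources ... */
        pthread_mutex_unlock(&resource[1]);
        pthread_mutex_unlock(&resource[0]);
    }

    int main(void)
    {
        use_both();
        return 0;
    }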
✓ A state is safe if the system can allocate resources to each process (up to its maximum) in some order and still avoid a deadlock.
✓ A system is in a safe state only if there exists a safe sequence.
✓ A sequence of processes < P1, P2 ………., Pn> is a safe sequence for the current
allocation state if, for each Pi, the resources that Pi can still request can be
satisfied by the currently available resources plus the resources held by all the
Pj, with j < i. In this situation,
a) If Pi's resource needs are not immediately available, then Pi can wait until all Pj have
finished.
b) When Pj is finished, Pi can obtain needed resources, execute, and return allocated
resources, and terminate.
c) When Pi terminates, Pi+1 can obtain its needed resources, and so on.
If there is no safe sequence, the system is said to be unsafe.
Ex: Suppose a system with 12 magnetic tape drives and 3 processes: P0, P1 and P2 .
At time t0, the system is in safe state. The sequence <P1, P0, P2> satisfies the safety
condition.
Safety Algorithm:
This algorithm is used to find out whether or not the system is in a safe state. It consists of the following steps:
1. Let Work and Finish be vectors of length m and n respectively. Initialize Work = Available and Finish[i] = false for all i.
2. Find an i such that Finish[i] == false and Needi <= Work. If no such i exists, go to step 4.
3. Work = Work + Allocationi; Finish[i] = true; go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
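A hedged C sketch of the safety algorithm; the matrix values in main() are made-up illustrations, not from the notes:

    #include <stdbool.h>
    #include <stdio.h>

    #define N 3   /* number of processes      */
    #define M 2   /* number of resource types */

    bool is_safe(int avail[M], int alloc[N][M], int need[N][M])
    {
        int  work[M];
        bool finish[N] = {false, false, false};

        for (int j = 0; j < M; j++) work[j] = avail[j];       /* step 1 */

        for (;;) {
            bool progressed = false;
            for (int i = 0; i < N; i++) {                     /* step 2 */
                if (finish[i]) continue;
                bool fits = true;
                for (int j = 0; j < M; j++)
                    if (need[i][j] > work[j]) fits = false;
                if (fits) {                                   /* step 3 */
                    for (int j = 0; j < M; j++) work[j] += alloc[i][j];
                    finish[i] = true;
                    progressed = true;
                }
            }
            if (!progressed) break;
        }
        for (int i = 0; i < N; i++)                           /* step 4 */
            if (!finish[i]) return false;
        return true;
    }

    int main(void)
    {
        int avail[M]    = {3, 4};
        int alloc[N][M] = {{0, 1}, {2, 0}, {3, 0}};
        int need[N][M]  = {{7, 4}, {1, 2}, {5, 0}};
        printf("safe: %s\n", is_safe(avail, alloc, need) ? "yes" : "no");
        return 0;
    }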
Available:
A vector of length m indicates the number of available resources of each type.
Allocation:
An n x m matrix defines the number of resources of each type currently allocated to
each process.
Request:
An n x m matrix indicates the current request of each process. If Request[i, j] = k, then process Pi is requesting k more instances of resource type Rj.
Algorithm:
1. Let Work and Finish be vectors of length m and n respectively. Initialize Work = Available; for each i, if Allocationi ≠ 0 then Finish[i] = false, otherwise Finish[i] = true.
2. Find an i such that Finish[i] == false and Requesti <= Work. If no such i exists, go to step 4.
3. Work = Work + Allocationi; Finish[i] = true; go to step 2.
4. If Finish[i] == false for some i, then the system is in a deadlocked state; moreover, process Pi is deadlocked.
The sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i.
Suppose P2 requests an additional instance of type C.
The Deadlock exists, consisting of processes P1, P2, P3, and P4.
❖ After the deadlock detection algorithm determines that a deadlock exists, and
then two possibilities exist.
❖ One is that a deadlock has occurred and the user needs to deal with deadlock.
❖ The other is to let the system recover from the deadlock.
❖ There are two options for breaking a deadlock.
❖ One solution is to abort one or more processes to break the circular wait (i.e.,
to kill one of its processes).
❖ The second option is to preempt some resources from one or more of the
deadlock processes.
a) Abort all deadlocked processes: This method clearly breaks the deadlock cycle, but at great expense.
b) Abort one process at a time until the deadlock cycle is eliminated: This method incurs considerable overhead, since after each process is aborted, a deadlock-detection algorithm must be invoked to determine whether any processes are still deadlocked.