NOTES
FLYNN’S CLASSIFICATION
It is based on the notion of a stream of information. Two types of information flow into the processor: instructions and data.
The instruction stream is defined as the sequence of instructions executed by the processing unit.
The data stream is defined as the sequence of data, including inputs and partial or temporary results, used by the instruction stream.
According to Flynn’s classification, each of the instruction and data streams can be either single or multiple, so computer architectures can
be classified into four categories:
SISD (Single-instruction stream Single-data stream):
- An SISD computer system is a uni-processor machine which is capable of executing a single instruction, operating on a
single data stream.
- Single Instruction: only one instruction stream is being acted on by the CPU during any one clock cycle.
- Single Data: only one data stream is being used as input during any clock cycle.
- Conventional single-processor Von Neumann computers are classified as SISD systems.
- It is a serial (non-parallel) computer.
- Instructions are executed sequentially, but may be overlapped in their execution stages (Pipelining). Most SISD uni-processor
systems are pipelined.
- SISD computers may have more than one functional unit, all under the supervision of a control unit.
- Example: Most PCs, single-CPU workstations, minicomputers, mainframes, the CDC-6600, VAX 11, and IBM 7001 are SISD
computers.
a = b + c O(1)
V1 = V2 + V3 O(n)
M1 = M2 + M3 O(n²)
SIMD (Single-instruction stream Multiple-data streams): (available today in GPUs)
- An SIMD system is capable of executing the same instruction on all the CPUs, but operating on different data streams.
- Single Instruction: All processing units execute the same instruction at any given clock cycle.
- Multiple Data: Each processing unit can operate on a different data element.
- A SIMD computer has a single control unit which issues one instruction at a time, but it has multiple ALUs or processing units to
operate on multiple data sets simultaneously.
- Well suited to scientific computing, which involves many vector and matrix operations.
- Example: Array Processors and Vector Pipelines
o Array processors: ILLIAC-IV, MPP
o Vector Pipelines: IBM 9000, Cray X-MP, Y-MP & C90.
https://www.youtube.com/watch?v=gKYGA7fFad4&list=PLz8TdOA7NTzQNlzLxRfsv2KexBzRSn3MF&index=83
MISD (Multiple-instruction streams Single-data stream): multiple processing units execute different instructions on the same data stream; few practical machines fall into this category.
HYPER THREADING: (single bank with single window dealing with two queues.)
Hyper-Threading is a technology used by some Intel microprocessors that allows a single physical microprocessor to appear as two logical processors to the operating system, so that two threads can be scheduled on it concurrently.
MULTICORE: (one bank with two windows dealing with two queues each)
SHARED-MEMORY SYSTEMS:
In a shared-memory system a collection of autonomous processors is connected to a memory system via an interconnection network,
and each processor can access each memory location. In a shared-memory system, the processors usually communicate implicitly by
accessing shared data structures.
- In a shared-memory architecture, multiple processors operate independently but share the common memory as a global address
space.
- They are tightly coupled systems as processors share a common memory.
- Only one processor can access a given shared-memory module at a time; the interconnect arbitrates access.
- Changes made to a memory location by one processor are visible to all other processors.
- In shared-memory systems with multiple multicore processors, the interconnect can either connect all the processors directly
to main memory or each processor can have a direct connection to a block of main memory, and the processors can access
each other’s blocks of main memory through special hardware built into the processors.
- In a Uniform Memory Access, or UMA, system, the time to access any memory location is the same for all the cores.
- In Non-uniform Memory Access, or NUMA system, a memory location to which a core is directly connected can be
accessed more quickly than a memory location that must be accessed through another chip.
DISTRIBUTED-MEMORY SYSTEMS:
In a distributed-memory system each processor has its own private memory, and processors communicate explicitly, usually by sending messages over the interconnection network.
Symmetric Multi-Processing
3. Multitasking/Multiprogramming:
Most modern operating systems are multitasking. This means that the operating system provides support for the apparent simultaneous
execution of multiple programs.
- Multitasking is possible even on a system with a single core, since each process runs for a small interval of time, often called
a time slice.
- In a multitasking OS if a process needs to wait for a resource—for example, it needs to read data from external storage—it
will block.
- Threading provides a mechanism for programmers to divide their programs into more or
less independent tasks with the property that when one thread is blocked another thread
can be run.
- Threads are contained within processes, so they can use the same executable, and they
usually share the same memory and the same I/O devices.
- In fact, two threads belonging to one process can share most of the process’s resources.
- If a process is the “master” thread of execution and threads are started and stopped by
the process, then we can envision the process and its subsidiary threads as lines: when a
thread is started, it forks off the process; when a thread terminates, it joins the process.
4. Multithreading:
Hardware multithreading provides a means for systems to continue doing useful work when the task being currently executed has
stalled—for example, if the current task has to wait for data to be loaded from memory.
In a single program there are multiple execution paths. If one of the threads makes an I/O
call, only that thread is blocked from scheduling, and the rest execute as scheduled.
Multiprocessing Environment
Multiprocessing, in computing, is a mode of operation in which two or more processors in a computer simultaneously process two or
more different portions of the same program (set of instructions).
T3 is a thread of another process. It is possible that two tasks (T0 and T3) are running at
the same time.
Shell Command
system() – lets us run a command while remaining inside our own program. A computer user’s instruction that calls for action by the computer’s
executive program.
fork() – creates a duplicate or clone of the calling program. It creates a child process which runs concurrently with the parent process.
exec() – used when program A needs to call program B; it replaces the calling process’s image with the new program.
wait() – used by the parent while waiting for the child to terminate.
IN DETAIL
Therefore, after the system call to fork(), a simple test can tell which process is the child. Please note that Unix will make an
exact copy of the parent's address space and give it to the child. Therefore, the parent and child processes have separate
address spaces.
vfork()
Process Creation
After Creation
After creating the process the Kernel can do one of the following, as part of the dispatcher routine:
int execve (const char *filename, char *const argv[], char *const env[])
In Unix and Unix-like computer operating systems, a zombie process or defunct process is a process that has completed execution (via
the exit system call) but still has an entry in the process table: it is a process in the "Terminated state."
Iterative Server
Concurrent Server
Process Image
Even when the fork command is used and a copy of the process is built, still
only one process at a time runs in the processor (on a single core), although they will have separate process images.
In this code segment, main starts executing and creates a thread running routine1.
Main then continues executing and creates a second thread running routine2. Now
all three threads are executing at the same time, and each
routine’s variables are stored on its own thread’s stack.
Advantage
- An interruption (for example, an I/O call) in any one thread will not
affect the execution of the others.
Problems in Multitasking
- Time to create new process
- Memory requirements
- Switching Time for scheduling
pid = fork();
if ( pid < 0 )
{ perror("fork"); exit(1); }     /* fork failed */
else if ( pid == 0 )
{ do_child_work(); exit(0); }    /* this is the child of the fork */
else                             /* this is the parent of the fork */
{ wait(NULL); }
Benefits of Threads
- Less time to
o create a new thread than a process
o terminate a thread than a process
o switch between two threads within the same process
- Low Memory Requirements than Multiprogramming
- Since threads within the same process share memory and files, they can communicate with each other without invoking the
kernel.
MULTITHREADING IN C#
1- Join is used only when one thread must wait for another to finish (let’s say thread A prepares a file and thread B cannot
continue until the file is ready).
2- Join is also used if the function/thread is returning some value that the caller needs.
The Thread.Sleep() method can be used to pause the execution of the current thread for a specified
time in milliseconds.
JAVA THREAD
For multithreading support in Java we have to write the method run() (by extending Thread or implementing Runnable).
MULTITHREADING IN C
Concurrency Control
A thread’s start routine is a callback function: it is not invoked directly by the program but is created by it and called by the OS.
A callback function is a function passed into another function as an argument, which is then invoked inside the outer function to
complete some kind of routine or action. For example: events in C# Windows Forms.
POSIX Threads
- Historically, hardware vendors have implemented their own proprietary versions of threads.
- These implementations differed substantially from each other, making it difficult for programmers to develop portable
threaded applications.
- In order to take full advantage of the capabilities provided by threads, a standardized programming interface was required.
- For UNIX systems, this interface has been specified by the IEEE POSIX 1003.1c standard (1995).
- Latest version (IEEE POSIX 1003.1-2008)
Thread Management
attr: NULL selects the default thread attributes – to change an attribute we first create an attribute object with pthread_attr_init(attr).
– pthread_exit (status)
If the thread calls pthread_exit(status) inside its start_routine, the thread itself requests to exit and returns a value through status.
– pthread_cancel (thread)
It is used by another thread to request cancellation of a thread; regardless of what the thread is doing, it is forcibly terminated (by default, at its next cancellation point).
– pthread_attr_init (attr)
– pthread_attr_destroy (attr)
– pthread_join (threadid,status)
– pthread_detach (threadid)
– pthread_attr_setdetachstate (attr,detachstate)
– pthread_attr_getdetachstate (attr,detachstate)