Page 38 - DCAP103_Principle of operating system
Unit 1: Operating System
MISD Multiprocessing: Multiple Instruction, Single Data is a type of parallel computing
architecture where many functional units perform different operations on the same data.
Pipeline architectures belong to this type, though a purist might say that the data is different
after processing by each stage in the pipeline.
MISD multiprocessing offers mainly the advantage of redundancy, since multiple processing
units perform the same tasks on the same data, reducing the chances of incorrect results if one
of the units fails. MISD architectures may involve comparisons between processing units to
detect failures. Fault-tolerant computers executing the same instructions redundantly in order
to detect and mask errors, in a manner known as task replication, may be considered to
belong to this type.
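The comparison between redundant processing units described above can be sketched in Python. This is a minimal illustration of task replication with majority voting; the names (replicated_run, the checksum functions) are invented for the example and do not come from any particular system:

```python
from collections import Counter

def replicated_run(units, data):
    """Run the same task on the same data across several 'processing
    units' and majority-vote the results, masking a faulty unit."""
    results = [unit(data) for unit in units]
    winner, _ = Counter(results).most_common(1)[0]
    # Units whose result disagrees with the majority are flagged as faulty.
    faulty = [i for i, r in enumerate(results) if r != winner]
    return winner, faulty

# Three redundant units computing the same checksum; unit 1 simulates
# a hardware fault by producing an off-by-one result.
good = lambda xs: sum(xs) % 256
bad = lambda xs: (sum(xs) + 1) % 256

value, faulty = replicated_run([good, bad, good], [10, 20, 30])
# The correct value (60) survives and the faulty unit (index 1) is detected.
```

With three units, any single fault is both detected and masked; with only two, a disagreement can be detected but not resolved, which is why fault-tolerant designs often use triple redundancy.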
Apart from the redundant and fail-safe character of this type of multiprocessing, it has few
advantages, and it is very expensive. It does not improve performance. Not many instances
of this architecture exist, as MIMD and SIMD are often more appropriate for common
data-parallel techniques; specifically, they allow better scaling and use of computational
resources than MISD does.
MIMD architectures may be used in a number of application areas such as computer-aided
design/computer-aided manufacturing, simulation, modeling, and communication switches.
MIMD machines can be of either shared-memory or distributed-memory categories. These
classifications are based on how MIMD processors access memory.
MIMD multiprocessing architecture is suitable for a wide variety of tasks in which completely
independent and parallel execution of instructions touching different sets of data can be put
to productive use. For this reason, and because it is easy to implement, MIMD predominates
in multiprocessing.
MIMD does raise issues of deadlock and resource contention, however, since threads may collide
in their access to resources in an unpredictable way that is difficult to manage efficiently. MIMD
requires special coding in the operating system of a computer but does not require application
changes. Both system and user software may need to use software constructs such as semaphores
(also called locks or gates) to prevent one thread from interfering with another if they should
happen to cross paths in referencing the same data. This gating or locking process increases code
complexity, lowers performance, and greatly increases the amount of testing required, although
not usually enough to negate the advantages of multiprocessing.
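The gating described above can be illustrated with Python's threading module. The shared counter and the deposit function are invented for the example; the point is that the lock serializes the read-modify-write on shared data:

```python
import threading

counter = 0
lock = threading.Lock()            # the 'gate' guarding the shared data

def deposit(times):
    global counter
    for _ in range(times):
        with lock:                 # only one thread at a time past this point
            counter += 1           # read-modify-write on shared state

threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock, the final count is exactly 4 * 10_000 = 40_000; without
# it, interleaved updates could silently lose increments.
```

The cost the text mentions is visible here: every increment now pays for acquiring and releasing the lock, which is why coarse locking lowers performance even as it guarantees correctness.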
Symmetric Multiprocessing: In computing, symmetric multiprocessing or SMP involves a
multiprocessor computer architecture where two or more identical processors can connect
to a single shared main memory. Most common multiprocessor systems today use an SMP
architecture. In the case of multi-core processors, the SMP architecture applies to the cores,
treating them as separate processors. SMP systems allow any processor to work on any task no
matter where the data for that task are located in memory; with proper operating system support,
SMP systems can easily move tasks between processors to balance the workload efficiently.
SMP represents one of the earliest styles of multiprocessor machine architectures, typically used
for building smaller computers with up to 8 processors. Larger computer systems might use
newer architectures such as NUMA (Non-Uniform Memory Access), in which each processor
has fast access to its own local memory bank and slower access to memory attached to
other processors.
MIMD Multiprocessing: In computing, MIMD (Multiple Instruction stream, Multiple Data
stream) is a technique employed to achieve parallelism. Machines using MIMD have a number
of processors that function asynchronously and independently.
At any time, different processors may be executing different instructions on different pieces of
data.
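This can be sketched with two Python threads standing in for independent processors, each running a different instruction stream on different data (the function names are invented; CPython threads illustrate the structure rather than true hardware parallelism):

```python
import threading

outputs = {}

def word_count(text):
    # Instruction stream A operating on data A (a string).
    outputs["words"] = len(text.split())

def square_sum(nums):
    # Instruction stream B operating on data B (a list of numbers).
    outputs["squares"] = sum(n * n for n in nums)

# Two asynchronous, independent workers: different instructions,
# different data -- the defining property of MIMD.
t1 = threading.Thread(target=word_count, args=("to be or not to be",))
t2 = threading.Thread(target=square_sum, args=([1, 2, 3],))
t1.start(); t2.start()
t1.join(); t2.join()
```

Contrast this with SIMD, where every lane would execute the same instruction, and with MISD, where every unit would consume the same data.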
LOVELY PROFESSIONAL UNIVERSITY 31