SIMD

SIMD (Single Instruction, Multiple Data) is a technique that achieves data parallelism by applying a single instruction to multiple data points simultaneously. It was first used in vector supercomputers in the 1970s. Modern CPUs have adopted SIMD through instruction set extensions like SSE and AVX to improve performance for tasks like graphics processing and multimedia applications that benefit from parallel operations on multiple data points. While SIMD provides performance benefits, it also has disadvantages like not all algorithms being suitable for vectorization and requiring manual optimization by programmers.


In computing, SIMD (Single Instruction, Multiple Data; colloquially, "vector instructions") is a technique employed to achieve data-level parallelism.
History
Supercomputers popular in the 1980s, such as the Cray X-MP, were called "vector
processors." The Cray X-MP had up to four vector processors which could
function independently or work together using a programming model called
"autotasking", which was similar to OpenMP. These machines had very fast
scalar processors and also vector processors for long vector computations, for
example, adding two vectors of 100 numbers each. The Cray X-MP vector
processors were pipelined and had multiple functional units. Pipelining allowed
a single instruction to move a long array of numbers sequentially into a
vector register. Multiple registers, compute units, pipelining, and chaining
allowed vector computers to compute Z = X*Y + V/W rapidly by streaming data
into registers to hide memory latency, overlapping computations, and producing
a result (Z) on each clock cycle.
The first era of SIMD machines was characterized by supercomputers such as the
Thinking Machines CM-1 and CM-2. These machines had many limited
functionality processors that would work in parallel. For example, each of 64,000
processors in a Thinking Machines CM-2 would execute the same instruction at
the same time so that you could do 64,000 multiplies on 64,000 pairs of
numbers at a time.
Supercomputing moved away from the SIMD approach when inexpensive scalar
MIMD approaches based on commodity processors such as the Intel i860 XP 2
became more powerful, and interest in SIMD waned. Later, personal computers
became common and powerful enough to support real-time gaming.
This created a mass demand for a particular type of computing power, and
microprocessor vendors turned to SIMD to meet the demand. Sun Microsystems
introduced SIMD integer instructions in its "VIS" instruction set extensions in
1995, in its UltraSPARC I microprocessor. The first widely-deployed SIMD for
gaming was Intel's MMX extensions to the x86 architecture. IBM and Motorola
then added AltiVec to the POWER architecture, and there have been several
extensions to the SIMD instruction sets for both architectures. All of these
developments have been oriented toward support for real-time graphics, and are
therefore oriented toward vectors of two, three, or four dimensions. When new
SIMD architectures need to be distinguished from older ones, the newer
architectures are then considered "short-vector" architectures. A modern
supercomputer is almost always a cluster of MIMD machines, each of which
implements (short-vector) SIMD instructions. A modern desktop computer is
often a multiprocessor MIMD machine where each processor can execute short-
vector SIMD instructions.
DSPs
A separate class of processors exists for this sort of task, commonly referred to as
Digital Signal Processors, or DSPs. The main difference between DSP and other
SIMD-capable CPUs is that the DSPs are self-contained processors with their
own (often difficult to use) instruction set, while SIMD extensions
rely on the general-purpose portions of the CPU to handle the program details,
and the SIMD instructions handle the data manipulation only. DSPs also tend to
include instructions to handle specific types of data, sound or video for instance,
while SIMD systems are considerably more general-purpose. DSPs generally
operate in Scratchpad RAM driven by DMA transfers initiated from the host
system and are unable to access external memory.
Some DSPs include SIMD instruction sets. The inclusion of SIMD units in
general purpose processors has supplanted the use of DSP chips in computer
systems, though they continue to be used in embedded applications. A sliding
scale exists - the Cell's SPUs and Ageia's PhysX Physics Processing Unit could be
considered half way between CPUs & DSPs, in that they are optimized for
numeric tasks & operate in local store, but they can autonomously control their
own transfers thus are in effect true CPUs.
Advantages
An application that may take advantage of SIMD is one where the same value is
being added (or subtracted) to a large number of data points, a common
operation in many multimedia applications. One example would be changing the
brightness of an image. Each pixel of an image consists of three values for the
brightness of the red, green and blue portions of the color. To change the
brightness, the R, G, and B values are read from memory, a value is added to (or
subtracted from) them, and the resulting values are written back out to memory.

With a SIMD processor there are two improvements to this process. For one the
data is understood to be in blocks, and a number of values can be loaded all at
once. Instead of a series of instructions saying "get this pixel, now get the next
pixel", a SIMD processor will have a single instruction that effectively says "get
lots of pixels" ("lots" is a number that varies from design to design). For a variety
of reasons, this can take much less time than "getting" each pixel individually,
like with traditional CPU design.
Another advantage is that SIMD systems typically include only those instructions
that can be applied to all of the data in one operation. In other words, if the SIMD
system works by loading up eight data points at once, the add operation being
applied to the data will happen to all eight values at the same time. Although the
same is true for any superscalar processor design, the level of parallelism in a
SIMD system is typically much higher.
Disadvantages
Not all algorithms can be vectorized. For example, a flow-control-heavy task like
code parsing wouldn't benefit from SIMD.
Currently, implementing an algorithm with SIMD instructions usually requires
human labor; most compilers don't generate SIMD instructions from a typical C
program, for instance. Vectorization in compilers is an active area of computer
science research. (Compare vector processing.)
Programming with particular SIMD instruction sets can involve numerous low-
level challenges.
SSE has restrictions on data alignment; programmers familiar with the x86
architecture may not expect this.
Gathering data into SIMD registers and scattering it to the correct destination
locations is tricky and can be inefficient.
Specific instructions like rotations or three-operand addition aren't in some
SIMD instruction sets.
Instruction sets are architecture-specific: old processors and non-x86 processors
lack SSE entirely, for instance, so programmers must provide non-vectorized
implementations (or different vectorized implementations) for them. Similarly,
the next-generation instruction sets from Intel and AMD will be incompatible
with each other (see SSE5 and AVX).
The early MMX instruction set shared a register file with the floating-point stack,
which caused inefficiencies when mixing floating-point and MMX code. However,
SSE2 corrects this.
Chronology
The first use of SIMD instructions was in vector supercomputers of the early
1970s such as the CDC Star-100 and the Texas Instruments ASC. Vector
processing was especially popularized by Cray in the 1970s and 1980s.
Later machines used a much larger number of relatively simple processors in a
massively parallel processing-style configuration. Some examples of this type of
machine included:
ILLIAC IV, circa 1974
ICL Distributed Array Processor (DAP), circa 1974
Burroughs Scientific Processor, circa 1976
Geometric-Arithmetic Parallel Processor, from Martin Marietta, starting in 1981,
continued at Lockheed Martin, then at Teranex and Silicon Optix
Massively Parallel Processor (MPP), from NASA/Goddard Space Flight Center,
circa 1983-1991
Connection Machine, models 1 and 2 (CM-1 and CM-2), from Thinking Machines
Corporation, circa 1985
MasPar MP-1 and MP-2, circa 1987-1996
Zephyr DC computer from Wavetracer, circa 1991
Xplor, from Pyxsys, Inc., circa 2001
There were many others from that era too.



Hardware
Small-scale (64- or 128-bit) SIMD became popular on general-purpose CPUs in
the early 1990s and continued through 1997 and later with Motion Video
Instructions (MVI) for Alpha. SIMD instructions can be found, to one degree or
another, on most CPUs, including IBM's AltiVec and SPE for PowerPC, HP's
PA-RISC Multimedia Acceleration eXtensions (MAX), Intel's MMX and iwMMXt,
SSE, SSE2, SSE3 and SSSE3, AMD's 3DNow!, ARC's ARC Video subsystem,
SPARC's VIS, Sun's MAJC, ARM's NEON technology, and MIPS' MDMX (MaDMaX)
and MIPS-3D. The Cell processor, co-developed by IBM, Sony, and Toshiba, has
SPUs whose instruction set is heavily SIMD-based. NXP, founded by Philips,
developed several SIMD processors named Xetal; the Xetal has 320 16-bit
processing elements designed especially for vision tasks.
Modern Graphics Processing Units are often wide SIMD implementations,
capable of branches, loads, and stores on 128 or 256 bits at a time.
Future processors promise greater SIMD capability: Intel's AVX instructions will
process 256 bits of data at once, and Intel's Larrabee GPU promises two 512-bit
SIMD registers on each of its cores (VPU - Wide Vector Processing Units).
Software
SIMD instructions are widely used to process 3D graphics, although modern
graphics cards with embedded SIMD have largely taken over this task from the
CPU. Some systems also include permute functions that re-pack elements inside
vectors, making them particularly useful for data processing and compression.
They are also used in cryptography. The trend of general-purpose computing
on GPUs (GPGPU) may lead to wider use of SIMD in the future.
Adoption of SIMD systems in personal computer software was at first slow, due
to a number of problems. One was that many of the early SIMD instruction sets
tended to slow overall performance of the system due to the re-use of existing
floating point registers. Other systems, like MMX and 3DNow!, offered support
for data types that were not interesting to a wide audience and had expensive
context switching instructions to switch between using the FPU and MMX
registers. Compilers also often lacked support, requiring programmers to resort
to assembly language coding.

SIMD on x86 had a slow start. The introduction of 3DNow! by AMD and SSE by
Intel confused matters somewhat, but today the system seems to have settled
down (after AMD adopted SSE) and newer compilers should result in more
SIMD-enabled software. Intel and AMD now both provide optimized math
libraries that use SIMD instructions, and open source alternatives like libSIMD
and SIMDx86 have started to appear.
Apple Computer had somewhat more success, even though it entered the SIMD
market later than the rest. AltiVec offered a rich system and could be
programmed using increasingly sophisticated compilers from Motorola, IBM and
GNU, so assembly language programming was rarely needed. Additionally,
many of the systems that would benefit from SIMD were supplied by Apple itself,
for example iTunes and QuickTime. However, in 2006, Apple computers moved
to Intel x86 processors. Apple's APIs and development tools (XCode) were
rewritten to use SSE2 and SSE3 instead of AltiVec. Apple was the dominant
purchaser of PowerPC chips from IBM and Freescale Semiconductor and even
though they abandoned the platform, further development of AltiVec is
continued in several Power Architecture designs from Freescale, IBM and P.A.
Semi.
SIMD within a register, or SWAR, is a range of techniques and tricks used for
performing SIMD in general-purpose registers on hardware that provides no
direct support for SIMD instructions, allowing parallelism in certain
algorithms to be exploited even on such hardware.
Commercial applications
Though it has generally proven difficult to find sustainable commercial
applications for SIMD-only processors, one that has had some measure of success
is the GAPP, which was developed by Lockheed Martin and taken to the
commercial sector by their spin-off Teranex. The GAPP's recent incarnations
have become a powerful tool in real-time video processing applications like
conversion between various video standards and frame rates (NTSC to/from PAL,
NTSC to/from HDTV formats, etc.), deinterlacing, image noise reduction,
adaptive video compression, and image enhancement.
A more ubiquitous application for SIMD is found in video games: nearly every
modern video game console since 1998 has incorporated a SIMD processor
somewhere in its architecture. The PlayStation 2 was unusual in that its vector-
float units could function as autonomous DSPs executing their own instruction
streams, or as coprocessors driven by ordinary CPU instructions. 3D graphics
applications tend to lend themselves well to SIMD processing as they rely heavily
on operations with 4-dimensional vectors. Microsoft's Direct3D 9.0 now chooses
at runtime processor-specific implementations of its own math operations,
including the use of SIMD-capable instructions.
One of the very recent processors to use vector processing is the Cell Processor
developed by IBM in cooperation with Toshiba and Sony. It uses a number of
SIMD processors (each with independent RAM and controlled by a general
purpose CPU) and is geared towards the huge datasets required by 3D and video
processing applications.
Larger scale commercial SIMD processors are available from ClearSpeed
Technology, Ltd. and Stream Processors, Inc. ClearSpeed's CSX600 (2004) has
96 cores each with 2 double-precision floating point units while the CSX700
(2008) has 192. Stream Processors is headed by computer architect Bill Dally.
Their Storm-1 processor (2007) contains 80 SIMD cores controlled by a MIPS
CPU.
Coarse-Grained Method
Unlike the fine-grained approach, which completes a series of operations at
once, the coarse-grained method applies each operation to multiple data items,
so its latency is higher. However, for small amounts of data the latter is
simpler and more efficient, lowering execution time. FIG. 4 shows an example of
an inverse transform implemented in C code in (a) and with SIMD instructions in
(b). In (a), 4x4 additions, 4x2 shifts, and 4x8 memory accesses are required,
but in (b) only four additions, four memory accesses, and two shifts. In the
example, four data items are moved to 128-bit XMM registers and processed
simultaneously. In nested loops, the coarse-grained method can be seen as a
parallelization of the outer loops.

#ifndef MMX  /* (a) scalar C version */
for (j = 0; j < BLOCK_SIZE; j++) {
    for (i = 0; i < BLOCK_SIZE; i++) {
        m5[i] = img->cof[i0][j0][i][j];
    }
    m6[0] = m5[0] + m5[2];
    m6[1] = m5[0] - m5[2];
    m6[2] = (m5[1] >> 1) - m5[3];
    m6[3] = m5[1] + (m5[3] >> 1);
    :
}
#else  /* (b) SIMD version */
mptr = &(img->cof[i0][j0][0][0]);
__asm {
    mov      edx, mptr
    movdqu   xmm1, [edx]
    packssdw xmm1, xmm1      // read m5[0] from memory into xmm1
}
:  // read m5[1], m5[2] from memory
__asm {
    movdqu   xmm4, [edx + 48]
    packssdw xmm4, xmm4      // read m5[3] from memory
}
__asm {
    movq     xmm5, xmm1
    psubw    xmm1, xmm3      // m6[1] = m5[0] - m5[2]
    paddw    xmm3, xmm5      // m6[0] = m5[0] + m5[2]
    movq     xmm5, xmm2
    psraw    xmm2, 1
    psubw    xmm2, xmm4      // m6[2] = (m5[1] >> 1) - m5[3]
    psraw    xmm4, 1
    paddw    xmm4, xmm5      // m6[3] = m5[1] + (m5[3] >> 1)
    :
}
#endif

Coarse-grained reconfigurable architectures (CGRAs) can potentially improve on
the power efficiency of fine-grained FPGAs.
(i) Coarse-grained granularity: SmartCell is designed as a coarse-grained
configurable system targeted at computation-intensive applications. The
processing elements operate on 16-bit input signals and generate a 36-bit
output signal, which avoids high overhead and ensures better performance
compared with fine-grained architectures.
(ii) Flexibility: thanks to the rich computing and communication resources,
versatile computing styles can be mapped onto the SmartCell architecture,
including SIMD, MIMD, and 1D or 2D systolic array structures. This also
expands the range of applications that can be implemented.
(iii) Dynamic reconfiguration: by loading new instruction codes into the
configuration memory through the SPI structure, new operations can be executed
on the desired PEs without interrupting the others. The number of PEs involved
in an application is also adjustable for different system requirements.
(iv) Fault tolerance: fault tolerance is an important feature for improving
production yields and extending the device's lifetime. In the SmartCell
system, defective cells, caused by manufacturing faults or malfunctioning
circuits, can easily be turned off and isolated from the functional ones.
(v) Deep pipelining and parallelism: two levels of pipelining are achieved:
the instruction-level pipeline (ILP) in a single processing element and the
task-level pipeline (TLP) among multiple cells. Data parallelism can also be
exploited to execute multiple data streams concurrently, which combined ensure
a high computing capacity.
(vi) Hardware virtualization: in our design, distributed context memories
store the configuration signals for each PE. The cycle-by-cycle instruction
execution supports hardware virtualization, which makes it possible to map
large applications onto limited computing resources.
(vii) Explicit synchronization: a program counter (PC) is designed to schedule
instruction execution time for each PE on the fly. Variable delays are also
available for input/output signals inside each PE. Therefore, SmartCell can
provide explicit synchronization that eases the exploitation of computing
parallelism.
(viii) Unique system topology: the cell units are tiled in a 2D mesh structure
with four PEs inside each cell. This topology provides varying computing
densities to meet different computational requirements. With the help of the
hierarchical on-chip connections, the SmartCell architecture can be
dynamically reconfigured to perform in different operational styles.
Cell Unit and Processing Element
The reconfigurable cell units are the fundamental components in SmartCell,
which are aligned in a 2D mesh structure as shown in Figure 1. Each cell consists
of four identical PEs. The PE is composed of an arithmetic unit and a logic unit,
I/O muxes, instruction controllers, local data registers, and instruction
memories, as shown in Figure 2. It can be configured to perform basic logic, shift,
and arithmetic functions. The arithmetic unit takes two 16-bit vectors as inputs
for basic arithmetic functions to generate a 36-bit output without loss of
precision during multiply-accumulate operations. The PE also includes some
logic and shift operators, usually found in targeted data streaming applications.
The basic operations supported by SmartCell processor are listed in Table 1.
Multiple PEs can be chained together through the programmable on-chip
connections to implement more complex algorithms.
Configuration and Control Flow
A serial peripheral interface (SPI) is designed to configure and update the
instruction memories, as shown in Figure 3. In this structure, the instruction
memories are linked in a ripple array fashion with the inputs and outputs chained
one to another. During the initial configuration procedure, the instruction code is
loaded to the first PE's instruction memory and is then shifted down to the
second one and so on. This procedure stops after the last active PE is configured.
The run-time reconfiguration can be achieved by the same SPI structure. Two
modes are provided for the fine-grained ID-based configuration and coarse-
grained broadcasting, as shown in Figures 5(a) and 5(b). Some applications
require fine control of individual PE to perform different tasks. The ID-based
fine-grained configuration is used in this case. The new instruction code and the
ID of the PE to be configured are sent into the SPI chain. The PE bypasses the
information to the next one until it reaches the desired PE. On the other hand, a
group of PEs is configured to perform the same operation in the SIMD style for
many other applications. To reduce latency, a cell broadcasting coarse-grained
configuration is designed to concurrently write the reconfiguration contexts into all
instruction memories in the same cell, based on the input Cell ID. In a 4 by 4
SmartCell system, 32 and 8 clock cycles are needed on average for an instruction
code to reach the configuration component in fine-grain and coarse-grain modes,
respectively.
Conclusions
SmartCell is a coarse-grained architecture that tiles a large number of processing elements
with reconfigurable communication fabrics. A prototype with 64 PEs is
implemented with TSMC 0.13 μm technology. This chip consists of about 1.6
million gates with an average power consumption of 1.6 mW/MHz for the
evaluated benchmarks. The benchmarking results show that SmartCell is able to
bridge the energy efficiency gap between fine-grained FPGAs and customized
ASICs. When compared with Montium and RaPid, SmartCell shows 4x and 2x
throughput gains and is about 8% and 69% more energy efficient, respectively.
The performance results show that SmartCell is a promising reconfigurable and
energy efficient platform for stream processing.
