
BANGALORE TECHNOLOGICAL INSTITUTE
(An ISO 9001:2015 Certified Institute)
Kodathi Village, Varthoor Hobli, Bangalore East Tq, Bangalore Urban District,
Bangalore-560035, Karnataka
principal@btibangalore.org | www.btibangalore.org
Phone: 7090404050

DEPARTMENT OF ROBOTICS AND ARTIFICIAL INTELLIGENCE

ACADEMIC YEAR: 2024 - 2025 (EVEN SEMESTER)
Regulations 2022

Year/Semester: III/VI          Course Code: BRI602
Name of the Faculty: Dr. R. Felshiya Rajakumari          Course Name: Digital Image Processing

Course Objectives
The course will enable the students to:
 Define the fundamental concepts in image processing.
 Evaluate the techniques used for image enhancement.
 Understand morphological operations in image processing.
 Understand the image restoration techniques and methods used in digital image processing.
 Illustrate image compression algorithms.

Course Outcomes (COs):

At the end of the course, the student will be able to:

CO No. | Course Outcomes (COs) | Knowledge Level
CO1 | Understand, ascertain and describe the basics of image processing concepts through mathematical interpretation. | K2
CO2 | Apply image processing techniques in both the spatial and frequency (Fourier) domains. | K3
CO3 | Design image analysis techniques in the form of Morphological Image Processing. | K3
CO4 | Demonstrate the image restoration process and its respective filters. | K3
CO5 | Conduct independent study and analysis of Image Compression techniques. | K3
POs & PSO Reference

PO1 | Engineering Knowledge | PO7 | Environment & Sustainability | PSO1 | Graduates should be able to create networking and embedded software solutions using data communication, sensors, robotics, virtual reality and Internet of Things.
PO2 | Problem Analysis | PO8 | Ethics | PSO2 | Graduates should be able to create software systems using expertise in data structures, algorithm analysis, web design, machine learning, and image processing techniques.
PO3 | Design & Development | PO9 | Individual & Team Work | PSO3 | Graduates should be able to create, select and apply the theoretical knowledge of robotics and AI along with practical industrial tools and techniques to manage and solve wicked societal problems.
PO4 | Investigations | PO10 | Communication Skills
PO5 | Modern Tools | PO11 | Project Mgt. & Finance
PO6 | Engineer & Society | PO12 | Life Long Learning

CO-PO MAPPING

CO No PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2 PSO3

CO1 2 2 2 2 2 3 3 3 3
CO2 2 2 2 2 2 3 3 3 3
CO3 2 2 2 2 2 3 3 3 3
CO4 2 2 2 2 2 3 3 3 3
CO5 2 2 2 2 2 3 3 3 3
CO 2 2 2 2 2 3 3 3 3
SYLLABUS

Module I

Introduction: Fundamental Steps in Digital Image Processing, Components of an Image Processing System, Sampling and Quantization, Representing Digital Images (Data structure), Some Basic Relationships Between Pixels - Neighbors and Connectivity of pixels in image, Applications of Image Processing: Medical imaging, Robot vision, Character recognition, Remote Sensing.

Module II

Image Enhancement in the Spatial Domain: Some Basic Gray Level Transformations, Histogram
Processing, Enhancement Using Arithmetic/Logic Operations, Basics of Spatial Filtering,
Smoothing Spatial Filters, Sharpening Spatial Filters, Combining Spatial Enhancement Methods.

Module III

Morphological Image Processing: Preliminaries, Erosion and Dilation, Opening and Closing.
Image Processing: Color Fundamentals, Color Models, Pseudo color Image Processing.

Module IV

Restoration: Noise models, Restoration in the Presence of Noise Only using Spatial Filtering and
Frequency Domain Filtering, Linear, Position-Invariant degradations Estimating the Degradation
Function, Inverse Filtering, Minimum Mean Square Error (Wiener) Filtering, Constrained Least
Squares Filtering.

Module V

Image Compression: Introduction, Coding Redundancy, Inter-pixel Redundancy, Image Compression Model, Lossy and Lossless Compression, Huffman Coding, Arithmetic Coding, LZW Coding, Transform Coding, Sub-image Size Selection, Blocking, DCT Implementation using FFT, Run-length Coding.
Module I

Introduction: Fundamental Steps in Digital Image Processing, Components of an Image Processing System, Sampling and Quantization, Representing Digital Images (Data structure), Some Basic Relationships Between Pixels - Neighbors and Connectivity of pixels in image, Applications of Image Processing: Medical imaging, Robot vision, Character recognition, Remote Sensing.

Fundamental Steps in Digital Image Processing

Image Acquisition
 Image acquisition is the first step in digital image processing. In this step we obtain the image in digital form.
 This is done using sensing devices such as sensor strips and sensor arrays, together with an electromagnetic light source.
 Light from the source falls on an object and is reflected or transmitted; the sensing material captures this light.
 The sensor transforms the incoming energy into an output voltage waveform by combining the input electrical power with the response of the sensor material.
 Reflected light is captured when a visible light source is used, whereas with X-ray sources the transmitted rays are captured.
 The captured image is an analog image, since the sensor output is continuous.
 To digitize the image, we use sampling and quantization, which discretize the image.
 Sampling discretizes the image's spatial coordinates, whereas quantization discretizes its amplitude values.

Image enhancement
 Image enhancement is the process of manipulating an image so that the result is more suitable
than the original for a specific application.
 The word specific is important here, because it establishes at the outset that enhancement
techniques are problem oriented.
 Thus, for example, a method that is quite useful for enhancing X-ray images may not be the best
approach for enhancing satellite images taken in the infrared band of the electromagnetic
spectrum.
Image restoration
 It is an area that also deals with improving the appearance of an image.
 However, unlike enhancement, which is subjective, image restoration is objective, in the sense
that restoration techniques tend to be based on mathematical or probabilistic models of image
degradation.
 Enhancement, on the other hand, is based on human subjective preferences regarding what
constitutes a good enhancement result.
Color image processing
 It is an area that has been gaining in importance because of the significant increase in the use of
digital images over the Internet.
 It covers a number of fundamental concepts in color models and basic color processing in a
digital domain.
 Color is used also in later chapters as the basis for extracting features of interest in an image.
 Colour image processing is motivated by the fact that colour makes classification easier, and that the human eye can distinguish thousands of colours but far fewer shades of grey.
 Colour image processing is divided into two types: pseudo-colour (reduced-colour) processing and full-colour processing.
 In pseudo-colour processing, colours are assigned to grey-scale values; this approach was common earlier. Nowadays, full-colour processing is used with full-colour sensors such as digital cameras and colour scanners, as the price of full-colour sensor hardware has dropped significantly.
Wavelets
 It is the foundation for representing images in various degrees of resolution.
 In particular, this material is used in this book for image data compression and for pyramidal
representation, in which images are subdivided successively into smaller regions.
Compression
 It deals with techniques for reducing the storage required to save an image, or the bandwidth
required to transmit it.
 Although storage technology has improved significantly over the past decade, the same
cannot be said for transmission capacity.
 This is true particularly in uses of the Internet, which are characterized by significant pictorial
content. Image compression is familiar to most users of computers in the form of image file
extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts
Group) image compression standard.
Morphological processing
 It deals with tools for extracting image components that are useful in the representation and
description of shape.
Segmentation procedures
 Segmentation partitions an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing.
 A rugged segmentation procedure brings the process a long way toward successful solution of
imaging problems that require objects to be identified individually.
 On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual
failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.
Representation and description
 It almost always follows the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region or all the points in the region itself.
 Choosing a representation is only part of the solution for transforming raw data into a form
suitable for subsequent computer processing.
 A method must also be specified for describing the data so that features of interest are
highlighted.
 Description, also called feature selection, deals with extracting attributes that result in some
quantitative information of interest or are basic for differentiating one class of objects from
another.
Recognition and Knowledge
 It is the process that assigns a label to an object based on its descriptors.
 Knowledge about a problem domain is coded into an image processing system in the form of a
knowledge database.
 This knowledge may be as simple as detailing regions of an image where the information of
interest is known to be located, thus limiting the search that has to be conducted in seeking that
information.
 The knowledge base also can be quite complex, such as an interrelated list of all major possible
defects in a materials inspection problem or an image database containing high-resolution
satellite images of a region in connection with change-detection applications. In addition to
guiding the operation of each processing module, the knowledge base also controls the
interaction between modules.
 For example, image enhancement for human visual interpretation seldom requires use of any of
the other stages. In general, however, as the complexity of an image processing task increases,
so does the number of processes required to solve the problem.
Components of an Image Processing System
 Two subsystems are required to acquire digital images.
 The first is a physical sensor that responds to the energy radiated by the object we wish to
image.
 The second, called a digitizer, is a device for converting the output of the physical sensing
device into digital form.
 For instance, in a digital video camera, the sensors (CCD chips) produce an electrical output
proportional to light intensity. The digitizer converts these outputs to digital data.
Specialized Image Processing Hardware
 Specialized image processing hardware usually consists of the digitizer just mentioned, plus
hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), that
performs arithmetic and logical operations in parallel on entire images.
 One example of how an ALU is used is in averaging images as quickly as they are digitized, for the purpose of noise reduction; a small sketch of this operation appears after this list.
 This type of hardware sometimes is called a front-end subsystem, and its most distinguishing
characteristic is speed.
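As an illustration of what such a front-end averaging operation computes, here is a minimal sketch in Python with NumPy; the language, library, and function name are assumptions for illustration, not something these notes prescribe.

import numpy as np

def average_frames(frames):
    # Average a stack of noisy captures of the same scene, pixel by pixel.
    # For zero-mean noise, averaging K frames reduces the noise standard
    # deviation by a factor of about sqrt(K).
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Hypothetical example: 16 noisy captures of a flat gray scene.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)
frames = [clean + rng.normal(0.0, 20.0, clean.shape) for _ in range(16)]
averaged = average_frames(frames)
print(np.std(frames[0] - clean))   # noise in one frame: about 20
print(np.std(averaged - clean))    # after averaging 16 frames: about 5

Dedicated front-end hardware performs the same accumulation at video rates; the sketch only shows the arithmetic involved.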

 The computer in an image processing system is a general-purpose computer and can range
from a PC to a supercomputer.
 In dedicated applications, sometimes custom computers are used to achieve a required level of performance, but our interest here is in general-purpose image processing systems.
 In these systems, almost any well-equipped PC-type machine is suitable for off-line image
processing tasks.

Image Processing Software


 Software for image processing consists of specialized modules that perform specific tasks.
 A well-designed package also includes the capability for the user to write code that, as a minimum,
utilizes the specialized modules.
 More sophisticated software packages allow the integration of those modules and general-purpose
software commands from at least one computer language.
Mass storage
 Mass storage capability is a must in image processing applications. An image of size 1024 × 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed.
 Digital storage for image processing applications falls into three principal categories: (1) short-term storage for use during processing; (2) on-line storage for relatively fast recall; and (3) archival storage, characterized by infrequent access.
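To see where the one-megabyte figure comes from:

1024 × 1024 pixels × 1 byte/pixel = 2^20 bytes = 1,048,576 bytes ≈ 1 MB (uncompressed).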
Image displays
 Image displays in use today are mainly colour, flat-screen monitors.
 Monitors are driven by the outputs of image and graphics display cards that are an integral part of
the computer system.
Hardcopy devices
 Devices for recording images include laser printers, film cameras, heat-sensitive devices, ink-jet units, and digital units, such as optical and CD-ROM disks.
 Film provides the highest possible resolution, but paper is the obvious medium of choice for
written material.
Networking and cloud communication
 Networking is an almost default function in any computer system in use today.
 Because of the large amount of data inherent in image processing applications, the key
consideration in image transmission is bandwidth.

Sampling and Quantization


 The output of most sensors is a continuous voltage waveform whose amplitude and spatial behavior
are related to the physical phenomenon being sensed.
 To create a digital image, we need to convert the continuous sensed data into a digital format. This
requires two processes: sampling and quantization.
 An image may be continuous with respect to the x- and y coordinates and also in amplitude.
Digitizing the coordinate values is called sampling. Digitizing the amplitude values is called
quantization.

Generating a digital image.


(a) Continuous image.
(b) A scan line from A to B in the continuous image, used to illustrate the concepts of sampling
and quantization.
(c) Sampling and quantization.
(d) Digital scan line.
 The one-dimensional function in Fig. (b) is a plot of amplitude values of the continuous image
along the line segment AB in Fig.(a).
 The random variations are due to image noise. To sample this function, we take equally spaced
samples along line AB, as shown in Fig.(c).
 The samples are shown as small dark squares superimposed on the function, and their (discrete)
spatial locations are indicated by corresponding tick marks in the bottom of the figure.
 The set of dark squares constitutes the sampled function. However, the values of the samples still
span (vertically) a continuous range of intensity values. In order to form a digital function, the
intensity values also must be converted (quantized) into discrete quantities.
 In addition to the number of discrete levels used, the accuracy achieved in quantization is highly
dependent on the noise content of the sampled signal.
 When a sensing strip is used for image acquisition, the number of sensors in the strip establishes the
sampling limitations in one image direction. Mechanical motion in the other direction can be
controlled more accurately, but it makes little sense to try to achieve sampling density in one

direction that exceeds the sampling limits established by the number of sensors in the other.
Quantization of the sensor outputs completes the process of generating a digital image.
 In digital image processing, two fundamental concepts are image sampling and quantization.
These processes are crucial for converting an analog image into a digital form that can be stored,
manipulated, and displayed by computers.
 Despite being closely related, sampling and quantization serve distinct purposes and involve
different techniques.
Sampling
 Since an analogue image is continuous not just in its coordinates (x axis) but also in its amplitude (y axis), the part that deals with digitizing the coordinates is known as sampling. In digitizing, sampling is done on the independent variable.
 In the case of the equation y = sin(x), sampling is done on the x variable.
 Sampling also affects noise: the more samples we take, the better the quality of the image and the more the noise is suppressed, and vice versa.
 However, sampling on the x axis alone does not convert the signal to digital form; the y axis must be discretized too, which is known as quantization.
 Sampling has a relationship with image pixels. The total number of pixels in an image can be calculated as Pixels = total number of rows × total number of columns. For example, if we have a total of 36 pixels, we have a square image of 6 × 6.

Quantization
 Quantization is the opposite of sampling: it is done on the "y axis", while sampling is done on the "x axis".
 Quantization is the process of transforming a real-valued sampled image into one taking only a finite number of distinct values. Under the quantization process, the amplitude values of the image are digitized.
 As an example, suppose the vertically ranging amplitude values are quantized into 5 different levels or partitions, ranging from 0 (black) to 4 (white).
 The number of levels can vary according to the type of image.
 An image quantized to 5 different levels of gray would be formed from only 5 distinct shades.
 It would be more or less a black-and-white image with some shades of gray. The number of quantization levels should be high enough for human perception of fine shading details in the image.
 The occurrence of false contours is the main problem in an image that has been quantized with insufficient brightness levels.
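The idea of sampling along one axis and quantizing along the other can be made concrete in code. The following is a minimal sketch in Python with NumPy; the library choice and variable names are illustrative assumptions, not part of these notes.

import numpy as np

# Sampling: evaluate a continuous 1-D "scan line" signal at 50 discrete
# x positions. Quantization: map each amplitude to one of 5 gray levels,
# matching the 0 (black) to 4 (white) example above.
x = np.linspace(0, 2 * np.pi, 50)       # discretize the x axis (sampling)
signal = (np.sin(x) + 1) / 2            # continuous amplitudes in [0, 1]

levels = 5
quantized = np.floor(signal * levels).astype(int)   # discretize amplitude
quantized = np.clip(quantized, 0, levels - 1)       # keep 1.0 inside level 4

print(quantized[:10])   # each sample now takes one of the values 0..4

With too few levels, neighboring samples in smooth regions collapse to the same value, which is exactly the false-contouring effect described above.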
Representing Digital Images
 A digital image is a representation of a two-dimensional image as a finite set of digital values,
known as picture elements or pixels.
 Each pixel has a specific value that determines its color and brightness, and collectively, these
pixels form the complete image.
 Digital images are the foundation of various technologies and applications in fields ranging from
photography and medical imaging to remote sensing and computer vision.
 There are three basic ways to represent f(x, y). The first is a plot of the function, with two axes determining spatial location and the third axis being the values of f (intensities) as a function of the two spatial variables x and y.
 This representation is useful when working with gray-scale sets whose elements are expressed as triplets of the form (x, y, z), where x and y are spatial coordinates and z is the value of f at coordinates (x, y).
 The second is to show f(x, y) as it would appear on a monitor or photograph. Here, the intensity of each point is proportional to the value of f at that point.
 The third is to write the image as an M × N matrix of real numbers; the displayed image and the matrix are equivalent ways of expressing a digital image quantitatively.
 Each element of this matrix is called an image element, picture element, pixel, or pel. The terms image and pixel are used throughout to denote a digital image and its elements.
Types of Digital Images
Raster Images
Raster images, or bitmap images, are composed of a grid of individual pixels. Each pixel is
assigned a specific color value.
Raster images are resolution-dependent, meaning they can lose quality when scaled up. Common
raster formats include JPEG, PNG, and GIF.
Vector Images
Vector images use mathematical equations to represent shapes and lines, rather than individual
pixels.
They are resolution-independent, meaning they can be scaled to any size without losing quality.
Common vector formats include SVG, EPS, and PDF. Vector images are often used for logos,
illustrations, and typography.
Creation of Digital Images
Digital Cameras and Scanners
 Digital cameras capture images by converting light into electrical signals using a sensor, such
as a CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor).
 These signals are then processed and stored as digital image files. Scanners digitize printed
images by passing them over a light source and capturing the reflected light with a sensor.
Computer Graphics
 Digital images can also be created from scratch using computer graphics software. Programs
like Adobe Photoshop, Illustrator, and GIMP allow users to draw, paint, and manipulate
images digitally.
 3D graphics software like Blender and Autodesk Maya can create three-dimensional models
and render them into 2D images.
Image Processing
 Image processing techniques can be used to enhance or alter digital images. Common techniques include filtering, edge detection, and color correction; a small code sketch follows this list.
 Advanced techniques such as machine learning and artificial intelligence can be used to
generate or modify images, as seen in applications like deepfake technology and neural style
transfer.
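To make the filtering technique just mentioned concrete, the sketch below applies a 3 × 3 averaging (box) filter in Python with NumPy and uses the difference from the smoothed image as a crude edge indicator; the library choice, function name, and threshold are illustrative assumptions.

import numpy as np

def smooth3x3(img):
    # 3x3 averaging (box) filter: each output pixel is the mean of the
    # pixel and its 8 neighbors; borders are handled by edge replication.
    img = img.astype(np.float64)
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

# A crude edge map: pixels that differ strongly from the smoothed image
# sit on intensity discontinuities.
img = np.zeros((8, 8)); img[:, 4:] = 255          # vertical step edge
edges = np.abs(img - smooth3x3(img)) > 40
print(edges.astype(int))

Real systems typically use dedicated filtering routines and gradient-based edge operators; the point here is only the sliding-window idea.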
Applications of Digital Images
Photography
 Digital photography has largely replaced traditional film photography.
 Digital cameras offer immediate image review, easy editing, and the ability to share
images instantly.
 Professional photographers use high-resolution digital cameras and software to capture
and enhance their work.
Medical Imaging
 Digital images are crucial in medical diagnostics.
 Techniques like X-rays, MRIs, and CT scans produce digital images that allow doctors to
examine the inside of a patient's body without invasive procedures.
 These images can be enhanced and analyzed using specialized software to aid in diagnosis
and treatment planning.
Remote Sensing
 Satellites and drones capture digital images of the Earth's surface for applications in
agriculture, environmental monitoring, and urban planning.
 These images provide valuable data for mapping, analysis, and decision-making.
Computer Vision
 Computer vision is a field of artificial intelligence that enables computers to interpret and
understand digital images.
 Applications include facial recognition, autonomous vehicles, and image-based search
engines.

 Techniques like object detection, image segmentation, and image classification are
fundamental to computer vision.
Spatial and Intensity Resolution
Quantitatively, spatial resolution can be stated in a number of ways, with line pairs per unit distance and dots (pixels) per unit distance being among the most common measures. Intensity resolution similarly refers to the smallest discernible change in intensity level.
 Intensity quantization using 32 bits is rare. Sometimes one finds systems that can digitize the
intensity levels of an image using 10 or 12 bits, but these are the exception, rather than the rule.
 Unlike spatial resolution, which must be based on a per unit of distance basis to be meaningful,
it is common practice to refer to the number of bits used to quantize intensity as the intensity
resolution. For example, it is common to say that an image whose intensity is quantized into 256
levels has 8 bits of intensity resolution.
Image Interpolation:
 Interpolation is a basic tool used extensively in tasks such as zooming, shrinking, rotating, and
geometric corrections.
 Our principal objective in this section is to introduce interpolation and apply it to image resizing.
Fundamentally, interpolation is the process of using known data to estimate values at unknown
locations. Suppose that an image of size 500 * 500 pixels has to be enlarged 1.5 times to 750 *
750 pixels.
 A simple way to visualize zooming is to create an imaginary 750 * 750 grid with the same pixel spacing as the original, and then shrink it so that it fits exactly over the original image. Obviously, the pixel spacing in the shrunken grid will be less than the pixel spacing in the original image.
 To assign an intensity value to any point in the overlay, we look for its closest pixel in the underlying original image and assign that intensity to the new pixel in the grid.
 The method just discussed is called nearest neighbor interpolation because it assigns to each new location the intensity of its nearest neighbor in the original image.
 A more suitable approach is bilinear interpolation, in which we use the four nearest neighbors to estimate the intensity at a given location. Let (x, y) denote the coordinates of the location to which we want to assign an intensity value, and let v(x, y) denote that intensity value. For bilinear interpolation, the assigned value is obtained from
v(x, y) = ax + by + cxy + d,
 where the four coefficients a, b, c, and d are determined from the four equations in four unknowns that can be written using the four nearest neighbors of point (x, y).
 Bilinear interpolation gives much better results than nearest neighbor interpolation, with a
modest increase in computational burden.
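A minimal sketch of both interpolation methods in Python with NumPy, assuming a grayscale image stored as a 2-D float array; the function names and test image are illustrative, not from the notes.

import numpy as np

def resize_nearest(img, new_h, new_w):
    # Nearest-neighbor interpolation: copy the closest original pixel.
    h, w = img.shape
    ys = np.clip(np.round(np.arange(new_h) * h / new_h).astype(int), 0, h - 1)
    xs = np.clip(np.round(np.arange(new_w) * w / new_w).astype(int), 0, w - 1)
    return img[ys[:, None], xs[None, :]]

def resize_bilinear(img, new_h, new_w):
    # Bilinear interpolation: blend the four nearest original neighbors.
    h, w = img.shape
    y = np.arange(new_h) * (h - 1) / (new_h - 1)
    x = np.arange(new_w) * (w - 1) / (new_w - 1)
    y0 = np.floor(y).astype(int); x0 = np.floor(x).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (y - y0)[:, None]; wx = (x - x0)[None, :]
    tl = img[y0[:, None], x0[None, :]]; tr = img[y0[:, None], x1[None, :]]
    bl = img[y1[:, None], x0[None, :]]; br = img[y1[:, None], x1[None, :]]
    top = tl * (1 - wx) + tr * wx
    bot = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bot * wy

img = np.arange(16, dtype=float).reshape(4, 4)
print(resize_nearest(img, 6, 6))
print(resize_bilinear(img, 6, 6))

For the 500 × 500 to 750 × 750 enlargement described above, one would call resize_bilinear(img, 750, 750); the bilinear output is smoother because each new pixel blends its four nearest original neighbors.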

Relationships Between Pixels: Neighbors and Connectivity of Pixels in an Image

Neighbors of a Pixel
 A pixel p at coordinates (x, y) has four horizontal and vertical neighbors whose coordinates are given by
(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1).
 This set of pixels, called the 4-neighbors of p, is denoted by N4(p). Each pixel is a unit distance from (x, y), and some of the neighbor locations of p lie outside the digital image if (x, y) is on the border of the image.
 The four diagonal neighbors of p have coordinates
(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1),
 and are denoted by ND(p). These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N8(p).
Adjacency, Connectivity, Regions, and Boundaries


 Let V be the set of intensity values used to define adjacency.
 For example, in the adjacency of pixels with a range of possible intensity values 0 to 255, set V could be any subset of these 256 values. We consider three types of adjacency:
(a) 4-adjacency. Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
(b) 8-adjacency. Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
(c) m-adjacency (mixed adjacency). Two pixels p and q with values from V are m-adjacent if
(i) q is in N4(p), or
(ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
 Mixed adjacency is a modification of 8-adjacency, introduced to eliminate the ambiguities that often arise when 8-adjacency is used.
 A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates (x0, y0), (x1, y1), ..., (xn, yn), where (x0, y0) = (x, y), (xn, yn) = (s, t), and pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n.
 Let S represent a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting entirely of pixels in S.
 For any pixel p in S, the set of pixels that are connected to it in S is called a connected component of S.
 If S has only one connected component, then S is called a connected set. Let R be a subset of pixels in an image; we call R a region of the image if R is a connected set.
 Two regions Ri and Rj are said to be adjacent if their union forms a connected set. Regions that are not adjacent are said to be disjoint.
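To make the path and connectivity definitions concrete, here is a small Python sketch that tests whether two pixels are connected in S (here, the set of pixels whose values lie in V) using 4-adjacency and breadth-first search; the function name and the BFS choice are illustrative assumptions.

from collections import deque

def connected(img, p, q, V={1}):
    # True if pixels p and q are connected in S, where S is the set of
    # pixels whose values are in V, using paths of 4-adjacent pixels.
    rows, cols = len(img), len(img[0])
    if img[p[0]][p[1]] not in V or img[q[0]][q[1]] not in V:
        return False
    seen, frontier = {p}, deque([p])
    while frontier:
        x, y = frontier.popleft()
        if (x, y) == q:
            return True
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):  # N4
            if (0 <= nx < rows and 0 <= ny < cols
                    and (nx, ny) not in seen and img[nx][ny] in V):
                seen.add((nx, ny))
                frontier.append((nx, ny))
    return False

img = [[1, 1, 0],
       [0, 1, 0],
       [0, 1, 1]]
print(connected(img, (0, 0), (2, 2)))   # True: a 4-connected path of 1s exists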

(a) An arrangement of pixels. (b) Pixels that are 8-adjacent (adjacency is shown by dashed lines; note the ambiguity). (c) m-adjacency. (d) Two regions (of 1s) that are adjacent if 8-adjacency is used. (e) The circled point is part of the boundary of the 1-valued pixels only if 8-adjacency between the region and background is used. (f) The inner boundary of the 1-valued region does not form a closed path, but its outer boundary does.

 The boundary (also called the border or contour) of a region R is the set of points that are
adjacent to points in the complement of R.
 The preceding definition sometimes is referred to as the inner border of the region to distinguish
it from its outer border, which is the corresponding border in the background.
 This distinction is important in the development of border-following algorithms.
 Such algorithms usually are formulated to follow the outer boundary in order to guarantee that
the result will form a closed path.
 The concept of an edge is found frequently in discussions dealing with regions and boundaries. There is a key difference between these concepts, however. The boundary of a finite region forms a closed path and is thus a "global" concept; an edge, by contrast, is based on a measure of intensity discontinuity at a point and is thus a "local" concept.
Distance Measures
 For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function or metric if
(a) D(p, q) ≥ 0 (with D(p, q) = 0 if and only if p = q),
(b) D(p, q) = D(q, p), and
(c) D(p, z) ≤ D(p, q) + D(q, z).
 The Euclidean distance between p and q is defined as
De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2).
 For this distance measure, the pixels having a distance less than or equal to some value r from (x, y) are the points contained in a disk of radius r centered at (x, y).
 The D4 distance (called the city-block distance) between p and q is defined as
D4(p, q) = |x - s| + |y - t|.
 In this case, the pixels having a distance D4 from (x, y) less than or equal to some value r form a diamond centered at (x, y).
 For example, the pixels with D4 distance ≤ 2 from (x, y) (the center point) form the following contours of constant distance:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

 The pixels with D4 = 1 are the 4-neighbors of (x, y).
 The D8 distance (called the chessboard distance) between p and q is defined as
D8(p, q) = max(|x - s|, |y - t|).
 In this case, the pixels with D8 distance from (x, y) less than or equal to some value r form a square centered at (x, y).
 For example, the pixels with D8 distance ≤ 2 from (x, y) (the center point) form the following contours of constant distance:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

 The pixels with D8 = 1 are the 8-neighbors of (x, y).
 The D4 and D8 distances between p and q are independent of any paths that might exist between the points, because these distances involve only the coordinates of the points.
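The three distance measures are straightforward to compute; a minimal Python sketch follows, with illustrative function names.

import math

def d_euclidean(p, q):
    # Euclidean distance between pixels p = (x, y) and q = (s, t).
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    # City-block distance: |x - s| + |y - t|.
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    # Chessboard distance: max(|x - s|, |y - t|).
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_euclidean(p, q), d4(p, q), d8(p, q))   # 5.0, 7, 4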

Applications of Image Processing: Medical imaging, Robot vision, Character recognition, Remote Sensing
Medical imaging
 There are several applications in the medical field which depend on the functioning of digital
image processing.
 Many methods are used such as segmentation and texture analysis, which are further used for
cancer and other disorder identifications.

 Medical image processing incorporates the use and exploration of 3D image datasets of the
human body, found generally from a Computed Tomography (CT) or Magnetic Resonance
Imaging (MRI) scanner.
 It further helps to diagnose pathologies or steer medical interventions such as surgical planning,
or for research purposes. Gamma-ray imaging, PET scan, X-Ray Imaging, Medical CT scan, and
UV imaging are a few medical image processing applications.
Robot vision
 Robot Vision entails using a combination of camera hardware and computer algorithms that
allow robots to process visual data from the world. Several robotic machines work on digital
image processing.

 Through image processing techniques, robots find their way; without digital image processing, a robot is essentially blind.

 Robots use vision to carry out advanced tasks in an environment that is constantly changing. The technology of digital cameras is extremely advanced, and they can relay high-resolution pixel arrays to the robot's computer.

 Algorithms for digital image processing augment and interpret these images.

 Nowadays, it is vital that robots be able to see things and identify any hurdles, allowing humans to make the most of the technology and of digital image processing.
Character recognition
 Optical Character Recognition (OCR): Used to convert scanned documents and handwritten
text into editable and searchable formats.
 License Plate Recognition: Used in traffic monitoring systems to identify vehicle registration
numbers.
 Handwriting Recognition: Useful in digital note-taking and archiving.
 Captcha Recognition: To verify users in automated systems.
 Assistive Technologies: Tools for visually impaired individuals to read printed materials using
audio output.
 Signature Verification: For secure banking and document authentication.
 Form Data Extraction: Automates the extraction of data from structured or semi-structured
forms.
Remote Sensing:
 Environmental Monitoring: Analyzing land use, deforestation, desertification, and biodiversity
patterns.
 Disaster Management: Detecting and assessing natural disasters like floods, hurricanes, and
forest fires.

 Urban Planning: Monitoring urban sprawl, infrastructure development, and land-use changes.
 Agriculture: Precision farming, crop health monitoring, and yield estimation.
 Climate Studies: Studying changes in glaciers, sea levels, and temperature patterns.
 Military Surveillance: Monitoring borders and identifying suspicious activities.
 Marine Applications: Detecting oil spills, monitoring ocean currents, and fishing area
identification.
 Resource Management: Mapping water bodies, forests, and mineral resources.
