


Concise Computer Vision
– Samples from the Book –

Reinhard Klette

The .enpeda. Project, Department of Computer Science


The University of Auckland, New Zealand

Abstract. In January 2014, Springer London published the book Concise Computer Vision - An Introduction into Theory and Algorithms, authored by myself.
This PDF gives brief information about the book. Springer's website for the book is at www.springer.com/computer/image+processing/book/978-1-4471-6319-0.
For lecture notes to the book, see the supplemental resources on the webpage www.researchgate.net/publication/259196682_Concise_Computer_Vision_-_An_Introduction_into_Theory_and_Algorithms.
The book's website is at www.cs.auckland.ac.nz/~rklette/Books/K2014, with links to lecture notes, additional exercises, test data (still images, stereo pairs, image sequences, and also extensive data sets for selected subjects, e.g. extending EISATS), and further material related to the book.

1 Book Preface

This is a textbook for a third- or fourth-year undergraduate course on computer vision, which is a discipline in science and engineering.
Subject Area of the Book. Computer vision aims at using cameras for analyzing or understanding scenes in the real world. This discipline studies methodological and algorithmic problems as well as topics related to the implementation of designed solutions.
In computer vision we may want to know how far away a building is from a camera, whether a vehicle is driving in the middle of its lane, or how many people are in a scene, or we may even want to recognize a particular person - all to be answered based on recorded images or videos. Areas of application have expanded recently due to solid progress in computer vision. There are significant advances in camera and computing technologies, but also in the theoretical foundations of computer vision methodologies.
In recent years, computer vision has become a key technology in many fields. For modern consumer products, see, for example, apps for mobile phones, driver assistance for cars, or user interaction with computer games. In industrial automation, computer vision is routinely used for quality or process control. There are significant contributions to the movie industry (e.g. the use of avatars, the creation of virtual worlds based on recorded images, the enhancement of historic video data, or high-quality presentations of movies). These are just a few application areas, which all come with particular image or video data and particular needs to process or analyze those data.
Features of the Book. This textbook provides a general introduction to the basics of computer vision, as potentially of use for many diverse areas of application. Mathematical subjects play an important role, and the book also discusses algorithms. The book does not address particular applications.
Inserts (gray boxes) in the book provide historical context information, references or sources for presented material, and particular hints on mathematical subjects discussed for the first time at a given location. They are additional reading beyond the baseline material provided.
The book is not a guide to current research in computer vision, and it provides only very few references; the reader can locate more references easily on the net by searching for keywords of interest. The field of computer vision is so vivid, with countless publications, that any attempt to fit a reasonable collection of references into the given limited space would fail. But here is one hint at least: visit homepages.inf.ed.ac.uk/rbf/CVonline/ for a web-based introduction to topics in computer vision.
Target Audiences. This textbook provides material for an introductory course at third- or fourth-year level in an Engineering or Science undergraduate programme. Having some prior knowledge in image processing, image analysis, or computer graphics is of benefit, but the first two chapters of this textbook also provide a first-time introduction to computational imaging.
Previous Uses of the Material. Parts of the presented materials have
been used in my lectures in the Mechatronics and Computer Science programmes
at The University of Auckland, New Zealand, at CIMAT Guanajuato, Mexico,
at Freiburg and Göttingen University, Germany, at the Technical University
Cordoba, Argentina, at the Taiwan National Normal University, Taiwan, and at
Wuhan University, China.
The presented material also benefits from four earlier book publications:
[R. Klette and P. Zamperoni. Handbook of Image Processing Operators. Wiley, Chichester, 1996], [R. Klette, K. Schlüns, and A. Koschan. Computer Vision. Springer, Singapore, 1998], [R. Klette and A. Rosenfeld. Digital Geometry. Morgan Kaufmann, San Francisco, 2004], and [F. Huang, R. Klette, and K. Scheibe. Panoramic Imaging. Wiley, West Sussex, 2008].
The first two of those four books accompanied computer vision lectures of the author in Germany and New Zealand in the 1990s and early 2000s; the third one also accompanied more recent lectures.
Notes to the Instructor and Suggested Uses. The book contains more material than can be covered in a one-semester course. An instructor should select material according to the given context, such as the prior knowledge of the students and the research focus of subsequent courses.

Each chapter ends with some exercises, including programming exercises.
The book does not favor any particular implementation environment; using procedures from systems such as OpenCV will typically simplify the solution. Programming exercises are intentionally formulated so as to offer students a wide range of options for answering them. For example, for exercise (1) in Chapter 2, you can use Java applets to visualize results (but the text does not ask for it), you can use small or large images (the text does not specify this), and you can limit cursor movement to a central part of the input image such that the 11 × 11 square around location p is always completely contained in your image (or you can also handle the special cases that occur when the cursor moves closer to the image border). As a result, every student should come up with her or his individual solution to programming exercises, and creativity in the designed solution should also be honored.
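For illustration, here is a minimal sketch of one possible solution shape (not from the book), assuming the exercise asks to report simple statistics for the 11 × 11 window around the cursor location p; OpenCV's Python bindings, the chosen statistics (mean and standard deviation), and all names are assumptions of this sketch.

```python
import cv2

WIN = 5  # half-width of the 11 x 11 window

def on_mouse(event, x, y, flags, img):
    # React to mouse movement; restrict p = (x, y) to the central part of the
    # image so that the 11 x 11 window around p is always fully inside the image.
    if event != cv2.EVENT_MOUSEMOVE:
        return
    h, w = img.shape[:2]
    if WIN <= x < w - WIN and WIN <= y < h - WIN:
        window = img[y - WIN:y + WIN + 1, x - WIN:x + WIN + 1]
        print(f"p=({x},{y})  mean={window.mean():.2f}  std={window.std():.2f}")

img = cv2.imread("test_image.png", cv2.IMREAD_GRAYSCALE)  # any test image
cv2.namedWindow("image")
cv2.setMouseCallback("image", on_mouse, img)
cv2.imshow("image", img)
cv2.waitKey(0)
```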
Supplemental Resources. The book is accompanied by supplemental material (data, sources, examples, presentations) on a website. See the author's home page for a link to this website.
Acknowledgements. In alphabetical order of surnames, I thank the following colleagues, former or current students, and friends (if I just mention a figure, then I am actually thanking them for joint work or contacts about a subject related to that figure):
A-Kn. Ali Al-Sarraf (Figure 2.32), Hernan Badino (Figure 9.25), Anko Börner (various comments on drafts of the book, and also contributions to Subsection 5.4.2), Hugo Carlos (support while writing the book at CIMAT), Diego Caudillo (Figures 1.9, 5.28, and 5.29), Gilberto Chávez (Figures 3.39 and 5.36, top row), Chia-Yen Chen (Figures 6.21 and 7.25), Kaihua Chen (Figure 3.33), Ting-Yen Chen (Figure 5.35, contributions to Subsection 2.4, to Chapter 5, and provision of sources), Eduardo Destefanis (contribution to Example 9.1 and Figure 9.5), Uwe Franke (Figures 3.36, 6.3, and bottom, right, in 9.23), Stefan Gehrig (comments on stereo analysis parts and Figure 9.25), Roberto Guzmán (Figure 5.36, bottom row), Wang Han (having his students involved in checking a draft of the book), Ralf Haeusler (contributions to Subsection 8.1.5), Gabriel Hartmann (Figure 9.24), Simon Hermann (contributions to Subsections 5.4.2 and 8.1.2, Figures 4.16 and 7.5), Václav Hlaváč (suggestions for improving the contents of Chapters 1 and 2), Heiko Hirschmüller (Figure 7.1), Wolfgang Huber (Figure 4.2.2, bottom, right), Fay Huang (contributions to Chapter 6, in particular to Section 6.1.4), Ruyi Jiang (contributions to Section 9.3.3), Waqar Khan (Figure 7.17), Ron Kimmel (presentation suggestions on local operators and optic flow - which I need to keep mainly as a project for a future revision of the text), Karsten Knoeppel (contributions to Section 9.3.4),

Ko-Sc. Andreas Koschan (comments on various parts of the book and Figure 7.18, right), Vladimir Kovalevsky (Figure 2.15), Peter Kovesi (contributions to Chapters 1 and 2 regarding phase congruency, including the permission to reproduce figures), Walter Kropatsch (suggestions to Chapters 2 and 3), Richard Lewis-Shell (Figure 4.2.2, bottom, left), Fajie Li (Exercise 5.9), Juan Lin (contributions to Section 10.3), Yizhe Lin (Figure 6.19), Dongwei Liu (Figure 2.16), Yan Liu (permission to publish Figure 1.6), Rocío Lizárraga (permission to publish Figure 5.1, bottom row), Peter Meer (comments on Subsection 2.4.2), James Milburn (contributions to Section 4.4), Pedro Real (comments on geometric and topologic subjects), Mahdi Rezaei (contributions to face detection in Chapter 10, including text and figures, and Exercise 10.2), Bodo Rosenhahn (Figure 7.9, right), John Rugis (definition of similarity curvature and Exercises 7.2 and 7.6), James Russell (contributions to Subsection 5.1.1), Jorge Sanchez (contribution to Example 9.1, Figures 9.1, right, and 9.5), Konstantin Schauwecker (comments on feature detectors and RANSAC plane detection, Figures 6.10, right, 7.19, 9.9, and 2.23), Karsten Scheibe (contributions to Chapter 6, in particular to Section 6.1.4, and Figure 7.1), Karsten Schlüns (contributions to Section 7.4),

Sh-Z. Bok-Suk Shin (LaTeX editing suggestions, comments on various parts of the book, contributions to Subsections 5.1.1 and 3.4.1, and Figure 9.23 with related comments), Eric Song (Figure 5.6, left), Zijiang Song (contributions to Chapter 9, in particular to Subsection 9.2.4), Kathrin Spiller (contribution to the 3D case in Subsection 7.2.2), Junli Tao (contributions to pedestrian detection in Chapter 10, including text and figures and Exercise 10.1, and comments about the structure of this chapter), Akihiko Torii (contributions to Section 6.1.4), Johan VanHorebeek (comments on Chapter 10), Tobi Vaudrey (contributions to Subsection 2.3.2 and Figure 4.18, contributions to Section 9.3.4, and Exercise 9.6), Mou Wei (comments on Subsection 7.4.3), Shou-Kang Wei (joint work on subjects related to Section 6.1.4), Tiangong Wei (contributions to Subsection 7.4.3), Jürgen Wiest (Figure 9.1, left), Yihui Zheng (contributions to Subsection 5.1.1), Zezhong Xu (contributions to Subsection 3.4.1 and Figure 3.41), Shenghai Yuan (comments on Subsections 3.3.1 and 3.3.2), Qi Zang (Exercise 5.5, and Figures 2.21, 5.37, and 10.1), Yi Zeng (Figure 9.15), and Joviša Žunić (contributions to Subsection 3.3.2).

The author is, in particular, indebted to Sandino Morales (D.F., Mexico) for implementing and testing algorithms, for providing many figures, for contributions to Chapters 4, 5, and 8, and for numerous comments about various parts of the book; to Wladyslaw Skarbek (Warsaw, Poland) for manifold suggestions for improving the contents, and for contributing Exercises 1.10, 2.10, 2.11, 3.12, 4.11, 5.7, 5.8, and 6.10; and to Garry Tee (Auckland, New Zealand) for careful reading, commenting, for parts of Insert 5.9, the footnote on Page 412, and many more valuable hints.

I thank my wife, Gisela Klette, for authoring Subsection 3.2.4 about the Euclidean distance transform, and for critical views on structure and details of the book, which was written at CIMAT Guanajuato between mid-July and the beginning of November 2013, during a sabbatical leave from The University of Auckland, New Zealand.

2 Contents
Page numbers are for the submitted manuscript, and will certainly change in the
finalized book (after the editing process by Springer).

1 Image Data . 1
1.1 Images in the Spatial Domain . 1
1.1.1 Pixels and Windows . 2
1.1.2 Image Values and Basic Statistics . 3
1.1.3 Spatial and Temporal Data Measures . 8
1.1.4 Step-Edges . 10
1.2 Images in the Frequency Domain . 14
1.2.1 Discrete Fourier Transform . 15
1.2.2 Inverse Discrete Fourier Transform . 16
1.2.3 The Complex Plane . 17
1.2.4 Image Data in the Frequency Domain . 19
1.2.5 Phase-Congruency Model for Image Features . 25
1.3 Color and Color Images . 27
1.3.1 Color Definitions . 28
1.3.2 Color Perception, Visual Deficiencies, and Gray-Levels . 32
1.3.3 Color Representations . 36
1.4 Exercises . 41
1.4.1 Programming Exercises . 41
1.4.2 Non-Programming Exercises . 43

2 Image Processing . 45
2.1 Point, Local, and Global Operators . 45
2.1.1 Gradation Functions . 45
2.1.2 Local Operators . 48
2.1.3 Fourier Filtering . 51
2.2 Three Procedural Components . 54
2.2.1 Integral Images . 54
2.2.2 Regular Image Pyramids . 55
2.2.3 Scan Orders . 57
2.3 Classes of Local Operators . 59
2.3.1 Smoothing . 59
2.3.2 Sharpening . 62
2.3.3 Basic Edge Detectors . 64
2.3.4 Basic Corner Detectors . 69
2.3.5 Removal of Illumination Artefacts . 72
2.4 Advanced Edge Detectors . 75
2.4.1 LoG and DoG, and Their Scale Spaces . 75
2.4.2 Embedded Confidence . 80
2.4.3 The Kovesi Algorithm . 83
2.5 Exercises . 88

2.5.1 Programming Exercises . 88


2.5.2 Non-Programming Exercises . 90

3 Image Analysis . 91
3.1 Basic Image Topology . 91
3.1.1 4- and 8-Adjacency for Binary Images . 92
3.1.2 Topologically-Sound Pixel Adjacency . 96
3.1.3 Border Tracing . 100
3.2 Geometric 2D Shape Analysis . 103
3.2.1 Area . 103
3.2.2 Length . 106
3.2.3 Curvature . 109
3.2.4 Distance Transform (by Gisela Klette) . 112
3.3 Image Value Analysis . 119
3.3.1 Co-Occurrence Matrices and Measures . 120
3.3.2 Moment-Based Region Analysis . 122
3.4 Detection of Lines and Circles . 125
3.4.1 Lines . 125
3.4.2 Circles . 131
3.5 Exercises . 132
3.5.1 Programming Exercises . 132
3.5.2 Non-Programming Exercises . 136

4 Dense Motion Analysis . 139


4.1 3D Motion and 2D Optic Flow . 139
4.1.1 Local Displacement Versus Optic Flow . 139
4.1.2 Aperture Problem and Gradient Flow . 143
4.2 The Horn-Schunck Algorithm . 145
4.2.1 Preparing for the Algorithm . 146
4.2.2 The Algorithm . 151
4.3 Lucas-Kanade Algorithm . 156
4.3.1 Linear Least-Squares Solution . 157
4.3.2 Original Algorithm and Algorithm with Weights . 159
4.4 The BBPW Algorithm . 160
4.4.1 Used Assumptions and Energy Function . 161
4.4.2 Outline of the Algorithm . 163
4.5 Performance Evaluation of Optic Flow Results . 164
4.5.1 Test Strategies . 165
4.5.2 Error Measures for Available Ground Truth . 167
4.6 Exercises . 168
4.6.1 Programming Exercises . 168
4.6.2 Non-Programming Exercises . 170

5 Image Segmentation . 171


5.1 Basic Examples of Image Segmentation . 172

5.1.1 Image Binarization . 173


5.1.2 Segmentation by Seed Growing . 176
5.2 Mean-Shift Segmentation . 181
5.2.1 Examples and Preparation . 181
5.2.2 Mean-Shift Model . 184
5.2.3 Algorithms and Time Optimization . 187
5.3 Image Segmentation as an Optimization Problem . 193
5.3.1 Labels, Labeling, and Energy Minimization . 193
5.3.2 Examples of Data and Smoothness Terms . 196
5.3.3 Message Passing . 198
5.3.4 Belief-Propagation Algorithm . 200
5.3.5 Belief Propagation for Image Segmentation . 206
5.4 Video Segmentation and Segment Tracking . 208
5.4.1 Utilizing Image Feature Consistency . 208
5.4.2 Utilizing Temporal Consistency . 210
5.5 Exercises . 214
5.5.1 Programming Exercises . 214
5.5.2 Non-Programming Exercises . 216

6 Cameras, Coordinates, and Calibration . 221


6.1 Cameras . 222
6.1.1 Properties of a Digital Camera . 222
6.1.2 Central Projection . 227
6.1.3 A Two-Camera System . 229
6.1.4 Panoramic Camera Systems . 231
6.2 Coordinates . 234
6.2.1 World Coordinates . 234
6.2.2 Homogeneous Coordinates . 236
6.3 Camera Calibration . 238
6.3.1 A User's Perspective on Camera Calibration . 238
6.3.2 Rectification of Stereo Image Pairs . 243
6.4 Exercises . 247
6.4.1 Programming Exercises . 247
6.4.2 Non-Programming Exercises . 249

7 3D Shape Reconstruction . 251


7.1 Surfaces . 251
7.1.1 Surface Topology . 252
7.1.2 Local Surface Parametrizations . 255
7.1.3 Surface Curvature . 259
7.2 Structured Lighting . 262
7.2.1 Light Plane Projection . 262
7.2.2 Light Plane Analysis . 264
7.3 Stereo Vision . 266
7.3.1 Epipolar Geometry . 267

7.3.2 Binocular Vision in Canonical Stereo Geometry . 268


7.3.3 Binocular Vision in Convergent Stereo Geometry . 272
7.4 Photometric Stereo Method . 275
7.4.1 Lambertian Reflectance . 276
7.4.2 Recovering Surface Gradients . 279
7.4.3 Integration of Gradient Fields . 282
7.5 Exercises . 289
7.5.1 Programming Exercises . 289
7.5.2 Non-Programming Exercises . 292

8 Stereo Matching . 293


8.1 Matching, Data Cost, and Confidence . 294
8.1.1 Generic Model for Matching . 295
8.1.2 Data-Cost Functions . 299
8.1.3 From Global to Local Matching . 301
8.1.4 Testing Data Cost Functions . 304
8.1.5 Confidence Measures . 306
8.2 Dynamic Programming Matching . 308
8.2.1 Dynamic Programming . 309
8.2.2 Ordering Constraint . 310
8.2.3 DPM Using the Ordering Constraint . 313
8.2.4 DPM Using a Smoothness Constraint . 318
8.3 Belief-Propagation Matching . 323
8.4 Third-Eye Technique . 326
8.4.1 Generation of Virtual Views for the Third Camera . 327
8.4.2 Similarity Between Virtual and Third Image . 331
8.5 Exercises . 334
8.5.1 Programming Exercises . 334
8.5.2 Non-Programming Exercises . 336

9 Feature Detection and Tracking . 339


9.1 Invariance, Features, and Sets of Features . 339
9.1.1 Invariance . 340
9.1.2 Keypoints and 3D Flow Vectors . 341
9.1.3 Sets of Keypoints in Subsequent Frames . 344
9.2 Examples of Features . 348
9.2.1 Scale-Invariant Feature Transform . 349
9.2.2 Speeded-Up Robust Features . 350
9.2.3 Oriented Robust Binary Features . 352
9.2.4 Evaluation of Features . 355
9.3 Tracking and Updating of Features . 357
9.3.1 Tracking is a Sparse Correspondence Problem . 359
9.3.2 Lucas-Kanade Tracker . 361
9.3.3 Particle Filter . 366
9.3.4 Kalman Filter . 373

9.4 Exercises . 379


9.4.1 Programming Exercises . 379
9.4.2 Non-Programming Exercises . 383

10 Object Detection . 385


10.1 Localization, Classification, and Evaluation . 385
10.1.1 Descriptors, Classifiers, and Learning . 386
10.1.2 Performance of Object Detectors . 390
10.1.3 Histogram of Oriented Gradients . 393
10.1.4 Haar Wavelets and Haar Features . 395
10.1.5 Viola-Jones Technique . 398
10.2 AdaBoost . 402
10.2.1 Algorithm . 402
10.2.2 Parameters . 404
10.2.3 Why Those Parameters? . 407
10.3 Random Decision Forests . 409
10.3.1 Entropy and Information Gain . 409
10.3.2 Applying a Forest . 412
10.3.3 Training a Forest . 414
10.3.4 Hough Forests . 418
10.4 Pedestrian Detection . 420
10.5 Exercises . 422
10.5.1 Programming Exercises . 422
10.5.2 Non-Programming Exercises . 423

Symbols . 425

Index . 427

Persons . 439

3 Sample: Section 8.1.2 on Data Cost Functions

A stereo matcher is often defined by the data-cost and smoothness-cost terms used, and by a control structure that determines how those terms are applied for minimizing the total error of the calculated labeling function f. Smoothness terms are defined fairly generically, and we present possible control structures later in this chapter. Data-cost calculation is the "core component" of a stereo matcher. We define a few data-cost functions, with a particular focus on ensuring some invariance with respect to lighting artifacts in the recorded images, or brightness differences between left and right images.
Zero-Mean Version. Instead of calculating a data-cost function such as $E_{SSD}(x, d)$ or $E_{SAD}(x, d)$ on the original image data, we first calculate the mean $\bar{B}_x$ of the used window $W_x^{l,k}(B)$ and the mean $\bar{M}_{x+d}$ of the used window $W_{x+d}^{l,k}(M)$, subtract $\bar{B}_x$ from all intensity values in $W_x^{l,k}(B)$ and $\bar{M}_{x+d}$ from all values in $W_{x+d}^{l,k}(M)$, and then calculate the data-cost function in its zero-mean version. This is one option for reducing the impact of lighting artefacts (i.e. for not depending on the ICA).
We indicate this way of processing by starting the subscript of the data-cost function with a Z. For example, $E_{ZSSD}$ and $E_{ZSAD}$ are the zero-mean SSD and zero-mean SAD data-cost functions, respectively, formally defined by

$E_{ZSSD}(x, d) = \sum_{i=-l}^{l} \sum_{j=-k}^{k} \left[ (B_{x+i,y+j} - \bar{B}_x) - (M_{x+i+d,y+j} - \bar{M}_{x+d}) \right]^2$   (1)

$E_{ZSAD}(x, d) = \sum_{i=-l}^{l} \sum_{j=-k}^{k} \left| (B_{x+i,y+j} - \bar{B}_x) - (M_{x+i+d,y+j} - \bar{M}_{x+d}) \right|$   (2)
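For illustration, here is a minimal sketch (not from the book) of the zero-mean SAD cost of Eq. (2) for a single pixel location and disparity. NumPy, the function name, and the omission of border handling are assumptions of this sketch; B is the base and M the match image, as above.

```python
import numpy as np

def e_zsad(B, M, x, y, d, l=3, k=3):
    """Zero-mean SAD data cost for base-image pixel (x, y) and disparity d.

    B and M are 2D NumPy arrays (gray-level base and match images);
    the window has size (2l+1) x (2k+1); border handling is omitted.
    """
    # Windows W_x(B) and W_{x+d}(M); NumPy indexing is [row, column] = [y, x].
    wb = B[y - k : y + k + 1, x - l : x + l + 1].astype(np.float64)
    wm = M[y - k : y + k + 1, x + d - l : x + d + l + 1].astype(np.float64)
    # Subtract the window means (the "zero-mean" step), then sum absolute differences.
    return np.sum(np.abs((wb - wb.mean()) - (wm - wm.mean())))
```

A full matcher would evaluate such a cost for every base-image pixel and every disparity in the search interval.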

NCC Data Cost. The normalized cross-correlation (NCC) was defined in Insert 2.5 for comparing two images. The NCC is already defined with zero-mean normalization, but we add the Z to the index for uniformity of notation. The NCC data cost is defined by

$E_{ZNCC}(x, d) = 1 - \dfrac{\sum_{i=-l}^{l} \sum_{j=-k}^{k} (B_{x+i,y+j} - \bar{B}_x)(M_{x+i+d,y+j} - \bar{M}_{x+d})}{\sqrt{\sigma_{B,x}^2 \cdot \sigma_{M,x+d}^2}}$   (3)

where

$\sigma_{B,x}^2 = \sum_{i=-l}^{l} \sum_{j=-k}^{k} (B_{x+i,y+j} - \bar{B}_x)^2$   (4)

$\sigma_{M,x+d}^2 = \sum_{i=-l}^{l} \sum_{j=-k}^{k} (M_{x+i+d,y+j} - \bar{M}_{x+d})^2$   (5)

ZNCC is also an option for moving away from the ICA.
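A corresponding sketch (again not from the book, with the same assumptions as the ZSAD sketch above) for the ZNCC cost of Eq. (3):

```python
import numpy as np

def e_zncc(B, M, x, y, d, l=3, k=3):
    """NCC data cost of Eq. (3): 1 minus the zero-mean normalized cross-correlation."""
    wb = B[y - k : y + k + 1, x - l : x + l + 1].astype(np.float64)
    wm = M[y - k : y + k + 1, x + d - l : x + d + l + 1].astype(np.float64)
    zb, zm = wb - wb.mean(), wm - wm.mean()
    denom = np.sqrt(np.sum(zb ** 2) * np.sum(zm ** 2))
    if denom == 0.0:  # constant window(s): correlation undefined, treat as maximal cost
        return 1.0
    return 1.0 - np.sum(zb * zm) / denom
```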


The Census Data-Cost Function. The zero-mean normalized census cost function is defined as follows:

$E_{ZCEN}(x, d) = \sum_{i=-l}^{l} \sum_{j=-k}^{k} \rho(x+i, y+j, d)$   (6)

with

$\rho(u, v, d) = \begin{cases} 0 & \text{if } B_{u,v} \perp \bar{B}_x \text{ and } M_{u+d,v} \perp \bar{M}_{x+d} \\ 1 & \text{otherwise} \end{cases}$   (7)

with $\perp$ being either $<$ or $>$ in both cases. By using $B_x$ instead of $\bar{B}_x$, and $M_{x+d}$ instead of $\bar{M}_{x+d}$, we have the census data-cost function $E_{CEN}$ (without zero-mean normalization).

Example 1. (Example for Census Data Cost) Consider the following 3 × 3 windows $W_x(B)$ and $W_{x+d}(M)$:

$W_x(B) = \begin{pmatrix} 2 & 1 & 6 \\ 1 & 2 & 4 \\ 2 & 1 & 3 \end{pmatrix}, \qquad W_{x+d}(M) = \begin{pmatrix} 5 & 5 & 9 \\ 7 & 6 & 7 \\ 5 & 4 & 6 \end{pmatrix}$

We have that $\bar{B}_x \approx 2.44$ and $\bar{M}_{x+d} \approx 6.11$.

Consider $i = j = -1$, resulting in $u = x - 1$ and $v = y - 1$. We have that $B_{x-1,y-1} = 2 < 2.44$ and $M_{x-1+d,y-1} = 5 < 6.11$, thus $\rho(x-1, y-1, d) = 0$.
As a second example, consider $i = j = +1$. We have that $B_{x+1,y+1} = 3 > 2.44$, but $M_{x+1+d,y+1} = 6 < 6.11$, thus $\rho(x+1, y+1, d) = 1$.
In the case $i = j = -1$, the values are in the same relation with respect to the mean, but at $i = j = +1$ they are in opposite relations. For the given example it follows that $E_{ZCEN} = 2$. The spatial distribution of the $\rho$-values is illustrated by the matrix

$\begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$

The following vector $\mathbf{c}_{x,d}$ lists these $\rho$-values in a left-to-right, top-to-bottom order: $[0, 0, 0, 1, 0, 0, 0, 0, 1]^\top$. □
Let $\mathbf{b}_x$ be the vector listing the results $\mathrm{sgn}(B_{x+i,y+j} - \bar{B}_x)$ in a left-to-right, top-to-bottom order, where sgn is the signum function. Similarly, $\mathbf{m}_{x+d}$ lists the values $\mathrm{sgn}(M_{x+i+d,y+j} - \bar{M}_{x+d})$. For the values in Example 1, we have that

$\mathbf{b}_x = [-1, -1, +1, -1, -1, +1, -1, -1, +1]^\top$   (8)
$\mathbf{m}_{x+d} = [-1, -1, +1, +1, -1, +1, -1, -1, -1]^\top$   (9)
$\mathbf{c}_{x,d} = [\,0, 0, 0, 1, 0, 0, 0, 0, 1\,]^\top$   (10)

The vector $\mathbf{c}_{x,d}$ shows exactly the positions where the vectors $\mathbf{b}_x$ and $\mathbf{m}_{x+d}$ differ in values; the number of positions where two vectors differ is known as the Hamming distance of those two vectors.

Observation 1. The zero-mean normalized census data cost $E_{ZCEN}(x, d)$ equals the Hamming distance between the vectors $\mathbf{b}_x$ and $\mathbf{m}_{x+d}$.

By adapting the definition of both vectors $\mathbf{b}_x$ and $\mathbf{m}_{x+d}$ to the census data-cost function $E_{CEN}$, we can also obtain those costs as the Hamming distance.
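To make the observation concrete, here is a small sketch (not from the book) that evaluates Eqs. (6) and (7) for one pixel and disparity; as before, NumPy, the function name, and the absence of border handling are assumptions of this sketch.

```python
import numpy as np

def e_zcen(B, M, x, y, d, l=1, k=1):
    """Zero-mean census cost of Eqs. (6)-(7) for pixel (x, y) and disparity d."""
    wb = B[y - k : y + k + 1, x - l : x + l + 1].astype(np.float64)
    wm = M[y - k : y + k + 1, x + d - l : x + d + l + 1].astype(np.float64)
    bmean, mmean = wb.mean(), wm.mean()
    # rho = 0 where both values lie on the same (strict) side of their window mean,
    # rho = 1 otherwise; the cost is the sum of rho over the window (Eq. 6).
    same_side = ((wb > bmean) & (wm > mmean)) | ((wb < bmean) & (wm < mmean))
    # Equivalently (Observation 1): the Hamming distance between the sign
    # vectors b_x and m_{x+d}.
    return int(np.count_nonzero(~same_side))
```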

===================================
Hamming. The American mathematician R. W. Hamming (1915 - 1998) contributed to computer science and telecommunications. The Hamming code, Hamming window, Hamming numbers, and the Hamming distance are all named after him.
===================================

By replacing the values $-1$ by $0$ in the vectors $\mathbf{b}_x$ and $\mathbf{m}_{x+d}$, the Hamming distance between the resulting binary vectors can be calculated very time-efficiently; see [H. S. Warren. Hacker's Delight. Pages 65-72, Addison-Wesley Longman, New York, 2002].
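A sketch of that idea (an assumed implementation, not the book's): encode each window as an integer bit string, with bit i set iff the i-th value is above the window mean, then XOR the two bit strings and count the set bits.

```python
def census_bits(window_values, mean):
    """Encode a window as an integer bit string: bit i is 1 iff value i > mean."""
    bits = 0
    for i, v in enumerate(window_values):
        if v > mean:
            bits |= 1 << i
    return bits

def hamming(a, b):
    """Hamming distance between two equally long bit strings stored as integers."""
    return bin(a ^ b).count("1")  # XOR marks differing bits; popcount counts them
```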

4 Index of Subjects

Page numbers are, as in the list of contents, for the submitted manuscript, and
will certainly change in the finalized book (after the editing process by Springer).

Index

Gmax , 4 SouthLeft, 45
Ω, 2, 5 SouthRight, 45
atan2, 21, 67 Spring, 206, 207, 215
pos, 86, 87 Straw, 21
Altar, 62 Taroko, 10
AnnieYukiTim, 27, 172, 214 Tomte, 99
Aussies, 10, 180, 215 Uphill, 45
Crossing, 293, 294, 317 Wiper, 45
Donkey, 22 WuhanU, 14
Emma, 52, 55 Xochicalco, 215
Fibers, 21 Yan, 6, 172
Fountain, 4 bicyclist, 141, 210, 355
Kiri, 99 motorway, 210
LightAndTrees, 72 queenStreet, 163
MainRoad, 72 tennisball, 141, 208, 209
1D, 15
Michoacan, 399
2D, 15
MissionBay, 176, 214
3D, 6
Monastry, 178
Neuschwanstein, 5 absolute difference, 297
NorthLeft, 72 AC, 21
NorthRight, 72 accumulated cost, 294
Odense, 182, 189 accuracy
OldStreet, 10 – sub-cell, 129
PobleEspanyol, 99 – subpixel, 124
RagingBull, 46 AD, 297, 315, 318
Rangitoto, 99 AdaBoost, 402
RattusRattus, 173 adaptive boosting, 402
Rocio, 172, 401 adjacency
SanMiguel, 3, 8 -, 92
Set1Seq1, 49, 61, 68, 70, 71, 77, 82 -, 136
Set2Seq1, 74, 167 -, 92

– K-, 99, 137 Bebenhausen, 178


affine transform, 234 belief-propagation algorithm, 198
albedo, 277 belief-propagation matching, 303, 323
algorithm Benham disk, 33
– BBPW, 161 Berlin, 251, 267, 275
– belief-propagation, 198 binarization
– condensation, 367 – Otsu, 174
– fill-, 179 binary robust independent elementary fea-
– Frankot-Chellappa, 287, 289 tures, 352
– Horn-Schunck, 141, 152 bird’s-eye view, 368
– Kovesi, 27, 83 border, 252
– Lucas-Kanade optic flow, 156 – inner, 102
– Marr-Hildreth, 75 – outer, 41, 102
– mean-shift, 181, 209 border cycles, 102
– Meer-Georgescu, 80 bounding box, 385, 390, 393
– meta-, 402 box filter, 52, 59
– optical flow, 212 BP, 198
– pyramidal, 164 – pyramidal, 205
– recursive labeling, 179 BPM, 303, 323, 332, 411
– two-scan, 283 BRIEF, 352
– Voss, 101 brightness, 35
– Wei-Klette, 287, 289 butterfly, 130
amplitude, 21
anaglyphic image, 249 calibration mark, 124, 241, 242
angle camera
– slope, 12, 110 – fish-eye, 232
angular error, 167 – omnidirectional, 231
aperture problem, 143, 153 – panoramic, 231
arc – rotating sensor-line, 231
– Jordan, 106 camera matrix, 244
area, 104 camera obscura, 40, 222
artefacts Canny operator, 67
– illumination, 72 cardinality, 5
aspect ratio, 224 carrier, 1, 5
Auckland, 233, 248, 303 catadioptric, 231
Cañada de la Virgin, 3
background plane, 256 CCD, 222
backtracking, 309, 321 census cost function, 300
bad pixel, 305 central projection, 228
band, 4 centroid, 122, 182
barber pole, 140 channel, 4
base distance, 229, 273 – intensity, 8
base image, 295 CIE, 29, 44
base line, 273 CIMAT, 1
basis functions, 16, 51 circle, 131
Bayer pattern, 225 – osculating, 111
Bayesian network, 195 class, 386
BBPW algorithm, 161 classification, 389
– pyramidal, 164 classifier, 387
beam splitter, 225 – strong, 387, 399

– weak, 387, 402 – RGB, 37


clustering, 218 cumulative frequencies, 6
– segmentation by, 181 curvature, 110
CMOS, 222 – curve, 110
co-occurrence matrix, 120 – Gaussian, 259, 260
co-occurrence measures, 122 – main, 260
coefficients – mean, 260
– Fourier, 17 – normal, 259, 260
College Park, 78 – principal, 260
color, 27 – similarity, 261, 292
– blindness, 32 curve
color checker, 39, 225 – Jordan, 95
color key, 141, 153 – simple, 95
color perception – smooth, 109
– primary, 35 – smooth Jordan, 109
color space
– CIE, 30 data cost matrix, 298
column, 2 data energy, 148
component, 94, 133 data error, 148
concave point, 110 DC, 21, 26
condensation algorithm, 367 deficit
confidence, 80 – isoperimetric, 134
confidence measure, 80 density, 6
conjugate complex number, 18 density estimator, 185
connectedness, 93, 177 depth, 256
consensus set, 346 depth map, 256
consistency depth-first visits, 179
– temporal, 208, 210 derivative
contrast, 7, 9 – discrete, 64
control, 374 Descartes-Euler theorem, 92
control matrix, 374 descriptor, 341, 387
controllability, 373 descriptor matrix, 394
convex point, 110 detection
convolution, 25, 49, 51, 76 – of faces, 385, 401, 407
coordinate system – of lane borders, 135
– left-hand, 2 – of pedestrians, 409, 420, 422
– world, 234 determinant, 70, 103
coordinates deviation
– homogeneous, 236, 346 – relative, 105
– spherical, 257 DFT, 15
corner, 69 – inverse, 16
cornerness measure, 70 dichromatic reflectance model, 291
corresponding pixel, 295 difference quotient, 145
cosine law, 276 differential quotient, 145
cost, 199 diffuse reflector, 276
– accumulated, 294 digital geometry, 108
cross product, 237 digital straight segment, 108
cross-correlation digitization, 104
– normalized, 166 dioptric, 231
cube disk of influence, 340

disparity, 267, 269 – optic flow, 146


displacement, 139 equations
dissimilarity vector, 363 – Euler-Lagrange, 163
distance equivalence class, 177
– between functions, 10 equivalence relation, 177
– Euclidean, 188 error, 148, 194, 388
– Hamming, 301 – angular, 167
– Mahalanobis, 380 – endpoint, 167
distance map, 256 – prediction, 375
distance transform, 112 error function
divergence, 163 – partial, 314
DoG, 60, 75 essential matrix, 246
– modified, 176 Euclidean distance transform, 113
domain Euler characteristic, 254
– frequency, 14, 15 Euler formula, 260
– spatial, 15 Euler number, 60
dot product, 147, 258 Euler’s formula, 93
DPM, 309, 320 Euler-Lagrange equations, 163
drift, 365 Eulerian formula, 15
DSS, 108
Dunedin, 153 factor
dynamic-programming matching, 309 – shape, 133
false-negative, 386
eccentricity, 123 false-positive, 386
ECCV, 169 FAST, 71, 352
edge, 7, 10, 75 Fast Fourier Transform, 19
edge map, 14, 74 feature, 341, 387
EDT, 203 FFT, 19
ego-motion, 359 fill-algorithm, 179
ego-vehicle, 357 filter
eigenvalue, 69, 160 – Fourier, 15
eigenvalues, 260 – high pass, 52
EISATS, 74, 127, 166, 168–170, 305, 336 – low pass, 52
elevation map, 256 – sigma, 61
endpoint error, 167 filter kernel, 49
energy, 148, 194 filtering
entropy, 409 – Fourier, 51
– conditional, 410 flow
– normalized, 410 – gradient, 143, 153
envelope flow vectors
– lower, 117, 202 D, 340
Epanechnikov function, 185 focal length, 139, 227
epipolar geometry, 267 focus of expansion, 381
– canonical, 268 footprint, 173
epipolar line, 247, 268 – temporal, 213
epipolar plane, 267 forest, 389, 409
epipolar profile, 312 formula
equalization – Eulerian, 15
– histogram, 47 Fourier coefficients, 51
equation Fourier filter, 15

Fourier transform, 15 gradient constancy, 161


– local, 26 gradient flow, 143, 153
fps, 139 gradient histogram, 350
frame, 9 gradient space, 257
Frankot-Chellappa algorithm, 287, 289 graph, 93
Frenet frame, 109, 110 – planar, 93
frequency, 15 Gray code, 263
– absolute, 6 gray-level, 35
– cumulative, 6 grid
– relative, 6 – regular, 1
frontier, 97, 98, 252 grid point, 2
function grid squares, 2
– ceiling, 76 ground plane, 252
– density, 181 ground truth, 167, 219
– Epanechnikov , 185 Guanajuato, 1, 4, 62
– error, 195
– Gauss, 59 Haar descriptor, 399
– gradation, 45 Haar feature, 398
– kernel, 184 Haar transform
– labeling, 148, 163, 194 – discrete, 396
– linear cost, 197, 201 Haar wavelet, 395
– LSE, 150 Haar-like features, 352
– Mexican hat, 76 Hamming distance, 301
– quadratic cost, 198, 203 Harris detector, 70
– split, 415 Harris filter, 354
fundamental matrix, 246 HCI, 321
fundamental theorem of algebra, 16 Heidelberg Robust Vision Challenge, 74,
169
Göttingen, 59 height, 256
Gabor wavelets, 83 height map, 256
gamma compression, 33 Hessian matrix, 69, 87
gamma expansion, 33 high pass, 52
gamut, 30, 31 highlight removal, 292
gap in surface, 252 Hilbert scan, 58
Gauss filter, 59 histogram, 6, 134, 174
Gauss function, 77, 185 – n-dimensional, 191
Gauss–Seidel relaxations, 205 D, 89, 182
Gaussian filter, 159 – cumulative, 6
Gaussian sphere, 257 – gradient, 350
GCA, 161 – gray-level, 6
geometry histogram equalization, 46
– digital, 108 histogram of oriented gradients, 393
– epipolar, 267 hit, 386
Gibbs random field, 195 HoG, 393
global integration, 284 HoG descriptor, 394
global matching, 302 hole, 102
GM, 302 holography, 261
Goldcoast, 10 homeomorphic, 96
gradient, 12 homogeneity, 122, 134
– spatio-temporal, 162 homography, 246, 368

Horn-Schunck algorithm, 141, 152, 165 integration matrix, 319


– pyramidal, 155, 167 intensity, 35, 37, 38
Horn-Schunck constraint, 146, 161 intensity channel, 8
Hough transform intensity constancy, 72, 146, 161
– by Duda and Hart, 127 intensity profile, 8
– original, 126 interest point, 341
– standard, 129 integral image, 54
HS, 146 interior, 98
HSI, 37, 42 invariance, 340
hue, 38 – rotation, 124
hysteresis, 67, 82 inverse perspective mapping, 368
Hz, 139 iSGM, 411
Ishihara color test, 33
IAPR, 50 isoperimetric deficit, 134
ICA, 72, 146, 155, 161, 297, 299, 300, 333 isothetic, 108, 135
ICCV, 95 isotropic, 124, 341
ICPR, 50 isotropy, 340
iff, 3 iterative solution scheme, 151
image, 1
– anaglyphic, 249 Jacobi method, 151
– as a surface, 12 Jacobian matrix, 363
– base, 295 Jet Propulsion Laboratory, 2
– binary, 4, 132 Jordan arc, 106
– gray-level, 4 – rectifiable, 106
– integral, 54 Jordan curve, 95
– match, 295 Jordan surface, 255
– residual, 74 Jordan-Brouwer theorem, 95
– scalar, 4 jpg, 41
– vector-valued, 4
– virtual, 328 K-adjacency, 99
image binarization, 173 Kalman filter, 373, 380, 381
image retrieval, 379 – iconic, 381
image segmentation, 172 Kalman gain, 376
image similarity, 379 key
image stitching, 248 – color, 141, 153, 164
images keypoint, 341
– residual, 169 Kinect 1, 261
imaginary unit, 15 Kinect 2, 261
importance KITTI, 74, 169, 305, 320
– order of, 98 Kovesi algorithm, 27, 83
inequality
– isoperimetric, 134 labeling, 149, 194
information gain, 412 – of segments, 178
inlier, 345 labeling function, 148, 149, 162, 163
innovation step, 376 labeling problem, 195, 284
integrability condition, 282 Lambertian reflectance, 275
integrating, 1 Lambertian reflectance map, 278
integration, 282 Lambertian reflector, 276
– global, 284 lane border detection, 135
– local, 283 Laplacian, 13, 67, 75

laser scanner, 261 – co-occurrence, 120


layer, 61 – control, 374
LBP, 353 – cross product, 247
Le Gras, 221 – data cost, 298
leaf node, 389 – descriptor, 394
learning, 389 – diagonal, 159
– supervised, 389 – essential, 246
– unsupervised, 390 – fundamental, 246
least squares method – Hessian, 69, 87, 364
– linear, 157 – integration, 319
least-square error optimization, 149 – Jacobian, 363
left-right consistency, 306 – mean, 188
length, 13, 106 – observation, 374
lens distortion, 226 – residual variance, 376
line, 125, 148 – state transition, 374
linear algebra, 187 – system, 373
linear dynamic system, 373 matrix sensor, 222
local binary pattern, 353 Mavica, 223
local integration, 283 maximum
LoG, 75, 76 – local, 48
low pass, 52 mean, 4, 182
lower envelope, 202, 203 – local, 48
LSE, 149, 362 mean-shift algorithm, 181, 209
Lucas-Kanade optic-flow algorithm, 156, meander, 57
159 measure
Lucas-Kanade tracker, 362 – accuracy, 305
luminance, 35 – co-occurrence, 134
– confidence, 306
magnitude, 13, 21 – cornerness, 70
Mahalanobis distance, 380 – data, 9
main axis, 122, 123 – dissimilarity, 212
map – error, 167
– depth, 256 – for performance of a classifier, 391
– distance, 256 median operator, 59
– edge, 14 Meer-Georgescu algorithm, 80
– elevation, 256 message, 199
– height, 256 – initial, 201
– Lambertian reflectance, 278 message board, 200, 324
– needle, 154 method
– reflectance, 278 – red-black, 204
– Weingarten, 260 – steepest-ascent, 183
Markov random field, 195 method of least squares, 157
Marr-Hildreth algorithm, 75 metric, 10, 212
mask, 332, 398 Mexican hat, 76
masking Middlebury data, 166, 305
– unsharp, 62 Minneapolis, 206
match image, 295 miss, 386
matching problem, 345 mode, 183
matrix model
– camera, 244 – additive color, 36

– grid cell, 2, 92 – median, 59


– grid point, 2, 92 – point, 45, 50
– HSI color, 38 – Sobel, 66
– phase-congruency, 10 optic axis, 227
– Potts, 197 optic flow, 140
– RGB, 4 optic flow equation, 146
– step-edge, 10, 80 optical flow algorithm, 212
– subtractive color, 36 optimal Kalman gain, 376
Moebius band, 255 optimization
moments, 122, 182 – least-square error, 149
– central, 123 – TVL1 , 162
motion – TVL2 , 149, 162
D, 140 ORB, 352
D, 140 order
Mpixel, 224 – circular, 101, 137
MRF, 195, 198 – of a moment, 122
ordering constraint, 310
NCC, 166, 299 orientation, 104
NCC data cost, 299 – coherent, 254
needle map, 153 – of a triangle, 254
neighborhood, 93 oriented robust binary features, 352
D, 342 Otsu binarization, 174
noise, 45, 373 outlier, 344, 345
– Gaussian, 46
– observation, 374 pair
– system, 374 – Fourier, 22
non-photorealistic rendering, 215 panorama
norm – cylindric, 231
– L2 , 18 – stereo, 233
normal, 12, 257 parabola, 113, 117, 138, 203
– unit, 257 parameters
normalization – extrinsic, 238
– directional, 135 – intrinsic, 238
– of functions, 9 Parseval’s theorem, 21, 286
normalized cross-correlation, 166 part
NPR, 215 – imaginary, 17
– real, 17
object candidate, 385 particle filter, 366
object detector, 390 partition, 177
observability, 373 Pasadena, 2
observation matrix, 374 patch, 390
octave, 61 path
operation – in pyramid, 56
– local, 48 PDF, 6
operator peak, 128, 181, 197
– box, 59 – local, 183, 209
– Canny, 67 penalizer
– global, 50 – quadratic, 162
– local, 49 performance evaluation, 164
– local linear, 49 performance measure for classifiers, 391

perimeter, 105, 133 query by example, 379


phase, 21
– being in, 26 random decision forest, 409
phase congruency, 25, 26 random sample consensus, 340, 345
photograph, first, 221 RANSAC, 340, 345
photometric stereo method, 275 Rattus rattus, 173
pinhole camera RDF, 409
– model of a, 227 recovery rate, 218
pixel, 1, 2 rectification
– bad, 305 – geometric, 243
– corresponding, 295 red-black method, 204
pixel feature, 177 reference point, 3
pixel location, 2 reflectance, 278
plane – Lambertian, 275
– complex, 17 reflectance map, 278
– tangential, 12 region, 94
point region of interest, 385
– concave, 110 relation
– convex, 110 – equivalence, 177
– corresponding, 267 – reflexive, 177
– singular, 109 – symmetric, 177
point at infinity, 236 – transitive, 177
polygon, 104 rendering
polyhedron, 92 – non-photorealistic, 215
polynomial repeatability, 356
– second order, 112 representation
posterization effect, 215 – explicit, 256
Potts model, 197, 201, 219 – implicit, 256
Prague, 231 resampling, 372
prediction error, 375 residual vector
principal point, 229 – measurement, 376
probability, 6, 174 RGB, 4, 7, 37, 191
– conditional, 410 RGB primaries, 30
problem Rio de Janeiro, 95
– aperture, 143 RoI, 385
– labeling, 195 root of unity, 18
product rotation angles
– dot, 147 – Eulerian, 235
– inner, 147 row, 2
– vector, 147 run-time
profile – asymptotic, 217
– intensity, 8
projection center, 139, 227 SAD, 297
property sample, 1, 414
– symmetry, 20 sampling, 1, 76
PSM, 275, 279 saturation, 38
– albedo-independent, 279 scale, 60, 67
– inverse, 280 scale space, 60, 61
pyramid, 55, 205 – box-filter, 379
pyramidal algorithm, 164 – DoG, 78

– Gaussian, 61 – descriptor, 387


– LoG, 77 – feature, 182
scale-invariant feature transform, 349 – gradient, 257
scaling – Hough, 128
– conditional, 48 – velocity, 147
– linear, 47, 77 spectrum, 21
scan order, 57 – visible, 28
scanline, 313 speeded-up robust features, 350
scanner split function, 415
D, 261 split node, 389
scenario, 321 square
search interval, 295 – magic, 58
SEDT, 115 SSD, 297
seed pixel, 176 stability, 373
seed point, 390 staircase effect, 107
segment, 172 standard deviation, 5, 60
– corresponding, 211 state, 373
segment labeling state transition matrix, 374
– recursive, 178 static, 140
segmentation statistics
– mean-shift, 215 – spatial value, 9
– video, 208 – temporal value, 9
semi-global matching, 303 stereo analysis
– basic, 322 – uncertainty of, 291
– iterative, 323 stereo geometry
separability – canonical, 230
– linear, 388 stereo matcher, 298
set stereo pair, 293
– closed, 98 stereo points
– compact, 98, 292 – corresponding, 246
– of labels, 194 stereo visualization, 249
– open, 98 stitching, 248
SGM, 303 straight line
Shanghai, 10 – dual, 258
shape factor, 133 structure-texture decomposition, 74
sharpening, 62 structured light, 262
SIFT, 349 stylization
sigma filter, 61 – Winnemöller, 175
similarity subpixel accuracy, 124, 306
– structural, 10 sum of absolute differences, 297
situation, 321, 333 sum of squared differences, 297
slope angle, 12, 109 suppression
smoothing, 59 – non-maxima, 67, 72, 81, 87
– Gauss, 56 SURF, 350
smoothness energy, 149 surface, 252
smoothness error, 149 – Jordan, 254
snake – nonorientable, 254
– rotating, 36 – orientable, 254
Sobel operator, 66, 90 – polyhedral, 252
space – smooth, 252

surface patch, 256 triangulation, 265


surveillance tristimulus values, 29
– environmental, 173 true-negative, 386
symmetric difference, 212 true-positive, 386
symmetry property, 20 truncation, 198
system matrix, 373 TUD Multiview Pedestrians, 412
TV, 149
Tübingen, 178, 272 TVL2 , 74
Taiwan, 10
Taylor expansion, 145, 161, 363 uncertainty, 291
term uniformity, 122, 134
– continuity, 195 uniqueness constraint, 313
– data, 195 unit
– neighborhood, 195 – imaginary, 15
– smoothness, 195 unit vector, 155
theorem unsharp masking, 62
– by Meusnier, 260
– convolution, 25, 51 Valenciana
– four-color , 194 – baroque church, 1, 62
– Jordan-Brouwer, 95 variance, 5, 60
– Parseval’s, 286 – between-class, 174
third-eye technique, 328 variation
thresholding, 132 – quadratic, 13
tilt, 257, 275 vector
time complexity – magnitude, 155
– asymptotic, 217 – tangent, 110
topology, 92, 97 – unit, 155
– digital, 92 – unit normal, 110
– Euclidean, 97 vector field, 140
total variation, 149 – dense, 141
Tour de France, 137 – sparse, 141
trace, 69, 70 vector product, 147
tracing vectors
– border, 100 – cross product, 237
tracking, 210 velocity, 139, 140
training, 387 velocity space, 147
transform vergence, 275
– affine, 234 video
– barrel, 226 – progressive, 223
– cosine, 15 video segmentation, 208
– distance, 112 video surveillance, 216
– Euclidean distance, 113, 203 Voss algorithm, 101, 102, 137
– Fourier, 14
– histogram, 45 warping, 362
– integral, 15 wavelets, 83
– linear, 234 Wei-Klette algorithm, 287, 289
– pincushion, 226 weighted graph, 309
transpose, 80 weights, 159, 185, 366, 388, 396
triangle wide angle, 228
– oriented, 254 window, 3
– default, 6 ZCEN, 300, 320
Winnemöller stylization, 172, 175 zero-crossing, 12, 76
Wuhan, 14 zero-mean version, 299

5 Index of Persons
Page numbers are for the submitted manuscript, and will certainly change in the
finalized book (after the editing process by Springer).

Index

Akhtar, M.W., 131 Dalal, N., 394


Alempijevic, A., 368 Dalton, J., 32
Appel, K., 194 Daniilidis, K., 233
Atiquzzaman, M., 131 Davies, M.E., 3
Descartes, R., 16
Badino, H., 383 Destefanis, E., 344
Baker, H.H., 313 Dissanayake, G., 368
Bay, H., 352 Drummond, T., 72
Bayer, B., 225 Duda, R.O., 95, 127
Bayes, T., 195
Bellman, R., 308 Epanechnikov, V.A., 185
Benham, C., 34 Euclid of Alexandria, 58
Betti, E., 254 Euler, L., 16, 163, 253, 260
Binford, T.O., 313
Bolles, R.C., 345 Felzenszwalb, P.F., 324
Borgefors, G., 113 Feynman, R.P., 33
Bouget, J.-Y., 239 Fischler, M.A., 345
Bradski, G., 187, 354 Fourier, J.B.J., 15
Breiman, L., 413 Frankot, R.T., 285
Brouwer, L.E.J., 96, 255 Frenet, J.F., 110
Brox, T., 161 Freund, Y., 402
Bruhn, A., 161 Fua, P., 354
Burr, D.C., 27 Fukunaga, K., 181
Burt, P.J., 78
Gabor, D., 85
Calonder, M., 354 Gall, J., 419
Canny, J., 66 Gauss, C.F., 19, 59, 205, 257, 259
Chellappa, R., 285 Gawehn, I., 254
Cheng, Y., 181 Gehrig, S., 383
Comaniciu, D., 181 Georgescu, B., 80
Cooley, J.M., 19 Gerling, C.L., 205
Crow, F.C., 55 Gibbs, J.W., 195
Crowley, J.L., 79 Gray, F., 264

Grimson, W.E.L., 216 Lambert, J.H., 276, 278


Laplace, P. S. Marquis de, 67
Haar, A., 396 Leibe, B., 419
Hadamard, J., 396 Leibler, R., 353
Haken, W., 194 Leighton, R.B., 3
Halmos, P.R., 3 Lempitsky, V., 419
Hamming, R.W., 301 Leovy, C.B., 3
Harris, C., 70 Lepetit, V., 354
Hart, P.E., 95, 127 Lewis, J.P., 55
Hartley, R., 247 Lewis, P.A., 19
Harwood, D., 353 Lindeberg, T., 79, 341
He, D.C., 353 Listing, J.B., 92
Hermann, S., 323 Longuet-Higgins, H.C., 246
Herriman, A.G., 3 Lowe, D.G., 349
Hertz, H., 139 Lucas, B.D., 156, 360
Hesse, L.O., 69, 87, 364 Luong, Q.T., 246
Hilbert, D., 58
Hildreth, E., 75 Markov, A.A., 195
Hirata, T., 113 Marr, D., 75, 335
Hirschmüller, H., 322 Martelli, A., 309
Horn, B.K.P., 147 Meer, P., 80, 181
Horowitz, N.H., 3 Meusnier, J.B.M., 260
Hostetler, L.D., 181 Montanari, U., 309
Hough, P.V.C, 127 Morales, S., iv, 328
Hu, M.K., 124 Morrone, M.C., 27
Huang, F., 233, 252 Munson, J.H., 95
Huttenlocher, D.P., 324
Newton, I., 361
Ishihara, S., 32 Niépce, N., 221
Itten, J., 39
Ohta, Y., 313
Jacobi, C.G.J., 151, 205, 363 Ojala, T., 353
Jones, M., 55, 397 Otsu, N., 174
Jordan, C., 95, 106, 111, 254 Owens, R.A., 27

Kaehler, A., 187 Papenberg, N., 161


Kalman, R.E., 376 Parseval, M.-A., 21
Kanade, T., 156, 313, 360 Peano, G., 58, 106
Kanatani, K., 247 Pfaltz, J.L., 95, 113
Kehlmann, D., 59 Pietikäinen, M., 353
Kitaoka, A., 36 Potts, R.B., 197
Klette, G., iv, 112
Klette, R., 50, 108, 173, 233, 252, 285, 323, Rabaud, V., 354
328, 344, 358 Rademacher, H., 396
Kodagoda, S., 368 Radon, J., 127
Konolige, K., 354 Raphson, J., 361
Kovalevsky, V.A., 309 Rosenfeld, A., 78, 95, 108, 113, 134, 254
Kovesi, P.D., 85 Ross, J.R., 27
Kullback, S., 353 Rosten, E., 72
Rublee, E., 354
Lagrange, J.-L., 163 Russell, J., 173

Saito, T., 113 Tomasi, C., 360


Sanchez, J.A., 344 Toriwaki, J., 113
Sanderson, A.C., 79 Triggs, B., 394
Schapire, R., 402 Tukey, J.W., 19
Scheibe, K., 233, 252 Tuytelaars, T., 352
Schiele, B., 419
Schunck, B.G., 147 Van Gool, L., 352
Sehestedt, S., 368 Vaudrey, T., 383
Seidel, P.L. von, 205 Viola, P., 55, 397
Shannon, C. E., 409 Voss, K., 103
Shin, B.-S., 173
Shum, H.Y., 324 Walsh, J.L., 396
Skarbek, W., iv Wang, L., 353
Smith, B.A., 3 Warren, H.S., 301
Wei, T., 285
Sobel, I.E., 66
Weickert, J., 161
Stauffer, C., 216
Welch, P.D., 19
Stephens, M., 70
Winnemöller, H., 175
Strecha, C., 354
Witkin, A.P., 79
Sun, J., 324
Svoboda, T., 362 Young, A.T., 3
Swerling, P., 376 Young, D.W., 205

Tao, J., 422 Zamperoni, P., 50, 88


Tarjan, R., 218 Zeng, Y., 358
Taylor, B., 145 Zheng, N.N., 324
Tee, G., iv Zheng, Y., 173
Thiele, T.N., 376 Zisserman, A., 247

