Photometric Image Formation

This document summarizes key concepts in photometric image formation:
- Image irradiance is proportional to scene radiance under the thin-lens projection model, and the measured pixel intensity depends on this irradiance.
- Common camera sensors include color filter arrays, which require demosaicing to obtain a full-color image, and 3-sensor prism cameras, which give better color separation.
- Light striking a surface may be reflected, transmitted, scattered, or absorbed; computer vision typically assumes that all light leaving a surface point is due to light arriving at that same point.

Uploaded by

saranraj
Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
14 views

Photometric Image Formation

This document summarizes key concepts in photometric image formation: - Image irradiance is proportional to scene radiance based on the lens projection model. The measured pixel intensity then depends on this irradiance. - Common camera sensors include color filter arrays that require demosaicing to obtain a full color image, and 3-sensor prism cameras for better color separation. - Factors that affect light when it strikes a surface include reflection, transmission, scattering, and absorption, though computer vision typically assumes light leaves in the arrival direction.

Uploaded by

saranraj
Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 63

Photometric Image Formation

Computer Vision I
CSE 252A
Lecture 3

CSE 252A, Fall 2021 Computer Vision I


Announcements
• Assignment 0 is due Oct 6, 11:59 PM
• Assignment 1 will be released Oct 6
– Due Oct 20, 11:59 PM
• Reading:
– Szeliski
• Section 2.2

CSE 252A, Fall 2021 Computer Vision I


Geometric image formation

CSE 252A, Fall 2021 Computer Vision I


The projective camera
• Extrinsic parameters: Since the camera may not be at the origin, there is a rigid transformation between the world coordinates and the camera coordinates
• Intrinsic parameters: Since scene units (e.g., cm) differ from image units (e.g., pixels) and the coordinate system may not be centered in the image, we capture this with a 3x3 transformation comprising the focal length, principal point, pixel aspect ratio, and skew

(Projection matrix = intrinsic parameters × extrinsic parameters)
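As a reminder (standard notation, not written out on this slide), the projective camera can be summarized as

  x ≅ K [R | t] X

where X is a world point in homogeneous coordinates, [R | t] is the rigid extrinsic transformation (rotation and translation from world to camera coordinates), and K is the 3x3 intrinsic matrix

  K = [ fx  s  cx ]
      [  0  fy  cy ]
      [  0   0   1 ]

with focal lengths fx, fy (whose ratio encodes the pixel aspect ratio), principal point (cx, cy), and skew s.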
CSE 252A, Fall 2021 Computer Vision I
Photometric image formation

CSE 252A, Fall 2021 Computer Vision I


Beyond the Pinhole Camera
Getting more light – Bigger Aperture

CSE 252A, Fall 2021 Computer Vision I


Pinhole Camera Images with Variable
Aperture

Aperture diameters: 2 mm, 1 mm, 0.6 mm, 0.35 mm, 0.15 mm, 0.07 mm
CSE 252A, Fall 2021 Computer Vision I
The reason for lenses
We need light, but big pinholes cause blur.

CSE 252A, Fall 2021 Computer Vision I


Thin Lens

(Figure: thin lens with optical center O and the optical axis)

• Rotationally symmetric about optical axis


• Spherical interfaces

CSE 252A, Fall 2021 Computer Vision I


Thin Lens: Center

• All rays that enter the lens along a line pointing at the optical center O emerge in the same direction

CSE 252A, Fall 2021 Computer Vision I


Thin Lens: Focus

Parallel rays entering the lens pass through the focus F

CSE 252A, Fall 2021 Computer Vision I


Thin Lens: Image of Point

All rays passing through the lens and starting at P converge upon P'.
So the light-gathering capability of the lens is given by the area of the lens, and all of these rays focus on P' instead of becoming blurred as with a large pinhole.
CSE 252A, Fall 2021 Computer Vision I
Thin Lens: Image of Point

(Figure: point P at depth −Z focuses to P' at depth Z' behind a lens of focal length f.)

  1/z' − 1/z = 1/f

Relation between the depth of the point (−Z) and the depth (Z') where it focuses.
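A quick numerical sanity check of this relation (a minimal Python sketch; the function name and sign convention are illustrative, not from the lecture):

def focus_depth(z, f):
    # Solve 1/z' - 1/z = 1/f for z', the depth behind the lens at which a
    # point at depth z comes into focus. Here z is negative for points in
    # front of the lens, matching the -Z label in the figure.
    return 1.0 / (1.0 / f + 1.0 / z)

# Example: a point 1 m in front of a 50 mm lens focuses about 52.6 mm behind it.
print(focus_depth(z=-1.0, f=0.05))  # ~0.0526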
CSE 252A, Fall 2021 Computer Vision I
Thin Lens: Image Plane

(Figure: P and Q at different depths; P focuses to P' on the image plane, while Q focuses to Q', which does not lie on the image plane.)
A price: Whereas the image of P is in focus,
the image of Q is not

CSE 252A, Fall 2021 Computer Vision I


Thin Lens: Aperture

• Smaller aperture -> less blur
• Pinhole -> no blur

CSE 252A, Fall 2021 Computer Vision I


Photometric image formation
• Light incident on a given pixel

CSE 252A, Fall 2021 Computer Vision I


Measuring Angle

• The solid angle subtended by an object from a point P is the area of the projection of the object onto the unit sphere centered at P
• Definition is analogous to projected angle in 2D
• Measured in steradians, sr
• If I am at P and I look out, the solid angle tells me how much
of my view is filled with an object
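A useful rule of thumb (standard result, not stated on the slide): a small patch of area dA at distance r from P, whose normal makes angle θ with the line of sight, subtends a solid angle of approximately

  dω ≈ dA cos θ / r^2

and the full sphere around P subtends 4π sr.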
CSE 252A, Fall 2021 Computer Vision I
Radiance
• Power traveling at some point in a specified direction, per unit area perpendicular to the direction of travel, per unit solid angle
– Units: watts per square meter per steradian, W/m2/sr = W m-2 sr-1

Irradiance
• Total power arriving at the surface (from all incoming angles)
– Units: power per unit area, W/m2 = W m-2

(Figure: radiance arriving at a surface patch dA around point x from a direction (θ, φ) different from the surface normal, expressed in spherical coordinates.)
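The two quantities are related by integrating radiance over the hemisphere of incoming directions (standard relation, not written out on the slide):

  E(x) = ∫ L(x, θ, φ) cos θ dω

where the integral is over the hemisphere above the surface and the cos θ factor accounts for the foreshortening of the patch dA with respect to each incoming direction.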
CSE 252A, Fall 2021 Computer Vision I
Visible Light Spectrum

CSE 252A, Fall 2021 Computer Vision I


Camera sensor
• Measured pixel intensity is a function of irradiance E integrated over
– the pixel's area (x, y), weighted by the spatial response of the pixel
– a range of wavelengths λ, weighted by the spectral response of the pixel
– some period of time t

• Ideally, the camera response function R is linear in the radiance, but it may not be
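Written out, the bullets above amount to an integral of roughly this form (the weighting functions are assumptions made here for illustration, not given on the slide):

  I(i, j) ≈ ∫∫∫∫ E(x, y, λ, t) s(x, y) q(λ) dx dy dλ dt

where s(x, y) is the pixel's spatial response, q(λ) is its spectral response, and the integrals run over the pixel's footprint, the sensed wavelengths, and the exposure time.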

CSE 252A, Fall 2021 Computer Vision I


Image irradiance is proportional to scene radiance
For a camera with a thin lens, it can be shown that

  E(x) = kL L

where
• E(x) is the image irradiance at image point x
• L is the radiance coming from the scene point projecting to image point x
• kL is a proportionality constant that may depend on the lens and may be a function of x

Combined with the linear sensor model, we have

  I = kc kL L

In other words, the measured pixel intensity is proportional to the radiance.
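For reference, the classical thin-lens derivation (see Szeliski, Section 2.2) gives an explicit form for the constant; this is stated here for completeness rather than taken from the slide:

  kL = (π / 4) (d / f)^2 cos^4 α

where d is the aperture diameter, f is the focal length, and α is the angle between the pixel's viewing ray and the optical axis; the cos^4 α term is one source of vignetting toward the image corners.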
CSE 252A, Fall 2021 Computer Vision I
Image acquisition

CSE 252A, Fall 2021 Computer Vision I


Color Cameras
Eye:
Three types of Cones

Cameras:
1. Filter wheel
2. Prism (with 3 sensors)
3. Filter mosaic
… and X3
CSE 252A, Fall 2021 Computer Vision I
Filter wheel
Rotate multiple filters in front of lens
Allows more than 3 color bands

Only suitable for static scenes

CSE 252A, Fall 2021 Computer Vision I


Prism color camera
Separate light into 3 beams using a dichroic prism
Requires 3 sensors & precise alignment
Good color separation

CSE 252A, Fall 2021 Computer Vision I


Filter mosaic
Coat the filter mosaic directly on the sensor

Demosaicing is then needed to obtain a full-color, full-resolution image (see the sketch below)
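A minimal bilinear-interpolation demosaicing sketch, assuming an RGGB Bayer layout (the layout, function name, and use of SciPy here are illustrative assumptions, not part of the lecture):

import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    # raw: 2-D array of sensor values captured under an RGGB Bayer mosaic.
    # Returns an H x W x 3 RGB image in which missing samples are filled in
    # by averaging the available neighbors of each color channel.
    h, w = raw.shape
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    rgb = np.zeros((h, w, 3))
    box = np.ones((3, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        samples = np.where(mask, raw, 0.0)
        total = convolve2d(samples, box, mode='same')              # sum of nearby samples
        count = convolve2d(mask.astype(float), box, mode='same')   # how many were available
        interp = total / np.maximum(count, 1e-8)
        rgb[..., c] = np.where(mask, raw, interp)                  # keep measured values as-is
    return rgb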

CSE 252A, Fall 2021 Computer Vision I


Color CMOS sensor
Foveon’s X3

smarter pixels
better image quality

CSE 252A, Fall 2021 Computer Vision I


Light at surfaces
Many effects when light strikes a surface -- it could be:
• Reflected
– Mirror
• Transmitted
– Skin, glass
• Scattered
– Milk
• Travel along the surface and leave at some other point
• Absorbed

We will assume:
• All the light leaving a point is due to light arriving at that point
• Surfaces don't fluoresce
– e.g., scorpions, detergents
• Surfaces don't emit light (i.e., are cool)

CSE 252A, Fall 2021 Computer Vision I


Light at surfaces

CSE 252A, Fall 2021 Computer Vision I


BRDF
• Bi-directional Reflectance Distribution Function
  ρ(θin, φin; θout, φout)
• Function of
– Incoming light direction: (θin, φin)
– Outgoing light direction: (θout, φout)
• Ratio of emitted (reflected) radiance to incident irradiance
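In symbols, the standard definition (stated here for concreteness) is

  ρ(θin, φin; θout, φout) = dLout(θout, φout) / dEin(θin, φin)
                          = dLout(θout, φout) / ( Lin(θin, φin) cos θin dωin )

so the BRDF tells us how much radiance leaves in the outgoing direction per unit of irradiance arriving from the incoming direction.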

CSE 252A, Fall 2021 Computer Vision I


Lighting, reflectance, and shading

BRDF

CSE 252A, Fall 2021 Computer Vision I


Specular reflection
• Ideal specular reflection is mirror reflection
– Perfectly smooth surface
– Incoming light ray is bounced in single
direction
– Angle of incidence equals angle of reflection

CSE 252A, Fall 2021 Computer Vision I


Specular Reflection: Smooth Surface

(Figure: incident direction ωi at angle θi and reflected direction ωo at angle θo, both measured from the surface normal N.)
• N, ωi, ωo are coplanar


• θi = θo

Speculum – Latin for “Mirror”
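A minimal sketch of computing the mirror direction, assuming wi points away from the surface toward the light and n is the surface normal (the function name is illustrative):

import numpy as np

def reflect(w_i, n):
    # Ideal specular (mirror) reflection: the outgoing direction lies in the
    # plane spanned by w_i and n, with the angle of reflection equal to the
    # angle of incidence.
    w_i = np.asarray(w_i, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return 2.0 * np.dot(n, w_i) * n - w_i

# Example: light arriving at 45 degrees in the x-z plane is mirrored to the other side.
w_o = reflect(np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0), [0.0, 0.0, 1.0])
# w_o is approximately [-0.707, 0, 0.707]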


CSE 252A, Fall 2021 Computer Vision I
Diffuse surface
• Ideal diffuse material reflects light equally
in all directions
• View-independent
• Matte, not shiny materials
– Paper
– Unfinished wood
– Unpolished stone

CSE 252A, Fall 2021 Computer Vision I


Diffuse reflection
• Beam of parallel rays shining on a surface
– Area covered by beam varies with the angle between the beam and the normal
– The larger the area, the less incident light per area
– Incident light per unit area is proportional to the cosine of the angle between the
normal and the light rays
• Object darkens as normal turns away from light
• Lambert’s cosine law (Johann Heinrich Lambert, 1760)
• Diffuse surfaces are also called Lambertian surfaces


CSE 252A, Fall 2021 Computer Vision I


Lambertian (Diffuse) Reflection
The intensity (irradiance) I(u,v) of a pixel at (u,v) is:

  I(u,v) = a(u,v) s0 ( n(u,v) · s )

• a(u,v) is the albedo of the surface point projecting to (u,v)
• n(u,v) is the unit direction of the surface normal
• s0 is the light source intensity
• s is the unit direction to the light source

Do not allow values of n(u,v) · s less than 0 (light is behind the surface).
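A minimal vectorized sketch of this shading equation (the array names and shapes are assumptions made for illustration):

import numpy as np

def lambertian_image(albedo, normals, s_dir, s0=1.0):
    # albedo:  H x W array of albedos a(u, v)
    # normals: H x W x 3 array of unit surface normals n(u, v)
    # s_dir:   3-vector giving the direction to the light source
    s_hat = np.asarray(s_dir, dtype=float)
    s_hat = s_hat / np.linalg.norm(s_hat)
    n_dot_s = np.einsum('ijk,k->ij', normals, s_hat)     # n(u,v) . s at every pixel
    return albedo * s0 * np.clip(n_dot_s, 0.0, None)     # clamp: light behind the surface gives 0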
CSE 252A, Fall 2021 Computer Vision I
Glossy surface
• Assume surface composed of small mirrors with random
orientation (micro-facets)
• Smooth surfaces
– Micro-facet normals close to surface normal
– Sharp highlights
• Rough surfaces
– Micro-facet normals vary strongly
– Blurry highlight
(Images, in order of increasing roughness: polished, smooth, rough, very rough.)
CSE 252A, Fall 2021 Computer Vision I
Glossy reflection
• Expect most light to be reflected in mirror
direction
• Because of micro-facets, some light is
reflected slightly off ideal reflection
direction
• Reflection
– Brightest when view vector is aligned with
reflection
– Decreases as angle between view vector and
reflection direction increases
CSE 252A, Fall 2021 Computer Vision I
Phong reflectance model

Phong Lobe
(the lobe illustrates brightness in a direction)
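One common way to write the Phong specular term (a standard form, stated here for concreteness; the slide shows the lobe only graphically):

  Is = ks s0 ( v · r )^α

where r is the mirror reflection of the light direction about the surface normal, v is the unit view direction, the dot product is clamped at 0, and larger exponents α give tighter, shinier highlights.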

CSE 252A, Fall 2021 Computer Vision I


CSE 252A, Fall 2021 Computer Vision I
General BRDF

Example: velvet

Portrait of Sir Thomas More, Hans Holbein the Younger, 1527

CSE 252A, Fall 2021 Computer Vision I


Isotropic BRDF

From Hertzmann & Seitz, CVPR’03


Isotropic BRDF’s are symmetric about the surface normal. If
the surface is rotated about the normal for the same incident
and emitting directions, the value of the BRDF is the same.
CSE 252A, Fall 2021 Computer Vision I
Anisotropic BRDF

From Hertzmann & Seitz, CVPR’03

CSE 252A, Fall 2021 Computer Vision I


Ways to measure BRDFs

CSE 252A, Fall 2021 Computer Vision I


Gonioreflectometers
• Three degrees of freedom spread among
light source, detector, and/or sample

CSE 252A, Fall 2021 Computer Vision I


Gonioreflectometers
• Three degrees of freedom spread among
light source, detector, and/or sample

CSE 252A, Fall 2021 Computer Vision I


Gonioreflectometers
• Can add fourth degree
of freedom to measure
anisotropic BRDFs

CSE 252A, Fall 2021 Computer Vision I


Marschner’s Image-Based
BRDF Measurement
• For uniform BRDF, capture 2-D slice
corresponding to variations in normals

CSE 252A, Fall 2021 Computer Vision I


Ward’s BRDF Measurement Setup
• Collect reflected light with hemispherical
(should be ellipsoidal) mirror

CSE 252A, Fall 2021 Computer Vision I


Ward’s BRDF Measurement Setup
• Result: each image captures light at all
exitant angles

CSE 252A, Fall 2021 Computer Vision I


Light sources and shading
• How bright (or what color) are objects?

• One more definition: Exitance of a source is the internally generated power radiated per unit area on the radiating surface
• Also referred to as radiant emittance
• Similar to irradiance
– Same units, W/m2 = W m-2

CSE 252A, Fall 2021 Computer Vision I


Light
• Special light sources
– Point sources
– Distant point sources
– Area sources

CSE 252A, Fall 2021 Computer Vision I


Point light source
• Similar to light bulbs
• An infinitesimally small point that radiates
light equally in all directions
– Light vector varies across receiving surface
– Intensity drops off proportionally to the inverse
square of the distance from the light
• Reason for inverse square falloff: the surface area of a sphere is A = 4πr^2

CSE 252A, Fall 2021 Computer Vision I


Standard nearby point source model
• N is the surface normal
• ρ is diffuse (Lambertian) albedo
• S is source vector - a vector from x to the
source, whose length is the intensity term
  ρd(x) ( N(x)^T S(x) ) / r(x)^2

Remember, do not allow angles less than 0 (light is behind surface).
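A minimal per-point sketch of this model (variable names are illustrative assumptions):

import numpy as np

def shade_nearby_point(x, n_hat, light_pos, albedo, intensity=1.0):
    # S(x) points from the surface point x toward the light source; brightness
    # falls off with the inverse square of the distance r(x) to the source.
    s = np.asarray(light_pos, dtype=float) - np.asarray(x, dtype=float)
    r = np.linalg.norm(s)
    n_dot_s = np.dot(np.asarray(n_hat, dtype=float), s / r)
    return albedo * intensity * max(n_dot_s, 0.0) / r**2   # clamp: light behind surface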

CSE 252A, Fall 2021 Computer Vision I
Light from a distant source
• Note, if light is very far away, then view light as
coming from a direction in 3D
• Directional light source
– Light rays are parallel
– Direction and intensity are the same everywhere
– As if the source were infinitely far away

  ρd(x) ( N(x)^T S(x) )

Remember, do not allow angles less than 0 (light is behind surface).

CSE 252A, Fall 2021 Computer Vision I


Shadows
• Give additional cues on scene lighting

CSE 252A, Fall 2021 Computer Vision I


Shadows
• Contact points
• Depth cues

CSE 252A, Fall 2021 Computer Vision I


Shadows cast by a point source
• A point that cannot see the source is in shadow
• For point sources, two types of shadows: cast
shadows & attached shadows

Cast Shadow

Attached Shadow
CSE 252A, Fall 2021 Computer Vision I
Terminology
Umbra: fully shadowed region
Penumbra: partially shadowed region
(Figure: an area light source, an occluder, and a receiver; the shadow cast on the receiver has an umbra and a penumbra.)
CSE 252A, Fall 2021 Computer Vision I
Penumbra and Umbra

CSE 252A, Fall 2021 Computer Vision I


Hard and soft shadows
• Point and directional lights lead to hard shadows,
no penumbra
• Area light sources lead to soft shadows, with
penumbra
(Figure: shadows cast under point, directional, and area light sources, with the umbra and penumbra labeled.)
CSE 252A, Fall 2021 Computer Vision I
Hard and soft shadows

Hard shadow from a point light source; soft shadow from an area light source

CSE 252A, Fall 2021 Computer Vision I


Next Lecture
• Photometric Stereo
• Reading:
– Szeliski
• Section 13.1.1

CSE 252A, Fall 2021 Computer Vision I
