Final - Demo Camera Calib PDF
Written by:
Andrew Abril
Jose Rafael Caceres
May 2, 2011
Contents
1 Introduction
  1.1 Overview
  1.2 Objective
2 Procedure
  2.1 OpenCV and OpenGL on the iPhone
    2.1.1 OpenCV
    2.1.2 OpenGL
  2.2 Calibrated System
    2.2.1 Intrinsic Calibration
    2.2.2 Extrinsic Calibration
    2.2.3 Optical Flow
    2.2.4 Enhancements
  2.3 OpenGL
    2.3.1 OpenGL - Calibrated
    2.3.2 Blender
3 Experimental Results
  3.1 Calibrated Results
    3.1.1 Initial Intrinsic Calibration
    3.1.2 Extrinsic Calibration Ambiguity
    3.1.3 OpenCV Pose Problem
    3.1.4 OpenGL Pose Problem
4 Discussion
  4.1 Overview
  4.2 Uncalibrated - GRIC
  4.3 Stereo reconstruction - 3D world points
  4.4 OpenGL
5 Current Computer Vision Trends
References
6 What we did
  6.1 Andrew Abril
  6.2 Jose Rafael Caceres
7 Appendix
1 Introduction
1.1 Overview
With the rising popularity of computer vision on smart phones, there has been a need
for new ways to interact with the phone and its software. The recent trend has been
toward augmented reality, which brings virtual scenes into reality. There has been
little attempt to do the opposite: bringing the user into the virtual world. [2]
1.2 Objective
To achieve our objective, these goals must be met:
• Capture the iPhone's camera pose using OpenCV with a chessboard as a marker.
• Transform the captured poses from the real world into the virtual world using
OpenGL.
• Use OpenGL to create a virtual world with one camera as the viewpoint. The
virtual world will be displayed on the iPhone and respond to the user's movements.
2 Procedure
2.1 OpenCV and OpenGL on the iPhone
Originally, both OpenCV and OpenGL were mainly used on desktops and laptops in
order to take advantage of their processing power. Recently, there has been increasing
demand to use these libraries on mobile devices for many applications.
2.1.1 OpenCV
Since porting the OpenCV library to the iPhone is such a new idea, there is little
documentation available for it. Many of the pre-compiled projects that include the
OpenCV libraries expose only a subset of OpenCV's functions. This is mainly due to
Apple only recently allowing applications to use the iPhone's camera. The most stable
version of OpenCV used on the iPhone is OpenCV 2.0, limiting the functions available
compared with the current version, OpenCV 2.2. The version used in iWorld is
OpenCV 2.1, which allowed the use of some of OpenCV's defined flags; however, many
of the functions introduced in OpenCV 2.1 could not be used for technical reasons.
Many of OpenCV's functions were designed to take advantage of a 32-bit processor,
and they suffer in performance when implemented on an iPhone. Specifically, the image
type OpenCV uses (IplImage) is different from the iPhone's (UIImage). The conversion
between these two image types is an important factor in the application because OpenCV
does not accept UIImages as inputs to its functions, and the iPhone does not display
IplImages on the screen. Another operation that slowed down the application was finding
the chessboard every frame; compared to a laptop, the iPhone ran at a much lower speed.
To fix this problem, optical flow was used to track the features after they were initially
found with OpenCV's chessboard detector, as sketched below.
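A minimal sketch of this detect-then-track loop follows. The image names, board dimensions, and the re-detection rule are illustrative assumptions; the actual per-frame routine used in iWorld is listed in the Appendix.

#include <opencv/cv.h>
#include <string.h>

#define ROWS 6                                /* inner corners per row (illustrative) */
#define COLS 9                                /* inner corners per column (illustrative) */
#define BOARD_POINTS (ROWS * COLS)

static int needToInit = 1;                    /* 1 until the chessboard has been found */
static CvPoint2D32f corners[BOARD_POINTS];    /* corner locations carried between frames */

/* prevGray and currGray are single-channel 8-bit images of consecutive frames. */
void trackChessboard(IplImage *prevGray, IplImage *currGray)
{
    if (needToInit) {
        /* Full chessboard search: expensive, so only run when tracking is lost. */
        int cornerCount = 0;
        int found = cvFindChessboardCorners(currGray, cvSize(ROWS, COLS), corners,
                                            &cornerCount, CV_CALIB_CB_ADAPTIVE_THRESH);
        needToInit = !(found && cornerCount == BOARD_POINTS);
    } else {
        /* Cheap pyramidal Lucas-Kanade tracking of the previously found corners. */
        CvPoint2D32f tracked[BOARD_POINTS];
        char status[BOARD_POINTS];
        float errors[BOARD_POINTS];
        CvSize pyrSize = cvSize(currGray->width + 8, currGray->height / 3);
        IplImage *pyrA = cvCreateImage(pyrSize, IPL_DEPTH_32F, 1);
        IplImage *pyrB = cvCreateImage(pyrSize, IPL_DEPTH_32F, 1);
        cvCalcOpticalFlowPyrLK(prevGray, currGray, pyrA, pyrB, corners, tracked,
                               BOARD_POINTS, cvSize(5, 5), 3, status, errors,
                               cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.3), 0);
        for (int i = 0; i < BOARD_POINTS; ++i)
            if (!status[i]) needToInit = 1;   /* any lost corner forces a re-detect */
        if (!needToInit)
            memcpy(corners, tracked, sizeof(tracked));
        cvReleaseImage(&pyrA);
        cvReleaseImage(&pyrB);
    }
}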
2.1.2 OpenGL
OpenGL is officially supported on the iPhone, but only as a trimmed-down version
named OpenGL ES. OpenGL ES only supports primitive vertex types that are triangles,
lines, or points. This makes drawing more cumbersome than in the original OpenGL,
which supports additional primitives such as quads. Another important capability that is
not available is setting up the perspective projection of an OpenGL "camera" directly
from a real camera's intrinsic parameters. OpenGL ES uses a different function,
glFrustumf (which takes different inputs), to set up this projection matrix; therefore it
was necessary to write an intermediate function that takes the real camera's intrinsic
parameters and converts them to parameters for glFrustumf. Yet, this function was easy
enough to implement, as many implementations are available.
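As a concrete illustration of that intermediate step, the sketch below shows one common mapping; the function name and the near/far planes are illustrative, and skew is ignored. The focal lengths and principal point in pixels define the edges of the near clipping plane that glFrustumf expects.

#include <OpenGLES/ES1/gl.h>

/* Map pinhole intrinsics (fx, fy, cx, cy, in pixels) and the image size onto
   the view volume expected by glFrustumf. Assumes no skew and that the image
   y-axis is flipped to match OpenGL's upward-pointing y-axis. */
static void setProjectionFromIntrinsics(float fx, float fy, float cx, float cy,
                                        float imageWidth, float imageHeight,
                                        float zNear, float zFar)
{
    float left   = -cx * zNear / fx;                  /* pixels left of the principal point */
    float right  = (imageWidth - cx) * zNear / fx;    /* pixels right of it */
    float top    = cy * zNear / fy;                   /* pixels above it (flipped) */
    float bottom = -(imageHeight - cy) * zNear / fy;  /* pixels below it (flipped) */

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(left, right, bottom, top, zNear, zFar);
}

With the intrinsic values listed in the Appendix (fx = fy ≈ 459.2 pixels and the principal point near the image center) and a near plane of 1, this produces frustum bounds close to the hard-coded glFrustumf values used in the rendering code.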
The intrinsic parameters were obtained using the MATLAB calibration toolbox. This
was done because the MATLAB toolbox provides more detailed information about the
calibration results than OpenCV's cvCalibrateCamera2() function; for instance,
MATLAB reports the axis orientation and the re-projection error. Using the intrinsic
values given by MATLAB, the extrinsic parameters were calculated every frame,
providing the rotation and translation of the world with respect to the camera.
Figure 1: Images of a chessboard at different orientations used to find the intrinsic
parameters.
Figure 2: This image shows the axis orientation of the iPhone camera, where the x-axis
lies along the vertical (becoming more positive going downward), the y-axis lies along
the horizontal (becoming more positive going to the right), and the z-axis points out of
the board.
The chessboard is first located within the image using OpenCV's chessboard detector.
After the corner points have been extracted, optical flow is used to track their affine
motion. If a tracked feature is not successfully found, OpenCV's chessboard detector is
called again to relocate the corner points.
2.2.4 Enhancements
RGB to Gray-scale:
The two main bottlenecks of the application were tracking features (finding the chess-
board) and converting the RGB (red, green, blue) image to a gray-scale image. OpenCV
requires gray-scale images for its feature finding and tracking algorithms; however, its
conversion from RGB to gray-scale was slow on a mobile device because the function was
optimized to run on a 32-bit processor. This conversion, combined with the iPhone's
limited processing power, caused the application to run extremely slowly. As a result,
another algorithm was implemented to remedy the problem.
To convert to gray-scale, a weighted sum of the RGB components of the image is taken.
OpenCV implements the weighted sum
Y = 0.299R + 0.587G + 0.114B, (1)
whereas the simple average
Y = (R + G + B)/3 (2)
was chosen here because it produced better speed while not compromising accuracy.
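A minimal sketch of this simple-average conversion is shown below; the buffer layout is assumed to be tightly packed 8-bit color (RGB or BGRA), and the names are illustrative. The per-pixel cost is two additions and one integer division, which is what made it cheaper on the device than the general-purpose OpenCV routine.

/* Convert an interleaved 8-bit color buffer to gray-scale using Y = (R + G + B) / 3.
   'channels' is 3 for RGB and 4 for RGBA/BGRA; any alpha byte is ignored. */
static void colorToGray(const unsigned char *color, unsigned char *gray,
                        int pixelCount, int channels)
{
    for (int i = 0; i < pixelCount; ++i) {
        const unsigned char *p = color + i * channels;
        gray[i] = (unsigned char)((p[0] + p[1] + p[2]) / 3);
    }
}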
UIImage to IplImage:
Another challenge was the conversion between UIImage and IplImage. The conventional
way of doing this is to get a Core Graphics image reference and draw it into the pixel
data used by the IplImage structure. This method proved too slow for the application
since it was called every frame. Instead, since there was direct access to the raw byte
data of each frame, this data was copied into a buffer and made into an IplImage
structure before executing OpenCV functions, avoiding the conversion from UIImage to
IplImage altogether.
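The sketch below illustrates the idea, assuming the capture callback exposes a BGRA base address and row stride; the names here are placeholders rather than the exact iWorld code. The bytes are copied row by row into an IplImage that is allocated once, so no UIImage or CGImage is ever created.

#include <opencv/cv.h>
#include <string.h>

/* Copy a raw BGRA camera frame into an existing 4-channel IplImage of the
   same dimensions. cvCreateImage may pad rows, so copy one row at a time. */
static void copyFrameToIplImage(const unsigned char *baseAddress, int bytesPerRow,
                                int width, int height, IplImage *dst)
{
    for (int y = 0; y < height; ++y)
        memcpy(dst->imageData + y * dst->widthStep,
               baseAddress + y * bytesPerRow,
               (size_t)width * 4);
}

The destination image would be created once with cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 4) and reused every frame.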
2.3 OpenGL
OpenGL (Open Graphics Library) is an open-source library, initially written in C, that
allows rendering of 2D and 3D graphics. OpenGL is used to translate the user's motion
into the virtual world. The key aspect in transforming a camera pose from OpenCV to
OpenGL is understanding the axis orientation of OpenCV (the real camera) and OpenGL
(the "virtual camera"), shown in Figure 3. This is what gives a realistic virtual
experience, as the virtual world moves according to the user's movements. [5]
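A minimal sketch of that hand-off, assuming column-major OpenGL ES matrices and illustrative names, is shown below. The rotation vector returned by cvFindExtrinsicCameraParams2 is expanded into a 3x3 matrix with cvRodrigues2 and packed, together with the translation, into a 4x4 modelview matrix; the result is then multiplied by the axis-remapping offset matrix (the Appendix uses one that swaps the x- and y-axes, matching the orientation in Figure 2) before being loaded with glMultMatrixf.

#include <opencv/cv.h>

/* Pack the OpenCV extrinsics (3x1 rotation vector, 1x3 translation vector)
   into a column-major 4x4 matrix suitable for OpenGL ES. */
static void poseToModelview(CvMat *rotationVector, CvMat *translationVector,
                            float modelview[16])
{
    float rdata[9];
    CvMat R = cvMat(3, 3, CV_32FC1, rdata);
    cvRodrigues2(rotationVector, &R, NULL);       /* rotation vector -> 3x3 matrix */

    for (int row = 0; row < 3; ++row) {
        for (int col = 0; col < 3; ++col)
            modelview[col * 4 + row] = CV_MAT_ELEM(R, float, row, col);
        modelview[12 + row] = CV_MAT_ELEM(*translationVector, float, 0, row);
        modelview[row * 4 + 3] = 0.0f;            /* bottom row: 0 0 0 1 */
    }
    modelview[15] = 1.0f;
}

The offset multiplication and the glMultMatrixf call themselves appear in the Appendix.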
3 Experimental Results
3.1 Calibrated Results
The following results are from using a fully calibrated iPhone, with the pose computed
at every frame using the chessboard calibration method.
Figure 4: The graph above shows the re-projection error for every image and every point
in each image, represented as colored crosses (each color is a unique image). This is the
geometric quantity that compares the measured points in the 2D image to the
re-projected points estimated from the calculated camera parameters.
The extrinsic calibration exhibited an orientation ambiguity: the change in origin would
sometimes switch the x-axis with the y-axis and vice versa. This problem was due to the
way OpenCV's FindChessboardCorners() method detected corners. If the chessboard was
square, there was an orientation ambiguity; however, if its width and height were of
different lengths, the ambiguity seemed to disappear. Figure 5 shows this ambiguity.
Figure 5: The figure above clearly shows that the origin orientation (which starts at
(0,0) from the first blue corner) changes drastically from frame ten (left) to frame
thirty-two (right). On frame ten, the origin starts in the upper-left corner, but in frame
thirty-two it starts in the upper-right corner.
Figure 6: The image on the left shows the OpenCV pose problem. The output iPhone
image is turned 90 degrees about the z-axis, which causes an incorrect estimation when
OpenCV calculates the rotation and translation. A simple rotation about the z-axis fixes
this problem, as seen in the image on the right.
Figure 7: The image on the left shows the OpenGL pose problem. Since the camera is
looking at the marker from a bird's-eye view, OpenGL renders the 3D objects in that
same way. This is not desirable and is fixed by multiplying the initial transformation
matrix with an offset to simulate human vision.
4 Discussion
4.1 Overview
In engineering, it is important to aim to make a project as close to perfect as possible,
but it is equally important to know when to compromise and adjust to meet specific
requirements. This was a major obstacle faced in this project: there were many methods
that we intended to implement but, because of time, could not finish.
4.4 OpenGL
The main extension for OpenGL would be to extend the virtual world into a true
exploration scene. The world could be a forest full of different grasses, shrubs, trees,
and wildlife, and the user would be able to move around and feel as if he or she were
actually in a forest. Also, real-life rotation and translation restrictions must be imposed
in the virtual world: the user should not be able to rotate and see underneath the floor
(x-axis) or translate into the air above the ground (y-axis).
5 Current Computer Vision Trends
As the sophistication and functionality of robots increase, robots are being incrementally
introduced into the social life of humans. Developers are able to use the complexity of
robotic systems to find new tasks for robots that would not have been possible decades
ago. This is often favorable because robots can replace humans in performing tedious
tasks; thus robots can increase productivity in a company, since robots do not get tired,
or enhance the life of a human by assisting in the task the human is performing. Yet
this adoption of robots, especially in the workplace, makes interaction between humans
and robots inevitable. Human-robot interaction has caused developers and researchers
to embark on the field of the ethics of robotics. The ethics of robotics is not the ethics
of robots but the ethical issues that designers and developers of robotic systems have to
consider. This issue is complicated because social views concerning the behavior of
robots may vary between persons. Developers cannot estimate in their controlled
environment how humans will react to a robot's behavior in the workplace. Hence,
ethics in robotics is a main issue that will have to be addressed in the years to come.

The human response to interacting with robots is often seen in workplaces that have
adopted robots to perform a particular task. Jeanne Dietsch, a member of the Industrial
Activity Board of the IEEE's Robotics and Automation Society, describes how robots in
the workplace demonstrate social behavior even though they have not been programmed
to, and how humans who interact with them respond differently depending on whether
they think the robot's actions are genuine and appropriate. Dietsch gives the example of
a study of a hospital robot, where different responses were found to robots that behaved
the same. In one department, housekeepers would bond with the robots to the point
that they would give them nicknames and would yearn for a robot if it went away for
repairs. Yet in the cancer ward the robot's behavior was seen as rude [3], because when
the robot arrived the housekeeper would be comforting a terminal patient; the robot was
seen as barging in while the housekeeper was engaged in a very emotional situation with
the patient. The housekeeper responded this way mainly because he thought the robot's
actions were intentional and because he believed that the robot ought to abide by some
social ethic.

Consequently, developers have to consider the social behavior of the robot they are
designing, yet they encounter several difficulties in this task. As explained, the developer
has to consider that there is not yet a consensus on what it means for a robot to behave
ethically. Another problem is that, though robots may have learning capabilities, the
developer is limited in his ability to anticipate and predict all the situations that the
robot may encounter [7]. Ultimately, the robot's social behavior will be a key point of
human-robot interaction.
Yet the ethical issues that come with the introduction of robotic systems into human
social life are not limited to the workplace. There has been a recent trend toward
developing anthropomorphic robots targeted at children and the elderly. The main
reason for this trend is to create care-giving robots that can satisfy the need for a
companion among the vulnerable members of society. Other reasons have been to
monitor the health of the elderly or to act as a nanny for children who require far more
attention than their parents may be able to provide. Nevertheless, this trend raises the
ethical issue of whether this is a form of deception and whether it is ethically acceptable;
that is, these robots are designed to create the illusion that a human relationship can be
formed with them.

Certainly, today's robots have not reached the level at which a normal person could
confuse one with another personal being. However, many of the robots developed for
children and the elderly do provide the illusion that they have some low level of
understanding and personality. Sony's artificial intelligence robotic dog (AIBO) is able
to mimic a normal dog to some degree: it can walk in a dog-like fashion and chase a
ball. It can also detect distance, acceleration, sound, vibration, pressure, and voice
commands, which enables the robot to recognize many situations and respond
adequately. Similarly, it is able to show a variety of expressions, like happiness, sadness,
fear, and surprise, through body movement and the color and shape of its eyes. Other
robots, such as the Hello Kitty robot, are marketed primarily to parents who are not
able to spend time with their children; that is, the robot will keep the child happy and
occupied.

The vulnerable youngest and the elderly are most affected by the anthropomorphism of
the robot, mainly because both have a strong need for social contact and lack the
technological knowledge behind the robot, that is, the knowledge that though the robot
may possess human characteristics, it is still not a personal being. It is worth noting
that the problem is not the anthropomorphic characteristics of the robot themselves.
Young children often pretend that their toys are actual beings, yet in that case the child
understands that it is just play time and that the toys themselves do not possess those
characteristics. Similarly, an elderly person with Alzheimer's may forget that the robot
is but a mimicker of human characteristics. [6]

There are several consequences to the anthropomorphism of these robotic systems.
Children can spend too much time with the robots and thus diminish their interaction
with other human beings, hurting their understanding of how to interact with other
humans; the care-giver has a strong influence on a child's development, since most of
the child's learning comes from mimicking it. Negative consequences can also be found
within the elderly group. If they start to imagine that they have a relationship with the
robot, they may start to think that they have to take care of the robot at the expense of
their own well-being. Similarly, the family of the elder may think that the robot satisfies
all of her needs for companionship, causing the elder to feel even more lonely. However,
not all consequences are negative: there are studies showing that robots can reduce
stress levels in the elderly, but these studies do suggest that the robot cannot substitute
for human interaction. [8]
References
[1] Mirza Tahir Ahmed, Matthew N. Dailey, José Luis Landabaso, and Nicolas Herrero.
Robust key frame extraction for 3D reconstruction from video streams. In VISAPP (1),
pages 231–236, 2010.
[2] Gary Bradski and Adrian Kaehler. Learning OpenCV. O'Reilly Media Inc., 2008.
[3] J. Dietsch. People meeting robots in the workplace [industrial activities]. Robotics
Automation Magazine, IEEE, 17(2):15–16, 2010.
[4] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision.
Cambridge University Press, ISBN: 0521540518, second edition, 2004.
[6] A. Sharkey and N. Sharkey. Children, the elderly, and interactive robots. Robotics
Automation Magazine, IEEE, 18(1):32–38, March 2011.
[8] G. Veruggio, J. Solis, and M. Van der Loos. Roboethics: Ethics applied to robotics
[from the guest editors]. Robotics Automation Magazine, IEEE, 18(1):21–22, March
2011.
6 What we did
6.1 Andrew Abril
The project was a 50/50 team effort. We wrote the code together (physically in the
same room), switching who typed whenever an idea struck. The only independent work
was research: I mostly researched how to improve the project in general (uncalibrated
methods, etc.), while my partner improved what was already done (performance issues).
7 Appendix
Extrinsic calibration using the chessboard method and optical flow to find + track:
/*
 Create buffers that are initialized only once.
 I used Mi to detect whether they have been initialized, though
 any one of them could have been used.
 Maybe I should have checked each one, but that seemed too much trouble.
*/
if (!Mi) {
    corners = (CvPoint2D32f *)cvAlloc(board * sizeof(corners[0]));
    image_points  = cvCreateMat(board, 2, CV_32FC1);
    object_points = cvCreateMat(board, 3, CV_32FC1);
    point_counts  = cvCreateMat(1, 1, CV_32SC1);
    distortion_coeffs   = cvCreateMat(5, 1, CV_32FC1);
    rotation_vectors    = cvCreateMat(3, 1, CV_32FC1);
    translation_vectors = cvCreateMat(1, 3, CV_32FC1);
    rotation_mat = cvCreateMat(3, 3, CV_32FC1);
    Mi = cvCreateMat(3, 3, CV_32FC1);
    flag = true;
}

// Mi values were calculated prior to this project in MATLAB
CV_MAT_ELEM(*Mi, float, 0, 0) = 459.2453331f;
CV_MAT_ELEM(*Mi, float, 0, 1) = 0.0f;
CV_MAT_ELEM(*Mi, float, 0, 2) = 218.273285f;
CV_MAT_ELEM(*Mi, float, 1, 0) = 0.0f;
CV_MAT_ELEM(*Mi, float, 1, 1) = 459.2453331f;
CV_MAT_ELEM(*Mi, float, 1, 2) = 178.969116f;
CV_MAT_ELEM(*Mi, float, 2, 0) = 0.0f;
CV_MAT_ELEM(*Mi, float, 2, 1) = 0.0f;
CV_MAT_ELEM(*Mi, float, 2, 2) = 1.0f;

// distortion coefficients
CV_MAT_ELEM(*distortion_coeffs, float, 0, 0) =  0.070969f;
CV_MAT_ELEM(*distortion_coeffs, float, 1, 0) =  0.777647f;
CV_MAT_ELEM(*distortion_coeffs, float, 2, 0) = -0.009131f;
CV_MAT_ELEM(*distortion_coeffs, float, 3, 0) = -0.013867f;
CV_MAT_ELEM(*distortion_coeffs, float, 4, 0) = -5.141519f;

// buffer
CV_MAT_ELEM(*point_counts, int, 0, 0) = board;

// undistort image
[self Undistort:imgB];

// find the chessboard points that will be tracked with optical flow
int success = cvFindChessboardCorners(
        imgB,
        cvSize(row, column),
        corners,
        &corner_count,
        CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FILTER_QUADS | CV_CALIB_CB_FAST_CHECK);
cvFindCornerSubPix(imgB, corners, corner_count, cvSize(11, 11), cvSize(-1, -1),
                   cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));

if ((success) && (corner_count == board)) {
    // set up the world points and image points for calibration
    for (int i = 0, j = 0; j < board; ++i, ++j) {
        CV_MAT_ELEM(*image_points, float, i, 0) = corners[j].x;
        CV_MAT_ELEM(*image_points, float, i, 1) = corners[j].y;
        // this should only run once in this for loop
        if (flag) {
            CV_MAT_ELEM(*object_points, float, i, 0) = j / column;
            CV_MAT_ELEM(*object_points, float, i, 1) = j % column;
            CV_MAT_ELEM(*object_points, float, i, 2) = 0.0f;
        }
    }
    NeedToInit = false;
}
}   // matching brace opened earlier in the source file (detection branch)
else {
    // track the initialized points
    if (corners) {
        char features_found[board];
        float feature_errors[board];
        int win_size = 5;
        CvSize pyr_sz = cvSize(imgA->width + 8, imgB->height / 3);
        IplImage *pyrA = cvCreateImage(pyr_sz, IPL_DEPTH_32F, 1);
        IplImage *pyrB = cvCreateImage(pyr_sz, IPL_DEPTH_32F, 1);
        CvPoint2D32f *cornersB = (CvPoint2D32f *)cvAlloc(board * sizeof(cornersB[0]));
        cvCalcOpticalFlowPyrLK(imgA,
                               imgB,
                               pyrA,
                               pyrB,
                               corners,
                               cornersB,
                               board,
                               cvSize(win_size, win_size),
                               3,
                               features_found,
                               feature_errors,
                               cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.3),
                               0);
        // draw optical flow
        /* for (int i = 0; i < corner_count; i++) {
            if (features_found[i]) {
                printf("Got it\n");
                CvPoint p0 = cvPoint(cvRound(corners[i].x), cvRound(corners[i].y));
                CvPoint p1 = cvPoint(cvRound(cornersB[i].x), cvRound(cornersB[i].y));
                cvLine(imgC, p0, p1, CV_RGB(255, 255, 255), 2);
            }
        } */
        // check the points
        int numOfSuccessfulPoints = 0;
        for (int k = 0; k < board; k++) {
            if (features_found[k] && feature_errors[k] < 550)
                numOfSuccessfulPoints++;
        }
        if (numOfSuccessfulPoints != board) NeedToInit = true;
        else {
            // create the image points
            for (int i = 0, j = 0; j < board; ++i, ++j) {
                CV_MAT_ELEM(*image_points, float, i, 0) = cornersB[j].x;
                CV_MAT_ELEM(*image_points, float, i, 1) = cornersB[j].y;
            }
        }
        cvReleaseImage(&imgA);
        imgA = imgB;
        corners = cornersB;
    }
    else
        NeedToInit = true;
}

// calibrate
if (!NeedToInit) {
    // find the extrinsics and output the values
    // solvePnP(object_points, image_points, Mi, distortion_coeffs,
    //          rotation_vectors, translation_vectors, true);
    cvFindExtrinsicCameraParams2(object_points, image_points,
                                 Mi, distortion_coeffs, rotation_vectors,
                                 translation_vectors);
    float element1 = CV_MAT_ELEM(*translation_vectors, float, 0, 0);
    float element2 = CV_MAT_ELEM(*translation_vectors, float, 0, 1);
    float element3 = CV_MAT_ELEM(*translation_vectors, float, 0, 2);
    float scale = 1.00;
    // set the rotation output
    // the x and the y are inversed like the translation
    // x
    // y
    // ...
    CameraPose.y.y = CV_MAT_ELEM(*rotation_mat, float, 1, 1);
    CameraPose.y.z = CV_MAT_ELEM(*rotation_mat, float, 2, 1);
    // z
    // output stuff
    float rotxx = CV_MAT_ELEM(*rotation_mat, float, 0, 0);
    float rotxy = CV_MAT_ELEM(*rotation_mat, float, 1, 0);
    float rotxz = CV_MAT_ELEM(*rotation_mat, float, 2, 0);
    float rotyx = CV_MAT_ELEM(*rotation_mat, float, 0, 1);
    float rotyy = CV_MAT_ELEM(*rotation_mat, float, 1, 1);
    float rotyz = CV_MAT_ELEM(*rotation_mat, float, 2, 1);
    float rotzx = CV_MAT_ELEM(*rotation_mat, float, 0, 2);
    float rotzy = CV_MAT_ELEM(*rotation_mat, float, 1, 2);
    float rotzz = CV_MAT_ELEM(*rotation_mat, float, 2, 2);
    if (Output)
        [Output release];
    Output = [[NSMutableString alloc] initWithFormat:@"The vector is:\n"];
    [Output appendFormat:@":%f, :%f, :%f\n", rotxx, rotyx, rotzx];
    [Output appendFormat:@":%f, :%f, :%f\n", rotxy, rotyy, rotzy];
    [Output appendFormat:@":%f, :%f, :%f\n", rotxz, rotyz, rotzz];

// create the resource manager
m_resource = CreateResource();

// Create the depth buffer.
glGenRenderbuffersOES(1, &m_depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES,
                         GL_DEPTH_COMPONENT16_OES,
                         width,
                         height);

// Create the framebuffer object; attach the depth and color buffers.
glGenFramebuffersOES(1, &m_framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, m_framebuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
                             GL_COLOR_ATTACHMENT0_OES,
                             GL_RENDERBUFFER_OES,
                             m_colorRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
                             GL_DEPTH_ATTACHMENT_OES,
                             GL_RENDERBUFFER_OES,
                             m_depthRenderbuffer);

// Bind the color buffer for rendering.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_colorRenderbuffer);

// set up for binding the textures
glGenTextures(3, &m_gridTexture[0]);

// load the cube texture
glBindTexture(GL_TEXTURE_2D, m_gridTexture[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
m_resource->LoadPngImage("purple.jpg");
void *pixels = m_resource->GetImageData();
ivec2 size = m_resource->GetImageSize();
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             size.x, size.y, 0, GL_RGBA,
             GL_UNSIGNED_BYTE, pixels);
m_resource->UnloadImage();

// load the cylinder texture
glBindTexture(GL_TEXTURE_2D, m_gridTexture[1]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
m_resource->LoadPngImage("yellow.jpg");
pixels = m_resource->GetImageData();
size = m_resource->GetImageSize();
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             size.x, size.y, 0, GL_RGBA,
             GL_UNSIGNED_BYTE, pixels);
m_resource->UnloadImage();

// load the floor texture
glBindTexture(GL_TEXTURE_2D, m_gridTexture[2]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
m_resource->LoadPngImage("green.jpg");
pixels = m_resource->GetImageData();
size = m_resource->GetImageSize();
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             size.x, size.y, 0, GL_RGBA,
             GL_UNSIGNED_BYTE, pixels);
m_resource->UnloadImage();

glViewport(0, 0, width, height);
glEnable(GL_DEPTH_TEST);

// set up the camera projection
glMatrixMode(GL_PROJECTION);
glFrustumf(-0.467889f, 0.467889f, -0.467889f, 0.467889f, 1, 1000);  // change me!!!
// initialize the rotation (offset) matrix
// x
offset.x.x = 0;
offset.x.y = 1;
offset.x.z = 0;
offset.x.w = 0;
// y
offset.y.x = -1;
offset.y.y = 0;
offset.y.z = 0;
offset.y.w = 0;
// z
offset.z.x = 0;
offset.z.y = 0;
offset.z.z = 1;
offset.z.w = 0;
// w
offset.w.x = 0;
offset.w.y = 0;
offset.w.z = 0;
offset.w.w = 1;

glMultMatrixf(Trans.Pointer());
// to make it look good
glTranslatef(-4, 9, 0);
glRotatef(-90, 0, 0, 1);
glRotatef(90, 0, 1, 0);
glRotatef(-10, 0, 0, 1);

// enable the vertex coordinate array when glDrawArrays is called
glEnableClientState(GL_VERTEX_ARRAY);
// enable the normal array when glDrawArrays is called
glEnableClientState(GL_NORMAL_ARRAY);
// enable the texture coordinate array when glDrawArrays is called
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);

// beginning of cube
// load the texture for the cube
glBindTexture(GL_TEXTURE_2D, m_gridTexture[0]);
// draw the cube
glPushMatrix();
glTranslatef(-2, 0.5, 1);
glVertexPointer(3, GL_FLOAT, 0, cubeVerts);
// glNormalPointer(GL_FLOAT, 0, bananaNormals);
glTexCoordPointer(2, GL_FLOAT, 0, cubeTexCoords);
glDrawArrays(GL_TRIANGLES, 0, cubeNumVerts);
glPopMatrix();
// end of cube

// beginning of cylinder
// load the texture for the cylinder
glBindTexture(GL_TEXTURE_2D, m_gridTexture[1]);
// draw the cylinder
glPushMatrix();
glTranslatef(3, 1, 1);
glVertexPointer(3, GL_FLOAT, 0, cylinderVerts);
glTexCoordPointer(2, GL_FLOAT, 0, cylinderTexCoords);
glDrawArrays(GL_TRIANGLES, 0, cylinderNumVerts);
glPopMatrix();
// end of cylinder

// beginning of floor
// load the texture for the floor
glBindTexture(GL_TEXTURE_2D, m_gridTexture[2]);
// draw the floor
glVertexPointer(3, GL_FLOAT, 0, planeVerts);
glTexCoordPointer(2, GL_FLOAT, 0, planeTexCoords);
glDrawArrays(GL_TRIANGLES, 0, planeNumVerts);
// end of floor

m_animation.Elapsed += timeStep;
if (m_animation.Elapsed >= AnimationDuration) {
    m_animation.Current = m_animation.End;
} else {
    float mu = m_animation.Elapsed / AnimationDuration;
    m_animation.Current = m_animation.Start.Slerp(mu, m_animation.End);
}
}
/*
Offset matrix
mat4 offset = { 0, -1, 0, 0,
                1,  0, 0, 0,
                0,  0, 1, 0,
                0,  0, 0, 1 };
*/
Trans = trans * offset;
// Trans = trans;
/* mat3 temp;
if (!Start && (trans.w.x != 0)) {
    Start = true;
    temp = trans.ToMat3();
    Last(temp.Transposed());
}
else {
    // extract the rotation from the trans
    temp = trans.ToMat3();
    mat4 temp2;
    temp2(temp);
    mat4 newRotation = temp2 * Last;
    // set back the rotation
    newRotation.w.x = trans.w.x;
    newRotation.w.y = trans.w.y;
    newRotation.w.z = trans.w.z;
    Trans = newRotation * offset;
}
*/
// presentTw
// pastTw
// pastTPresent
// WTpast * presentTW
}