
iWorld: A Virtual World using OpenCV and OpenGL

Written by:
Andrew Abril
Jose Rafael Caceres

May 2, 2011
Contents

1 Introduction
  1.1 Overview
  1.2 Objective

2 Procedure
  2.1 OpenCV and OpenGL on the iPhone
      2.1.1 OpenCV
      2.1.2 OpenGL
  2.2 Calibrated System
      2.2.1 Intrinsic Calibration
      2.2.2 Extrinsic Calibration
      2.2.3 Optical Flow
      2.2.4 Enhancements
  2.3 OpenGL
      2.3.1 OpenGL - Calibrated
      2.3.2 Blender

3 Experimental Results
  3.1 Calibrated Results
      3.1.1 Initial Intrinsic Calibration
      3.1.2 Extrinsic Calibration Ambiguity
      3.1.3 OpenCV Pose Problem
      3.1.4 OpenGL Pose Problem

4 Discussion
  4.1 Overview
  4.2 Uncalibrated - GRIC
  4.3 Stereo reconstruction - 3D world points
  4.4 OpenGL

5 Current Computer Vision Trends

References

6 What we did
  6.1 Andrew Abril
  6.2 Jose Rafael Caceres

7 Appendix
1 Introduction
1.1 Overview
With the rising popularity of computer vision on smart phones, there has been a
need to implement unique ways to interact with the phone and its software. The recent
trend has been toward augmented reality, which brings virtual scenes into reality.
There has been little attempt to do the opposite: bringing the user into the virtual world.
[2]

1.2 Objective
To achieve our objective, these goals must be met:

• Capture the iPhone camera's pose by using OpenCV and a chessboard as a marker.

• Use the captured poses to transform between the real world and the virtual world
using OpenGL.

• Use OpenGL to create a virtual world with one camera as the viewpoint. The
virtual world will be displayed on the iPhone depending on the user's movements.

2 Procedure
2.1 OpenCV and OpenGL on the iPhone
Traditionally, both OpenCV and OpenGL have mainly been used on desktops and laptops
in order to take advantage of their processing power. Recently, there has been increasing
demand to use these libraries on mobile devices for many applications.

2.1.1 OpenCV
Since porting the OpenCV library onto the iPhone is such a new idea, there is little
documentation available for it. Many of the pre-compiled projects that include the
OpenCV libraries only expose a subset of OpenCV's functions. This is mainly due to Apple
only recently allowing users to implement applications that use the iPhone's camera.
The most stable version of OpenCV used on the iPhone is OpenCV 2.0, limiting the
functions available from the current version OpenCV 2.2. The version used in iWorld is
OpenCV 2.1, which allowed the use of some of OpenCV’s defined flags; however, many
of the functions introduced in OpenCV 2.1 could not be used due to technical reasons.
Many of OpenCV's functions were designed to take advantage of a 32-bit processor, and
these functions suffer in performance when run on an iPhone. In addition, the image
type OpenCV uses (IplImage) is different from the iPhone's (UIImage). The conversion
between these two image types is an important factor in the application because OpenCV
does not accept UIImages as inputs to its functions, and the iPhone does not render
IplImages to the screen. Another operation that slowed down the application was finding
the chessboard every frame; compared to a laptop, the iPhone ran at a much lower speed.
In order to fix this problem, optical flow was used to track the features after they were
initially found with OpenCV's chessboard detector.

2.1.2 OpenGL
OpenGL is officially supported on the iPhone, but only as a trimmed-down version
named OpenGL ES. OpenGL ES only supports primitives that are triangles, lines, or
points. This makes drawing more cumbersome than in the original OpenGL, which supports
additional primitives such as quads. Another important capability that is not available
is setting up the perspective projection directly from a real-world camera's intrinsic
parameters for the OpenGL “camera.” OpenGL ES uses a different function, glFrustumf()
(which takes different inputs), to set up this projection matrix; therefore, it was
necessary to write an intermediate function that takes the real camera's intrinsic
parameters and converts them into the parameters used by glFrustumf(). This function was
easy enough to implement, as many implementations are available.
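
The following is a minimal sketch of such an intermediate function (not the exact helper used in iWorld), assuming the focal lengths fx, fy and principal point (cx, cy) come from the MATLAB calibration, (width, height) is the image size in pixels, and the near/far planes are chosen by the application:

// Sketch: map pinhole intrinsics onto glFrustumf's left/right/bottom/top.
void SetProjectionFromIntrinsics(float fx, float fy, float cx, float cy,
                                 int width, int height,
                                 float zNear, float zFar)
{
    // Scale the image-plane extents onto the near clipping plane.
    float left   = -cx * zNear / fx;
    float right  =  (width - cx) * zNear / fx;
    float bottom = -(height - cy) * zNear / fy;
    float top    =  cy * zNear / fy;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(left, right, bottom, top, zNear, zFar);
}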

2.2 Calibrated System


To provide a simple demonstration of how iWorld functions, a calibrated iPhone
was used. Initially, the intrinsic parameters of the iPhone were calibrated using the
MATLAB calibration toolbox. This was done because the MATLAB toolbox provides
more detailed information about the calibration results than OpenCV's
cvCalibrateCamera2() function. For instance, MATLAB provides axis orientation
information and re-projection error. Using the intrinsic values given by MATLAB, the
extrinsic parameters were calculated every frame. This provided the rotation and
translation of the world with respect to the camera.

2.2.1 Intrinsic Calibration


With MATLAB's calibration toolbox, the intrinsic parameters were calculated using the
chessboard method. Ten images of a chessboard at different orientations were used (Figure
1), with the iPhone camera remaining stationary. After extracting the grid corners from each
image, the intrinsic values were calculated. The axis orientation is shown in Figure 2.

Figure 1: Images of a chessboard at different orientations, used to find the intrinsic
parameters.

Figure 2: This image shows the axis orientation of the iPhone camera, where the x-axis lies
along the vertical (becoming more positive going downward), the y-axis lies along the
horizontal (becoming more positive going to the right), and the z-axis points out of
the board.

2.2.2 Extrinsic Calibration


The chessboard method in OpenCV was used to find the image points (each corner
represents an image point). The world points for each corner were chosen arbitrarily.
The OpenCV function cvFindExtrinsicCameraParams2() was used to find the pose
(rotation and translation) of the camera at each frame. This function was used instead of
the more general OpenCV function, cvCalibrateCamera2(), which estimates both intrinsic
and extrinsic parameters, in order to minimize error in the extrinsic values. Since our
camera does not change, the intrinsic parameters ought to remain constant; using
cvCalibrateCamera2() would allow the estimated intrinsic values to drift from frame to frame.
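
A condensed sketch of this per-frame step follows (the full routine, including corner detection, appears in the Appendix); intrinsic_matrix and distortion_coeffs are assumed to hold the MATLAB calibration values, and object_points/image_points the chessboard correspondences for the current frame:

CvMat* rotation_vector    = cvCreateMat(3, 1, CV_32FC1);
CvMat* translation_vector = cvCreateMat(1, 3, CV_32FC1);
CvMat* rotation_matrix    = cvCreateMat(3, 3, CV_32FC1);

// Pose of the chessboard (world) with respect to the camera for this frame.
cvFindExtrinsicCameraParams2(object_points, image_points,
                             intrinsic_matrix, distortion_coeffs,
                             rotation_vector, translation_vector);

// Expand the Rodrigues rotation vector into a full 3x3 rotation matrix.
cvRodrigues2(rotation_vector, rotation_matrix);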

2.2.3 Optical Flow


Lucas-Kanade optical flow is used to track features between two frames and capture the
affine motion of the features. The algorithm is based on two assumptions that affect
the project: brightness constancy and temporal persistence. Brightness constancy is
the notion that a pixel's gray-scale brightness changes very little, or not at all, from
frame to frame; in other words, the feature looks the same over the frames. Temporal
persistence requires that the features move slowly from frame to frame, which also ensures
that each feature stays within the camera's view.
In order to increase performance, Lucas-Kanade optical flow was adopted in lieu of
OpenCV's slower method of repeatedly finding chessboard corners. First, the chessboard
is located within the image using OpenCV's chessboard detector. After the corner
points have been extracted, optical flow is used to track their affine motion. If a tracked
feature is not successfully found, OpenCV's chessboard detector is called again to relocate
the corner points.
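
In outline, the tracking loop looks like the sketch below (variable names are illustrative; the project's full version is in the Appendix):

// Sketch of the per-frame corner update: track with pyramidal Lucas-Kanade
// and fall back to the chessboard detector if any corner is lost.
char  status[BOARD_CORNERS];
float track_error[BOARD_CORNERS];

cvCalcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pyr, curr_pyr,
                       prev_corners, curr_corners, BOARD_CORNERS,
                       cvSize(5, 5), 3, status, track_error,
                       cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.3), 0);

int tracked = 0;
for (int k = 0; k < BOARD_CORNERS; ++k)
    if (status[k] && track_error[k] < 550) ++tracked;

if (tracked != BOARD_CORNERS) {
    // Tracking failed; relocate the corners with the chessboard detector.
    cvFindChessboardCorners(curr_gray, cvSize(6, 7), prev_corners,
                            &corner_count, CV_CALIB_CB_ADAPTIVE_THRESH);
}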

2.2.4 Enhancements
RGB to Gray-scale:
The two main bottlenecks of the application were tracking features (finding the chess-
board) and converting the RGB (red, green, blue) image to a gray-scale image. OpenCV
requires gray-scale images for its feature finding and tracking algorithms; however,
its implementation of the RGB to gray-scale conversion was slow on a mobile device
because the function was optimized to run on a 32-bit processor. This conversion,
combined with the iPhone's limited processing power, caused the application to run
extremely slowly. As a result, another algorithm was implemented in order to
remedy the problem.
To convert to gray-scale, the weighted sum of the RGB components of the image was
taken. OpenCV implements a weighted sum of:

Y = (0.299)R + (0.587)G + (0.114)B (1)

to convert from RGB to gray-scale. In the application, the average:

Y = (R + G + B)/3 (2)

was chosen because it produced better speed while not compromising accuracy.
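
A minimal sketch of this averaging conversion (equation 2) is shown below, with assumed variable names and a four-channel BGRA frame layout as typically delivered by the iPhone camera:

void ConvertToGrayAverage(const IplImage* bgra, IplImage* gray)
{
    for (int y = 0; y < bgra->height; ++y) {
        const unsigned char* src = (const unsigned char*)bgra->imageData + y * bgra->widthStep;
        unsigned char*       dst = (unsigned char*)gray->imageData + y * gray->widthStep;
        for (int x = 0; x < bgra->width; ++x) {
            // Equation (2): plain average of the three color channels,
            // cheaper on the device than the weighted sum in equation (1).
            dst[x] = (unsigned char)((src[4 * x] + src[4 * x + 1] + src[4 * x + 2]) / 3);
        }
    }
}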

UIImage to IplImage:
Another challenge encountered was the conversion between UIImage and IplImage.
The conventional way of doing this is to get the Core Graphics image reference and
draw it into the image data buffer used by the IplImage structure. This method proved
too slow for the application since it was called every frame. However, since there was
direct access to the raw byte data of each frame, that data was simply copied into a
buffer and wrapped in an IplImage structure before executing OpenCV functions,
avoiding the UIImage-to-IplImage conversion altogether.
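
A sketch of this idea using the usual AVFoundation pixel-buffer access (the names and capture setup are assumptions; iWorld's exact code may differ):

// Copy the frame's raw BGRA bytes straight into an IplImage, skipping the
// UIImage/Core Graphics round trip entirely.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
unsigned char* src = (unsigned char*)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t width       = CVPixelBufferGetWidth(pixelBuffer);
size_t height      = CVPixelBufferGetHeight(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);

IplImage* frame = cvCreateImage(cvSize((int)width, (int)height), IPL_DEPTH_8U, 4);
for (size_t y = 0; y < height; ++y) {
    // Copy row by row in case OpenCV pads its rows differently
    // than the capture buffer does.
    memcpy(frame->imageData + y * frame->widthStep, src + y * bytesPerRow, width * 4);
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);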

2.3 OpenGL
OpenGL (Open Graphics Library) is an open graphics library, written in C, that
allows rendering of 2D and 3D graphics. OpenGL is used to translate the user's
motion into the virtual world. The key aspect in transforming a camera pose from
OpenCV to OpenGL is understanding the axis orientation of OpenCV (the real camera)
and of OpenGL (the “virtual camera”), shown in Figure 3. This is what gives a
realistic virtual experience, as the virtual world moves according to the user's
movements. [5]

Figure 3: In the real camera (top), the axis orientation follows the right-hand coordinate
system, with the x-axis along the horizontal, the y-axis along the vertical, and the z-axis
coming out of the camera, while the “virtual camera” (right) has its y- and z-axes in the
opposite direction.

2.3.1 OpenGL - Calibrated


For the calibrated experiment, a simple virtual world was used to convey the
idea. A 3D cone was used to show how the user's translation affects the cone's movement.
If the user moves to the right, the cone moves to the left, mirroring the way humans
view objects in the real world. Similarly, moving the camera toward the cone brings the
cone closer, and vice versa. The function glFrustumf() plays an important part in the
cone's movement: it specifies the viewing volume that OpenGL can see. If this window
is too large, the cone becomes unresponsive to changes in the translation of the world
with respect to the camera.

3 Experimental Results
3.1 Calibrated Results
The following results were obtained with a fully calibrated iPhone, re-estimating the
pose at every frame using the chessboard calibration method.

3.1.1 Initial Intrinsic Calibration


Initially, the iPhone's camera was calibrated to find the intrinsic parameters
(focal length, principal point, etc.) so that the extrinsic parameters of each frame
could be calculated more accurately. Ten images of chessboards in different orientations
were used to get a precise measurement. MATLAB's calibration toolbox also gives the re-
projection error (in pixels). The re-projection error was about one percent and is shown
graphically in Figure 4 below.

Figure 4: The graph above shows the re-projection error of every image, with every point
in that image represented as a colored cross (each color is a unique image). This is the
geometric quantity that compares the measured points in the 2D image to the re-projected
points estimated from the calculated camera parameters.

3.1.2 Extrinsic Calibration Ambiguity


It is important to always have the same point of origin when calibrating, especially
when calibration occurs every frame, since it sets the axis orientation of the
camera. A problem encountered was that the point of origin would randomly change
from the top-left corner to the bottom-right corner, giving an ambiguity in the
orientation of the origin. The change in origin would sometimes also switch the x-axis
with the y-axis and vice versa. This problem was due to the way OpenCV's
FindChessboardCorners() detects corners. If the chessboard was square, there would be
orientation ambiguity; however, if its width and height were of different lengths, the
ambiguity seemed to disappear. Figure 5 shows this ambiguity.

Figure 5: The figure above clearly shows that the origin orientation (which starts at
(0,0) at the first blue corner) changes drastically from frame ten (left) to frame
thirty-two (right). In frame ten, the origin starts in the upper left-hand corner, but
in frame thirty-two it starts in the upper right-hand corner.

3.1.3 OpenCV Pose Problem


OpenCV assumes an orientation with the x-axis on the horizontal, the y-axis on the
vertical, and the z-axis out of the board, as shown in Figure 3. Yet in Figure 2, the
x-axis is on the vertical while the y-axis is on the horizontal. This orientation is due
to the iPhone rotating the image, so when OpenCV calculated the translation and rotation
of the camera, the pose was actually off by 90 degrees about the z-axis. This gave
incorrect results when transferring the camera's orientation to OpenGL. To solve this
problem, an offset transformation matrix (a rotation of 90 degrees about the z-axis) was
multiplied with the calculated transformation matrix each frame. This rotation problem
can clearly be seen in the figures below.

Figure 6: The image on the left shows the OpenCV pose problem. The outputted iPhone
image is turned 90 degrees about the z-axis, which causes an incorrect estimate when
OpenCV calculates the rotation and translation. A simple rotation about the z-axis fixes
this problem, as seen in the image on the right.
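
As a sketch, using the mat4 type from the rendering engine (the same offset appears in SetTransformation in the Appendix), the per-frame correction amounts to:

// 90-degree rotation about the z-axis; it swaps the x and y axes to undo the
// iPhone's rotated image.
mat4 offset = { 0, -1, 0, 0,
                1,  0, 0, 0,
                0,  0, 1, 0,
                0,  0, 0, 1 };

// "pose" is the frame's OpenCV pose packed into a mat4; the corrected
// transform is what gets handed to OpenGL.
mat4 corrected = pose * offset;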

3.1.4 OpenGL Pose Problem


The camera orientation calculated by OpenCV mimics the orientation of the
marker (chessboard). Although this is correct, it results in an undesirable bird's-eye
view of the virtual world because it is not the pose needed for the application
(parallel to the floor). To correct this effect, another offset was applied. After this
offset was applied, the relative translation and rotation agreed with human-like vision:
if the user moves the iPhone closer to the chessboard, the user moves forward in the game
(the z direction). The bird's-eye view and its correction can be seen in the figure below.

Figure 7: The image on the left shows the OpenGL pose problem. Since the camera is
looking at the marker from a bird's-eye view, OpenGL renders the 3D objects in that
same way, as seen in the image on the left. This is not desirable; it is fixed by
multiplying the initial transformation matrix by an offset to simulate human vision.

4 Discussion
4.1 Overview
In the field of engineering, it is important to aim to make a project as perfect as possible,
but it is equally important to know when to compromise and adjust to meet specific
requirements. This was a major obstacle faced in this project; there were many methods
that we intended to implement but, because of time constraints, could not finish.

4.2 Uncalibrated - GRIC


Originally, the application was going to be built on an uncalibrated system to
allow the greatest amount of portability. In the uncalibrated method, the fundamental
matrix would be found from corresponding points. Using epipolar geometry, the
essential matrix is then found from the fundamental matrix, and the translation and
rotation of the camera are calculated from the essential matrix. [4] Finding a
non-degenerate fundamental matrix proved especially difficult, however. In the case of
iWorld, the degenerate fundamental matrix was caused by the majority of corresponding
points being coplanar; these coplanar points arise from the geometry of the camera's
movement. An implementation that seemed reasonable was the Geometric Robust
Information Criterion (GRIC). [1] Essentially, every frame would go through an algorithm
that determines whether a homography or a fundamental matrix should be used to deter-
mine the camera pose. The homography matrix would be used to determine the 2D camera
pose until a frame met the criterion for using the fundamental matrix, giving full
3D motion. This would solve the degenerate fundamental matrix problem.

4.3 Stereo reconstruction - 3D world points


As an extension of the calibrated iWorld, an implementation was attempted that used the
chessboard only as an initial way to find image points and their respective world points.
From there, the camera would be able to move away from the chessboard, finding and
tracking new features (using Lucas-Kanade optical flow), and stereo
calibration/reconstruction would be used to find 3D world points. The idea was to use
OpenCV's implementation of calibrated stereo reconstruction to calculate a disparity map
and, from it, the world points corresponding to image coordinates. Once again, however,
the geometry of a single camera simply does not allow for a successful stereo
reconstruction.

4.4 OpenGL
The main extension for OpenGL would be to grow the virtual world into a true
exploration scene. This could include making the world a forest full of different grasses,
shrubs, trees and wildlife. The user would be able to move around and feel as if he or she
were actually in a forest. Also, real-life rotation and translation restrictions must be
imposed in the virtual world: the user should not be able to rotate and see underneath
the floor (x-axis) or translate into the air above the ground (y-axis).

5 Current Computer Vision Trends
As the sophistication and functionality of robotics increase, robots are being incremen-
tally introduced into the social life of humans. Developers are able to use the complexity
of robotic systems to find new tasks for robots that would not have been possible decades
ago. This is often favorable because robots can replace humans in performing tedious
tasks. Thus, robots can increase productivity in a company, since they do not get tired,
or enhance the life of a human by assisting in the task that person is performing.
Yet this adoption of robots, especially in the workplace, makes interaction between
humans and robots inevitable. Human-robot interaction has caused developers and
researchers to embark on the field of the ethics of robotics. The ethics of robotics is
not the ethics of robots, but the ethical issues that designers and developers of robotic
systems have to consider. This issue is complicated because social views concerning
the behavior of robots may vary from person to person. Developers cannot estimate in
their controlled environment how humans will react to a robot's behavior in the
workplace. Hence, ethics in robotics is a major issue that will have to be addressed in
the years to come.
Human responses to interaction with robots are often seen in workplaces that have
adopted robots to perform a particular task. Jeanne Dietsch, a member of the In-
dustrial Activities Board of the IEEE Robotics and Automation Society, describes how
robots in the workplace demonstrate social behavior even though they have not been pro-
grammed to. Likewise, humans who interact with them respond differently depending
on whether they think that the robot's actions are genuine and appropriate. Dietsch
provides the example of a study of hospital robots, where different responses were found
for robots that behaved the same. In one department, housekeepers would bond with the
robots to the point that they gave them nicknames and would yearn for a robot if it went
away for repairs. Yet in the cancer ward, the robot's behaviors were seen as rude. [3] This
is because when the robot arrived, the housekeeper would be comforting a terminally ill
patient. Hence, the robot would be seen as barging in while the housekeeper was engaged
in a very emotional situation with the patient. The housekeeper responded this way to the
behavior of the robot mainly because he thought the robot's actions were intentional
and because he believed the robot ought to abide by some social ethic.
Consequently, developers have to consider the social behavior of the robot they are
designing. Yet they encounter several difficulties in this task. As explained, the devel-
oper has to consider that there is not yet a consensus on what it means for a robot to
behave ethically. Another problem is that, though robots may have learning capabilities,
the developer is limited in his ability to explain and predict all the situations that the
robot may encounter. [7] Ultimately, a robot's social behavior will be a key point of
human-robot interaction.

Yet the ethical issues encountered with the introduction of robotic systems
into human social life are not limited to the workplace. There has been a recent
trend toward developing anthropomorphic robots targeted at children and the elderly. The
main reason for this trend is to create care-giving robots that can satisfy the need for a
companion among the vulnerable members of society. Other reasons have been
to monitor the health of the elderly or to act as a nanny for children who require far
more attention than their parents may be able to provide. Nevertheless, this trend raises
the ethical question of whether this is a form of deception and whether that is acceptable;
that is, these robots are designed to create the illusion that a human relationship could be
formed with them.
Certainly, today's robots have not reached a level at which a normal person could be
confused about whether a robot is another personal being or not. However, many of the
robots developed for children and the elderly do provide the illusion that they have some
low level of understanding and personality. Sony's artificial-intelligence robotic dog (AIBO)
is able to mimic a normal dog to some degree. It can walk in a dog-like fashion and chase
a ball. It can also detect distance, acceleration, sound, vibration, pressure and voice
commands, which enables the robot to recognize many situations and respond adequately.
Similarly, it is able to show a variety of expressions, such as happiness, sadness, fear and
surprise, through body movement and the color and shape of its eyes. Other robots, such as
the Hello Kitty robot, are marketed primarily to parents who are not able to spend much time
with their children; that is, the robot will keep the child happy and occupied.
Vulnerable young children and the elderly are most affected by the anthropomorphism
of the robot, mainly because both have a strong need for social contact and lack the
technological knowledge behind the robot, that is, the knowledge that though the robot
may possess human characteristics, it is still not a personal being. It is worth noting that
the problem is not the anthropomorphic characteristics of the robot themselves. Young
children often pretend that their toys are actual beings, yet in that case the child
understands that it is just play time and that the toys do not actually possess those
characteristics. Similarly, an elderly person with Alzheimer's may forget that the robot
is but a mimicker of human characteristics. [6]
There are several consequences to the anthropomorphism of such robotic systems.
Children can spend too much time with the robots and thus diminish their interaction
with other human beings. This hurts the children's understanding of how to interact
with other humans, since the care-giver has a strong influence on a child's development
and most of the child's learning comes through mimicking the care-giver. Negative
consequences can also be found within the elderly group. If they start to imagine that
they have a relationship with the robot, they may start to think that they have to take
care of the robot at the expense of their own well-being. Similarly, the family of the
elder may think that the robot satisfies all of his or her needs for companionship, causing
the elder to feel even more lonely. However, not all consequences are negative. There are
studies showing that robots can reduce stress levels in the elderly, but these studies do
suggest that the robot cannot substitute for human interaction. [8]

References
[1] Mirza Tahir Ahmed, Matthew N. Dailey, José Luis Landabaso, and Nicolas Herrero.
Robust key frame extraction for 3D reconstruction from video streams. In VISAPP (1),
pages 231–236, 2010.

[2] Gary Bradski and Adrian Kaehler. Learning OpenCV. O'Reilly Media Inc., 2008.

[3] J. Dietsch. People meeting robots in the workplace [industrial activities]. Robotics &
Automation Magazine, IEEE, 17(2):15–16, 2010.

[4] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge
University Press, ISBN: 0521540518, second edition, 2004.

[5] Philip Rideout. iPhone 3D Programming. O'Reilly Media Inc., 2010.

[6] A. Sharkey and N. Sharkey. Children, the elderly, and interactive robots. Robotics &
Automation Magazine, IEEE, 18(1):32–38, March 2011.

[7] G. Veruggio. Roboethics [TC spotlight]. Robotics & Automation Magazine, IEEE,
17(2):105–109, 2010.

[8] G. Veruggio, J. Solis, and M. Van der Loos. Roboethics: Ethics applied to robotics
[from the guest editors]. Robotics & Automation Magazine, IEEE, 18(1):21–22, March
2011.

6 What we did
6.1 Andrew Abril
The project was a 50/50 team effort. We wrote the code together (physically in
the same room), switching who typed whenever an idea struck. The only independent
work done was research: I mostly researched how to improve the project in general
(the uncalibrated approach, etc.), while my partner improved what was already done
(performance issues).

6.2 Jose Rafael Caceres


Most of our work was done together, since we only had one iPhone. Hence, to write and
test new parts of the code, we had to do it together. My independent part of the project
was finding new ways of increasing performance. I was able to do this because I could
test some OpenCV functions (such as converting to gray-scale) in the simulator.

7 Appendix
Extrinsic calibration using the chessboard method, with optical flow to find and track the corners:

-(void) CalibrateCameraWithOpticalFlow:(IplImage*)imgB
{
    // initialize parameters
    int row = 6;
    int column = 7;
    int board = row * column;
    int corner_count;
    // this flag only tells me if I did the first initialization
    bool flag = false;

    // calibrate if we need to init,
    // otherwise do optical flow to track the points
    if (NeedToInit) {
        // initialize calibration buffers and parameters
        /*
           Create buffers that are initialized only once.
           I used Mi to detect whether they have been initialized, though
           any one of them could have been used. Maybe I should have checked
           each one, but that seemed too much trouble.
        */
        if (!Mi) {
            corners             = (CvPoint2D32f*)cvAlloc(board * sizeof(corners[0]));
            image_points        = cvCreateMat(board, 2, CV_32FC1);
            object_points       = cvCreateMat(board, 3, CV_32FC1);
            point_counts        = cvCreateMat(1, 1, CV_32SC1);
            distortion_coeffs   = cvCreateMat(5, 1, CV_32FC1);
            rotation_vectors    = cvCreateMat(3, 1, CV_32FC1);
            translation_vectors = cvCreateMat(1, 3, CV_32FC1);
            rotation_mat        = cvCreateMat(3, 3, CV_32FC1);
            Mi                  = cvCreateMat(3, 3, CV_32FC1);
            flag = true;
        }
        // Mi values were calculated prior to this project in MATLAB
        CV_MAT_ELEM(*Mi, float, 0, 0) = 459.2453331f;
        CV_MAT_ELEM(*Mi, float, 0, 1) = 0.0f;
        CV_MAT_ELEM(*Mi, float, 0, 2) = 218.273285f;
        CV_MAT_ELEM(*Mi, float, 1, 0) = 0.0f;
        CV_MAT_ELEM(*Mi, float, 1, 1) = 459.2453331f;
        CV_MAT_ELEM(*Mi, float, 1, 2) = 178.969116f;
        CV_MAT_ELEM(*Mi, float, 2, 0) = 0.0f;
        CV_MAT_ELEM(*Mi, float, 2, 1) = 0.0f;
        CV_MAT_ELEM(*Mi, float, 2, 2) = 1.0f;
        // distortion coefficients
        CV_MAT_ELEM(*distortion_coeffs, float, 0, 0) = 0.070969f;
        CV_MAT_ELEM(*distortion_coeffs, float, 1, 0) = 0.777647f;
        CV_MAT_ELEM(*distortion_coeffs, float, 2, 0) = -0.009131f;
        CV_MAT_ELEM(*distortion_coeffs, float, 3, 0) = -0.013867f;
        CV_MAT_ELEM(*distortion_coeffs, float, 4, 0) = -5.141519f;
        // buffer
        CV_MAT_ELEM(*point_counts, int, 0, 0) = board;

        // undistort image
        [self Undistort:imgB];

        // find the chessboard points that will be tracked with optical flow
        int success = cvFindChessboardCorners(imgB,
                                              cvSize(row, column),
                                              corners,
                                              &corner_count,
                                              CV_CALIB_CB_ADAPTIVE_THRESH |
                                              CV_CALIB_CB_FILTER_QUADS |
                                              CV_CALIB_CB_FAST_CHECK);

        cvFindCornerSubPix(imgB, corners, corner_count, cvSize(11, 11), cvSize(-1, -1),
                           cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));

        if ((success) && (corner_count == board)) {
            // set up the world points and image points for calibration
            for (int i = 0, j = 0; j < board; ++i, ++j) {
                CV_MAT_ELEM(*image_points, float, i, 0) = corners[j].x;
                CV_MAT_ELEM(*image_points, float, i, 1) = corners[j].y;
                // this should only run once in this for loop
                if (flag) {
                    CV_MAT_ELEM(*object_points, float, i, 0) = j / column;
                    CV_MAT_ELEM(*object_points, float, i, 1) = j % column;
                    CV_MAT_ELEM(*object_points, float, i, 2) = 0.0f;
                }
            }
            NeedToInit = false;
        }
    }
    else {
        // track the initialized points
        if (corners) {
            char  features_found[board];
            float feature_errors[board];

            int win_size = 5;
            CvSize pyr_sz = cvSize(imgA->width + 8, imgB->height / 3);
            IplImage* pyrA = cvCreateImage(pyr_sz, IPL_DEPTH_32F, 1);
            IplImage* pyrB = cvCreateImage(pyr_sz, IPL_DEPTH_32F, 1);

            CvPoint2D32f* cornersB = (CvPoint2D32f*)cvAlloc(board * sizeof(cornersB[0]));

            cvCalcOpticalFlowPyrLK(imgA,
                                   imgB,
                                   pyrA,
                                   pyrB,
                                   corners,
                                   cornersB,
                                   board,
                                   cvSize(win_size, win_size),
                                   3,
                                   features_found,
                                   feature_errors,
                                   cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.3),
                                   0);

            // draw optical flow
            /* for (int i = 0; i < corner_count; i++) {
                   if (features_found[i]) {
                       printf("Got it\n");
                       CvPoint p0 = cvPoint(cvRound(corners[i].x), cvRound(corners[i].y));
                       CvPoint p1 = cvPoint(cvRound(cornersB[i].x), cvRound(cornersB[i].y));
                       cvLine(imgC, p0, p1, CV_RGB(255, 255, 255), 2);
                   }
               } */

            // check the points
            int numOfSuccessfulPoints = 0;
            for (int k = 0; k < board; k++) {
                if (features_found[k] && feature_errors[k] < 550)
                    numOfSuccessfulPoints++;
            }
            if (numOfSuccessfulPoints != board) NeedToInit = true;
            else {
                // create the image points from the tracked corners
                for (int i = 0, j = 0; j < board; ++i, ++j) {
                    CV_MAT_ELEM(*image_points, float, i, 0) = cornersB[j].x;
                    CV_MAT_ELEM(*image_points, float, i, 1) = cornersB[j].y;
                }
            }
            cvReleaseImage(&imgA);
            imgA = imgB;
            corners = cornersB;
        }
        else
            NeedToInit = true;
    }

    // calibrate
    if (!NeedToInit) {
        // find the extrinsics and output the values
        // solvePnP(object_points, image_points, Mi, distortion_coeffs,
        //          rotation_vectors, translation_vectors, true);
        cvFindExtrinsicCameraParams2(object_points, image_points,
                                     Mi, distortion_coeffs,
                                     rotation_vectors, translation_vectors);

        float element1 = CV_MAT_ELEM(*translation_vectors, float, 0, 0);
        float element2 = CV_MAT_ELEM(*translation_vectors, float, 0, 1);
        float element3 = CV_MAT_ELEM(*translation_vectors, float, 0, 2);

        // float vecx = CV_MAT_ELEM(*rotation_vectors, float, 0, 0);
        // float vecy = CV_MAT_ELEM(*rotation_vectors, float, 1, 0);
        // float vecz = CV_MAT_ELEM(*rotation_vectors, float, 2, 0);

        float scale = 1.00;

        cvRodrigues2(rotation_vectors, rotation_mat);

        // set the translation outputs
        CameraPose.w.x = element1 / scale;
        CameraPose.w.y = -1 * element2 / scale;
        CameraPose.w.z = -1 * element3 / scale;

        // set the rotation output
        // the x and the y are inverted like the translation
        // x
        CameraPose.x.x = CV_MAT_ELEM(*rotation_mat, float, 0, 0);
        CameraPose.x.y = CV_MAT_ELEM(*rotation_mat, float, 1, 0);
        CameraPose.x.z = CV_MAT_ELEM(*rotation_mat, float, 2, 0);
        // y
        CameraPose.y.x = CV_MAT_ELEM(*rotation_mat, float, 0, 1);
        CameraPose.y.y = CV_MAT_ELEM(*rotation_mat, float, 1, 1);
        CameraPose.y.z = CV_MAT_ELEM(*rotation_mat, float, 2, 1);
        // z
        CameraPose.z.x = CV_MAT_ELEM(*rotation_mat, float, 0, 2);
        CameraPose.z.y = CV_MAT_ELEM(*rotation_mat, float, 1, 2);
        CameraPose.z.z = CV_MAT_ELEM(*rotation_mat, float, 2, 2);

        // output stuff
        float rotxx = CV_MAT_ELEM(*rotation_mat, float, 0, 0);
        float rotxy = CV_MAT_ELEM(*rotation_mat, float, 1, 0);
        float rotxz = CV_MAT_ELEM(*rotation_mat, float, 2, 0);

        float rotyx = CV_MAT_ELEM(*rotation_mat, float, 0, 1);
        float rotyy = CV_MAT_ELEM(*rotation_mat, float, 1, 1);
        float rotyz = CV_MAT_ELEM(*rotation_mat, float, 2, 1);

        float rotzx = CV_MAT_ELEM(*rotation_mat, float, 0, 2);
        float rotzy = CV_MAT_ELEM(*rotation_mat, float, 1, 2);
        float rotzz = CV_MAT_ELEM(*rotation_mat, float, 2, 2);

        if (Output)
            [Output release];
        Output = [[NSMutableString alloc] initWithFormat:@"The vector is:\n"];
        [Output appendFormat:@":%f, :%f, :%f\n", rotxx, rotyx, rotzx];
        [Output appendFormat:@":%f, :%f, :%f\n", rotxy, rotyy, rotzy];
        [Output appendFormat:@":%f, :%f, :%f\n", rotxz, rotyz, rotzz];
        [Output appendFormat:@":%f, :%f, :%f\n", element1, element2, element3];
        // [Output appendFormat:@":%f, :%f, :%f\n", imgB->width, imgB->height, 0.0];
    }
}

Rendering and transformation of the real camera to the “virtual camera” to represent the user's movement:

void RenderingEngine1::Initialize(int width, int height)
{
    // create resource manager
    m_resource = CreateResource();

    // Create the depth buffer.
    glGenRenderbuffersOES(1, &m_depthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_depthRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES,
                             GL_DEPTH_COMPONENT16_OES,
                             width,
                             height);

    // Create the framebuffer object; attach the depth and color buffers.
    glGenFramebuffersOES(1, &m_framebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, m_framebuffer);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
                                 GL_COLOR_ATTACHMENT0_OES,
                                 GL_RENDERBUFFER_OES,
                                 m_colorRenderbuffer);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
                                 GL_DEPTH_ATTACHMENT_OES,
                                 GL_RENDERBUFFER_OES,
                                 m_depthRenderbuffer);

    // Bind the color buffer for rendering.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_colorRenderbuffer);

    // set up for the bind of the textures
    glGenTextures(3, &m_gridTexture[0]);

    // load cube texture
    glBindTexture(GL_TEXTURE_2D, m_gridTexture[0]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    m_resource->LoadPngImage("purple.jpg");
    void* pixels = m_resource->GetImageData();
    ivec2 size = m_resource->GetImageSize();
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 size.x, size.y, 0, GL_RGBA,
                 GL_UNSIGNED_BYTE, pixels);
    m_resource->UnloadImage();

    // load cylinder texture
    glBindTexture(GL_TEXTURE_2D, m_gridTexture[1]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    m_resource->LoadPngImage("yellow.jpg");
    pixels = m_resource->GetImageData();
    size = m_resource->GetImageSize();
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 size.x, size.y, 0, GL_RGBA,
                 GL_UNSIGNED_BYTE, pixels);
    m_resource->UnloadImage();

    // load floor texture
    glBindTexture(GL_TEXTURE_2D, m_gridTexture[2]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    m_resource->LoadPngImage("green.jpg");
    pixels = m_resource->GetImageData();
    size = m_resource->GetImageSize();
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 size.x, size.y, 0, GL_RGBA,
                 GL_UNSIGNED_BYTE, pixels);
    m_resource->UnloadImage();

    glViewport(0, 0, width, height);
    glEnable(GL_DEPTH_TEST);

    // set up camera projection
    glMatrixMode(GL_PROJECTION);
    glFrustumf(-0.467889f, 0.467889f, -0.467889f, 0.467889f, 1, 1000);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // glRotatef(45, 1, 0, 0);

    // change me!!!
    // initialize the rotation offset matrix
    // x
    offset.x.x = 0;
    offset.x.y = 1;
    offset.x.z = 0;
    offset.x.w = 0;
    // y
    offset.y.x = -1;
    offset.y.y = 0;
    offset.y.z = 0;
    offset.y.w = 0;
    // z
    offset.z.x = 0;
    offset.z.y = 0;
    offset.z.z = 1;
    offset.z.w = 0;
    // w
    offset.w.x = 0;
    offset.w.y = 0;
    offset.w.z = 0;
    offset.w.w = 1;
}

void RenderingEngine1::Render() const
{
    glClearColor(0.5f, 0.5f, 0.5f, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPushMatrix();

    glMultMatrixf(Trans.Pointer());

    // to make it look good
    glTranslatef(-4, 9, 0);
    glRotatef(-90, 0, 0, 1);
    glRotatef(90, 0, 1, 0);
    glRotatef(-10, 0, 0, 1);

    // enable vertex coordinate array when glDrawArrays is called
    glEnableClientState(GL_VERTEX_ARRAY);
    // enable normal array when glDrawArrays is called
    glEnableClientState(GL_NORMAL_ARRAY);
    // enable texture coordinate array when glDrawArrays is called
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_TEXTURE_2D);

    // beginning of cube
    // load texture for cube
    glBindTexture(GL_TEXTURE_2D, m_gridTexture[0]);

    // draw cube
    glPushMatrix();
    glTranslatef(-2, 0.5, 1);
    glVertexPointer(3, GL_FLOAT, 0, cubeVerts);
    // glNormalPointer(GL_FLOAT, 0, bananaNormals);
    glTexCoordPointer(2, GL_FLOAT, 0, cubeTexCoords);
    glDrawArrays(GL_TRIANGLES, 0, cubeNumVerts);
    glPopMatrix();
    // end of cube

    // beginning of cylinder
    // load texture for cylinder
    glBindTexture(GL_TEXTURE_2D, m_gridTexture[1]);

    // draw cylinder
    glPushMatrix();
    glTranslatef(3, 1, 1);
    glVertexPointer(3, GL_FLOAT, 0, cylinderVerts);
    glTexCoordPointer(2, GL_FLOAT, 0, cylinderTexCoords);
    glDrawArrays(GL_TRIANGLES, 0, cylinderNumVerts);
    glPopMatrix();
    // end of cylinder

    // beginning of floor
    // load texture for floor
    glBindTexture(GL_TEXTURE_2D, m_gridTexture[2]);

    // draw floor
    glVertexPointer(3, GL_FLOAT, 0, planeVerts);
    glTexCoordPointer(2, GL_FLOAT, 0, planeTexCoords);
    glDrawArrays(GL_TRIANGLES, 0, planeNumVerts);
    // end of floor

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisable(GL_TEXTURE_2D);
    // glDisableClientState(GL_COLOR_ARRAY);
    glPopMatrix();
}

void RenderingEngine1::UpdateAnimation(float timeStep)
{
    if (m_animation.Current == m_animation.End)
        return;

    m_animation.Elapsed += timeStep;
    if (m_animation.Elapsed >= AnimationDuration) {
        m_animation.Current = m_animation.End;
    } else {
        float mu = m_animation.Elapsed / AnimationDuration;
        m_animation.Current = m_animation.Start.Slerp(mu, m_animation.End);
    }
}

void RenderingEngine1::SetTransformation(mat4 trans)
{
    /*
       Offset matrix
       mat4 offset = { 0, -1, 0, 0,
                       1,  0, 0, 0,
                       0,  0, 1, 0,
                       0,  0, 0, 1 };
    */
    Trans = trans * offset;
    // Trans = trans;
    /* mat3 temp;
       if (!Start && (trans.w.x != 0)) {
           Start = true;
           temp = trans.ToMat3();
           Last(temp.Transposed());
       }
       else {
           // extract the rotation from the trans
           temp = trans.ToMat3();
           mat4 temp2;
           temp2(temp);
           mat4 newRotation = temp2 * Last;
           // set back the rotation
           newRotation.w.x = trans.w.x;
           newRotation.w.y = trans.w.y;
           newRotation.w.z = trans.w.z;
           Trans = newRotation * offset;
       } */
    // presentTw
    // pastTw
    // pastTPresent
    // WTpast * presentTW
}
