Friday, January 6, 2012



Hi everybody!

I bring you a sample of how to reconstruct a scene in 3D using OpenCV and Point Cloud Library (PCL) with a simple program and an example scene.

All we need is the left image of our stereo camera:

(You can implement your own cheap stereo webcam following this post: OpenCV Stereo Webcam)

The disparity map generated with your preferred Stereo Matching algorithm:

(For example you can use OpenCV's stereoBM algorithm: OpenCV StereoBM)

And the reprojection matrix (Q) obtained at calibration time:

<?xml version="1.0"?>
<opencv_storage>
<Q type_id="opencv-matrix">
  <rows>4</rows>
  <cols>4</cols>
  <dt>d</dt>
  <data>
    1. 0. 0. -2.9615028381347656e+02
    0. 1. 0. -2.3373317337036133e+02
    0. 0. 0. 5.6446880931501073e+02
    0. 0. -1.1340974198400260e-01 4.1658568844268817e+00</data></Q>
</opencv_storage>
(You can get the matrix Q for your own stereo camera following the instructions in this post: OpenCV Camera Calibration)

Now download the source code (I highly recommend reading the source code to understand what is going on; don't worry, there are comments :P):

[NOTE]: You will need to have the OpenCV library (you can get it here) and the Point Cloud Library (you can get it here) installed. You will also need CMake to generate the Makefiles.

Once you have downloaded the source code and installed the dependencies, just run:

tar xzvf OpenCVReprojectImageToPointCloud-1.0.tgz
cd OpenCVReprojectImageToPointCloud
cmake .
make
./OpenCVReprojectImageToPointCloud rgb-image.ppm disparity-image.pgm Q.xml

You should see something similar to the following video:

I hope you enjoy it!
[UPDATE: 04/02/2012] I have released a bug-fix. Thanks to Chris for pointing it out.


Samarth said...

Hi Martin! I saw the video - you've done some great work!

Actually I have been working on the same project (here)

I have two questions - how do you maintain colour in your point clouds (since stereo correspondence requires greyscale images)?

Also, you use cvReprojectImageTo3D() for reprojection? I have read on the PCL blog that this function needs optimization, so I am working on using the formulae to calculate depths directly.

Martin Peris said...

Hi Samarth!

Thanks for your kind comment.

Regarding the color information, the "secret" is that the disparity image (greyscale) corresponds to the left image of the stereo camera (RGB image). So, for each pixel in the disparity image I calculate the 3D coordinates and assign it the color of the same pixel in the RGB image. If you download the source code you will see that I am using pcl::PointXYZRGB for that.

About cvReprojectImageTo3D(), my code is ready to use it, but I implemented my own version (see the source code for more info) using the formula directly. It is faster (as I don't use matrix algebra), but many improvements can still be made: for example, a lot of stuff can be pre-calculated, and each pixel's 3D coordinates can be computed in parallel on the GPU.

I hope this could help you.
Best regards,

Anonymous said...

Hi Martin, I am facing a problem installing PCL 1.4.0.
I have installed Boost and Eigen3, but FLANN is creating a problem: when I run CMake for PCL it gives the error that package flann is not found. Please help me out.


with regards
pankaj upadhyay

Martin Peris said...

Hi Pankaj,

I remember having the same problem; I think I solved it by using the pre-built binaries for Linux instead of building them myself.

You can find how to install PCL 1.4.0 from pre-built packages here:

Hope this helps.
Best regards,

Anonymous said...

Hi Martin,

Thanks for this great work!

Besides, I have some questions. I have generated the 3D point cloud associated with my stereo pair by using these methods. I have generated a satisfying disparity map of a scene that includes building facades and some objects a few meters from the acquisition system. However, the coordinates of the generated 3D points come out in this order of magnitude:

(454961212740861952.000000, 5273911976232222720.000000, -207111293372909748224.000000)

The size of my chessboard squares is 10 cm.
- Do you think that these coordinates are correct, and also in decimetres? Or is it a formatting problem?

(I have not still installed PCL and analyzed the code in detail)

- Besides, could you tell me if the left image mentioned in this tutorial is rectified, or is it the initial raw image?

For the estimation of the error regarding the parameters related to the calibration, I find 0.394492.

- Could you tell me concisely how it is estimated, and how I can judge whether this value is satisfactory?

Bravo for the website, it is wonderful.
Regards, Karim

Anonymous said...

Hi Martin, thanks for the reply. I tried to install PCL 1.4.0 but did not succeed. Can you please give me a step-by-step method to install PCL?

with regards
pankaj upadhyay

Christian said...

Hi Martin!
I really enjoy reading your stereo-vision related posts, but I have some comments on this one.

1. I think you got a bug here:
Q32 = Q.at<double>(3,3);
Q33 = Q.at<double>(3,3);
Both read the same value (3,3); nevertheless it seems to work in your example.

2. Which units do you take for Tx in the reprojection Matrix? Is it meter, centimeter, inch...?

3. Although I use your custom reprojection, I still get very strange results. The corners are blasted towards a cone yielding into infinity, and the inner area of the picture is divided into two parallel flat surfaces. It seems one holds the odd rows and the other one the even rows.
Did you encounter similar problems?

4. Are there any preconditions on the matrices? What format does your disparity picture have? Is it normalized? Is it CV_32FC1 or CV_16UC1?

Best regards,

Martin Peris said...

@Karim, @Pankaj and @Chris, thanks a lot for your kind comments, I appreciate them a lot.

@Karim There was a bug in my code, so if you used it then it is likely that you got some nonsense coordinates.

The left image should be undistorted and rectified, never the raw image.

The estimation of the calibration error is expressed in pixel^2. For details on how it is calculated and its meaning, I highly recommend reading Chapter 12 of the book "Learning OpenCV: Computer Vision with the OpenCV Library" by Gary Bradski and Adrian Kaehler.

@Pankaj, Sorry I am not an expert on PCL installation, but I am sure that you could find support at the official mail list

@Chris, thanks a lot for the bug report, it was a big fat one!! hehehe I have fixed it :)

The units for Tx in my reprojection matrix should be centimeters (it depends on the units you used to measure the size of the squares in your chessboard at calibration time).

About the reprojection phenomena that you describe: the cone yielding into infinity is normal (due to the fact that the relation between depth and disparity is non-linear: see this post), but I have never encountered the inner area of the picture divided into two parallel flat surfaces :S

My disparity picture is CV_8U.

I hope this could help you.
Best regards,

Anonymous said...

Hi Martin!
I just wanted to post the result maybe for others with the same problem.
reprojectImageTo3D seems to have problems with the CV_32FC1 format, maybe a bug. Switching the app to fixed-point representation (16-bit unsigned) worked fine!

Thanks for the Tx Info. I changed that and now it looks really good :-)
Hope to hear more from you,

tuoleita said...

Hi Martin:

Now I have completed the stereo camera calibration and got reasonable output parameters. Just as you mentioned, the disparity should be calculated from undistorted AND rectified image pairs, NOT the raw image pairs.

In the next stage, I just hope to recover the 3D coordinates of several key feature points from corresponding 2D RAW pairs of image coordinates. So, given x_left, y_left, x_right and y_right (all raw image coordinates), we can get the undistorted and rectified coordinates as follows:

undistorted_x_left = mx1[x_left,y_left]
undistorted_y_left = my1[x_left,y_left]

undistorted_x_right = mx2[x_right, y_right]
undistorted_y_right = my2[x_right, y_right]

I used the routine to implement this as below:
undistorted_x_left = cvmGet(mx1,x_left,y_left)
undistorted_y_left = cvmGet(my1,x_left,y_left)

The result I got is that undistorted_y_left and undistorted_y_right differ greatly. By scrutinizing the rectified images, we should notice that corresponding feature points have nearly the same undistorted_y in rectified image coordinates (that's the basis for calculating disparity).

But in my calculation, undistorted_y_left and undistorted_y_right differ greatly. There must be something wrong, so I am confused. Could you give some suggestions?

I read the chapter in Learning OpenCV. mx1, my1 contain the mapping (lookup table) that works like this: to decide a pixel value on the target rectified image, the table is used to trace back to the raw image and pick up the corresponding value to fill the "blank pixel" on the target rectified image. From this point of view, is it correct to use
undistorted_x_left = mx1[x_left,y_left]
undistorted_y_left = my1[x_left,y_left] ?

Or did I do something wrong...

Martin Peris said...

Hi Tuoleita!

Sorry for my late reply.

The problem is that you are using cvmGet with the wrong order of columns and rows. If you see the definition of the function:

cvmGet( const CvMat* mat, int row, int col )

row -> y
col -> x

So the correct way to get the undistorted coordinates would be:

undistorted_x_left = cvmGet(mx1,y_left,x_left)
undistorted_y_left = cvmGet(my1,y_left,x_left)

I hope this could solve your issue.

Best regards,

Anonymous said...

How do I add the PCL library to my pkg-config files (which have the OpenCV lib)?

ank said...

Hi Martin,

I have a question about the reprojectImageTo3D method in OpenCV. I get a point cloud from this method; the point cloud looks correct, and all the points relative to each other look correct as well. However, something is wrong with the computed point cloud: the scale is off. It seems that reprojectImageTo3D computes the point cloud only up to a scale value.

For instance, I have a 40mm ping pong ball that I am able to get the point cloud for, a laser range scan gives the correct point cloud and observes that the ping pong ball is 40mm. However, reprojectImageTo3D gives a point cloud that looks correct but the values are off by a factor of 10 (or 16).

Have you experienced this with the point clouds you generate from openCV?

Thank you

Alonso said...

Hello Martin!! I'm a complete beginner in computer vision. I really appreciate your video, you have done great work. I could generate the disparity map, but the result has many uncertain regions and is not clear.
Could you tell me how to generate an accurate disparity map from a pair of rectified images using OpenCV and Visual C++?

Thank you for this video.

Joe said...
This comment has been removed by the author.
Anonymous said...

Hi Martin, I have generated a 3D point cloud by using a left undistorted and rectified image and the associated disparity map. The camera parameters have been estimated and seem to be satisfactory.

The visualized scene is a building corner (between two building facades). However, the right angle is not respected here; the angle is more open, around 120 degrees. It seems that I get the same result as visualizing the disparity map without exploiting the estimated camera parameters, although I do use this information.

- I use a left grey image instead of the colour image. Is it a problem?
- The left image and disparity image have been divided by 2 in size, even though the camera parameters were estimated from the original-size images (this was to compute a smaller disparity map). Is that the problem? Certainly yes.
- Is there a deformation caused by the PCL viewer (very improbable)?

- Besides, if the size of the chessboard squares is 10cm x 10cm,
should the point cloud unit be the decimetre?

Please confirm these points for me. Thank you for this great assistance you bring us.


Joe said...


Thanks very much for sharing your code. I used part of your visualization code and it works very well. Have you tried to do real-time visualization? I once used OpenGL for this task and got good performance, but I don't know how PCL performs at real-time rendering.



Martin Peris said...

Hi all!
As usual, sorry for taking so long to reply to your questions and comments.

Have you tried: pkg-config --cflags --libs PCL ?

The units of the coordinates that you obtain depend on the size of the squares in the calibration chessboard. If your square size is, for example, 25mm then all your coordinates will be expressed in units of 25mm. What square size did you use during calibration? Maybe that is your problem.

It will be very difficult to get a perfect disparity map (especially in real time). In this post I used OpenCV's block matching method because it is fast. You can explore other methods, also included in OpenCV, that are more robust but computationally more expensive.

Yep, your problem comes from the division of the image size by 2. You should modify the reprojection matrix (Q) accordingly. Also, you got it right, if your chessboard squares are 10cm*10cm your point cloud units should be decimeters.

I haven't implemented real-time visualization myself yet, but I use ROS (Robot Operating System), which includes a PCL-based real-time sensor data visualizer and works like a charm.

Best regards to all.

Jean-François said...

Hi Martin,

I've read most of your posts and find them really interesting. I was wondering if you have any experience trying to do 3D reconstruction with pictures taken with a fisheye lens (360 degrees of a 180-degree view). See this link for example: . For example, in my case I will move the camera in a room, taking pictures along the way, and I want to reconstruct the room as a point cloud.

I have the "Learning OpenCV" from O'Reilly but they do not give a lot of information for SfM (Structure From Motion) 3D reconstruction.

So my question is: is it possible to do this with a 360 cam (using the chessboard for calibration), or do I need to take a completely different approach? Also, do you know any code for structure from motion with OpenCV?

Thanks and keep the good blog! :)

ank said...

Hi Martin,

Thank you for your response. I use a chessboard with 5mm square size. I divided all the coordinates by 5mm as you suggested; now the Z-coordinate looks correct, but the X-Y coordinates are incorrect. However, if I do not divide X-Y by 5mm, then all values for the ping pong ball (40mm in diameter) look correct. Does it make sense that only the Z-coordinate gets divided by the square size?

Also, I have a PCL question: do you know how the PCL visualization's scale grid works? It doesn't seem to reflect the values of the point cloud.

Thank you

Jean-François said...
This comment has been removed by the author.
Anonymous said...

Good Job Martin !
I'm trying to compile your code but there is a problem. OpenCV does not recognize the function destroyWindow and some others. It says: undefined reference to cv::destroyWindow. Can you help me?


Jean-François said...

@Nicolas: I'm not Martin, but I can say that his code compiles perfectly, so your problem must be in your configuration. From the error, I'm pretty sure you are missing a .lib in your linker input. Be aware that there is the C++ library include path, where you put the directory containing all the .lib files, and there is the linker input, where you need to list the libs that you are actually using (for example: opencv_core231d.lib).

Hope this help!

Anonymous said...

Hi Martin, thanks for your answers and bravo for your IEEE Cyber!

I have still some questions regarding the 3D point cloud:

1. I have observed in your video that we need to zoom out and flip the scene at the beginning. How can we directly visualize the scene correctly? (It's not a priority point...)

2. When I test the 3D reconstruction by using your data, the scene is reconstructed but I see:

i) a conical effect with a rectangular base that grows (profile view).
ii) the scene seems more or less composed of a set of fronto-parallel planes.

I have not seen these effects in your video, but I think they are normal and that you also have them. Please confirm.


Anonymous said...

3) Are these effects essentially due to the quality of the disparity map? When I test with a coarse disparity map, the whole perspective cone is composed of fronto-parallel planes. Please confirm.

4) I have tested the 3D reconstruction using your dataset and the scene is coherent. However, when I test with my dataset (an image with me in the foreground and a distant building in the background), I have a display problem. The foreground (me) is fine, but the near and far background appear behind the point of view and reversed. So part of the scene is in the correct sense and part is misplaced and reversed. It is more or less the same effect that is often used to illustrate the pin-hole camera principle. How can I fix it?

Guys, If you have an idea by reading these comments, thanks to post it.

Thank you Martin for your assistance and good luck for your presentation at Cyber.


Anonymous said...


Can I use the source code for a project? I will modify it, but it will be quite similar.


Anonymous said...

Hi Martin,
Good job, it is really nice. Nevertheless, I'm trying to improve your project by taking more than 2 pictures. Have you got any idea how to make a disparity image from several pictures?
Thanks a lot

Martin Peris said...

Hi everybody! Here I am with another batch of comment replies, sorry for the delay.

@Jean-François thanks for your comment and for giving a hand replying other people's doubts, I really appreciate it :)

Regarding 3D reconstruction from fisheye lenses, I have no experience with it; I have only flattened a fisheye image using Bresenham's algorithm. I am guessing that the distortion introduced by the lens would be too heavy for this kind of 3D reconstruction method, but the best way to find out is just to try it ;) Also, sorry, I have no experience with SfM :S

@Ankur, actually... now that you mention it, it might make sense, but I should check the math. Sorry, I didn't have the time to do it, but it might be a good exercise for you ;) Regarding the scale grid of PCL viewer, I haven't used it... maybe you can get more information in PCL's user forum.

@Karim, hi! thanks for your support ^^ About your questions:

1. You can programmatically set the initial point of view of the camera; you can visit PCL's documentation to find out how to do it.

2. This effect is normal, as the disparity map is not perfect. Also, the fronto-parallel planes that you see are due to the fact that the disparity levels are discrete, so each plane corresponds to a different disparity level (you can appreciate that for distant objects those planes are further and further apart; the explanation is here).

3. Your suspicions are correct :)

4. There was a bug in my code, did you download the latest version? Sorry about that :P

@Nicolas, Of course you can use the code in your project! as long as you comply with the GNU-LGPL license ^^. Enjoy it!

@John, thanks for your comment! If you are interested in n-view geometry I would recommend the book "Multiple View Geometry" by Hartley and Zisserman.

Once again, thanks for reading!

Anonymous said...

Hi Martin,

Thank you very much for your answers.
I have another question. I would like to project my 3D points into the associated (rectified) left image. At this stage, the disparity is assumed unknown for each 3D point. How can I project these 3D points into my images using the calibration parameters from your sample?

- I have found this function in the OpenCV documentation:

void projectPoints(const Mat& objectPoints, const Mat& rvec, const Mat& tvec, const Mat& cameraMatrix, const Mat& distCoeffs, vector<Point2f>& imagePoints)

Is it the best way to do it? Are there other ways?

- If yes, what are the correct parameters that can be employed regarding the .xml files generated in the preceding calibration stage?

const Mat& rvec ?
const Mat& tvec ?
const Mat& cameraMatrix -> _M1
const Mat& distCoeffs -> _D1

Thanks for your assistance.
Regards, Karim

Anonymous said...

Hi Martin,

It seems to work by exploiting P1.xml.

Regards, Karim

Waqar Shahid said...

Dear Martin,

Thanks for solving my problem. I was having a problem compiling PCL and OpenCV with CMake.

waqar shahid

Nimantha said...

I'm a complete beginner in OpenCV. I want to reconstruct a 3D face using a laser line, but I don't know how to start. Can you please teach me how to do it using OpenCV?

rida said...

Hi Martin,

I have stored the M1, M2, T, mx1, my1, mx2, my2 matrices.

My question is: how can I get the baseline in mm, the focal length in mm, and the sensor element size? Could you please help me as soon as possible.

Peter Abeles said...

Hi! I did a similar demonstration, but using a different library. Thought you might find it interesting.


The applet contains the same application as what's in the video.

nowhere man said...

Hey Martin,

In case I use cvPerspectiveTransform to figure out the 3D location of the pixels of a pair of rectified stereo images, how do I know the locations of the pixels with respect to the camera in meters?

Martin Peris said...

Hi all! Sorry for taking so long to reply. I am very busy as usual.

@Waqar Shahid I am glad I could help you. Hey, I am in Thailand right now for a conference :) Very nice people and food you have here :D

@Nimantha hi! Maybe if you formulate more specific questions I could help you better. Anyway, is there any particular reason why you want to use a laser line? Why don't you use an RGB-D sensor (like Kinect)? That way you can get the whole face in only one shot.

@Rida You can extract the baseline from the T matrix. The focal length and other info can be extracted from the Q matrix. I suggest you take a look at the OpenCV documentation for more info.

@Peter Abeles Awesome! Thanks for sharing!!

@Nowhere-man. I am not sure if I followed you... you are using cvPerspectiveTransform to figure 3D location of the pixels?

azer89 said...

Hi, currently I am using a Fuji W3 stereo camera, which has a 75 mm wide baseline (distance between left and right), and I found that this makes the block matching result poor.
Do you know how to deal with this wide baseline?

rida said...

Hi Martin, I need urgent help.
I calibrated my stereo camera and the calibration error is something like 0.456.
My main concern is the disparity map. The problem is that closer objects appear darker than the farthest objects, which is not correct. The calibration is good enough. Could you please tell me what could be the reason for getting a wrong disparity map?

Anonymous said...

Hello Martin

I am Somnath from India. I am using the source code to render a 3D scene on Windows 7 32-bit. What I have done is generate the disparity map in OpenCV from 2 scenes and then generate the point cloud; after that I am trying to render the disparity map using this code. But I am getting an error on line


I am not getting how to solve this problem . Please help me .

Thanks and Regards

Link said...

Great Job!
It helps me a lot!

Thank you, keep continuing

agungwb said...

Hei martin, you did a very great job.

I want to ask you something about disparity images. I have never got a really good disparity image, which causes a really bad 3D reconstruction. Do you have any idea what I should do with my stereo cams in order to get a good disparity image, maybe regarding the distance between the cameras, or the BMState parameters? I really need your help.

Big Thanks.

jag2kn said...
This comment has been removed by the author.
jag2kn said...
This comment has been removed by the author.
jag2kn said...

Change from:

#include <opencv/cv.h>

to:

#include <opencv2/opencv.hpp>

Anonymous said...

Hi Martin!
this is about stereo calibration, but since there is no response over there I am posting here!
Can anyone help me with this? I did the stereo calibration, but the average error is 2.3. What could be the reason? I followed the same steps as in the stereo calibration post.

Anonymous said...

hi Martin,

I followed your instructions to build the software, but I got some errors when the object file was being linked against the PCL libs. I have installed PCL 1.2 as required by your software. I also tried PCL 1.4 and 1.5 but got the same errors. My Linux box is Ubuntu 10.04.

Could you please help me.

Scanning dependencies of target OpenCVReprojectImageToPointCloud
[100%] Building CXX object CMakeFiles/OpenCVReprojectImageToPointCloud.dir/opencv_reproject_image_pcl.cpp.o
Linking CXX executable OpenCVReprojectImageToPointCloud
/usr/lib/ undefined reference to `xnContextRegisterForShutdown'
/usr/lib/ undefined reference to `xnContextRelease'
/usr/lib/ undefined reference to `xnContextAddRef'
/usr/lib/ undefined reference to `xnNodeInfoGetRefHandle'
/usr/lib/ undefined reference to `xnContextUnregisterFromShutdown'
/usr/lib/ undefined reference to `xnGetRefContextFromNodeHandle'
/usr/lib/ undefined reference to `xnForceShutdown'
/usr/lib/ undefined reference to `xnFindExistingRefNodeByType'
collect2: ld returned 1 exit status
make[2]: *** [OpenCVReprojectImageToPointCloud] Error 1
make[1]: *** [CMakeFiles/OpenCVReprojectImageToPointCloud.dir/all] Error 2
make: *** [all] Error 2

rosered said...

Hi Martin, thanks for the code. I'm Shimiao from Singapore. I have a question on the OpenCV Q matrix. 3D points are computed from [X Y Z W]' = Q [x y d 1]'; which coordinate system are these points in? Are they in the rectified left camera coordinate system, or the original left camera coordinate system? Thanks.

berak said...

hi martin, no wonder you're getting strange results from reprojectImageTo3D(img_disparity, recons3D, Q, false, CV_32F);
you're writing that into a float*3 matrix, but you're accessing it as if it were 3 doubles in:

Anonymous said...

I am Pradeep, a newcomer to PCL and also weak in maths. Is it possible to learn programming in PCL? I love PCL and also OpenCV. How can I start my study? Please help me.
Thanks a lot in advance.

Animesh said...

I ran your program on my system. I get no errors, but the 3D viewer never opens after I run the program.
The output stops at "Press any key to continue.."
When I close the two windows titled "rgb-image", the 3D viewer suddenly opens with no point cloud in it and the message "could not read disparity image" appears!
What could be the problem?

vikram said...

Hi Martin,

Your blog is really great. I have read all your previous posts on stereo vision (calibration, matching, etc.), but now I am stuck at an error while installing the Point Cloud Library. Can you please help me resolve this?

root@vikram-ThinkPad-SL400:/home/vikram/Documents/stereo_camera_calibrate# sudo apt-get install libpcl-all
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies:
ksnapshot : Depends: kdebase-runtime but it is not going to be installed
libgtk2.0-0 : Breaks: gtk-sharp2 (< 2.12.10-2ubuntu2) but 2.12.0-2ubuntu3 is to be installed
libpcl-all : Depends: libpcl-1.6-all but it is not going to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

Anonymous said...

Hi Martin,

This is a great blog! Thanks for sharing!
I am very new to computer vision and OpenCV, so I have a stupid question. Is there a way to do 3D reconstruction without using stereo images/camera? Can I just use the same camera to take photos of the same object from two different angles?



hninohnmarmyint said...
This comment has been removed by the author.
hnin hnin said...
This comment has been removed by the author.
jose bravo said...

Great work!!! I'm working on something similar, so I ran your code, but I get an all-black 3D image.

Hope you can help me; I am using PCL 1.6.

jose bravo said...

Hello, it is me again. I solved the problem of showing the image. I am using your program with my own data, but I can't get a good enough disparity map to use in the project. I get a normal disparity map and then normalize it like this: cv::normalize(disp, vdisp, 0, 255, NORM_MINMAX, CV_8UC1);

But I get a lot of black areas where d=0, so the point cloud has a lot of empty spaces. How did you get yours? Your map looks very good compared with mine.

I really need this information, I have been stuck on this for a while. I will appreciate your help.

crown said...

I am new to PCL. I have already installed PCL and used CMake to generate the project files for Visual Studio 2010, but I do not know how to run the project. When I run "opencv_reproject_image_pcl.cpp" it gives the error "cannot find specified file".
Can you let me know how I should run the program to get the output after the CMake generation?

ace said...

Can you share the code to save the point clouds in a PCD file?

Suman Saha said...

How can it be done using a monocular camera?

hailukebede said...


Your blog is very helpful for everyone associated with the computer vision field. I'm just wondering what would need to be done to compute the same thing with a Kinect device. Hope to hear from you. Thanks

sputnik87 said...

To those who have tried running the code in MS Visual Studio 2010, I really need help running this code. I ran into some problems. Can someone please kindly guide me in running the code from scratch.

VM said...

How can I run this code using Qt Creator? I ran into some problems. Can someone please kindly guide me in running the code from scratch.

Abdelrahman Elbakry said...

I have downloaded your code. The CMake generation is OK, but whenever I try to run the make command I get these errors:
[100%] Building CXX object CMakeFiles/epi.dir/epi.cpp.o
/bin/sh: -c: line 0: syntax error near unexpected token `('
/bin/sh: -c: line 0: `/usr/bin/g++ -DEIGEN_USE_NEW_STDVECTOR -DEIGEN_YES_I_KNOW_SPARSE_MODULE_IS_NOT_STABLE_YET -Dqh_QHpointer -DvtkFiltersStatistics_AUTOINIT="1(vtkFiltersStatisticsGnuR)" -DvtkRenderingCore_AUTOINIT="4(vtkInteractionStyle,vtkRenderingFreeType,vtkRenderingFreeTypeOpenGL,vtkRenderingOpenGL)" -DvtkRenderingVolume_AUTOINIT="1(vtkRenderingVolumeOpenGL)" -Wno-deprecated -I/usr/include/vtk -I/usr/include/freetype2 -I/usr/include/python2.7 -I/usr/include/libxml2 -I/usr/include/pcl-1.7 -I/usr/include/eigen3 -INOTFOUND -I/usr/include/ni -I/usr/include/qhull -I/opt/local/include/opencv -I/opt/local/include -DDISABLE_LIBUSB-1.0 vtkFiltersStatistics_AUTOINIT=1(vtkFiltersStatisticsGnuR) vtkRenderingCore_AUTOINIT=4(vtkInteractionStyle,vtkRenderingFreeType,vtkRenderingFreeTypeOpenGL,vtkRenderingOpenGL) vtkRenderingVolume_AUTOINIT=1(vtkRenderingVolumeOpenGL) -march=native -msse4.2 -mfpmath=sse -Wno-invalid-offsetof -o CMakeFiles/epi.dir/epi.cpp.o -c /home/steep/ahmedmoh/Desktop/Epipolar_Geometry_Estimation/epi.cpp'
make[2]: *** [CMakeFiles/epi.dir/epi.cpp.o] Error 1
make[1]: *** [CMakeFiles/epi.dir/all] Error 2
make: *** [all] Error 2

I can't really figure out what they are. My OS is Fedora 20.

Tho Cao said...

Hi!
I am trying to build your code on Windows with some modifications, but the build is not successful. Here is the error:

example.obj : error LNK2019: unresolved external symbol "public: __thiscall Camera::Camera(char const *,int,int,int)" (??0Camera@@QAE@PBDHHH@Z) referenced in function _main

Please help me!

Pratul Singh said...

Hi Martin,

Awesome work there and thanks for making it available to all of us.

I had a problem after I reached the point of viewing the 3D viewer window: my 3D image is completely distorted. See the link:

I do not see any of the axes or the cone which people generally arrive at, and I don't know the reason. It would be great if you could let me know why that is happening. Thanks again.

Jan Kučera said...

In case anyone is interested, I used Martin's code to generate a 3D point cloud from Karlsruhe dataset stereo images:

Dipesh Suwal said...

Hello Martin, first of all many thanks for your code and description. I was able to get the depth map and then the point cloud using PCL. I am using my own UAV-captured images, but due to the lack of camera calibration parameters I assumed a frontal-parallel configuration and calculated the mean depth, but I always got wrong depth information. So, to test my code, I need some sample images with calibration parameters. Could you please provide me with the required datasets?

Sumit M said...

Hi Martin,
Awesome work and thanks for sharing your code... a great help :) I am facing some problems generating a point cloud. I get a point cloud, but it does not look like yours at all; it just shows layered images as a point cloud for a single image.
Here is the link for Pointcloud with zoomed in.
any help will be greatly appreciated.
Best regards,

Abid Hasan said...
This comment has been removed by the author.
Abid Hasan said...
This comment has been removed by the author.
Abid Hasan said...

Hi Martin!
Thanks a lot for your code!
I am running this on Windows. After building the code successfully, I run it with the following command line in the command prompt (cmd):
C:.....>OpenCVReprojectImageToPointCloud.exe rgb-image.ppm disparity-image.pgm Q.xml

And I receive the following error:
Error:Could not read matrix Q (doesn't exist or size is not 4*4)

Can you tell me what I am doing wrong?
Thanks in Advance!

Unknown said...

Hi, the code worked for me, but I have a question about the point cloud reconstruction part of the code:

double pw = -1.0 * static_cast<double>(d) * Q32 + Q33;
px = static_cast<double>(j) + Q03;
py = static_cast<double>(i) + Q13;
pz = Q23;

Where does the -1.0 multiplying static_cast<double>(d) * Q32 come from? I did the matrix multiplication by hand, and I could not find where it comes from or what it does. I know that without it it does not work well, but I don't know why. Can you answer me? Thanks.

정현조 said...

I have a question about extracting 3D points with your method.
If my chessboard square size is 3.5cm and the 3D Z point is 7.xxxxx,
then how do I calculate the real-world Z point?

sieubebuvietnam\ said...

Hi Martin

I followed your guides. However, when I ran the "make" command, I got the errors below:

/usr/lib/ undefined reference to `QMapData::createData(int)'
/usr/lib/ undefined reference to `QMapData::node_create(QMapData::Node**, int, int)'

Did you see it before?

Could you give me some advice to fix above errors?

Best regards

Nguyen Huy Hung

Sergio Feo said...

Hi Martin, sorry for commenting on such an old post, you may have an answer to my question though... How can one do the opposite: given a point cloud and a camera matrix, position and rotation, obtain a depth image. Are there functions in either PCL or OpenCV to achieve that?


Post a Comment