Optionally exporting the camera positions as part of the model

Mar 6, 2010 at 2:04 AM
Edited Mar 10, 2010 at 8:55 AM

Hello all,

Since Photosynth's release, I have become increasingly interested in the pointcloud side of synthing... far beyond the objective of sharing photos, for the time being. That is likely to balance out again when we see synth linking really take off, but I think I'm hooked on creating a solid pointcloud, regardless.

This exporter is a wonderful tool, but we're only collecting Photosynth's sparse reconstruction when we use it. In reality, an exporting tool such as this one is on the verge of providing us with something far more revolutionary - namely, the necessary input to generate our own dense pointclouds, which form a far superior basis for creating meshes.

For the purpose of this conversation, please watch the following lecture from May 22, 2009: Cyberspace Arriving (Cal IT2), and also the abbreviated version from June 12, 2009: Cyberspace Arriving (TEDx Dublin).

The dense reconstruction of Kelvingrove Art Gallery seen in the video is a good leap beyond the current pointcloud and, from what Blaise says, requires only knowledge of the photos' positions (which the current synther has already solved for) to leverage existing stereo vision algorithms and generate the second generation of pointclouds.

For this reason, I'm interested in an option to export the coordinates of the cameras in a synth's reconstruction, along with properties such as focal length and field of view.

> Beyond that, all that remains is finding good stereo vision algorithms to leverage as well as writing an app that allows you to download the full resolution images in a given synth to tie to each camera position, if not permanently then at least as long as needed for generation of dense pointclouds.
> For your own synths, the .log file which lists the disk addresses of your input images as referred to within the synth (located by typing %temp%\photosynther into your Run command) could presumably be leveraged by your workflow, in tandem with the exported coordinates and vectors of the camera positions, to save yourself the trouble of downloading and piecing together DZIs before processing.

The Photosynth team has been talking about improving the visualization of the models generated from our photos for some time now, but it would be excellent to begin pursuing a way to press ahead ourselves, rather than waiting for them to implement their solution.

 

If anyone has suggestions for high-quality, performant, open-source stereo vision algorithms that could be used for this, please feel free to submit them here.

Also, if this entire workflow is accomplished, someone be sure to notify Tom Benedict from the Photosynth forums, as he's incredibly eager to try generating dense reconstructions from his own synths.

 

Further recommended viewing includes:
Multi-View Stereo for Community Photo Collections

Also, these take a different approach, mapping what are called 'patches' oriented to the mesh or pointcloud. I suspect that these 'patches' are likely some variation on, or selection of, the original image features, since a feature correctly identified in multiple photos is precisely what each point represents. These may not be achievable strictly by the steps described above, but they are interesting food for thought, nonetheless.
Dense Modelling
Towards Internet-scale Multi-view Stereo

P.S. (I'm not certain exactly how pertinent this link is to the method discussed in this conversation, but the comments of this synth appear to display the results of an improved synther. Be sure to remove the full stop from the end of Dan Hou's first link.)

Coordinator
Mar 6, 2010 at 5:45 PM

Exporting camera positions, focal lengths, etc. shouldn't be too difficult to implement. However, that data alone isn't of much use if there are no programs available that can take it as input and create these dense point clouds. The links you've mentioned seem to describe some suitable algorithms for this task, but I don't know of any programs that actually implement them. If someone can show me such a program that wants this camera data in a specific format, I will of course enhance the exporter with such functionality.

Apr 13, 2010 at 11:42 AM

Here, here: we have such a program that really wants these camera orientation parameters ;)

First, a short introduction:

I came across Photosynth Point Cloud Exporter a few days ago via a reference from Kean Walmsley's (Autodesk) "Through the Interface" blog. He created a nice AutoCAD plugin which imports the Photosynth point clouds into AutoCAD's native point cloud format.

We (kubit) are developing AutoCAD plugins for the surveying business, supporting different kinds of sensors - single measurements from total stations, huge point clouds from 3D laser scanners, and last but not least photos for doing 3D digital photogrammetry inside AutoCAD. Admittedly, we do not provide fully automatic extraction of a consistent 3D surface model from photos - having that would be like owning a gold mine ;) - but having 3D photogrammetry together with the powerful modeling tools of AutoCAD has its potential for a variety of applications (architecture, heritage, archaeology, ...).

And that's why we are very interested in having access to the Photosynth camera orientation parameters (interior and exterior).

Apr 17, 2010 at 5:59 PM

I'm unsure whether the link to this software was present when I began the thread and linked to the output video above, but I was looking at the page for Yasutaka Furukawa's CMVS and PMVS2 this morning, and I am interested to know whether synth data could be exported in the same format that Bundler outputs. At present I'm unclear on exactly what input either program needs, but I do grasp that they are designed to work with Bundler's output. I assume that the input images need to be stored locally, but I really don't understand how they go about whatever it is that they do.

This may be a tangent from what I initially began talking about above, as the output is the image features matched back into the model rather than denser pointclouds, but I'm interested nonetheless.

 

Coordinator
Apr 18, 2010 at 6:39 PM

The export of camera parameters as CSV files is planned. I haven't taken a look at the format that Bundler outputs yet, but it could probably be exported in that format as well.

Apr 22, 2010 at 3:57 PM

Hey Guys,

I have been testing some output for the camera parameters thanks to Christoph, but am falling short on understanding some of the data. I will keep you posted on my findings, but if anyone wants to collaborate, I would welcome it.

My goal is to get the camera stations into PhotoModeler and then be able to process and create dense point clouds or just model with the photos.

Regards,

Eugene


Coordinator
May 21, 2010 at 1:00 PM

This has been implemented in the latest release: SynthExport 1.1.0.

May 23, 2010 at 6:12 PM

Has anyone had any success taking the Photosynth camera parameters and migrating them into another package? I have been successful at the import, but when I compare the result to the solution from another photogrammetry package, it is way off.

May 29, 2010 at 1:47 AM

Thanks for the great tool!

However, I am having trouble recreating the camera orientation from the CSV file. It seemed straightforward at first, but the values always seem off...

What exactly is the format of these rotation values? They are obviously in radians, but are they yaw/pitch/roll? Is there a special sequence of axes the rotations have to be applied in?

This transformation doesn't seem to work (Processing):

translate(c.PositionX, c.PositionY, c.PositionZ); // move to the exported camera position
rotateZ(c.RotationZ); // apply the exported angles - but is this the right axis order?
rotateX(c.RotationX);
rotateY(c.RotationY);

all the best,

d

May 29, 2010 at 10:59 AM

didi_o,

I've written a JavaScript form to generate output (nearly) ready to go into the PMVS2 pipeline for dense point-cloud reconstruction. It generates list.txt and bundle.out file content, and then you run the Bundle2PMVS code on it. This includes calculation of the rotation matrix, which is what you need.

It's at http://blog.neonascent.net/archives/converting-photosynths-to-dense-point-clouds/

I'm still having a problem that the Photosynth-calculated focal lengths are COMPLETELY different from what is coming out of Bundler (the offline code base Photosynth is based on). Any advice, and I can fix up the script.

Kermit example images
focal length from EXIF data in photo: 5mm
Photosynth output from CSV: 1.05587
Bundler list.txt value: 660.8030
bundle.out value: 6.7483561379e+002

Another example - Phil reconstruction
focal length from EXIF data in photo: 6mm
Photosynth output from CSV: 1.0412
Bundler list.txt value: 2358.33333
bundle.out value: 2.3813042054e+003

This is completely messing up the radial undistort. It might be a function of the size of the CCD/CMOS sensor, but I'm not sure what the relationship is. Again, a little help and the conversion script will be working!

May 29, 2010 at 12:47 PM

Hi Josh,

I am pretty sure that Photosynth uses a ratio of the image sensor size, so simply multiply the length of the sensor (in my camera, it's 23.6mm) by the value you have and that should give you the calculated focal length.

I tested this on a fixed focal length lens and it seemed to work out.  It was not exact, but close.

I haven't seen your program, but I am interested to try...if you can, please email me at eliscio "at" hotmail.com.

Thanks,

Eugene

 

May 29, 2010 at 2:57 PM
Edited May 30, 2010 at 4:31 AM

Hi Eugene,

That definitely makes sense for the values I'm seeing: 5.27mm sensor / 5mm focal length = 1.054, very close to the PhotoSynth value of 1.05587.

I've checked, and the value for focal length used by Bundler is measured in pixels - the longest image side multiplied by that ratio - as confirmed by the Bundler output:

[Extracting exif tags from image ./p01.jpg]
  [Focal length = 6.000mm]
  [CCD width = 5.760mm]
  [Resolution = 1704 x 2264]
  [Focal length (pixels) = 2358.333]

The "code" I have is embedded in the form at the bottom of the post I quoted above:

http://blog.neonascent.net/archives/converting-photosynths-to-dense-point-clouds/

I've now updated it to ask for the longest-side-of-image resolution, so that it can calculate the correct focal length data for Bundler. For example, in the images above you'd enter 2264.
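In code, the conversion amounts to this (a C# sketch; the method name is mine, and the assumption is that the CSV value is the focal length expressed relative to the sensor size):

// Convert SynthExport's CSV focal value to Bundler's focal length in pixels
// by multiplying by the longest image side. Sanity check against the Phil
// example above: 1.0412 * 2264 = 2357.3, close to Bundler's 2358.333.
static double PhotosynthToBundlerFocal(double csvFocal, int longestSidePixels)
{
    return csvFocal * longestSidePixels;
}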
I'm just running PMVS2 now with these exported parameters to see what it comes up with, but it looks like something is still a bit funny. 

The rotation matrix calculator is done using the code from http://toolserver.org/~dschwen/tools/rotationmatrix.html

On closer inspection, the rotation is definitely not in degrees; it ranges from roughly -2 to +2, though sometimes a little over. Possibly it is in radians. Does anyone know?

If anyone has any helpful feedback, I am poised to modify the code for others (and myself!) to use. :)

Kind Regards,

Josh

Coordinator
May 30, 2010 at 4:05 PM

didi_o and JoshHarle,

The rotation angles are exported in radians; however, I am not fully sure they are correct. SynthExport converts them from quaternions using the algorithm described here.

Christoph

May 31, 2010 at 2:01 AM

Hi Christoph,

As I said, I'm interested in taking these camera parameters into PMVS2, which normally takes output from Bundler. Bundler is based on the same code as PhotoSynth, written by Noah Snavely, so the output should be fairly similar. From the Bundler documentation:

Each camera entry <cameraI> contains the estimated camera intrinsics and extrinsics, and has the form:


    <f> <k1> <k2>   [the focal length, followed by two radial distortion coeffs]
    <R>             [a 3x3 matrix representing the camera rotation]
    <t>             [a 3-vector describing the camera translation]
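For reference, writing an entry in that format is simple enough (a C# sketch of mine; WriteCameraEntry is just an illustrative name):

// Emit one <cameraI> entry in the bundle.out format quoted above.
static void WriteCameraEntry(System.IO.TextWriter w, double f, double k1, double k2,
                             double[,] R, double[] t)
{
    w.WriteLine("{0} {1} {2}", f, k1, k2);           // focal length + distortion
    for (int row = 0; row < 3; row++)                // 3x3 rotation matrix
        w.WriteLine("{0} {1} {2}", R[row, 0], R[row, 1], R[row, 2]);
    w.WriteLine("{0} {1} {2}", t[0], t[1], t[2]);    // translation vector
}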

So is the raw output similar to this in any way?  

Kind Regards,

Josh

May 31, 2010 at 3:36 AM

Here's some information from a contact..."the raw format is simply the x,y,z of a quaternion, and that w=sqrt(x^2+y^2+z^2)"

Eugene

Coordinator
May 31, 2010 at 11:22 AM
Edited May 31, 2010 at 11:25 AM

Josh,

Photosynth stores rotations as quaternions, but SynthExport converts them to three angles (Rx, Ry, Rz) during export. Bundler, on the other hand, takes rotation matrices as input.

I initially chose to convert the quaternions because I thought working with three angles would be easier. But it should be possible to convert them back to quaternions and then into matrices using theta_triple_to_quaternion and quaternion_to_rotation_matrix.

I could also change SynthExport to export matrices directly if that would be an acceptable solution for everyone. At least importing them into Bundler should be easier then.

Christoph

May 31, 2010 at 1:16 PM

Hi Josh.

Eugene just pointed me to this thread. To reiterate: the raw format of the rotations looks like the format used in the Doom 3 MD5 files. That is, they store a compressed quaternion, where w = sqrt(1 - x^2 - y^2 - z^2). Anecdotally, Doom used (x, y, z, -w) in their format for some reason; I'm not sure why.


May 31, 2010 at 2:08 PM
Edited May 31, 2010 at 2:34 PM

@Kurvi, thanks for that clarification.

@Christoph,

First, thanks for the great application. 

I've downloaded the source and got it to build quite happily. I've had a fair bit of C# experience, but my head is a bit fuzzy regarding 3D maths. I have talked to a colleague about the possibility of pulling the images from the PhotoSynth Deep Zoom index, to provide everything needed to reconstruct further. This is definitely beyond the scope of my little JavaScript form, so it makes sense to either extend or fork SynthExport.

...But obviously I need to be able to get the output working with PMVS2 before anything else makes sense. Could you give me a hand with the code to get at the rotation matrices? I can offer at least pulling image names and resolutions out of PhotoSynth, and an option for generating output files. :)

Kind Regards,

Josh

May 31, 2010 at 2:50 PM

I am hoping we are going to finally kill this thing with some brainpower!

If this ends up working I want to write an article about it in Forensic Magazine (after some testing of course).

Thanks,

Eugene


May 31, 2010 at 3:18 PM
Edited Jun 1, 2010 at 2:28 PM

Eugene,

I am hoping so too. It looks like it is certainly doable. In the meantime, send me one of your image sets and I will do a provisional reconstruction just using the offline tools (without PhotoSynth). It may give you an indication of what the results could look like.


Kind Regards,

Joshua Harle
School of Design COFA / Faculty of the Built Environment, UNSW
http://tacticalspace.org
ph: +61 (0)409 771 163

 

Coordinator
May 31, 2010 at 3:45 PM

Josh,

You'll need to adjust the CameraRotation struct and the ExportAsCsv method of CameraParameterList. Both are in CameraParameters.cs.

Untested code:

public struct CameraRotation
{
    public double[,] Matrix;

    // Rebuilds the full quaternion from the stored (x, y, z) - Photosynth drops
    // w, which can be recovered as sqrt(1 - x^2 - y^2 - z^2) for a normalized
    // quaternion - and converts it to a 3x3 rotation matrix.
    public static CameraRotation FromNormalizedQuaternion(double x, double y, double z)
    {
        if (x == 0 && y == 0 && z == 0)
        {
            // (0, 0, 0) implies w = 1, i.e. no rotation, so return the
            // identity matrix rather than an all-zero (non-rotation) matrix.
            double[,] identity = new double[3, 3];
            identity[0, 0] = identity[1, 1] = identity[2, 2] = 1;
            return new CameraRotation() { Matrix = identity };
        }

        double w = Math.Sqrt(1 - x * x - y * y - z * z);

        // Standard quaternion-to-matrix expansion; depending on whether you
        // treat vectors as rows or columns, the transpose may be what you need.
        double[,] matrix = new double[3, 3];

        matrix[0, 0] = 1 - 2 * y * y - 2 * z * z;
        matrix[0, 1] = 2 * x * y + 2 * z * w;
        matrix[0, 2] = 2 * x * z - 2 * y * w;
        matrix[1, 0] = 2 * x * y - 2 * z * w;
        matrix[1, 1] = 1 - 2 * x * x - 2 * z * z;
        matrix[1, 2] = 2 * y * z + 2 * x * w;
        matrix[2, 0] = 2 * x * z + 2 * y * w;
        matrix[2, 1] = 2 * y * z - 2 * x * w;
        matrix[2, 2] = 1 - 2 * x * x - 2 * y * y;

        return new CameraRotation() { Matrix = matrix };
    }
}
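A quick hypothetical usage (quaternion values invented for illustration):

CameraRotation rotation = CameraRotation.FromNormalizedQuaternion(0.1, -0.2, 0.05);
// rotation.Matrix now holds the 3x3 rotation matrix for that camera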

Christoph

May 31, 2010 at 5:20 PM
Edited May 31, 2010 at 5:21 PM

Hey Christoph.

The code snippet seems good, but I would add an extra sanity check before computing w:

if (x * x + y * y + z * z >= 1.0)
    w = 0.0;

Otherwise, floating-point rounding can get you the square root of a negative number in extreme cases.

 

Jun 1, 2010 at 5:33 PM

Hi Christoph/Kurvitasch,

I have created a couple of 3D Studio Max scripts for visualising the supposed camera positions. The PhotoSynth positions look perfect, but must be in a different format from Bundler's, as the Bundler output looks totally wrong. I've posted some screenshots at http://blog.neonascent.net/archives/converting-photosynths-to-dense-point-clouds/

As I say, my grasp of matrix operations is not great, so I can't figure out how the rotations relate to possible format differences. Maybe you guys can?

Kind Regards,

Josh

Jun 2, 2010 at 12:06 AM

Hi Josh

I'm not quite sure what I'm seeing in the photos, so it's hard for me to tell you what's wrong. If you did a small number of cameras (say 3-4) and sent me the actual values (both Photosynth and Bundler), I might be able to figure out what's going on.

 

That said, in my experience, the first thing to try when matrix operations go bad is to make sure you have them chained in the correct order. Doing the translation before the rotation can be very different from doing the rotation first. Also, checking whether the matrix is column-major or row-major could be useful.

 

Jun 2, 2010 at 6:28 AM
Edited Jun 2, 2010 at 9:42 AM

Hi Kurvitasch,

I've updated the posting with what I have now, and the output matrices.

The reason I was looking at the visualisation is that PhotoSynth and Bundler are going to produce slightly different points, so you can't look purely at the numbers and their correspondence. I have tried experimenting with column-major versus row-major order, and with which operation happens first. I found that the camera positions look sane if I use one matrix order with translation first, or the other matrix order with rotation first. I'm not sure if this is pure coincidence or something to be expected. (It may well be expected: transposing the matrices and reversing the order of operations describe the same overall transform.)

At the moment I am getting the cameras aligned the same way as the PhotoSynth output (visualised as facing out), but either the whole setup is mirrored, or the cameras face the opposite direction while correctly placed. They are also on a different axis.

Kind Regards,

Josh

Jun 2, 2010 at 6:25 PM
Great thread, thanks everyone! Something to think about... Best, dietmar
Jun 3, 2010 at 2:04 PM
Edited Jun 3, 2010 at 2:08 PM

Hi All,

So I have heard from Noah Snavely, the original author of the PhotoSynth code, and he pointed out the method of finding cameras' 3D locations from the Bundler documentation:

"One thing to note (if you haven’t already) is that the camera “translation” in the bundle file is not the same as the camera position. The position is -R^T * t, where t is the translation. Could that be the problem? Alternatively, is the coordinate system somehow to blame? I assume that in the camera’s coordinate system, the camera is looking down the negative z-axis, with the camera up vector pointing in the positive-y direction."

Following these instructions, I get camera positions that look sane, but that are:

1) rotated 90 degrees around the x axis, so that y and z appear swapped, and

2) mirrored along the x axis, so what should be on the left (in the .PLY output) is on the right, and vice versa.

If anyone has good advice on this, I'd love to hear it. In the meantime I'm going to concentrate on pulling images back out of PhotoSynth's Deep Zoom index.

Kind Regards,

Josh

Jun 3, 2010 at 2:44 PM

Josh,

This sounds very familiar; although I was comparing the Photosynth output against PhotoModeler camera positions, I always had an issue with rotations. I could try to flip the axes and "make them work", but then there would be a position issue.

My expectation is that if the solutions are the same (or similar) between camera positions, the only major difference would be in the chosen scale.

Keep at it!...I am watching this thread closely.

Eugene

Jun 12, 2010 at 12:41 PM

Hello,

I am still getting inconsistent results; possibly there are some hidden scaling vectors as well?

I am wondering if anyone has figured out the contents of the 0.json file completely. For example, there is a "p" vector triple in the camera parameters - has anyone found out what this represents?

best,

dietmar

Jun 12, 2010 at 1:08 PM

Hi Dietmar,

What sort of inconsistencies have you been getting? I've not had any problems with the axes or the arrangement of cameras, although I haven't been paying much attention to the scale.

Kind Regards,

Josh

Coordinator
Jun 12, 2010 at 1:22 PM
didi_o wrote:

I am wondering if anyone has figured out the contents of the 0.json file completely. For example, there is a "p" vector triple in the camera parameters - has anyone found out what this represents?

"p" is a vector representing the "dominant plane" of the camera. I am not sure what that is exactly.

Jun 12, 2010 at 3:37 PM

Thanks christoph,

So it sounds like the up-vector of the camera - that would make sense.

Josh, I don't know; it kind of looks reasonable, but the views never match the photos - the angles are off in weird ways. So I was not sure whether I missed something, or whether it is just an issue with Photosynth's camera estimation...

 

dietmar

 

Jun 22, 2010 at 4:30 PM

Just wondering if anyone has had any success at all getting the camera positions and orientations to work?

Jul 23, 2010 at 1:38 AM
eliscio wrote:

Just wondering if anyone has had any success at all getting the camera positions and orientations to work?

Well, it's one month later and it seems the camera positions are not as simple as they would seem. Please post if there has been any further progress.

Aug 21, 2010 at 6:00 PM

Hi All,

Henri Astre of the Visual Experiments blog has done some great work recently. Not only has he made a GPU feature matcher for Bundler which significantly speeds up matching, but he's also just come out with a PhotoSynth Toolkit that prepares PhotoSynths for PMVS2. I haven't tried the PhotoSynth Toolkit yet, but the gpuSIFT works great, so I'm sure the toolkit does too.

Well done Astre!

 

Aug 21, 2010 at 6:41 PM
Edited Aug 24, 2010 at 11:51 AM

Awesome news, Josh! This is almost exactly what I was after. The timing is great too, given that yesterday was the second anniversary of Photosynth's public release.

I'm off to test the toolkit, then.

Cheers, Henri!

Aug 22, 2010 at 1:52 PM

I just tried this out, and for one synth that I did, I was expecting to see a denser result. Can someone give some explanation of the options and how they affect the result?

Aug 22, 2010 at 2:43 PM
Edited Aug 24, 2010 at 11:52 AM

@eliscio, what resolution are the images that you used? If you used the thumbs, then the results won't be very dense at all.

Henri is reluctant to release the tool to download the full resolution images, lest it land him in legal trouble, but if you're doing this with your own synths, you should still have the original photos on your hard drive, yes?

 

I don't have very much experience with PMVS yet, but in the pmvs_options.txt file, the defaults should look like this: 

level 1 (Level 0 will set PMVS to use the original resolution of whatever version of the images you have placed in the distort folder. Higher levels use smaller and smaller copies of the photos as the number increases; you can use this to trade detail for memory if you're running short. Level 1 is the default because most cameras do not have RGB sensors for every pixel, so examining the original resolution might be viewed as a waste of time. The call is yours, though.)

csize 2 (Cell size determines the density at which PMVS will try to reconstruct patches. The bigger the cell size, the fewer cells per image.)

threshold 0.7 (I'm a little hazy on this one. I understand that it is the tipping point which determines whether a patch will be accepted or not, but you'll just have to read the PMVS documentation yourself on this one.)

wsize 7 (Determines how large a window of imagery from the input photos each patch will be matched against. Larger windows result in more stable reconstructions but drive the reconstruction time up.)

minImageNum 3 (Determines the number of photos a point must appear in to be kept in the reconstruction. Photosynth's default is 3. See: "Rule of three".)

CPU 4 (Determines the number of threads or 'virtual CPUs' that will be spun up. If you have a single processor that supports multi-threading, then choose 2. If you have a multi-core CPU where each core supports multi-threading, then do the multiplication and fill in the number here. It is suggested, however, that if you are running out of memory you might want to lower this number.)

setEdge 0 ()

useBound 0 ()

useVisData 0 ()

sequence -1 ()

timages Must specify (Target images. The default numbers will change per dataset. These are the images that PMVS will attempt to cover in reconstructed points.)

oimages Must specify (Other images. These will be used to verify the accuracy of the reconstruction, but need not be covered in reconstructed points.)

 

To me, the big hitters seem to be using the original resolution photos, choosing a low Level, and keeping the Cell Size small.

If anyone else with more experience can help clarify things, please feel free. I'm still very much a newbie at this.

Aug 22, 2010 at 3:40 PM

I have had very nice results using a level of 0, csize of 1, and threshold of 0.9.

I would suggest using CMVS first to split your images into smaller chunks and avoid running out of memory. CMVS also creates a vis.dat file that speeds up PMVS2.

 

Aug 22, 2010 at 3:47 PM

Thanks for the tips, guys. I was using the downloaded "thumbs", so that's the first reason, and the second is that I used the default images. I am just working on another synth for which I have the high-res images. I am going to crank up the settings to see what this gives me.

As far as output goes, I saw there was a .ply file, but what are the other two files for?

Cheers,

Eugene

Aug 22, 2010 at 4:18 PM

@Eugene: the .patch file is not easily usable - I believe it is mainly for debugging. The .pset files contain the vertices with normals, in a format ready to reconstruct with Michael (Misha) Kazhdan and Matthew Bolitho's Poisson mesh reconstruction implementation.

 

See the PMVS2 documentation for other output details.
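If you use the stand-alone implementation, the invocation is along these lines (the file names here are placeholders, and the exact options depend on your version of the PoissonRecon tool):

PoissonRecon --in model.pset --out model.ply --depth 10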

 

Kind Regards,

Josh

Aug 22, 2010 at 7:16 PM

I am still working at it, but my set is 62 images. I tried at full resolution and kept changing the settings, but I have just been spinning my wheels with crashes. I have now reduced the file sizes from 3MB each to about 150KB. I am not sure if this is still over the top, but I just had another crash. So, I am going to try increasing the level and messing with the CPU setting. Perhaps this will help.

I have downloaded CMVS, but could use some help as to how to run it.

Eugene

Aug 22, 2010 at 8:42 PM

Wow, I finally got it and what a great result!

I had to reduce my set down to 22 images and make them 500KB instead of 3MB each. The settings were as follows:

level 2
csize 2
threshold 0.7
wsize 7
minImageNum 3
CPU 8
setEdge 0
useBound 0
useVisData 0
sequence -1
timages -1 0 22
oimages -3

Level 1 did not work, but even at level 2 it still took a good half hour to process everything. However, the end result was impressive and really clean of any stray points or noise. I will definitely need to continue testing... Thanks!

Aug 22, 2010 at 9:28 PM

@Josh,

I was under the impression, from the screenshots linked to from Building Rome In A Day and the video from Towards Internet-Scale Multi-View Stereo, that oriented patches were an alternate or intermediate step between dense point clouds and meshes, and that these were what the pmvs_options.txt.patch file's coordinates described.

It is, of course, conceivable that the patches are really being laid on a transparent mesh, but this seems a little strange to me.

Aug 23, 2010 at 1:12 AM

@eliscio: CMVS will really help. Level 0, csize 1 was impossible for me (it led to out-of-memory crashes) without splitting everything up. CMVS is fairly easy to use: run CMVS with the number of images you want per chunk and the number of CPUs, then run genOption with your PMVS2 configuration options.

From the CMVS docs:

Usage: ./cmvs prefix maximage[=100] CPU[=4]

Example: If you want to specify maximage=70 and CPU=6 with the data directory located at ./pmvs, try the following command
./cmvs ./pmvs/ 70 6

and

Usage: ./genOption prefix level[=1] csize[=2] threshold[=0.7] wsize[=7] minImageNum[=3] CPU[=8]
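So, for example, to generate options using the settings I mentioned above (level 0, csize 1, threshold 0.9, with six CPUs) for the same ./pmvs/ data directory:

./genOption ./pmvs/ 0 1 0.9 7 3 6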

 

@Nate,

Everything that comes out of PMVS2 is oriented, patch-based data, but that .patch file in particular is (for me) an opaque format that I haven't used. I either reconstruct using the built-in Poisson reconstruction in MeshLab with the .PLYs, or with the stand-alone implementation using the .PSET. I'll post an example of the whole process this evening.

Aug 24, 2010 at 1:43 PM

I had a response back from Astre:

"CMVS can’t be used used with PhotoSynth because some information given by Bundler output are not available in the binary produced by PhotoSynth. I’m working on 2 others options:

  • - compiling a 64bit version of PMVS2 for windows
  • - create a new Bundler Matcher based on Surf to replace PhotoSynth"

So for now I will have to stick with small photo sets.


Sep 19, 2010 at 4:04 PM

Hi Guys,

I have a guide and tutorial video each for Astre's PhotoSynth Toolkit and a repackaged BundleMatcher (Bundler/CMVS/PMVS2). I wrote them for a university workshop I hosted; I hope you find them useful!

http://blog.neonascent.net/archives/photosynth-toolkit/

http://blog.neonascent.net/archives/bundler-photogrammetry-package/

Kind Regards,

Josh

Oct 3, 2010 at 11:43 PM

Josh,

A little late perhaps, but the videos are very well done and I am sure they have already been well received.

I have been using Astre's Toolkit on a few projects now, and I try to avoid the 32-bit memory issue by keeping the number of photos down to 25 or so and shooting at 2MP resolution so I don't have to convert anything.

I am traveling at the moment, but once I get back to my office I am going to test the 64-bit version of PMVS2.

Cheers,

Eugene

Nov 24, 2010 at 10:29 PM

Josh, I never got around to saying it either, but great work with the video tutorials.

 

For everyone else on this thread: Greg Downing from xRez Studios recently posted about how he used Photosynth's exported camera parameters to create a gigapixel image of an Egyptian artifact that could not have been captured otherwise. Give it a read; I reckon you'll enjoy it. He also provides the C# code he wrote to extract the camera positions.

You can also view his results on the xRez weblog.

Nov 25, 2010 at 4:38 AM

@Nate

Thanks - I'm glad they were helpful!  And Greg Downing's code looks like good fun too!

Kind Regards,

Josh

Dec 4, 2010 at 3:39 AM

Hi Everyone; 

I've modified SynthExport to pull camera positions and - more interestingly - create textured camera projection maps in 3DS Max.  This should make it fairly easy to repeat Greg Downing's results.

Check out http://blog.neonascent.net/archives/cameraexport-photosynth-to-camera-projection-in-3ds-max/ for the application, and videos of the process.

Kind Regards,

Josh Harle