The UltraMap 3.0 Beta season is almost upon us, and it promises a significant new development in mainstream airborne photogrammetry: pervasive availability of 3D point clouds with attached colour information, without requiring a separate sensor. Others have developed similar technologies as well, independently of camera systems. The hardware requirements for the UltraMap Beta point towards the processing technologies underneath – two Nvidia Tesla C2075 cards, for use with dense matching.

The typical dense matching pipeline makes heavy use of newer programmable GPUs – SiftGPU, and GPU implementations of SURF, FAST and ORB – as well as trivially parallel operations such as epipolar geometry calculation. This is a departure from traditional correlation-based image matching and an improvement over corner-based matching, with more robustness to scale, rotation and intensity variations. However, current tie-point generation systems do not have the flexibility to integrate existing information about the terrain and the camera's intrinsic and extrinsic parameters to limit the number of keypoints generated per pair. These constraints can be built in if the system is tied to a particular camera model, as UltraMap happens to be.
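
As a rough illustration of this style of feature matching (not UltraMap's internals), here is a minimal OpenCV sketch that detects ORB keypoints in two overlapping frames and then uses a RANSAC-estimated fundamental matrix as an epipolar filter on the raw matches; the file names and thresholds are placeholder assumptions.

```python
import cv2
import numpy as np

# Two overlapping aerial frames (hypothetical file names).
img1 = cv2.imread("strip1_frame042.tif", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("strip1_frame043.tif", cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and binary descriptors in each frame.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching suits ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# Enforce the epipolar constraint: keep only matches consistent
# with a RANSAC-estimated fundamental matrix.
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0)
tie_points = [m for m, ok in zip(matches, inlier_mask.ravel()) if ok]
print(f"{len(tie_points)} tie points survive the epipolar filter")
```

A system with known camera parameters could replace the RANSAC estimate with a fundamental matrix derived directly from the exterior orientation, which is exactly the kind of constraint a camera-specific pipeline can exploit.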

After the match points are generated, we need to perform a sparse bundle adjustment. Again, the options here are varied, and your mileage may vary with the rigour of the bundle adjustment algorithm, the number of match points available, and even whether you run the adjustment on CPU or GPU (due to the inherent numerical noise). In the open-source arena there is the classic SBA from Lourakis, SSBA from Zach, the multicore bundle adjustment from Changchang Wu, and the recently released Ceres from Google – and that is before considering the options available from commercial sources.
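
For a feel of what all of these packages minimise, here is a toy reprojection-error residual wired into scipy.optimize.least_squares. It is a didactic sketch with an assumed 7-parameter camera (Rodrigues rotation, translation, focal length), nothing like the rigour or scale of SBA, SSBA or Ceres.

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs):
    """Pinhole reprojection error for a toy bundle adjustment.

    params packs, per camera: 3 rotation (Rodrigues), 3 translation,
    1 focal length; then 3 coordinates per 3D point.
    """
    cams = params[:n_cams * 7].reshape(n_cams, 7)
    pts = params[n_cams * 7:].reshape(n_pts, 3)

    residuals = np.empty((len(obs), 2))
    for i, (c, p) in enumerate(zip(cam_idx, pt_idx)):
        rvec, tvec, f = cams[c, :3], cams[c, 3:6], cams[c, 6]
        # Rodrigues rotation of the point into the camera frame.
        theta = np.linalg.norm(rvec)
        if theta > 1e-12:
            k = rvec / theta
            pr = (pts[p] * np.cos(theta)
                  + np.cross(k, pts[p]) * np.sin(theta)
                  + k * np.dot(k, pts[p]) * (1 - np.cos(theta)))
        else:
            pr = pts[p]
        pc = pr + tvec
        proj = f * pc[:2] / pc[2]  # simple pinhole projection
        residuals[i] = proj - obs[i]
    return residuals.ravel()

# x0 stacks initial camera and point estimates from the matching stage;
# for realistic problem sizes a sparse Jacobian structure is essential.
# result = least_squares(reprojection_residuals, x0, method="trf",
#                        args=(n_cams, n_pts, cam_idx, pt_idx, obs))
```

The dedicated packages exist precisely because this dense least-squares formulation does not scale: they exploit the sparse block structure of the Jacobian, which a naive solver ignores.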

Beyond the matching and initial bundle adjustment of a large set of images comes the patch-based dense matching of the images at full resolution. For multi-view stereo, CMVS/PMVS have by now become the de facto tools in research. This stage is typically memory-hungry but trivially parallel, and can be performed using multi-threaded shared-memory processing or even distributed-memory MPI-based processing. PMVS performs another round of feature detection using Difference of Gaussians and Harris corners. Static background features such as clouds can, unless detected, add noisy artefacts in otherwise empty space, especially on bright days with patchy cloud; shadows are similarly propagated to the reconstructed point cloud. Hence the best data acquisition is performed on days with diffuse, omni-directional lighting.
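
The sketch below shows, purely illustratively, the two detectors this stage leans on: a Difference of Gaussians computed from two blurred copies of an image, and the Harris corner response. The thresholds and sigmas are arbitrary assumed values, not PMVS defaults.

```python
import cv2
import numpy as np

# Hypothetical aerial tile; cornerHarris wants float32 input.
img = cv2.imread("aerial_tile.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Difference of Gaussians: subtract two copies blurred at nearby
# scales; blob-like features show up as strong extrema.
dog = (cv2.GaussianBlur(img, (0, 0), 1.0)
       - cv2.GaussianBlur(img, (0, 0), 1.6))
dog_features = np.abs(dog) > 10.0  # assumed threshold

# Harris corner response; corners complement the blob detector.
harris = cv2.cornerHarris(img, blockSize=2, ksize=3, k=0.04)
corner_features = harris > 0.01 * harris.max()

print(dog_features.sum(), "DoG candidates,",
      corner_features.sum(), "Harris candidates")
```

A cloud or a hard shadow edge produces perfectly valid responses from both detectors, which is why such features end up as geometry in the reconstruction unless they are masked out beforehand.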

Once the point cloud is generated, the next stage usually involves removing spurious points, meshing and texturing. Due to the high density and large geometric extent of the point clouds, scalable processing is required at this stage as well. Meshing can be performed using Poisson techniques for closed objects, or using greedy projection and Delaunay triangulation for open surfaces.
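
As a hedged example of the cleanup-and-mesh step, here is a sketch using the Open3D library (one of several possible toolkits, not necessarily what a production pipeline would use): statistical outlier removal, normal estimation, and Poisson reconstruction. File names and all parameter values are assumptions.

```python
import open3d as o3d

# Load the dense cloud produced by the MVS stage (hypothetical file).
pcd = o3d.io.read_point_cloud("dense_cloud.ply")

# Drop spurious points that sit far from their local neighbourhood.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Poisson reconstruction needs oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=1.0, max_nn=30))

# Poisson surface reconstruction, suited to closed objects.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("mesh.ply", mesh)
```

For open terrain surfaces, where the watertight assumption behind Poisson breaks down, a greedy projection or 2.5D Delaunay triangulation is the more natural fit.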