LatLon Notebooks

I finally got around to checking my mail at work and found some LatLon notebooks! They're all island themed with topo maps on their covers. The one pictured is my favorite since it features Mars, and the elevation data came from CTX imagery processed by ASP. Yay!

Thank you, Aitor Garcia. I'm now the engineer with the best stationery at NASA ARC.

Thoughts on improving ASP Stereo

Ames Stereo Pipeline's (ASP) current integer correlator leaves a bit to be desired. It does poorly in scenes with aggressively changing slopes, and it is a coin flip whether it finishes in an hour or several days. So I've been researching a new correlator by reading, implementing, and applying to satellite imagery a select few of the top performers from the Middlebury stereo competition. I started this a long time ago with PatchMatch and never gave a good conclusion. Now I will summarize my experiences and give a short introduction to the current solution I'm pursuing.

Algorithm Shoot Out!

Semi-Global Matching: [1] This is a world-recognized, well-performing stereo algorithm; I don't need to sing its praises. The cons, in my opinion, are that it uses a lot of memory and that it is only applicable to 1-D searching. For ASP we like to have a 2-D search solution, or optical flow, to handle flaws in the user's input data and because some users have actually used us to create velocity maps. We might have been able to get around the inaccuracies in our users' data and the horrors of linescan cameras by calculating a local epipolar vector for each pixel after a bundle adjustment. But I believe we wouldn't catch the vertical CCD shifts and jitter seen in HiRISE and World View satellites. As for the memory problem, there are derivative SGM algorithms that address it, but I didn't evaluate them.

PatchMatch: [2] I really love the idea of starting with a uniform noise guess for the disparity and then propagating the lowest-cost scores to the neighbors. There were a couple of downsides to this algorithm for satellite processing. 1. The cost metric of absolute differences of intensities and gradients performed much worse than an NCC cost metric in the arctic. 2. The run time was horrible because each pixel evaluation didn't reuse the comparisons already computed by neighboring pixels. 3. Their slanted window needed to be adapted to support slants in the vertical direction as well as the horizontal for our optical flow demands. I couldn't find a formulation that would stop the algorithm from cheating by defining the window as 90 degrees from the image geometry. In other words, the PatchMatch algorithm kept discovering that the correlation score was minimal if the kernel was defined as having no area.

Despite all of this, a plain-jane implementation of PatchMatch using NCC and non-slanted windows performs the same as a brute-force dense evaluation of a template window across all disparity values. This also means that where brute-force search fails, so would PatchMatch. But for extremely large search ranges, PatchMatch might be worth its processing time. I will keep this in the back of my mind forever.
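For readers who want to experiment, here is a minimal sketch of that plain-jane variant: random initialization, neighbor propagation, and a shrinking random search, scored with NCC over square windows. This is illustrative Python rather than the ASP prototype, and the window size, search range, and iteration count are made-up parameters.

import numpy as np

def ncc_cost(left, right, x, y, dx, dy, k):
    # 1 - NCC between a k x k window at (x, y) in the left image and the
    # window offset by (dx, dy) in the right image. Lower is better.
    h = k // 2
    a = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)
    b = right[y + dy - h:y + dy + h + 1, x + dx - h:x + dx + h + 1].astype(np.float64)
    a = a - a.mean()
    b = b - b.mean()
    return 1.0 - (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9)

def patchmatch(left, right, search=32, iters=5, k=7):
    h, w = left.shape
    rng = np.random.default_rng(0)
    disp = rng.integers(-search, search + 1, size=(h, w, 2))   # uniform noise guess
    cost = np.full((h, w), np.inf)
    pad = k // 2 + search

    def score(y, x, d):
        return ncc_cost(left, right, x, y, int(d[0]), int(d[1]), k)

    for it in range(iters):
        # Alternate raster and reverse-raster scans so good guesses propagate.
        fwd = (it % 2 == 0)
        ys = range(pad, h - pad) if fwd else range(h - pad - 1, pad - 1, -1)
        xs = range(pad, w - pad) if fwd else range(w - pad - 1, pad - 1, -1)
        step = 1 if fwd else -1
        for y in ys:
            for x in xs:
                # Propagation: keep the best of our guess and the neighbors' guesses.
                for cand in (disp[y, x], disp[y, x - step], disp[y - step, x]):
                    c = score(y, x, cand)
                    if c < cost[y, x]:
                        cost[y, x], disp[y, x] = c, cand
                # Random search: sample a shrinking window around the current best.
                r = search
                while r >= 1:
                    cand = np.clip(disp[y, x] + rng.integers(-r, r + 1, size=2),
                                   -search, search)
                    c = score(y, x, cand)
                    if c < cost[y, x]:
                        cost[y, x], disp[y, x] = c, cand
                    r //= 2
    return disp, cost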

PatchMatch with Huber Regularization: [3] This is a neat idea built on top of Steinbruecker and Thomas Pock's "Large Displacement Optical Flow Computation without Warping" [4]. (Seriously though, Thomas Pock hit a gold mine with "let's apply a regularization term to everything in computer vision and show an improvement.") I eventually learned how to implement primal-dual convex optimization using Handa's guide [5]. I realize now that everything I needed to know is in Heise's paper [3], but it took me a long time to understand that. But I never implemented exactly what the paper described. They wanted a smoothness constraint applied to both the disparity and the normal vector used to define the correlation kernel. Since I couldn't define a slanted correlation kernel that worked in both horizontal and vertical directions as seen in PatchMatch, I just dropped this feature, meaning I only implemented a smoothness constraint on the disparity. Implementing this becomes a parameter-tuning hell. I could sometimes get this algorithm to produce a reasonable looking disparity. But if I ran it for a few more iterations, it would proceed to turn slopes into constant disparity values until it hit a color gradient in the input image. So it became a very difficult question for me: at what point in the iterations do I get a good result? How do I know if this pretty result is actually a valid measurement and not something the smoothness constraint glued together because it managed to outweigh the correlation metric?

In the image I provided above, you can see a slight clustering or stair-casing of the disparity as the smoothness constraint wants disparity values to match their neighbors. Also, random noise spikes would appear that neither the total variation (TV) term nor the data term would remove. They are stable minima. I wonder if a TV-L1 smoothness term would be better than TV-Huber.

As-Rigid-As-Possible Stereo under Second Order Smoothness Priors: [6] This paper repeats the idea seen in PatchMatch with Huber regularization of having a data term, a regularization term, and a theta that with increasing iterations forces the two terms to converge. What I thought was interesting here was their data term. Instead of matching templates between the images for each pixel, they break the image into quadratic surfaces and then refine those surfaces. This is incredibly fast to evaluate, even when using a derivative-free Nelder-Mead simplex algorithm, like several orders of magnitude faster. (A rough sketch of that refinement step is below.) Unfortunately this algorithm again has several cons. 1. They wanted to use the cost metric seen in PatchMatch, which again doesn't work for the satellite imagery of the arctic that I have been evaluating. 2. The data term is incredibly sensitive to its initial seed. If you can't find a surface that is close to the correct result, the Nelder-Mead algorithm will walk away. 3. This algorithm with a smoothness prior is again a parameter-tuning hell. I'm not sure that what I tune for my images will work equally well for planetary scientists and polar scientists.
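To make the quadratic-surface refinement idea concrete, here is a small sketch of what refining one patch could look like. This is not the paper's formulation: I use a plain SSD cost, horizontal disparity only, and a hypothetical six-coefficient quadratic over local patch coordinates.

import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import map_coordinates

def refine_quadratic_patch(left, right, y0, x0, size, coeffs0):
    # Refine the coefficients of a quadratic disparity surface
    # d(x, y) = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 over one patch,
    # using derivative-free Nelder-Mead on an SSD cost.
    ys, xs = np.mgrid[0:size, 0:size]
    lpatch = left[y0:y0 + size, x0:x0 + size].astype(np.float64)

    def cost(c):
        a, b, cc, d2, e, f = c
        disp = a + b * xs + cc * ys + d2 * xs**2 + e * xs * ys + f * ys**2
        # Sample the right image at the columns predicted by the surface.
        warped = map_coordinates(right, [y0 + ys, x0 + xs + disp],
                                 order=1, mode='nearest')
        return np.sum((lpatch - warped) ** 2)

    return minimize(cost, coeffs0, method='Nelder-Mead').x

As the paper warns, the quality of coeffs0 (the initial seed surface) decides whether this converges or wanders off.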

Fast Cost-Volume Filtering for Visual Correspondence and Beyond: [7] This is an improvement on the KAIST paper about adaptive support weights [8]. (Hooray KAIST! Send us more of your grad students.) They point out that what Yoon describes is actually a bilateral filter. They also recently read a paper about computing a fast approximation of the bilateral filter by using a 'guided' filter. In the end this is similar to a brute-force search, except now there is fancy per-pixel weighting for each kernel based on image color. This algorithm is easy to implement but fails to scale to larger search regions, just like brute-force search. Yes, it can be applied in a pyramidal fashion, but I think in the next section that I've hit on a better algorithm. I wouldn't count this algorithm out altogether though. I think it has benefit as a refinement algorithm for the disparity, specifically in cases of urban environments with hard disparity transitions.
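The whole method fits in a few lines, which is part of its appeal. Below is a grayscale sketch of the idea under my own simplifications: an absolute-difference cost volume instead of the paper's cost, a single-channel guided filter, and eps tuned for images scaled to [0, 1].

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-4):
    # He et al.'s guided filter: edge-preserving smoothing of src, with the
    # (grayscale) left image as the guide. A cheap stand-in for the bilateral
    # weights of the adaptive-support-weight paper.
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_ip = uniform_filter(guide * src, size) - mean_i * mean_p
    var_i = uniform_filter(guide * guide, size) - mean_i * mean_i
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def cost_volume_filtering(left, right, max_disp, radius=8, eps=1e-4):
    # Brute-force cost volume (plain absolute differences here for brevity),
    # each disparity slice smoothed with the guided filter, then winner-take-all.
    h, w = left.shape
    guide = left.astype(np.float64)
    volume = np.empty((max_disp, h, w))
    cols = np.arange(w)
    for d in range(max_disp):
        shifted = right[:, np.clip(cols - d, 0, w - 1)].astype(np.float64)
        volume[d] = guided_filter(guide, np.abs(guide - shifted), radius, eps)
    return np.argmin(volume, axis=0)

Note that the full cost volume still has to be built, which is exactly why it scales poorly with search range.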

What am I pursuing now?

Our users have long known that they could get better results in ASP by first map projecting their input imagery onto a prior DEM source like SRTM or MOLA. This reduces the search range. It also warps the input imagery so that, from the perspective of the correlator, the imagery doesn't have slopes anymore. The downside is that this requires a lot of work on the part of the user. They must run a bunch more commands and must also find a prior elevation source, which may or may not register correctly with their new satellite imagery.

My coworker Oleg hit upon the idea of instead using a lower resolution disparity, smoothing it, and then using that disparity to warp the right image to the left before running the final correlation. It's like map projecting, except without the maps, camera models, and pre-existing elevation source. I've been playing with it and made a pyramidal version of this idea. Each layer of the pyramid takes the previous disparity, smooths it, and then warps the right image to the left. Here is an example of a disparity produced with this method up against the current ASP correlator's result. I have a rough single-threaded prototype and am working on a parallel variant.

Looks pretty good, right? There are some blemishes still that I hope to correct. Surprisingly, the parallel implementation of this iterated warping correlator is 2x faster than our current pyramid correlator. Another surprising feature is that the runtime of this mapping algorithm is mostly constant regardless of image content. For consecutive pyramid levels, we'll always be searching a fixed square region, whereas the original ASP pyramid correlator needs to continually adapt to the terrain it sees. Once I finish tuning this new algorithm, I'll write another post on exactly why this is the case. There is also a bit of black art in smoothing the disparity that is used for remapping the right image.
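To give a feel for the idea, here is a rough sketch of what a single pyramid level could look like. This is not the ASP prototype: correlate stands in for any fixed-window correlator (the brute-force NCC search from earlier would do), and the 2x resolution step, Gaussian smoothing, and sigma value are my own assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates, zoom

def warp_right_to_left(right, disp_x, disp_y):
    # Resample the right image at (x + dx, y + dy) so it lines up with the left.
    h, w = right.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return map_coordinates(right, [ys + disp_y, xs + disp_x], order=1, mode='nearest')

def refine_level(left, right, prev_disp_x, prev_disp_y, correlate, sigma=3.0):
    # One pyramid level: upsample and smooth the coarser disparity, warp the
    # right image with it, correlate over a small fixed search window, and add
    # the residual disparity back onto the smoothed prior.
    factors = (left.shape[0] / prev_disp_x.shape[0],
               left.shape[1] / prev_disp_x.shape[1])
    dx = gaussian_filter(zoom(prev_disp_x, factors, order=1) * 2.0, sigma)
    dy = gaussian_filter(zoom(prev_disp_y, factors, order=1) * 2.0, sigma)
    warped = warp_right_to_left(right, dx, dy)
    res_x, res_y = correlate(left, warped)   # e.g. fixed +/- few pixel search
    return dx + res_x, dy + res_y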

Conclusion

I'm pretty excited again about finding a better correlator for ASP. I still have concerns about how this iterative mapping algorithm will handle occlusions. I also found out that our idea is not completely new. My friend Randy Sargent has been peddling this idea for a while [9]. He even implemented it for the Microscopic Imager (MI) on board the Mars Exploration Rovers. I didn't even know that software existed! But they used homography matrices, while our 'new' idea uses a continuous function. In the end, I hope some of you find my dive into stereo research papers useful. I learned about a lot of cool ideas. Unfortunately, very few of them scale to satellite imagery.

References

[1] Hirschmuller, Heiko. “Accurate and efficient stereo processing by semi-global matching and mutual information.” Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on. Vol. 2. IEEE, 2005.
[2] Bleyer, Michael, Christoph Rhemann, and Carsten Rother. “PatchMatch Stereo-Stereo Matching with Slanted Support Windows.” BMVC. Vol. 11. 2011.
[3] Heise, Philipp, et al. “PM-Huber: PatchMatch with Huber Regularization for Stereo Matching.” Computer Vision (ICCV), 2013 IEEE International Conference on. IEEE, 2013.
[4] Steinbrucker, Frank, Thomas Pock, and Daniel Cremers. “Large displacement optical flow computation without warping.” Computer Vision, 2009 IEEE 12th International Conference on. IEEE, 2009.
[5] Handa, Ankur, et al. Applications of Legendre-Fenchel transformation to computer vision problems. Vol. 45. Tech. Rep. DTR11-7, Department of Computing at Imperial College London, 2011.
[6] Zhang, Chi, et al. “As-Rigid-As-Possible Stereo under Second Order Smoothness Priors.” Computer Vision–ECCV 2014. Springer International Publishing, 2014. 112-126.
[7] Rhemann, Christoph, et al. “Fast cost-volume filtering for visual correspondence and beyond.” Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE, 2011.
[8] Yoon, Kuk-Jin, and In So Kweon. “Adaptive support-weight approach for correspondence search.” IEEE Transactions on Pattern Analysis and Machine Intelligence 28.4 (2006): 650-656.
[9] Sargent, Randy, et al. “The Ames MER microscopic imager toolkit.” Aerospace Conference, 2005 IEEE. IEEE, 2005.

Patch Match with Smoothness Constraints

I really wanted to implement Heise's 2013 paper about Patch Match with Huber regularization [1]. However, that was a little too much for me to bite off; I kept getting lost in the derivation of the Huber-ROF model. So instead I chose to implement something in between Heise's paper and Steinbrucker's 2009 paper about large displacement optical flow [2]. I shall call what I implemented the ghetto Patch Match with squared regularization. I'm fancy.

Idea

The idea behind this implementation is that we want to minimize the following equation over the image omega.
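Written out (this is my reconstruction of the standard data-plus-total-variation form, not a copy of the figure from the original post), with C the per-pixel correlation cost and lambda weighing the smoothness term:

E(U) = \int_\Omega C\big(x, U(x)\big)\,dx \;+\; \lambda \int_\Omega \lVert \nabla U(x) \rVert\,dx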

If this equation looks familiar, it's because you've seen it in Semi-Global Matching, and it is the basis for most all "global" matching ideas. An interesting technique, which I think originates from Steinbrucker's paper, is the observation that solving this directly is impossible; however, if we separate this equation into two problems with a convex relaxation, we can solve both halves with independent algorithms. In the equation below, both U and V are the disparity image. When theta is large, U and V should be the same disparity image.
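Again reconstructing the form (the quadratic coupling term is my notation, chosen so that a large theta forces U and V together, as described above):

E(U, V) = \int_\Omega C\big(x, U(x)\big) \;+\; \theta\,\big(U(x) - V(x)\big)^2 \;+\; \lambda \lVert \nabla V(x) \rVert \; dx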

The first half of the equation is the data term or template correlation term, which Steinbrucker recommends solving with a brute-force search and which Heise instead solved with Patch Match stereo. The right half of the equation is the same as the image denoising algorithm called Total Variation Minimization, first written up in 1992 by Rudin et al. (ROF) [3]. However, don't use the 1992 implementation for solving that equation; Chambolle described a better solution using a primal-dual algorithm [4]. I'd like to say I understood that paper immediately and implemented it, but I instead lifted an implementation from Julia written by Tim Holy [5] and then rewrote it in C++ for speed. (Julia isn't as fast as they claim, especially when it has to JIT its whole standard library on boot.)
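For completeness, Chambolle's fixed-point projection iteration for the ROF problem \min_u \lVert\nabla u\rVert + \frac{1}{2\lambda}\lVert u - g\rVert^2 is, as best I recall it,

p^{n+1} = \frac{p^n + \tau\,\nabla\big(\operatorname{div} p^n - g/\lambda\big)}{1 + \tau\,\big\lvert\nabla\big(\operatorname{div} p^n - g/\lambda\big)\big\rvert}, \qquad u = g - \lambda\operatorname{div} p,

with the step size tau no larger than 1/8.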

Solving this new form of the equation starts with theta close to zero. Each iteration, U is solved with a fixed V, then V is solved with a fixed U. Finally, in preparation for the next iteration, theta is incremented. After many iterations, it is hoped that U and V converge on the same disparity image.
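Here is a compact sketch of that alternation, using a precomputed cost volume for the data term and Chambolle's iteration for the V step. This is a simplified 1-D-disparity illustration rather than my actual implementation; lambda, the starting theta, and the growth factor are placeholder values.

import numpy as np

def grad(u):
    # Forward differences with Neumann boundary conditions.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    # Negative adjoint of grad.
    d = np.zeros_like(px)
    d[:, 0] = px[:, 0]
    d[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    d[:, -1] = -px[:, -2]
    d[0, :] += py[0, :]
    d[1:-1, :] += py[1:-1, :] - py[:-2, :]
    d[-1, :] += -py[-2, :]
    return d

def tv_denoise(g, lam, iters=50, tau=0.125):
    # Chambolle's projection algorithm for the ROF model.
    px = np.zeros_like(g); py = np.zeros_like(g)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - g / lam)
        scale = 1.0 + tau * np.sqrt(gx * gx + gy * gy)
        px = (px + tau * gx) / scale
        py = (py + tau * gy) / scale
    return g - lam * div(px, py)

def alternate(cost_volume, disps, lam=1.0, theta=0.01, growth=1.5, outer=20):
    # cost_volume[d, y, x] holds the correlation cost of disparity disps[d] at (y, x).
    v = np.zeros(cost_volume.shape[1:])
    for _ in range(outer):
        # U step: per-pixel brute-force argmin of C(x, d) + theta * (d - V)^2.
        coupled = cost_volume + theta * (disps[:, None, None] - v[None, :, :]) ** 2
        u = disps[np.argmin(coupled, axis=0)].astype(float)
        # V step: TV denoising of U; the ROF weight works out to lam / (2 * theta).
        v = tv_denoise(u, lam / (2.0 * theta))
        theta *= growth   # tighten the coupling each outer iteration
    return v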

The algorithm I've implemented for this blog post is considerably different from Heise's paper. I've parameterized my disparity with an x and y offset. This is different from Heise's x offset and normal vector x and y measurements. I also used a sum of absolute differences, while Heise used adaptive support weights. Heise's method also used a Huber cost metric on the smoothing term, while I use nothing but squared error. All this means that my implementation is much closer to Steinbrucker's optical flow algorithm.

Results

Here is a result I got with cones using a 3×3 kernel size.

That 3×3 kernel size should impress you. I can greatly drop the kernel size when there is a smoothness constraint.

However, images of cones and perspective shots in general are not what I care about; I care about the application of this algorithm to terrain. Here it is applied to a subsampled image of a glacier imaged by World View 2.

Crap! Instead of converging to some actual disparity value, it just blurred the initial result.

Disaster Analysis

Possibly this didn't work because I didn't tune the parameters correctly. With all global matching algorithms there are a bunch of lambdas that try to bring the two cost metrics onto equal footing. But is this really the answer?

I think it has something to do with the fact that Patch Match produces the same result as a brute-force search, a.k.a. the global minimum across all disparity values. If you look at the global minimum result for both of these input images shown below, you'll see that the World View imagery is exceptionally ill-conditioned.

It seems that the smoothness constraint isn't strong enough to overcome the initial global minimum from correlation. Despite the noise, in the 'cones' example it is clear there is just high-frequency noise around the final disparity image that we want. With the World View example, there is no such clear picture. That disparity image should be a simple gradient along the horizontal; ASP's take on this region is shown at right. However, since the texture is self-similar across the image, this creates patches of global minima that are incorrect and resemble palette-knife marks.

Seeding the input correlation result with ASP's rough solution helps things to some degree. But ultimately it seems that this convex relaxation trick doesn't ensure that we converge on the correct (and what I hope is the global) minimum of the original equation. Another detail that I learned is that if Patch Match is going to converge, it will happen in the first 10 iterations.

What’s Next?

My conclusion is that a cost metric whose global minimum is mostly correct is required. Unfortunately, increasing kernel size doesn't help much. What I think is worth investigating next is: (1) implement the affine matching and second-derivative smoothing detailed in Heise's paper; (2) cop out and implement hierarchical Patch Match; (3) recant my sins and investigate fSGM. Eventually I'll find something good for ASP.

References

[1] Heise, Philipp, et al. “PM-Huber: PatchMatch with Huber Regularization for Stereo Matching.” Computer Vision (ICCV), 2013 IEEE International Conference on. IEEE, 2013.
[2] Steinbrucker, Frank, Thomas Pock, and Daniel Cremers. “Large displacement optical flow computation without warping.” Computer Vision, 2009 IEEE 12th International Conference on. IEEE, 2009.
[3] Rudin, Leonid I., Stanley Osher, and Emad Fatemi. “Nonlinear total variation based noise removal algorithms.” Physica D: Nonlinear Phenomena 60.1 (1992): 259-268.
[4] Chambolle, Antonin. “An algorithm for total variation minimization and applications.” Journal of Mathematical imaging and vision 20.1-2 (2004): 89-97.
[5] https://github.com/timholy/Images.jl

Google’s Lens Blur Feature

Earlier today Google announced that they added a new feature to their Android camera application called "Lens Blur". From their blog post we can see that they take a series of images and solve a multiview problem for the depth image of the first frame in the sequence. That depth image is then used to refocus the image. It is probably better described as selectively blurring an already fully focused image. This is similar to how bokeh is created in video games; however, their blog states that they actually invoke the thin-lens equation to get a realistic blur.
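For reference, and in my own notation rather than anything from Google's post: the thin-lens geometry gives a blur-circle diameter, for a subject at distance S with a lens of focal length f and aperture diameter A focused at distance S_f, of

c(S) \;=\; \frac{A\,f}{S_f - f}\cdot\frac{\lvert S - S_f\rvert}{S},

which tends to the constant A f / (S_f - f) as S goes to infinity. My guess is that something like this constant is what the GFocus:BlurAtInfinity tag further down records, but that is speculation on my part.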

I thought it was really cool that they can solve for a depth image on a cellphone in only a few seconds. I also wanted access to that depth image so I could play with it and make 3D renderings! However, that will have to come another time.

Accessing the Data

Everything the GoogleCamera app needs to refocus an image is contained right inside the JPEG file recorded to /sdcard/DCIM/Camera after you perform a Lens Blur capture. Go ahead and download that IMG file of the picture you took. How is this possible? It seems they use Adobe's XMP format to store a depth image as a PNG inside the header of the original JPEG. Unfortunately, I couldn't figure out how to make the usual tools, like GDAL, read that.

So instead let's do a manual method. If you open the JPEG file up in Emacs, right away you'll see the XMP header, which is human readable. Scroll down till you find the GDepth:Data section and copy everything between the quotes to a new file. Now things get weird. For whatever reason, the XMP format periodically embeds binary blocks, each containing an extension-definition string and a hash, throughout the PNG data you just copied. These don't belong in the PNG, and libpng will be very upset with you! Using Emacs again, you can search this binary data for the http extension-definition string and then delete everything from the \377 (0xFF) at the front through the 8 bytes after the hash string. Depending on the file length you'll have to perform this multiple times.

At this point you now have the raw PNG string! Unfortunately it is still in base64, so we need to decode it.

> base64 -D < header > header.png

Voila! You now have a depth image that you can open with any viewer.
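If you'd rather not do the surgery by hand, here's a rough Python sketch that automates the same steps. It assumes the layout described above: the base64 depth data sits in a GDepth:Data="..." attribute, and each interrupting block runs from an 0xFF marker through the extension-namespace string, a 32-byte hash, and 8 more bytes. Treat the offsets and the attribute format as assumptions; I haven't checked this against every file.

import base64
import re
import sys

def extract_depth_png(jpeg_path, out_path="header.png"):
    data = open(jpeg_path, "rb").read()

    # Grab everything between the quotes after GDepth:Data= (assumes no stray
    # quote byte hides inside the interrupting headers).
    start = data.index(b'GDepth:Data="') + len(b'GDepth:Data="')
    payload = data[start:data.index(b'"', start)]

    # Strip the blocks that interrupt the base64 stream: 0xFF 0xE1, two length
    # bytes, the extension-namespace string plus NUL, a 32-byte hash, 8 more bytes.
    header = re.compile(
        rb"\xff\xe1..http://ns\.adobe\.com/xmp/extension/\x00.{32}.{8}", re.DOTALL)
    payload = header.sub(b"", payload)

    # b64decode (with default validate=False) drops any leftover stray bytes.
    open(out_path, "wb").write(base64.b64decode(payload))

if __name__ == "__main__":
    extract_depth_png(sys.argv[1])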

Going back to the original XMP in the header of the source JPEG, you can find some interesting details on what these pixels mean like the following:

GFocus:BlurAtInfinity="0.036530018"
GFocus:FocalDistance="20.225327"
GFocus:FocalPointX="0.59722227"
GFocus:FocalPointY="0.5756945"
GImage:Mime="image/jpeg"
GDepth:Format="RangeInverse"
GDepth:Near="12.516363143920898"
GDepth:Far="47.468021392822266"
GDepth:Mime="image/png"
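As a guess at what those tags mean in practice, here is how I would turn the 8-bit depth PNG back into metric depth. Both the formula and the orientation of the mapping are assumptions: I'm reading "RangeInverse" as values spaced linearly in inverse depth between GDepth:Near and GDepth:Far.

import numpy as np
from PIL import Image

# Assumption: 0 maps to Near, 255 maps to Far, linear in inverse depth.
near, far = 12.516363143920898, 47.468021392822266

v = np.asarray(Image.open("header.png").convert("L"), dtype=np.float64) / 255.0
depth = (near * far) / (far - v * (far - near))   # v=0 -> near, v=1 -> far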

How did they do this?

There is no way for me to know for certain. However, looking at GoogleCamera we can see that it refers to a librefocus.so, which seems to contain the first half of the Lens Blur feature's native code. Doing a symbol dump gives hints about what stereo algorithm they use.

librefocus.so:00221b68 D vtable for vision::optimization::belief_propagation::BinaryCost
librefocus.so:00221b88 D vtable for vision::optimization::belief_propagation::GridProblem
librefocus.so:00221bc0 D vtable for vision::optimization::belief_propagation::LinearTruncatedCost
librefocus.so:00221ca0 V vtable for vision::sfm::RansacSolver<vision::sfm::EssentialMatrixProblem>
librefocus.so:00221cb8 V vtable for vision::sfm::RansacSolver<vision::sfm::FundamentalMatrixProblem>
librefocus.so:00221c78 D vtable for vision::sfm::StdlibRandom
librefocus.so:00221b58 D vtable for vision::image::FixedPointPyramid
librefocus.so:00221c00 D vtable for vision::shared::EGLOffscreenContext
librefocus.so:00221800 V vtable for vision::shared::Progress
librefocus.so:00221be0 V vtable for vision::shared::GLContext
librefocus.so:00221cd0 V vtable for vision::stereo::PlaneSweep
librefocus.so:00221ce8 D vtable for vision::stereo::GPUPlaneSweep
librefocus.so:00221d20 D vtable for vision::stereo::PhotoConsistencySAD
librefocus.so:00221d00 V vtable for vision::stereo::PhotoConsistencyBase
librefocus.so:00221d60 D vtable for vision::stereo::MultithreadPlaneSweep
librefocus.so:00221ae8 V vtable for vision::features::FeatureDetectorInterface
librefocus.so:00221b00 D vtable for vision::features::fast::FastDetector
librefocus.so:00221d78 V vtable for vision::tracking::KLTPyramid
librefocus.so:00221a90 V vtable for vision::tracking::KLTPyramidFactory
librefocus.so:00221dd0 D vtable for vision::tracking::KLTFeatureDetector
librefocus.so:00221da0 D vtable for vision::tracking::GaussianPyramidFactory
librefocus.so:00221db8 D vtable for vision::tracking::LaplacianPyramidFactory

So it appears they use a KLT tracking system that is fed to an SfM model to solve for camera extrinsics and intrinsics. Possibly they solve for the fundamental matrix between the first sequential images, decompose out the intrinsics, and then solve for the essential matrix on the remainder of the sequence. After that point, they use a GPU-accelerated plane-sweep stereo algorithm that apparently has some belief propagation smoothing. That's an interesting stereo choice given how old plane sweeping is and its lack of popularity in the Middlebury tests. However, you can't doubt the speed. Very cool!

Semi-Global Matching

My co-worker Oleg Alexandrov has been working on Ames Stereo Pipeline for a while now. He's just about touched all parts of the code, which is crystal clear when you look at our logs on Github. One of the things he ribs me most about is that he doesn't like that we advertise ASP as using up-to-date stereo correlation algorithms. "Come on, man! That's just not true!" he tells me. Depending on whom you talk to, we're using primo 90's research [7] or something re-hashed from the 70's [1]. Either way, it is clear we haven't been tracking the current research in our field when it comes to integer correlation. This blog post covers the first part of my own personal research to find a new correlator algorithm that would improve ASP in terms of both runtime and quality. In this post, I'll be reviewing an algorithm called Semi-Global Matching.

Semi-Global Matching, or SGM, is a method developed by Heiko Hirschmueller of DLR. He first wrote about it in his 2005 paper [2], then elaborated and proposed further improvements in [3][4]. The best paper to learn the method from is probably his second paper [3]; in my opinion his first paper gets sidetracked in using a Mutual Information (MI) cost metric when the interesting bit is just SGM. The most exciting part about this work is that it comes from DLR and is an algorithm they have applied to aerial and satellite mapping. I believe this is the method that was used to create the wonderful HRSC DTMs that somehow managed to overcome the weird JPEG artifacts in their raw imagery.

The Algorithm

Heiko might have sped over his SGM algorithm in his first paper because he didn't view it as being as challenging to implement as the MI cost metric. SGM shares a lot in common with scanline optimization stereo, which has had a lot of prior research but nowadays is considered a dead end. Let's review how that worked. Also, the images used for this testing are from the Middlebury stereo dataset. More information about this data and stereo algorithms applied to them can be found in [8][9][10][11].

Scanline optimization stereo is essentially Viterbi decoding in my mind. We evaluate along an epipolar line; in the case of a rectified image, this is along the horizontal scanline. For each pixel along that scanline, we evaluate each possible disparity result. The costs of each pixel along the scanline can then be stacked into a matrix. A scanline is highlighted in the above picture, and the cost of each pixel (x-direction) versus each possible disparity value (y-direction) is shown in the picture below. The solution for the disparity along this scanline is then the path through this matrix/image that has minimum cost (dark areas). We also have to include some smoothness constraint, otherwise our disparity result could follow the jagged jumps in this map that don't represent reality.

Finding the minimum path is then an application of dynamic programming. We iterate through the matrix left to right and take a rolling sum. The cost of an element in the rolling-sum vector for the current pixel and disparity combination is equal to the cost at the current location plus the lowest summed cost over all possible disparities at the prior pixel location. Heiko applies some additional constraints: he penalizes the cost whenever the disparity changes, and he penalizes transitions of multiple disparity values more heavily than transitions of 1. The penalty for an increment of 1 in disparity is P1 and anything greater is P2. This entire paragraph can be described more elegantly in the following equation.
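Reconstructing that equation from Hirschmuller's paper [3] (the trailing subtraction simply keeps the path costs from growing without bound):

L_r(\mathbf{p}, d) = C(\mathbf{p}, d) + \min\Big(L_r(\mathbf{p}-\mathbf{r}, d),\; L_r(\mathbf{p}-\mathbf{r}, d-1) + P_1,\; L_r(\mathbf{p}-\mathbf{r}, d+1) + P_1,\; \min_i L_r(\mathbf{p}-\mathbf{r}, i) + P_2\Big) - \min_k L_r(\mathbf{p}-\mathbf{r}, k)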

Applying this forward and backward for each scanline we can solve for a disparity map. Here’s an example.

Notice there's a lot of tearing between the scanlines; the image looks as if we had tracking error on a VCR. We could fix this by using a larger kernel size (for the above, the kernel size was 1 x 1 px); something more unique would ensure matches that are consistent between lines. Another approach would be to enforce some smoothness constraint across lines, as opposed to just across disparity transitions. Heiko's solution to this issue is what makes SGM what it is: he opted to instead perform scanline optimization at multiple angles and then take the summed cost vector to determine the final disparity result. Note that even though we evaluate the scanline along an angle, the disparity is still defined as going along the epipolar line (perfectly horizontal in this case). Each line direction produces results like the following:

Summing their cost vectors and then taking the minimum produces a beautiful result like the following:

My Implementation and Results

All of the pictures above were created with my implementation of SGM. In my version, I only evaluate 8 line directions, so my results are noisier than what's seen in Heiko's original paper. Despite this, the end results are pretty impressive. Here's a line-up of the ASP result, my SGM result, Heiko's result, and the ground truth. ASP performs so badly because it has a large kernel size that can't handle the sudden jumps in depth; ASP then blurs the disparity discontinuities.
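This is not my actual implementation, but a minimal NumPy sketch of the aggregation step for readers who want to experiment. It assumes a precomputed cost volume cost[y, x, d], evaluates the same 8 path directions, and uses placeholder values for P1 and P2.

import numpy as np

def sgm_aggregate(cost, P1=8, P2=32,
                  directions=((0, 1), (0, -1), (1, 0), (-1, 0),
                              (1, 1), (1, -1), (-1, 1), (-1, -1))):
    # cost[y, x, d]: per-pixel matching cost. Returns the disparity minimizing
    # the sum of the directional path costs L_r from the equation above.
    h, w, dmax = cost.shape
    total = np.zeros_like(cost, dtype=np.float64)
    for dy, dx in directions:
        L = np.zeros_like(cost, dtype=np.float64)
        ys = range(h) if dy >= 0 else range(h - 1, -1, -1)
        xs = range(w) if dx >= 0 else range(w - 1, -1, -1)
        for y in ys:
            for x in xs:
                py, px = y - dy, x - dx
                if 0 <= py < h and 0 <= px < w:
                    prev = L[py, px]
                    best = prev.min()
                    # Same disparity, change by one (P1), or jump anywhere (P2).
                    cand = np.minimum(prev, best + P2)
                    cand[1:] = np.minimum(cand[1:], prev[:-1] + P1)
                    cand[:-1] = np.minimum(cand[:-1], prev[1:] + P1)
                    L[y, x] = cost[y, x] + cand - best
                else:
                    L[y, x] = cost[y, x]   # path starts at the image border
        total += L
    return np.argmin(total, axis=2)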

Unfortunately, I must mention the bad side of this method. There are several cons; the first and weakest argument against it is the required CPU time. My implementation takes about 23 seconds to evaluate this image with 8 paths; 16 paths, as in the paper, would have doubled the processing time. ASP chops through this image in seconds. Heiko says he got the processing time down to 1.3 seconds in 2005, so I'm doing something horribly wrong and could improve my implementation. Still, speed is always an issue, and some ideas to address it are iSGM [5] and wSGM [6]. These are hierarchical variants of SGM and fancy maps to reduce the path lengths that must be integrated for cost evaluation.

A bigger issue is that SGM requires an absurd amount of memory. All costs for all pixels and all possible disparity values are evaluated up front in a big tensor that has a size of W * H * D * 1 byte. We also need a copy of this tensor for evaluating paths and another to store the sums over all paths. Those are two allocations of W * H * D * 2 bytes; they need a wider data type to avoid integer rollover artifacts. This demo set is 450 x 375 px and I evaluated it across 64 possible disparities, so SGM required 51 MB. That doesn't include the memory cost of just loading the images and allocating space for the disparity result. Imagine tackling a satellite image where we have a disparity range of 2000 pixels.

Another pesky complaint against SGM is how to figure out what the values of the two penalty arguments should be. Heiko never mentioned what he used; likely he tuned the values for each stereo pair to get the best results. These penalty values ultimately determine how the algorithm responds to occlusion and angled surfaces. What works for World View 1 in glacier regions (low frequencies) might not necessarily apply to World View 1 in the city (square-wave patterns). In practice, we would want tuned parameters for each instrument we work with and for each type of terrain.

The final and harshest criticism of SGM is that it can only be applied to 1-D disparity searches and the range must be defined beforehand. 1-D searches work for calibrated stereo rigs such as the imagery used in this post. However, it is my opinion that real data always has imperfections, and finding the true disparity requires searching in the Y direction as well. Examples of this are linescan cameras with jitter that the spacecraft ephemeris isn't sampled finely enough to capture, such as MOC, HiRISE, and LROC. There's also the case where the camera isn't perfect, such as the World View cameras, where there is a subpixel misregistration of all 50 CCDs. They can't easily be corrected for because we can't get the raw imagery. ASP also doesn't have a perfect method for epipolar rectification of linescan cameras; we have a linear approximation with our affine method, but the problem is nonlinear.

SGM is still an amazing algorithm that is incredibly useful. There are a ton of papers out there that find it to be perfect for their applications. Besides the incredible detail it resolves, my other favorite bit about the algorithm is that its runtime is deterministic. It depends squarely on the search range, and there is no worst-case versus best-case path that we have to deal with as in ASP's binary search approach. Despite this, SGM seems to be a non-ideal match for ASP. ASP hopes to address the generic correlation problem where we don't always trust our camera information or our data, and I want something that can still handle 2-D searching. In my next post I'll show off another promising algorithm that seems to address that concern along with the runtime and memory requirements. Until then, have some music.

Update: Code is now available here.

Works Cited

[1]  Barnea, D. (1972). A Class of Algorithms for Fast Digital Image Registration. IEEE Transactions on Computers.
[2]  Hirschmuller, H. (2005). Accurate and Efficient Stereo Processing by Semi Global Matching and Mutual Information. CVPR .
[3]  Hirschmuller, H. (2008). Stereo Processing by Semiglobal Matching and Mutual Information. Pattern Analysis and Machine Intelligence .
[4]  Hirschmuller, H., Buder, M., & Ernst, I. (2012). Memory Efficient Semi-Global Matching. Remote Sensing and Spatial Information Sciences .
[5]  Klette, S. H. (2012). Iterative Semi-Global Matching for Robust Driver Assistance Systems. ACCV .
[6]  Spangenberg, R., Langner, T., & Rojas, R. (2013). Weighted Semi-Global Matching and Center-Symmetric Census Transform for Robust Driver Assistance. CAIP .
[7]  Sun, C. (1997). A Fast Stereo Matching Method. Digital Image Computing: Techniques and Application.
[8] Scharstein, D., Szeliski, R. (2002). A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision .
[9] Scharstein, D., Szeliski, R. (2003). High-accuracy stereo depth maps using structured light. CVPR .
[10] Scharstein, D., Pal, C. (2007). Learning conditional random fields for stereo. CVPR .
[11] Hirschmuller, H., Scharstein, D. (2007). Evaluation of cost functions for stereo matching. CVPR .

DEFCON

Every year I go to Vegas to attend DEFCON with friends. It turns out that last year my friends and I made it into the DEFCON 20 documentary. The frame is centered square on the back of my head.

I think I was working on some Bluetooth sniffer? At the very least we were making some (bad) jokes and trying to get the camera man to go away.