Code to Semi Global Matching

I’ve received a few emails asking me for the code to my implementation of Semi Global Matching. So here it is in the state that I last touched it. This is my icky dev code and I have no plans for maintaining it. In another month I’ll probably forget how it works. Before you look at the code, take to heart that this is not a full implementation of what Hirschmuller described in his papers. I didn’t implement 16-path integration or the mutual information cost metric.

TestSemiGlobalMatching is my main function. I developed inside a GTest framework while I was writing this code. [source]

The core of the algorithm is inside the following header and source file links, titled SemiGlobalMatching. As you can see, it is all in VW-ese. Most of the math looks Eigen-like and there’s a lot of STL in action, but I hope you can still read along. [header] [source]

Also, I haven’t forgotten that I promised to write another article on what I thought was a cool correlator idea. I’m still working on that code base so that I can show off cool urban 3D reconstruction using the satellite imagery I have of NASA Ames Research Center and Mountain View. (I think I have Google’s HQ in the shot.)

Update 3/3/2014

Someone emailed me asking what the MOC imagery is that the code keeps referring to. In this case MOC stands for the Mars Orbiter Camera on board the 1996 Mars Global Surveyor mission. The stereo pair consists of epipolar-rectified images of Hrad Vallis that I use regularly; they represent a classic test case for me when performing satellite stereo correlation. A copy of the input imagery is now available here [epi-L.tif][epi-R.tif].

Advances in LRO-NAC processing in ASP

Since I last wrote, we’ve hired a new full-time employee. His name is Scott and we assigned him the task of learning ASP and LROC. The first utilities he’ll be contributing back to ASP are lronacjitreg and lronac2mosaic.py. The first utility is very similar to the similarly named ISIS utility designed for HiRISE. The second utility, lronac2mosaic.py, uses the first tool and can take LRO-NAC EDR imagery and make a non-projected image mosaic. Internally, lronac2mosaic.py runs ‘noproj’ and then ‘handmos’ to join the images together. There is an offset between the images due to model imperfections; finding the correct offset so the combined images are seamless is the task of lronacjitreg. All of this is just a streamlined version of what I wrote in a past blog post.

Previously users and our team only had the option to run all 4 combinations of the 4 LRO-NAC input files through ASP and then glue them together afterwards. Now with the use of lronac2mosaic, we can feed whole LRO-NAC observations into ASP and receive the full DTM in one go. No messy mosaicking of 4 files.

I’ve used Scott’s program successfully to recreate most DTMs that ASU has made via SOCET SET. Using my home server, I’ve been able to recreate 77 of their DTMs to date. We’ve been fixing bugs as we hit them. One of the biggest was in our search-range guessing code; the next release of ASP will have the fruits of that labor. Previously ASP had a bad habit of ignoring elevation maxima in the image because it thought those IP measurements were noise. Now we should have a better track record of getting measurements for the entire image.

One of the major criticisms I’m expecting from the large dump of LRO-NAC DTMs we plan to deliver next year concerns the quality of our DTMs’ placement in comparison to LOLA. Another engineer we have on staff, Oleg, has just the solution for this. He has developed an iterative closest point (ICP) program called pc_align, which will be in the next release. It is built on top of the ETH Zurich Autonomous Systems Lab’s libpointmatcher and has the ability to take DTMs and align them to other DTMs or LIDAR data. This enables us to create well-aligned products whose height values agree with LOLA to within tens of meters. Our rough testing shows a CE90 of 4 meters against LOLA after performing our corrections.

We’re not ready for the big production run yet. One problem still plaguing our process is that we can see the CCD boundaries in our output DTMs. We believe most of this problem comes from the fact that the angle between the lines of sight of the left and right CCDs changes with every observation. ISIS, however, has only one number programmed into it, the number provided by the FK. Scott is actively developing an automated system to determine this angle and make a custom FK for every LRO-NAC observation. The second problem we’re tracking is that areas of high slope are missing from our DEMs. This is partially because we didn’t use Bayes EM for our test runs, but it also seems like our disparity filtering is overly aggressive or just wrong. We’ll get on to that. That’s all for now!

Semi-Global Matching

My co-worker Oleg Alexandrov has been working on Ames Stereo Pipeline for a while now. He’s just about touched all parts of the code, which is crystal clear when you look at our logs on Github. One of the things he ribs me most about is that he doesn’t like that we advertise ASP as using up-to-date stereo correlation algorithms. “Come on, man! That’s just not true!” he tells me. Depending on whom you talk to, we’re using primo 90s research[7] or something re-hashed from the 70s[1]. Either way, it is clear we haven’t been tracking the current research in our field when it comes to integer correlation. This blog post covers the first part of my own personal research to find a new correlator algorithm that would improve ASP in terms of both runtime and quality. In this post, I’ll be reviewing an algorithm called Semi-Global Matching.

Semi-Global Matching, or SGM, is a method developed by Heiko Hirschmuller of the DLR. He first wrote about it in his 2005 paper[2] and then elaborated and proposed further improvements in [3][4]. The best paper to learn the method from is probably his second paper[3]; in my opinion his first paper gets sidetracked in using a Mutual Information (MI) cost metric when the interesting bit is just SGM. The most exciting thing about this work is that it comes from DLR and is an algorithm they have applied to aerial and satellite mapping. I believe this is the method that was used to create the wonderful HRSC DTMs that somehow managed to overcome the weird JPEG artifacts in the raw imagery.

The Algorithm

Heiko might have sped through his SGM algorithm in his first paper because he didn’t view it as being as challenging to implement as the MI cost metric. SGM shares a lot in common with scanline optimization stereo, which has had a lot of prior research but is nowadays considered a dead end. Let’s review how that worked. The images used for this testing are from the Middlebury Stereo dataset; more information about this data and stereo algorithms applied to it can be found in [8][9][10][11].

Scanline optimization stereo is essentially Viterbi decoding in my mind. We evaluate along an epipolar line; in the case of a rectified image, this is along the horizontal scanline. For each pixel along that scanline we evaluate each possible disparity result. The costs of each pixel along the scanline can then be stacked into a matrix. A scanline was highlighted in the above picture. The cost of each pixel (x-direction) versus each possible disparity value (y-direction) is shown in the picture below. The solution for the disparity along this scanline is then the path through this matrix/image that has minimum cost (dark areas). We also have to include some smoothness constraint; otherwise our disparity result could follow the jagged jumps in this map that don’t represent reality.
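To make that matrix concrete, here’s a minimal sketch (my own Python, not the linked code) that builds it for one scanline using 1 x 1 absolute-difference costs:

```python
import numpy as np

def scanline_cost_matrix(left_row, right_row, max_disp):
    # cost[d, x]: absolute-difference cost of assigning disparity d to
    # pixel x, i.e. matching left_row[x] against right_row[x - d].
    w = len(left_row)
    cost = np.full((max_disp, w), 255.0)  # out-of-range matches get max cost
    for d in range(max_disp):
        cost[d, d:] = np.abs(left_row[d:] - right_row[:w - d])
    return cost
```

A real implementation would use a larger kernel or a fancier metric, but this is the shape of the data: disparity down, pixel position across.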

Finding the minimum path is then an application of dynamic programming. We iterate through the matrix left to right and keep a rolling sum. The cost of an element in the rolling-sum vector for the current pixel and disparity combination is equal to the cost for the current location plus the lowest summed cost from the set of all possible disparities at the prior pixel location. Heiko applies some additional constraints in that he penalizes the cost whenever the disparity changes, and he penalizes multi-step disparity transitions more than single-step ones. The penalty for an increment of 1 in disparity is P1 and anything greater is P2. This entire paragraph can more elegantly be described by the following recurrence, where C is the cost matrix and L is the rolling sum:

L(x, d) = C(x, d) + min( L(x-1, d), L(x-1, d-1) + P1, L(x-1, d+1) + P1, min_k L(x-1, k) + P2 )
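A minimal sketch of that forward pass (again my own Python, not the linked code; the P1 and P2 defaults are arbitrary, and Heiko’s normalizing subtraction of the previous column’s minimum is omitted since it only keeps values bounded):

```python
import numpy as np

def scanline_optimize(cost, P1=10, P2=120):
    # Left-to-right pass over one scanline's cost matrix
    # (D disparities down, W pixels across). L is the rolling sum.
    D, W = cost.shape
    L = np.empty((D, W), dtype=np.float64)
    L[:, 0] = cost[:, 0]
    for x in range(1, W):
        prev = L[:, x - 1]
        same = prev                          # disparity unchanged
        up = np.roll(prev, 1)                # came from disparity d-1
        up[0] = np.inf
        down = np.roll(prev, -1)             # came from disparity d+1
        down[-1] = np.inf
        jump = np.full(D, prev.min() + P2)   # any larger disparity change
        L[:, x] = cost[:, x] + np.minimum.reduce(
            [same, up + P1, down + P1, jump])
    return L
```

Running this forward and then backward, and taking the per-pixel argmin of the sums, gives the scanline solution described above.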

Applying this forward and backward for each scanline we can solve for a disparity map. Here’s an example.

Notice there’s a lot of tearing between the scanlines; the image looks as if we had tracking error on a VCR. We could fix this by using a larger kernel size (for the above, the kernel size was 1 x 1 px); something more unique would ensure matches that are consistent between lines. Another approach would be to enforce some smoothness constraint across lines, as opposed to just across disparity transitions. Heiko’s solution to this issue is what makes SGM what it is: he opted to perform scanline optimization at multiple angles and then take the summed cost vector to determine the final disparity result. Note that even though we evaluate the scanline along an angle, the disparity is still defined as going along the epipolar line (perfectly horizontal in this case). Each line direction produces results like the following:

Summing their cost vectors and then taking the minimum produces a beautiful result like the following:
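The aggregation step itself is trivial once each direction’s pass has produced a summed-cost volume; a sketch, assuming each pass yields a (D, H, W) array:

```python
import numpy as np

def aggregate_disparity(per_direction_costs):
    # per_direction_costs: list of (D, H, W) summed-cost volumes, one per
    # line direction. Sum them, then pick the minimum-cost disparity
    # per pixel.
    S = np.sum(per_direction_costs, axis=0, dtype=np.float64)
    return np.argmin(S, axis=0)  # (H, W) disparity map
```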

My Implementation and Results

All of the pictures above were created with my implementation of SGM. In my version, I only evaluate 8 line directions, so my results are noisier than what’s seen in Heiko’s original paper. Despite this, the end results are pretty impressive. Here’s a line-up of the ASP result, my SGM result, Heiko’s result, and the ground truth. ASP performs so badly because it has a large kernel size that can’t handle the sudden jumps in depth; ASP then blurs the disparity discontinuities.

Unfortunately I must mention the bad side of this method. There are several cons; the first and weakest of the arguments is the required CPU time. My implementation takes about 23 seconds to evaluate this image with 8 paths, and the 16 paths of the paper would have doubled the processing time, while ASP chops through this image in seconds. Heiko says he got the processing time down to 1.3 seconds in 2005, so I’m doing something horribly wrong and could improve my implementation. Still, speed is always an issue; some ideas to address it are iSGM[5] and wSGM[6]. These are hierarchical methods of SGM and fancy maps to reduce the path lengths required for cost integration.

A bigger issue is that SGM requires an absurd amount of memory. All costs for all pixels and all possible disparity values are evaluated up front in a big tensor of size W * H * D * 1 byte. We also need a copy of this tensor for evaluating paths and another to store the sums across all paths; those are two more allocations of W * H * D * 2 bytes each, since they need a wider data type to avoid integer rollover artifacts. This demo set is 450 x 375 px and I evaluated it across 64 possible disparities, so SGM required 51 MB. That doesn’t include the memory cost of just loading the images up and allocating space for the disparity result. Imagine tackling a satellite image where we have a disparity range of 2000 pixels.
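The arithmetic behind that figure, as a quick check (this just restates the allocation sizes described above):

```python
# Rough memory budget for SGM on the demo pair.
W, H, D = 450, 375, 64

cost_volume = W * H * D * 1  # initial per-pixel costs, 1 byte each
path_volume = W * H * D * 2  # path-evaluation buffer, 2 bytes to avoid rollover
sum_volume  = W * H * D * 2  # accumulator across all paths

total = cost_volume + path_volume + sum_volume
print(total / 2**20)  # ~51.5 MiB, matching the ~51 MB figure above
```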

Another pesky complaint against SGM is how to figure out what the values should be for the two penalty arguments. Heiko never mentioned what he used; likely he tuned the values for each stereo pair to get the best results. Yet these penalty values ultimately determine how the algorithm responds to occlusions and angled surfaces. What works for WorldView-1 in glacier regions (low frequencies) might not apply to WorldView-1 in a city (square-wave patterns). In practice, we would want tuned parameters for each instrument we work with and for each type of terrain.

The final and harshest criticism of SGM is that it can only be applied to 1D disparity searches, and the range must be defined beforehand. 1D searches work for calibrated stereo rigs such as the imagery used in this post. However, it is my opinion that real data always has imperfections, and finding the true disparity still requires searching in the Y direction. Examples of this are linescan cameras with jitter that the spacecraft ephemeris isn’t sampled finely enough to capture, such as MOC, HiRISE, and LROC. There’s also the case where the camera isn’t perfect, such as the WorldView cameras, where there is a subpixel misregistration of all 50 CCDs that can’t easily be corrected because we can’t get the raw imagery. ASP also doesn’t have a perfect method for epipolar rectification of linescan cameras; we have a linear approximation with our affine method, but the problem is nonlinear.

SGM is still an amazing algorithm that is incredibly useful. There are a ton of papers out there that find it to be perfect for their applications. Besides the incredible detail it resolves, my other favorite bit about the algorithm is that its runtime is deterministic. It depends squarely on the search range, and there is no worst-case path versus best-case path to deal with as in ASP’s binary search approach. Despite this, SGM seems to be a non-ideal match for ASP. ASP hopes to address the generic correlation problem where we don’t always trust our camera information or our data; I want something that can still handle 2D searching. In my next post I’ll show off another promising algorithm that seems to address that concern along with the runtime and memory requirements. Until then, have some music.

Update: Code is now available here.

Works Cited

[1] Barnea, D. (1972). A Class of Algorithms for Fast Digital Image Registration. IEEE Transactions on Computers.
[2] Hirschmuller, H. (2005). Accurate and Efficient Stereo Processing by Semi-Global Matching and Mutual Information. CVPR.
[3] Hirschmuller, H. (2008). Stereo Processing by Semiglobal Matching and Mutual Information. Pattern Analysis and Machine Intelligence.
[4] Hirschmuller, H., Buder, M., & Ernst, I. (2012). Memory Efficient Semi-Global Matching. Remote Sensing and Spatial Information Sciences.
[5] Hermann, S., & Klette, R. (2012). Iterative Semi-Global Matching for Robust Driver Assistance Systems. ACCV.
[6] Spangenberg, R., Langner, T., & Rojas, R. (2013). Weighted Semi-Global Matching and Center-Symmetric Census Transform for Robust Driver Assistance. CAIP.
[7] Sun, C. (1997). A Fast Stereo Matching Method. Digital Image Computing: Techniques and Applications.
[8] Scharstein, D., & Szeliski, R. (2002). A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision.
[9] Scharstein, D., & Szeliski, R. (2003). High-accuracy stereo depth maps using structured light. CVPR.
[10] Scharstein, D., & Pal, C. (2007). Learning conditional random fields for stereo. CVPR.
[11] Hirschmuller, H., & Scharstein, D. (2007). Evaluation of cost functions for stereo matching. CVPR.

Rendering the Moon from AMC

Ames Stereo Pipeline is currently a candidate in the running for NASA’s Software of the Year award. We needed a pretty graphic and decided that making a cool and possibly realistic rendering of the Moon would fit the bill. This is a little more difficult than simple hill shading because the Moon has a specular component to it. Hill shading can be interpreted as being only the diffuse component of the Phong model. An interesting example of the Moon’s specular component is this picture taken with a Hasselblad during Apollo 17.

Below are videos of my results, where the Sun’s projected coordinates sweep from 90 W longitude to 90 E. Both of these views show map-projected imagery, so this isn’t a true perspective shot. The difference between the videos is the observer’s altitude above the surface: the lower the altitude, the more of the specular component can be seen.

I’m using nothing but Apollo Metric imagery for this example. The DEM source was our product for LMMP. The albedo source was the Apollo Metric albedo map that Dr. Ara Nefian produced, which will eventually be in NASA’s PDS. The photometric model was the Lunar-Lambertian model as described by McEwen’s paper. Shadows were not rendered because that seemed harder than I could accomplish in 24 hours.

Second Thought on LRO-NAC NoProj

Mark Rosiek from USGS Astrogeology expressed some doubt in my noproj recipe for LRO-NAC. This is completely reasonable, because if everything were perfect there would be no offset between the LE and RE CCDs after the noproj step. He requested 2 DEM samples of Tsiolkovsky Crater so he could compare against USGS work. I decided I’d try the same during free time throughout the day. Unfortunately I couldn’t find a trusted reference DEM to compare against. I can’t find ASU’s result for this area on their RDR webpage, and it is impossible to use the LMMP Portal. Can you even download the actual elevation values from LMMP? No, I don’t want your color render or greyscale!

The next best idea is to just difference the two DEMs I created and compare them. I didn’t bundle adjust these cameras at all, so their placement relative to each other is off by some ~150 meters. After ICPing them together and then differencing them, I can produce an error map. The gradient, or shape, of the error can help clue us into how bad my fiddling with the ideal camera was. Below is the result between the stereo pairs M143778723-M143785506 and M167363261-M167370048.

You can definitely see the CCD boundary in this error map, just as you can see it in the hillshade. It’s about a 1 meter jump when looking at a single DEM. The error map seems to agree with this and shows a peak error of 4 meters at the CCD boundary location.

So what is the cause of these errors? Well, we have two sources: (1) the projection into the ideal camera model and (2) bad spacecraft ephemeris. I’d argue that most of the difference between these two DEMs is the spacecraft ephemeris; that’s top priority in my book to correct. However, when I look at the disparity map for these stereo pairs, there is a definitive jump at the CCD boundary. The cause of that would be an improper model of the angle between the LE and RE cameras in ISIS. This should be expected, since the change in camera angles with respect to temperature is currently not modeled. Also, when looking at the vertical disparity, it seems one side is always brighter than the other, which suggests that I didn’t quite have the correct input values for handmos.

Trying to fix all of this now might be a mistake on my part. I know that ASU will eventually produce new SPICE data that contains corrections from LOLA and a USGS study. I also imagine that eventually the work of E. J. Speyerer et al. will be implemented in ISIS.

Noproj’n LRO Imagery

Earlier this year I found out that I got my first proposal funded. I’ve had directed funding before thanks to NASA’s cryosphere program, and I’ve also been a Co-I on numerous other funded proposals with co-workers and friends at USGS. However, my LASER proposal to perform mass DEM production from LROC-NA imagery was something I wrote as PI and that was openly competed. Now that it is accepted, I have two years to follow through, and I’d like to share the whole process here on the blog.

The first problem I’m going to write about has bugged me for a while: each LROC-NA observation is actually 2 files, which makes using ASP awkward. In the past I’ve been telling people to perform 4 runs with stereo, doing the combinations of LE-LE, LE-RE, RE-LE, and RE-RE. However, UofA had the same problem with HiRISE, which comes down in 20 different files. They worked out an ISIS process chain that would noproj those files together, among other things, to make a single observation. I don’t know why such a step doesn’t exist for LROC-NA, but today I’ll show you what I’ve come up with.

If you try right now to noproj the RE cub file against the LE cub file, you’ll find that the program errors out because an ideal camera model has not been defined for LROC. Definitions of ideal noproj cameras for all ISIS cameras live in the file $ISIS3DATA/base/applications/noprojInstruments003.pvl. Looking at that file, we see that there are 5 elements that can be defined for the ideal camera model: TransY, ItransS, TransX, ItransL, and DetectorSamples. DetectorSamples is easy; it’s what the output image width should be in the final image. The Trans* variables are measured in millimeters and define a focal plane offset from the original camera model we are using. I plan to noproj with the match file being the LE file, thus Trans* references a shift from the original position of the LE sensors. The Itrans* variables are the pixel conversion of the millimeter measurements; they’re used by ISIS to run the math the other direction. If the Trans* and Itrans* variables don’t match, funny errors will occur where the CCDs noproj correctly but the triangulation in ASP is completely bonkers. Relating the two is easy: just use the pixel pitch, which for LROC-NA is 7 micrometers per pixel.

So how do we decide what to set the TransY and TransX values to? If those values are left at zero, the LE image will be centered in the new image and the RE will be butted up beside it but falling off the edge. A good initial guess would be to set TransY to a shift of half the CCD width. A better idea, I thought, was to look at the FK kernel, identify the angle difference between the two cameras, and then use the instantaneous field of view to convert it to a pixel offset between the two CCD origins. Converting to millimeters via the pixel pitch and then dividing by two gives the shift we want. Below are the final noproj settings I used for LROC.

 Group = "LUNAR RECONNAISSANCE ORBITER/NACL"
   TransY = 16.8833
   ItransS = -2411.9
   TransX = 0.6475
   ItransL = -92.5
   DetectorSamples = 10000
 End_Group
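As a sanity check, the Trans* (millimeter) and Itrans* (pixel) values above should be related by the 7-micrometer pixel pitch, with the sign flipped since they map in opposite directions. A quick check (the exact sign convention here is my assumption, but the numbers work out):

```python
# Hypothetical consistency check of the noproj PVL values above.
pitch_mm = 0.007  # LROC-NA pixel pitch: 7 micrometers per pixel

TransY, ItransS = 16.8833, -2411.9
TransX, ItransL = 0.6475, -92.5

# Itrans* should be -Trans* / pitch (assumed sign convention).
assert abs(-TransY / pitch_mm - ItransS) < 0.05
assert abs(-TransX / pitch_mm - ItransL) < 0.05
```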

At this point we can noproj the imagery and then handmos them together. A naïve version would look something like the following.

> noproj from=originalRE.cub to=RE.cub match=originalLE.cub
> noproj from=originalLE.cub to=LE.cub match=originalLE.cub
> cp LE.cub LERE_mosaic.cub
> handmos from=RE.cub mosaic=LERE_mosaic.cub

Alas, the LE and RE imagery does not perfectly align. In the HiRISE processing world we would use hijitreg to determine a mean pixel offset, but there is no generic form of hijitreg that we can use for LROC-NA. There is the coreg application, but in all my tests that program failed to find any correct match points between the images. I’ve tried two other successful methods. I’ve manually measured the offset using Qtie. Warning: sending these images to Qtie requires twiddling with how serial numbers are made for ideal cameras. This is something I should document at some point, as it would allow bundle adjusting ideal cameras like fully formed HiRISE and LROC images. My other solution was the example correlate program in Vision Workbench. I did the following to make processing quick (5 minute run time).

> crop from=LE.cub to=LE.centerline.cub sample=4900 nsamples=200
> crop from=RE.cub to=RE.centerline.cub sample=4900 nsamples=200
> parallel isis2std from={} to={.}.tif format=tiff ::: *centerline.cub
> correlate --h-corr-min -60 --h-corr-max 0 --v-corr-min 0 --v-corr-max 60 LE.centerline.tif RE.centerline.tif

That creates a disparity TIF file. The average of the valid pixels (third channel is 1) can then be used to solve for the mean translation. That translation can then be used during handmos by using the outsample and outline options. Ideally this would all be one program like hijitreg but that is for another time. Below is the final result.
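Computing that mean translation is a one-liner once the disparity image is loaded into an array; here is a sketch assuming a three-channel layout of horizontal offset, vertical offset, and validity flag (the channel order is my assumption, beyond the third channel being validity as described above):

```python
import numpy as np

def mean_translation(disparity):
    # disparity: (H, W, 3) array; channels assumed to be
    # (horizontal offset, vertical offset, valid flag == 1).
    valid = disparity[..., 2] == 1
    dx = disparity[..., 0][valid].mean()
    dy = disparity[..., 1][valid].mean()
    return dx, dy
```

The resulting (dx, dy) would then feed the outsample and outline options of handmos.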

Hijitreg actually does more than just solve for the translation between CCDs on HiRISE. It correlates a line of pixels between the CCDs in hope of determining the jitter of the spacecraft. I can do the same!

From the above plots, it doesn’t look like there is much jitter, or the data is not in a form where I could determine it. The only interesting bit here is that the offset in the vertical direction changes with line number. I think this might be disparity due to terrain. The imagery I used for my testing was M123514622 and M123521405, which happened to be focused on the wall of Slipher crater. The NA cameras are angled 0.106 degrees from each other in the vertical direction. Ideal stereo geometry would be 30 or 15 degrees of convergence, but even 0.106 degrees should allow some disparity given the massive elevation change into the crater. I wouldn’t recommend using this for 3D reconstruction, but it would explain the vertical offset signal. The horizontal signal has less amplitude but does seem like it might be seeing spacecraft jitter. However, it looks aliased, making it impossible to determine the frequency.

Anyways, making a noproj’d version of the LROC-NA observation is a huge boon for processing with Ames Stereo Pipeline. Using affine epipolar alignment, no map projection, and simple parabola subpixel, it is possible to make a beautiful DEM/DTM in 2.5 hours. 70% of that time was just triangulation, because ISIS is single threaded. That would be faster with the application parallel_stereo (renamed from stereo_mpi in ASP 2.2.2).