I’m not Dead

However, I’ve been really busy working with Google’s Project Tango. I encourage you to watch the video if you haven’t already.

What is NASA doing with Project Tango? Well, currently there is only a very vague article available here. The plan, however, is to apply Tango to the SPHERES project to perform visual navigation. Lately I’ve been overwhelmed with trying to meet the schedule of 0-g testing and all the hoops involved in getting hardware and software onboard the ISS. This has left very little time to write, let alone sleep. In a few weeks NASA export control will have gone over our collected data and I’ll be able to share it here.

In the short term, Project Tango represents an amazing opportunity to perform local mapping. The current hardware has little application to the large-scale satellite mapping that I usually discuss. However, I think the ideas present in Project Tango will have application in low-cost UAV mapping, something David Shean of the University of Washington has been pursuing. In the more immediate term, I think the Tango hardware would have application for scientists wanting to perform local surveys of a glacial wall, a cave, or anything you can walk all over. Its ability to export its observations as a 3D model makes it perfect for sharing with others and for performing long-term temporal studies. Yes, the 3D sensor won’t work outside; however, stereo observations and post-processing with tools like Photoscan are still possible with the daylight imagery. Tango is then reduced to providing an amazing 6-DOF measurement of where each picture was taken. If this sounds interesting to you, I encourage you to apply for a prototype device! I’d be interested in helping you tackle a scientific objective with Project Tango.

This picture is of Mark and me dealing with our preflight jitters of being onboard the “Vomit Comet” while 0-g testing the space-rated version of Project Tango. It captures my current state of mind. Also, there aren’t enough pictures of my ugly mug on this blog. I’m the guy on the right.

Patch Match Stereo

Three months ago, in my Semi-Global Matching post, I mentioned I would have a follow-up post about another algorithm I was interested in. That algorithm is PatchMatch, which is, in my opinion, a better fit for satellite processing. It was first written for hole filling [1] and later applied to stereo [2]. Unlike Ames Stereo Pipeline’s currently implemented hierarchical discrete correlator, PatchMatch easily extends to high-dimensional searches, and its run time scales with the number of iterations and the image size, not the image content, which I hope translates into predictable run times. Unfortunately my current implementation is really slow and has several faults. However, my hope is that eventually we’ll make it fast and that its predictability will enable our users to budget for the CPU cost of creating a DEM while still improving quality.

How it Works

The algorithm behind PatchMatch is actually very simple. The disparity image is seeded with uniform noise; that noise amounts to random guesses for what the disparity should be. However, since every pixel is another guess, and the image is quite a bit larger than the search range we are looking over, a handful of these guesses are bound to be close to correct. We then propagate disparities with low cost to the neighboring pixels. The cycle repeats by adding more noise to the current propagated disparity, but at half the previous amplitude. This continues until everything converges. Surprisingly, a recognizable disparity map appears after the first iteration.
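For the programmers, here is a minimal sketch of that loop in Python, under my own simplifying assumptions: grayscale inputs, purely horizontal disparities evaluated at integer positions, and a SAD cost. The full PatchMatch Stereo algorithm [2] also fits slanted support windows, which this toy version omits.

    # A fronto-parallel sketch of PatchMatch stereo. Not my actual
    # implementation; just the two key ingredients: propagation of cheap
    # neighboring guesses and random perturbation at halving amplitudes.
    import numpy as np

    def sad(left, right, y, x, d, k=3):
        """SAD over a (2k+1)^2 window at integer disparity d."""
        h, w = left.shape
        if min(y - k, x - k, x - d - k) < 0 or \
           max(y + k - h, x + k - w, x - d + k - w) >= 0:
            return np.inf  # window falls outside one of the images
        a = left[y - k:y + k + 1, x - k:x + k + 1].astype(float)
        b = right[y - k:y + k + 1, x - d - k:x - d + k + 1].astype(float)
        return np.abs(a - b).sum()

    def patchmatch(left, right, dmax, iterations=5, k=3):
        h, w = left.shape
        rng = np.random.default_rng(0)
        disp = rng.uniform(0, dmax, (h, w))  # seed with uniform noise
        cost = np.array([[sad(left, right, y, x, int(disp[y, x]), k)
                          for x in range(w)] for y in range(h)])
        for it in range(iterations):
            step = 1 if it % 2 == 0 else -1  # alternate scan direction
            ys = range(h) if step == 1 else range(h - 1, -1, -1)
            xs = range(w) if step == 1 else range(w - 1, -1, -1)
            for y in ys:
                for x in xs:
                    # propagation: adopt a neighbor's disparity if cheaper
                    for ny, nx in ((y - step, x), (y, x - step)):
                        if 0 <= ny < h and 0 <= nx < w:
                            c = sad(left, right, y, x, int(disp[ny, nx]), k)
                            if c < cost[y, x]:
                                disp[y, x], cost[y, x] = disp[ny, nx], c
                    # random search: perturb at halving amplitudes
                    radius = dmax / 2.0
                    while radius >= 1.0:
                        d = disp[y, x] + rng.uniform(-radius, radius)
                        if 0 <= d <= dmax:
                            c = sad(left, right, y, x, int(d), k)
                            if c < cost[y, x]:
                                disp[y, x], cost[y, x] = d, c
                        radius /= 2.0
        return disp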

If you are having a hard time visualizing this, here is a video from the original authors of PatchMatch Stereo.

PatchMatch can be quick in the hole-filling application and probably for stereo from video (propagating between sequential frames can greatly help convergence). Unfortunately, it is quite slow when applied to pairwise stereo. When performing stereo correlation in ASP 2.3, we group kernel evaluations that share the same disparity value (sometimes called a disparity plane in the literature). This means that overlapping kernel evaluations reuse pixel comparisons, which is a huge speed boost. My implementation of PatchMatch has none of that. My implementation also solves for disparity at floating-point precision. While this gives me very detailed disparity maps, the downside is that my implementation spends 75% of its time performing interpolation. I think for this algorithm to become useful to researchers, I’ll need to discretize the disparities and pre-render the input supersampled to avoid repeated interpolation calculations.
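To make the contrast concrete, here is a sketch of that shared-arithmetic idea (my own illustration, not ASP’s actual code). For one fixed disparity, the per-pixel absolute differences are computed once, and every window sum comes from a box filter, so all overlapping kernel evaluations at that disparity share their pixel comparisons:

    # Brute-force SAD cost volume with shared work per disparity plane.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def sad_volume(left, right, dmax, k=3):
        h, w = left.shape
        area = (2 * k + 1) ** 2
        cost = np.full((dmax + 1, h, w), np.inf)
        for d in range(dmax + 1):
            diff = np.abs(left[:, d:].astype(float) -
                          right[:, :w - d].astype(float))
            # box filter averages each (2k+1)^2 window; scale to a sum
            cost[d, :, d:] = uniform_filter(diff, size=2 * k + 1) * area
        return cost

    # winner-takes-all map: disp = np.argmin(sad_volume(L, R, 64), axis=0)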

I have one final statement to make about the PatchMatch algorithm. In practice, PatchMatch produces results very similar to a brute-force search over the entire search range where the winner (lowest cost) takes all. That is different from ASP’s hierarchical search, which at each pyramid level falls into local minima; it is only the hierarchical part that is of any use in finding the global minimum. What this means for PatchMatch is that we need a cost metric whose global minimum is the correct match. For narrow-baseline stereo in a controlled lighting environment, the fast Sum of Absolute Differences (SAD) fits the bill. But in the cruel realities of satellite imagery, the only solution is Normalized Cross-Correlation (NCC). Unfortunately, NCC is quite a bit slower to evaluate than SAD. Also, in cases where SAD works, NCC performs worse (probably because it is sensitive to the calculation of the mean?).
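For reference, the extra expense of NCC is all in the normalization. Here is the textbook form between two equal-sized grayscale windows (a sketch, not ASP’s implementation); NCC is higher-is-better, so 1 - ncc(a, b) serves as a cost:

    # Normalized cross-correlation of two windows a and b.
    import numpy as np

    def ncc(a, b, eps=1e-9):
        a = a.astype(float) - a.mean()  # the mean subtraction SAD skips
        b = b.astype(float) - b.mean()
        return (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps)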

Where PatchMatch does poorly

I’ve already covered my primary concern, which is the speed of PatchMatch. However, if we view PatchMatch as a replacement for both correlation and BayesEM, it already appears cost effective. A different concern is what happens when the search range’s area is larger than the area of the image being correlated. A real-world example would be rasterizing a 256^2 tile for a WV2 image where the search range is 1400×100: the area of the tile (65,536 pixels) is less than the area of the search range (140,000). The number of close valid guesses seeded by the initial uniform noise drops dramatically. It might be a good idea to take the interest points that ASP finds and dump them into the initial random disparity guesses that PatchMatch evaluates. This would ensure there are disparities worth propagating from.

Previously I mentioned that, in my experiments, PatchMatch seems to behave like a brute-force winner-takes-all algorithm. This means that increasing the search size also decreases quality, because the odds of finding an outlier with lower cost than the actually correct match go up. So maybe abandoning hierarchical search entirely is a bad idea. Another option might be enforcing a smoothness constraint that would heavily penalize the random jumping that characterizes outliers. The enforcement of a smoothness constraint was the driving force behind the power of Semi-Global Matching.

Expanding the Algorithm

Since kernel evaluations in PatchMatch are individual problems, with no shared arithmetic like there is in ASP’s stereo correlation, it is easy to bolt advanced algorithms onto PatchMatch. The original PatchMatch Stereo paper [2] mentioned adding parameters to the disparity maps so that an affine window could be used for matching. This would improve results on slopes and, I believe, would be a better alternative to map-projecting the input imagery beforehand.

Another idea mentioned in the paper was adding Adaptive Support Weights (ASW) [3] to each kernel evaluation. This weights pixels that match the color of the kernel’s center more strongly, in addition to weighting central pixels more heavily than pixels at the edge. The idea is that pixels of the same color are likely to be at the same disparity. While not always correct, it definitely applies to scenarios where the top of a building is a different color than its sides. The implementations I show in my figures operate only on grayscale imagery, not color as the original paper proposed.
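The weighting itself is only a few lines. Below is a sketch for the grayscale case following Yoon and Kweon [3]; the gamma values are illustrative, not tuned:

    # Adaptive support weights for one grayscale window.
    import numpy as np

    def asw(window, gamma_c=10.0, gamma_p=7.0):
        k = window.shape[0] // 2
        yy, xx = np.mgrid[-k:k + 1, -k:k + 1]
        spatial = np.sqrt(yy ** 2 + xx ** 2)  # distance from the center
        similarity = np.abs(window.astype(float) - float(window[k, k]))
        return np.exp(-similarity / gamma_c - spatial / gamma_p)

    # weighted cost between windows a and b:
    #   w = asw(a) * asw(b)
    #   cost = (w * np.abs(a - b)).sum() / w.sum()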

In practice, this additional weighting does a good job of preserving edges at depth discontinuities, which is important for cliffs and for buildings. A proposed improvement is geodesic adaptive support weights, which heavily weights same-color pixels only if they are connected to the central pixel. This fixes problems like a picture of blades of grass, where the blades have the same color but are actually at different disparities.

Wrapping back around to the idea that PatchMatch needs a cost metric whose global minimum is the correct match: it might be worthwhile to explore different cost metrics such as Mutual Information or, if I remember correctly, the fast-evaluating Matching by Tone Mapping (MTM) [4]. A nice side effect would be that correlation becomes completely lighting invariant and could even correlate between infrared and visible imagery.

Additional Comparisons

Here’s a disparity image of NASA Ames Research Center captured from a Pleiades satellite. Whatever they do to preprocess their imagery, I couldn’t epipolar-rectify it. The search range given to both ASP and PatchMatch was [-40,-20,70,40]. Despite being noisy, I like how PatchMatch with ASW preserved straight edges. The specularity of some of the roof lights on a hangar, however, really threw ASW for a loop.

I also processed a crop from a WorldView-2 image of somewhere in the McMurdo Dry Valleys. This really shot a hole in my argument that PatchMatch would be completely invariant to search range. The search range was [-700,-50,820,50]. I did, however, have to hand-tune the search range to get this excellent result from ASP; the automatic detection initially missed the tops of the mountains.

Source Code

This is messy research code, not production software. You’ll need to understand Vision Workbench and Makefiles if you want to compile and modify it.

https://github.com/zmoratto/PatchMatch

Further Research

I’m still diving through papers during my free evenings. I don’t do this at work because I have another project that is taking my soul. There appears to be a good paper called PatchMatch Filter that tries to improve speed through superpixel calculation [5]. There is also an implementation of PatchMatch that adds a smoothness constraint and performs better than the original PatchMatch Stereo paper [6]. When it came out, it was the best-performing algorithm on the Middlebury dataset, though recently another graph-cut algorithm dethroned it. I myself will also look at applying PatchMatch as a refinement to a noisy disparity result from ASP 2.3 using a kernel size of 3. A small kernel still evaluates extremely quickly even if the search range is huge.

I’m sorry this post doesn’t end in a definitive conclusion, but I hope you found it interesting.

References

[1] Barnes, Connelly, et al. “PatchMatch: A randomized correspondence algorithm for structural image editing.” ACM Transactions on Graphics (TOG) 28.3 (2009): 24.
[2] Bleyer, Michael, Christoph Rhemann, and Carsten Rother. “PatchMatch Stereo - Stereo Matching with Slanted Support Windows.” BMVC. Vol. 11. 2011.
[3] Yoon, Kuk-Jin, and In So Kweon. “Adaptive support-weight approach for correspondence search.” IEEE Transactions on Pattern Analysis and Machine Intelligence 28.4 (2006): 650-656.
[4] Hel-Or, Yacov, Hagit Hel-Or, and Eyal David. “Fast template matching in non-linear tone-mapped images.” Computer Vision (ICCV), 2011 IEEE International Conference on. IEEE, 2011.
[5] Lu, Jiangbo, et al. “PatchMatch Filter: Efficient Edge-Aware Filtering Meets Randomized Search for Fast Correspondence Field Estimation.” Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on. IEEE, 2013.
[6] Heise, Philipp, et al. “PM-Huber: PatchMatch with Huber Regularization for Stereo Matching.” Computer Vision (ICCV), 2013 IEEE International Conference on. IEEE, 2013.

Advances in LRO-NAC processing in ASP

Since I last wrote, we’ve hired a new full-time employee. His name is Scott, and we assigned him the task of learning ASP and LROC. The first utilities he’ll be contributing back to ASP are lronacjitreg and lronac2mosaic.py. The first utility is very similar to the ISIS utility of a similar name designed for HiRISE. The second, lronac2mosaic.py, uses the first tool and can take LRO-NAC EDR imagery and make a non-projected image mosaic. Internally, lronac2mosaic.py runs ‘noproj’ and then ‘handmos’ to blend the images together. There is an offset between the images due to model imperfections; finding the correct offset so the combined images are seamless is the task of lronacjitreg. All of this is just a streamlined version of what I wrote in a past blog post.

Previously, users and our team only had the option of running all 4 combinations of the 4 LRO-NAC input files through ASP and gluing them together afterwards. Now, with lronac2mosaic, we can feed whole LRO-NAC observations into ASP and receive the full DTM in one go. No messy mosaicking of 4 files.

I’ve used Scott’s program successfully to recreate most of the DTMs that ASU has made via SOCET SET. Using my home server, I’ve been able to recreate 77 of their DTMs to date. We’ve been fixing bugs as we hit them. One of the biggest was in our search-range guessing code: ASP had a bad habit of ignoring elevation maxima in the image because it thought those IP measurements were noise. The next release of ASP will have the fruits of that labor, and we should now have a better track record of getting measurements for the entire image.

One of the major criticisms I’m expecting from the large batch of LRO-NAC DTMs we plan to deliver next year concerns the quality of our DTMs’ placement in comparison to LOLA. Another engineer we have on staff, Oleg, has just the solution for this. He has developed an iterative closest point (ICP) program called pc_align, which will be in the next release. It is built on top of ETH Zurich’s Autonomous Systems Lab’s libpointmatcher and has the ability to take DTMs and align them to other DTMs or LIDAR data. This enables us to create well-aligned products whose height values agree with LOLA to within tens of meters. Our rough testing shows a CE90 of 4 meters against LOLA after performing our corrections.
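A typical invocation should look something like the following sketch (the file names are placeholders; –max-displacement caps how far the source cloud may be moved during alignment):

> pc_align --max-displacement 200 LOLA_tracks.csv our_DEM.tif -o align/run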

We’re not ready for the big production run yet. One problem still plaguing our process is that we can see the CCD boundaries in our output DTMs. We believe most of this problem is due to the fact that the angle between the lines of sight of the left and right CCDs changes with every observation, while ISIS only has one number programmed into it, the number provided by the FK. Scott is actively developing an automated system to determine this angle and to make a custom FK for every LRO-NAC observation. The second problem we’re tracking is that areas of high slope are missing from our DEMs. This is partially because we didn’t use Bayes EM for our test runs, but it also seems like our disparity filtering is overly aggressive or just wrong. We’ll get on to that. That’s all for now!

Mimicking Zero Gravity

I know I promised a different blog post than this one, but I’m a bit pressed for time lately. I do have something quick to show, though …

This summer, my interns Abe Shultz and Benjamin Reinhardt rejuvenated an old gantry system that is used for mimicking zero gravity for small spacecraft. This is part of my work with the NASA SPHERES program. In this video I demonstrate the gantry responding to external forces applied to a wooden analog for a SPHERES satellite. In the future the real SPHERES unit will be loaded up and will use CO2 thrusters to move itself about the room, with the gantry responding accordingly.

This test apparatus is one facility I plan to use to test out a new visual navigation system that my coworkers and I are developing. What I’m looking forward to is the vomit comet flight we have scheduled toward the end of the year. This is why I’ve left some of the software development of ASP to others.

Noproj’n LRO Imagery

Earlier this year I found out that I got my first proposal funded. I’ve had directed funding before thanks to NASA’s cryosphere program, and I’ve been a Co-I on numerous other funded proposals with co-workers and friends at USGS. However, my LASER proposal to perform mass DEM production from LROC-NA imagery was something I wrote as PI and that was openly competed. Now that it is accepted, I have two years to follow through, and I’d like to share the whole process here on the blog.

The first problem I’m going to write about has bugged me for a while: each LROC-NA observation is actually 2 files, which makes using ASP awkward. In the past I’ve been telling people to perform 4 runs with stereo, covering the combinations LE-LE, LE-RE, RE-LE, and RE-RE. UofA had the same problem with HiRISE, which comes down in 20 different files; they worked out an ISIS process chain that would noproj those files together, among other things, to make a single observation. I don’t know why such a step doesn’t exist for LROC-NA, but today I’ll show you what I’ve come up with.

If you try right now to noproj the RE cub file against the LE cub file, you’ll find that the program errors out because an ideal camera model for LROC has not been defined. Ideal noproj camera definitions for all ISIS cameras live in the file $ISIS3DATA/base/applications/noprojInstruments003.pvl. Looking at that file, we see there are 5 elements that can be defined for the ideal camera model: TransY, ItransS, TransX, ItransL, and DetectorSamples. DetectorSamples is easy; it’s the width of the final output image. The Trans* variables are measured in millimeters and define a focal plane offset from the original camera model we are using. I plan to noproj with the match file being the LE file, thus Trans* references a shift from the original position of the LE sensors. The Itrans* variables are the pixel equivalents of the millimeter measurements; ISIS uses them to run the math in the other direction. If the Trans* and Itrans* variables don’t match, funny errors will occur where the CCDs noproj correctly but the triangulation in ASP is completely bonkers. Relating the two is easy: just use the pixel pitch, which for LROC-NA is 7 micrometers per pixel.

So how do we decide what to set the TransY and TransX values to? If those values are left at zero, the LE image will be centered on the new image and the RE will be butted up beside it but falling off the edge. A good initial guess would be to set TransY to a shift of half the CCD width. A better idea, I thought, was to look at the FK kernel, identify the angle difference between the two cameras, and use the instantaneous field of view to convert that to a pixel offset between the two CCD origins. Converting to millimeters with the pixel pitch and dividing by two gives the shift we want. Below are the final noproj settings I used for LROC.

 Group = "LUNAR RECONNAISSANCE ORBITER/NACL"
    TransY = 16.8833
    ItransS = -2411.9
    TransX = 0.6475
    ItransL = -92.5
    DetectorSamples = 10000
  End_Group
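As a sanity check, the Itrans* values are just the negated Trans* values divided by the pixel pitch, and TransX can be rederived from the 0.106-degree angle between the cameras (discussed further below) together with the NAC’s instantaneous field of view, which I take to be roughly 10 microradians (0.5 m/pixel from a 50 km orbit; my assumption, not a value read from the FK):

    # Sanity-checking the PVL values above.
    import math

    pitch_mm = 0.007                   # 7 micrometers per pixel
    print(-16.8833 / pitch_mm)         # -2411.9, matches ItransS
    print(-0.6475 / pitch_mm)          # -92.5, matches ItransL

    angle = math.radians(0.106)        # angle between LE and RE boresights
    offset_px = angle / 10e-6          # ~185 pixels between the CCD origins
    print(offset_px * pitch_mm / 2)    # 0.6475 mm, matches TransX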

At this point we can noproj the imagery and then handmos them together. A naïve version would look something like the following.

> noproj from=originalRE.cub to=RE.cub match=originalLE.cub
> noproj from=originalLE.cub to=LE.cub match=originalLE.cub
> cp LE.cub LERE_mosaic.cub
> handmos from=RE.cub mosaic=LERE_mosaic.cub

Alas, the LE and RE imagery does not perfectly align. In the HiRISE processing world we would use hijitreg to determine a mean pixel offset, but there is no generic form of hijitreg that we can use for LROC-NA. There is the coreg application, but in all my tests that program failed to find any correct match points between the images. I’ve tried two other methods successfully. The first was to manually measure the offset using Qtie. (Warning: sending these images to Qtie requires twiddling with how serial numbers are made for ideal cameras. This is something I should document at some point, as it would allow bundle adjusting ideal cameras like fully formed HiRISE and LROC images.) My other solution was the example correlate program in Vision Workbench. I did the following to keep processing quick (a 5 minute run time).

> crop from=LE.cub to=LE.centerline.cub sample=4900 nsamples=200
> crop from=RE.cub to=RE.centerline.cub sample=4900 nsamples=200
> parallel isis2std from={} to={.}.tif format=tiff ::: *centerline.cub
> correlate --h-corr-min -60 --h-corr-max 0 --v-corr-min 0 --v-corr-max 60 LE.centerline.tif RE.centerline.tif

That creates a disparity TIF file. The average of the valid pixels (those whose third channel is 1) can then be used to solve for the mean translation, which can then be given to handmos through its outsample and outline options. Ideally this would all be one program like hijitreg, but that is for another time. Below is the final result.
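Extracting that mean translation is a few lines of Python. A sketch, assuming the disparity TIF is 3-channel float holding the horizontal offset, vertical offset, and validity flag; the filename is illustrative:

    # Average the valid disparities to get the mean CCD translation.
    import numpy as np
    import tifffile

    disp = tifffile.imread("LE-RE-disparity.tif").astype(float)
    valid = disp[:, :, 2] == 1
    print("mean sample offset:", disp[:, :, 0][valid].mean())
    print("mean line offset:", disp[:, :, 1][valid].mean())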

Hijitreg actually does more than just solve for the translation between CCDs on HiRISE. It correlates a line of pixels between the CCDs in hope of determining the jitter of the spacecraft. I can do the same!

From the above plots, it doesn’t look like there is much jitter, or the data is not in a form where I could determine it. The only interesting bit here is that the offset in the vertical direction changes with line number. I think this might be disparity due to terrain. The imagery I used for my testing was M123514622 and M123521405, which happened to be focused on the wall of Slipher crater. The NA cameras are angled 0.106 degrees apart in the vertical direction. Ideal stereo geometry would be 30 or 15 degrees, but 0.106 degrees should still produce some disparity given the massive elevation change into the crater. I wouldn’t recommend using this for 3D reconstruction, but it would explain the vertical offset signal. The horizontal signal has less amplitude but does seem like it might be showing spacecraft jitter. However, it looks aliased, making it impossible to determine the frequency.

Anyway, making a noproj version of the LROC-NA observation is a huge boon for processing with Ames Stereo Pipeline. Using affine epipolar alignment, no map projection, and simple parabola subpixel, it is possible to make a beautiful DEM/DTM in 2.5 hours. 70% of that time was spent in triangulation, because ISIS is single threaded. That would be faster with the application parallel_stereo (renamed from stereo_mpi in ASP 2.2.2).

Mass CTX Processing

A few weeks back, Ross Beyer presented my blog posts on autonomous HiRISE DEM processing at the HiRISE team meeting in Tucson, AZ. This brought up the question of whether this could be done for CTX. Of course! ASP can be applied to bulk processing for any of the missions it supports, Earth or any place ISIS and CSpice have a defined coordinate system for. Just put some safeties about run time into the processing scripts, because ASP still occasionally goes mad and eats a whole bunch of processing time for no output. (We’re working on it!)

Processing CTX stereo pairs, however, is a little more difficult than the HiRISE processing I was doing before. HiRISE lists all their stereo pairs on their website and Dr. Shane Byrne’s website; there’s no equivalent for CTX. Luckily for me, some folks at UofA wrote PairendipityCTX (Chris Schaller?). They provided Ross and me a detailed report of overlapping files and other statistics. I cut out everything but the filenames for my own use, and you can get a copy of the list with its 1,542 stereo pairs here.

Another difference is how these two missions store their data on PDS. I can look at a HiRISE filename and work out its download path in PDS with no trouble. CTX, on the other hand, seems to have arbitrary volume breaks, so the download URL is not predictable. My solution is a bad solution, but a quick one: I wrote a Python script that scraped PDS’s servers and identified all the CTX images and their URLs. I then just ‘grep’ for the URL in the processing scripts. Here’s the resulting text file that lists all 50,708 CTX images that existed at the time of my scraping. This is a mean trick, because my script can make HTTP requests much faster than a human can; in a sense, I’m DOS’ing the PDS servers. So please copy my results rather than my methods.

Processing scripts

Previously, with my autonomous HiRISE processing efforts, I just wrote a Bash script and spawned it multiple times on my server using GNU Parallel. That’s great, but looking around my office I saw a lot of unused computers that I’d like to do my bidding. What I wanted was a job management system like PBS, which is what they use on the supercomputer. But PBS is a little too heavy and sometimes costs money. So I instead found two alternative projects that seemed to fit the bill: Gearman and Celery. Gearman is the minimalist; Celery requires a database backing and multiple open ports for each slave worker. I decided to use Gearman since it seemed simpler to learn.

My code for this little project is available on GitHub in my Mars3DGearman project. I hope to eventually add HiRISE support. Here’s how it plays out: all machines make a working directory that contains the CTX stereo list, the CTX URL lookup list, and the folders DEM and DRG. One machine is designated the server; in my case, my workstation at home. It starts the only instance of ctx_processor.py with the help of a backing ‘gearmand’ executable (the Gearman daemon). All of the slaves then SSH back to my server and forward port 4730, the port used by Gearman for communication. Then the slave machines can each start one or more instances of ctx_worker.py.
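For readers who haven’t seen Gearman before, a worker ends up looking roughly like this sketch using the python-gearman package; the task name and the ctx_stereo.sh helper are placeholders, not the actual names used in Mars3DGearman:

    # A minimal Gearman worker that runs one stereo pair per job.
    import subprocess
    import gearman

    # connect through the SSH tunnel that forwards port 4730 to the server
    worker = gearman.GearmanWorker(["localhost:4730"])

    def process_pair(gearman_worker, job):
        # job.data carries the two image names of one CTX stereo pair
        left, right = job.data.split()
        subprocess.call(["bash", "ctx_stereo.sh", left, right])
        return "done"

    worker.register_task("ctx_pair", process_pair)
    worker.work()  # block and poll for jobs forever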

The part I haven’t worked out is how to send the completed results home to my main server. The ctx_worker script produces a DEM and an orthophoto and just dumps them locally into the DEM and DRG folders. Gearman does allow sending binary strings back to the main server, but I’m betting a 100 MB string would break something. I chose a different route: since I already have all the slaves SSH’ing back to my main server, I decided to simply rsync all the slaves’ DEM and DRG folders back home. I don’t have to re-enter my password, as I’ve enabled SSH ControlMaster, which reuses previous connections. For now, I just put that rsync in a watch command that fires every 2 hours. Alternatively it could go inside the ctx_worker script. A better bet would be to use SSH keys.
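On each slave, that amounts to something like the following, where the hostname and destination path are placeholders:

> watch -n 7200 "rsync -av DEM DRG myserver:ctx_results/"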

Another worthwhile improvement over my HiRISE processing scripts is the inclusion of a timeout for each step. When it comes to CTX, if the correlation doesn’t finish in 2 hours, I’m not interested. This timeout is achieved through the run_cmd and process_timeout functions in the ctx_worker script. The Internet helped me a lot in figuring out how to make that a reality. Thanks, Internet!
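The recipe is a common one; here is a sketch of the idea (not the exact code in ctx_worker.py): launch the child process, poll it, and kill it once the deadline passes.

    # Run a command, killing it if it exceeds timeout_s seconds.
    import subprocess
    import time

    def run_cmd(cmd, timeout_s):
        child = subprocess.Popen(cmd)
        deadline = time.time() + timeout_s
        while child.poll() is None:
            if time.time() > deadline:
                child.kill()
                raise RuntimeError("timed out: " + " ".join(cmd))
            time.sleep(5)
        return child.returncode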

Results

These last few days I’ve roped 4 machines into doing my bidding. So far they’ve produced ~260 DEMs; 5 new DEMs completed just while I was writing this article. There are still some hiccups in the process, but when the stars align, I seem to produce over 50 new DEMs every day. I’m not going to show you all of them, as that felt like a lot of work to post to the blog. Instead, here are a couple of screenshots of some interesting places in Google Mars. The color ramp is a little funky because someday I need to learn to reference everything against the Mars areoid and not just the sphere.

Not everything looks as great as those screenshots. Here are some places that failed to correlate. I’m not sure what went wrong and I unfortunately don’t have the time to investigate.

In conclusion, this is just another proof of concept of what is possible. I hope that someday someone else will attempt this and do a better job than me. ASP is not perfect, but it can achieve a lot of processing on its own that could be beneficial to the scientific community.

Related

Shean, D. E., et al. “MRO CTX Stereo Image Processing and Preliminary DEM Quality Assessment.” Lunar and Planetary Institute Science Conference Abstracts. Vol. 42. 2011.

David Shean, MSSS, and Larry Edwards actually already attempted this once before! In the above abstract you can see that they went above and beyond my little demo and processed 1,180 stereo pairs. They also developed some initial steps for registering the output DEMs to MOLA and plotted the effect convergence angle has on the outcome of a stereo pair.