PC Align

Last month we had a new release of Ames Stereo Pipeline, version 2.3! We fixed a lot of bugs and implemented quite a few new features. But my two prized new features in ASP are pc_align and lronac2mosaic. Today I'd like to introduce just pc_align, a utility for registering DEMs, LIDAR points, and ASP point clouds to each other. All you have to do is specify an input, a reference, and a rough estimate of how bad you think the misalignment is.

Does that sound like magic? Under the hood, pc_align is an implementation of the iterative closest point algorithm (ICP); specifically, we use the libpointmatcher library from ETH internally. ICP iteratively attempts to match every point of the input with its nearest neighbor in the reference point cloud, then solves for the transform that globally reduces the distances between the current set of matches. Then it repeats itself: a new round of matching input to reference, then another global solve. Repeat, repeat, repeat until the sum of distances between matches no longer improves.
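
For the curious, here is a minimal point-to-point ICP sketch in Python. This illustrates the match/solve/repeat loop described above; it is not ASP's actual implementation (that lives in libpointmatcher), and the rigid-transform solve here is the standard SVD/Kabsch approach.

import numpy as np
from scipy.spatial import cKDTree

def icp(source, reference, max_iter=50, tol=1e-6):
    # Align an Nx3 'source' cloud to an Mx3 'reference' cloud.
    tree = cKDTree(reference)
    src = source.copy()
    prev_error = np.inf
    for _ in range(max_iter):
        # Match every input point with its nearest reference neighbor.
        dist, idx = tree.query(src)
        matched = reference[idx]
        # Solve for the rigid transform that best maps src onto matched
        # (Kabsch: SVD of the cross-covariance matrix).
        src_mean, ref_mean = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_mean).T.dot(matched - ref_mean)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T.dot(U.T)
        if np.linalg.det(R) < 0:    # guard against a reflection solution
            Vt[-1, :] *= -1
            R = Vt.T.dot(U.T)
        t = ref_mean - R.dot(src_mean)
        src = src.dot(R.T) + t
        # Stop once the total match distance no longer improves.
        error = dist.sum()
        if abs(prev_error - error) < tol:
            break
        prev_error = error
    return src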

This means that failure cases for pc_align are when the reference set is too coarse to describe the features seen in the input. An example would be having a HiRISE DEM and only a single orbit of MOLA that intersects it. A single line of MOLA does nothing to constrain the DEM about the axis of the shot line. The 300 meter post spacing might also not be detailed enough to constrain a DEM that is looking at small features such as dunes on a mostly flat plain. What is required is a large feature that both the DEM and the LIDAR source can resolve.

CTX to HRSC example

Enough about how it works. Examples! I'm going to be uncreative and just process CTX images of Gale Crater because they're fast to process and easy to find. I've processed the CTX images P21_009149_1752_XI_04S222W, P21_009294_1752_XI_04S222W, P22_009650_1772_XI_02S222W, and P22_009716_1773_XI_02S223W using the stereo options "--alignment-method affineepipolar --subpixel-mode 1". This means correlation happens in 15 minutes and then triangulation takes an hour because ISIS subroutines are not thread safe. I've plotted the resulting two DEMs on top of a DLR HRSC product, H1927_0000_DT4.IMG. It is important to note that this version of the DLR DEM is referenced against the Mars ellipsoid and not the areoid. If your data is referenced against the areoid, you'll need to use dem_geoid to temporarily remove it for processing with pc_align, which only understands ellipsoidal datums.
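
That conversion is a one-liner. Here is a hedged sketch (the filename is a placeholder; check the dem_geoid documentation for your ASP version):

> dem_geoid --reverse-adjustment areoid_dem.tif -o dem_ellipsoid

If memory serves, the output lands at dem_ellipsoid-adj.tif with heights relative to the ellipsoid, ready for pc_align.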

In the above picture, the two ASP-created DEMs stick out like sore thumbs and are misplaced by some 200 meters. This is due to the pointing information for MRO, and subsequently CTX, being imperfect (how much can you ask for anyway?). You could run jigsaw, which is the gold standard solution, but that takes a lot of manual effort. So instead let's use pc_align with the commands shown next.

> pc_align --max-displacement 200 H1927_0000_DT4.tif P22-PC.tif \
        --save-transformed-source-points -o P22_align/P22_align
> point2dem --t_srs "+proj=sinu +lon_0=138 +x_0=0 +y_0=0 \
        +a=3396000 +b=3396000 +units=m +no_defs" \
        --nodata -32767 P22_align/P22_align-trans_source.tif

There are three important observations to make from the command line above. (1) We are using the max displacement option to set the upper bound of how bad we think the misalignment is: 200 meters. (2) I'm feeding the ASP PC file as the input source instead of ASP's DEM. This is because in (3), with the save transformed source points option, we'll be writing out another PC file. pc_align can only export PC files, so we always have to perform another round of point2dem. It is possible to run stereo -> point2dem -> pc_align -> point2dem, but that means you are unnecessarily resampling your data an extra time. Using the PC file directly from stereo saves us from potential aliasing and removes a point2dem call.

Here's the final result where both CTX DEMs are plotted on top of the HRSC DEM. Everything looks really good except for that left edge. This might be because the DEM was rendered with incorrect camera geometry, leaving the output DEM subtly warped away from a perfect solution.

Another cool feature that pc_align author Oleg Alexandrov added was recording the beginning and ending matching errors in CSV files. They're written out with the names <prefix>-{beg,end}_errors.csv. You can load those up in QGIS and plot their errors to visualize that pc_align uniformly reduced the matching error across the map. (Thanks Ross for showing me how to do this!)

CTX to MOLA

Quite a few MOLA shots can be found inside the CTX footprints. Above is a plot of all MOLA PEDR data for Gale Crater. I was able to download this information in CSV format from the MOLA PEDR Query tool from Washington University in St. Louis. Conveniently, pc_align can read CSV files, just not in the format provided by this tool. pc_align expects the data in the order: longitude, latitude, elevation against the ellipsoid. What the tool provides is latitude, longitude, elevation against the areoid, and then radius. So I had to manually edit the CSV in Excel into the correct order and create my elevation values by subtracting 3396190 from the radius column (this number was wrong in the first draft; see the update below). The other bit of information needed with CSV files is the datum to use. If you don't define one, pc_align will assume you're using WGS84.
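
If you'd rather not do the shuffle in Excel, a few lines of Python do the same job. This is a hedged sketch: the input column order follows my description above, but check your own file's header before trusting it.

import csv

D_MARS_RADIUS = 3396190.0   # meters; the sphere implied by --datum D_MARS

with open('pedr_raw.csv') as fin, open('mola.csv', 'w', newline='') as fout:
    reader = csv.reader(fin)
    writer = csv.writer(fout)
    next(reader)                          # skip the header row
    for row in reader:
        lat, lon, areoid_elev, radius = row[:4]
        # Height above the datum sphere, not the areoid elevation.
        height = float(radius) - D_MARS_RADIUS
        writer.writerow([lon, lat, height])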

> pc_align --max-displacement 200 P22-PC.tif mola.csv \
        -o P22_mola/P22_mola --datum D_MARS \
        --save-inv-transformed-reference-points
> point2dem --t_srs "+proj=sinu +lon_0=138 +x_0=0 +y_0=0 \
        +a=3396000 +b=3396000 +units=m +no_defs" \
        --nodata -32767 P22_mola/P22_mola-trans_reference.tif

Two things to notice in these commands: the inputs are reversed from before, and I'm saving the inverse transform. You can keep things in the same order as when I was aligning to HRSC; it is just that things will run very slowly. For performance reasons, the denser cloud should be treated as the reference, and then you must request that the reference be transformed back to the source. You'll likely be using this inverse form routinely with LIDAR sources.

In the end I was able to reduce the alignment error for my CTX DEMs from over 50 meters to less than 15 meters against MOLA, and from over 100 meters to 40 meters against HRSC. That's a result I'm quite happy with for a single night of processing at home. You can see my final composited, MOLA-registered CTX DEMs on the left. The ASP team will have more information about pc_align in LPSC abstract form next year. We also hope that you try out pc_align and find it worth regular use in your research.

Update:

I goofed in the MOLA example! Using D_MARS implies a datum that is a sphere with a 3396190 meter radius. I had subtracted the wrong number from MOLA's radius measurement (the value 3396000). That probably had some effect on the registration result shown in the pictures, but the mistake is smaller than the shot spacing of MOLA. This means the horizontal registration is fine, but my output DTMs were 190 meters higher than they should have been. FYI, D_MOON implies a datum that is a sphere with a radius of 1737400 meters.

Advances in LRO-NAC processing in ASP

Since I last wrote, we've hired a new full-time employee. His name is Scott, and we assigned him the task of learning ASP and LROC. The first utilities he'll be contributing back to ASP are lronacjitreg and lronac2mosaic.py. The first utility is very similar to the similarly named ISIS utility designed for HiRISE (hijitreg). The second utility, lronac2mosaic.py, uses the first tool and can take LRO-NAC EDR imagery and make a non-projected image mosaic. What lronac2mosaic.py does internally is 'noproj' and then 'handmos' the images together. There is an offset between the images due to model imperfections, and finding the correct offset so the combined images are seamless is the task of lronacjitreg. All of this is just a streamlined version of what I wrote in a past blog post.
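
For the curious, the core of the recipe looks roughly like the ISIS calls below. This is a hedged sketch: the filenames are placeholders, and the outsample/outline values are stand-ins for the offsets lronacjitreg computes, so check the ISIS documentation before copying.

> noproj from=M123456_LE.cal.cub match=M123456_LE.cal.cub \
        to=M123456_LE.noproj.cub
> noproj from=M123456_RE.cal.cub match=M123456_LE.cal.cub \
        to=M123456_RE.noproj.cub
> handmos from=M123456_RE.noproj.cub mosaic=M123456_LE.noproj.cub \
        outsample=5000 outline=1 priority=ontop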

Previously users and our team only had the option to run all 4 combinations of the 4 LRO-NAC input files through ASP and then glue them together afterwards. Now with the use of lronac2mosaic, we can feed whole LRO-NAC observations into ASP and receive the full DTM in one go. No messy mosaicking of 4 files.

I've used Scott's program successfully to recreate most of the DTMs that ASU has made via SOCET SET. Using my home server, I've been able to recreate 77 of their DTMs to date. We've been fixing bugs as we hit them. One of the biggest was in our search range guessing code; the next release of ASP will have the fruits of that labor. Previously ASP had a bad habit of ignoring elevation maxima in the image because it thought those IP (interest point) measurements were noise. Now we should have a better track record of getting measurements for the entire image.

One of the major criticisms I'm expecting about the large batch of LRO-NAC DTMs we plan to deliver next year is the quality of their placement in comparison to LOLA. Another engineer we have on staff, Oleg, has just the solution for this. He has developed an iterative closest point (ICP) program called pc_align which will be in the next release. It is built on top of the ETH Zurich Autonomous Systems Lab's libpointmatcher and has the ability to take DTMs and align them to other DTMs or LIDAR data. This enables us to create well-aligned products whose height values agree with LOLA to within tens of meters. Our rough testing shows us having a CE90 of 4 meters against LOLA after performing our corrections.

We're not ready for the big production run yet. One problem still plaguing our process is that we can see the CCD boundaries in our output DTMs. We believe most of this problem is due to the fact that the angle between the lines of sight of the left and right CCDs changes with every observation. ISIS, however, only has one number programmed into it: the number provided by the FK (frames kernel). Scott is actively developing an automated system to determine this angle and to make a custom FK for every LRO-NAC observation. The second problem we're tracking is that areas of high slope are missing from our DEMs. This is partially because we didn't use Bayes EM for our test runs, but it also seems like our disparity filtering is overly aggressive or just wrong. We'll get on to that. That's all for now!

Second Thought on LRO-NAC NoProj

Mark Rosiek from USGS Astrogeology expressed some doubt in my noproj recipe for LRO-NAC. This is completely reasonable, because if everything were perfect there would be no offset between the LE and RE CCDs after the noproj step. He requested 2 DEM samples of Tsiolkovsky Crater so he could compare them against USGS work. I decided I'd try the same during free time throughout the day. Unfortunately, I couldn't find a trusted reference DEM to compare against. I can't find ASU's result for this area on their RDR webpage, and it is impossible to use the LMMP Portal. Can you even download the actual elevation values from LMMP? No, I don't want your color render or greyscale!

The next best idea is to just difference the two DEMs I created and compare them. I didn't bundle adjust these cameras at all, so their placement relative to each other is off by some 150 meters. After ICPing them together and then differencing them, I can produce an error map. The gradient or shape of the error can help clue me in to how bad my fiddling with the ideal camera was. Below is the result between the stereo pairs M143778723-M143785506 and M167363261-M167370048.
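
The differencing itself is simple once both aligned DEMs sit on the same grid (gdal_calc.py requires identical dimensions, so regrid first if needed). A hedged sketch with placeholder filenames:

> gdal_calc.py -A M143-DEM.tif -B M167-DEM-aligned.tif --calc="A-B" \
        --NoDataValue=-32767 --outfile=error_map.tif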

You can definitely see the CCD boundary in this error map, and you can see it in the hillshade as well. It's about a 1 meter jump when looking at a single DEM. The error map seems to agree with this and shows a peak error of 4 meters at the CCD boundary location.

So what is the cause of these errors? Well, we have two sources: (1) the projection into the ideal camera model and (2) bad spacecraft ephemeris. I'd argue that most of the difference between these two DEMs comes from the spacecraft ephemeris; that's the top priority in my book to correct. However, when I look at the disparity map for these stereo pairs, there is a definitive jump at the CCD boundary. The cause of that would be an improper model of the angle between the LE and RE cameras in ISIS. This should be expected, since ISIS currently does not model the change in camera angles with respect to temperature. Also, when looking at the vertical disparity, it seems one side is always brighter than the other. This suggests that I didn't quite have the correct values for input to handmos.

Trying to fix all of this now might be a mistake on my part. I know that ASU will eventually produce new SPICE data that contains corrections from LOLA and a USGS study. I also imagine that eventually the work of E. J. Speyerer et al. will be implemented in ISIS.

Mass CTX Processing

A few weeks back, Ross Beyer presented my blog posts on autonomous HiRISE DEM processing at the HiRISE team meeting in Tucson, AZ. This brought up the question of whether the same could be done for CTX. Of course! ASP can be applied to bulk processing for any of the missions it supports: Earth, or any place ISIS and CSPICE have a defined coordinate system for. Just build some runtime safeties into the processing scripts, because ASP still occasionally goes mad and eats a whole bunch of processing time for no output. (We're working on it!)

Processing CTX stereo pairs, however, is in fact a little more difficult than the HiRISE processing I was doing before. HiRISE lists all their stereo pairs on their website and on Dr. Shane Byrne's website; there's no equivalent for CTX. Luckily for me, some folks at UofA wrote PairendipityCTX (Chris Schaller?). They provided Ross and me a detailed report of overlapping files and other statistics. I cut out everything but the filenames for my own use, and you can get a copy of the list with its 1,542 stereo pairs here.

Another difference was how these two missions stored their data on PDS. I can look at a HiRISE filename and work out its download path in PDS with no trouble. CTX on the other hand seems to have arbitrary volume breaks, thus the download URL is not predictable. My solution is a bad solution, but a quick solution. I wrote a python script that scraped PDS’s servers and identified all the CTX images it has and what their URLs are. I then just ‘grep’ for the URL in the processing scripts. Here’s the resulting text file that lists all of the 50,708 CTX images that existed at the time of my scraping. This is a mean trick because my script can make HTTP requests much faster than a human can. In a sense, I’m DOS’ing the PDS servers. So please copy my results rather than my methods.
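
The scraper itself was nothing fancy. Below is a hedged Python sketch of the idea; the base URL and the mrox_NNNN/data/ volume layout are my assumptions about the PDS imaging node's directory structure, and the sleep is there so you do not hammer the servers the way I did.

import re
import time
import urllib.request

BASE = 'https://pds-imaging.jpl.nasa.gov/data/mro/mars_reconnaissance_orbiter/ctx/'

def list_links(url, pattern):
    # Fetch a directory listing and pull out hrefs matching the pattern.
    html = urllib.request.urlopen(url).read().decode('utf-8', 'replace')
    return sorted(set(re.findall(pattern, html)))

urls = []
for volume in list_links(BASE, r'href="(mrox_\d+)/?"'):
    data_url = BASE + volume + '/data/'
    for img in list_links(data_url, r'href="([\w.]+\.IMG)"'):
        urls.append(data_url + img)
    time.sleep(1)   # rate-limit: please don't DOS the PDS servers

with open('ctx_urls.txt', 'w') as f:
    f.write('\n'.join(urls) + '\n')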

Processing scripts

Previously, with my autonomous HiRISE processing efforts, I just wrote a Bash script and then spawned it multiple times on my server using GNU Parallel. That's great, but looking around my office I saw a lot of unused computers that I'd like to do my bidding. What I wanted was a job management system like PBS, which is what they use on the supercomputer. But PBS is a little too heavy and sometimes costs money. So I instead found two alternative projects that seemed to fit the bill: Gearman and Celery. Gearman is the minimalist; Celery requires a database backend and multiple ports open for each slave worker. I decided to use Gearman since it seemed simpler to learn.

My code for this little project is available on GitHub in my Mars3DGearman project. I hope to eventually add HiRISE support. Here's how it plays out. All machines make a working directory that contains the CTX stereo list, the CTX URL lookup list, and the folders DEM and DRG. One machine is designated the server; in my case, my workstation at home. It starts the only instance of ctx_processor.py with the help of a backing 'gearmand' executable (the Gearman daemon). All of the slaves then SSH back to my server and forward port 4730, the port used by Gearman for communication. Then all the slave machines can start one or more instances of ctx_worker.py.
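
To give a flavor of how little code Gearman demands, here is a stripped-down sketch of the worker side using the python-gearman library. The task name, the payload format, and the ctx_pipeline.sh helper are invented for illustration; the real ctx_worker.py does considerably more.

import subprocess
import gearman

def process_stereo_pair(worker, job):
    # Payload is a simple string: "LEFT_URL RIGHT_URL".
    left, right = job.data.split()
    # Hand the pair off to the actual processing pipeline (hypothetical script).
    subprocess.call(['./ctx_pipeline.sh', left, right])
    return 'done'

# 'localhost:4730' works because every slave forwards that port over SSH.
worker = gearman.GearmanWorker(['localhost:4730'])
worker.register_task('ctx_stereo', process_stereo_pair)
worker.work()   # block here and process jobs as the server hands them out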

The part I haven't worked out is how to send the completed results home to my main server. The ctx_worker script produces a DEM and an orthophoto and then just dumps them locally into the DEM and DRG folders. Gearman does allow sending binary strings back to the main server, but I'm betting a 100 MB string would break something, so I chose a different route. Since I already have all the slaves SSH'ing back to my main server, I decided to simply rsync all the slaves' DEM and DRG folders back home. I don't have to re-enter my password, as I've enabled SSH ControlMaster, which reuses previous connections. For now, I just put that rsync in a watch command that fires every 2 hours. Alternatively it could live inside the ctx_worker script. A better bet would be to use SSH keys.
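
That periodic sync is a one-liner along these lines (the hostname and path are placeholders for my setup; 7200 seconds is the 2 hour interval):

> watch -n 7200 rsync -a DEM DRG lunokhod:/path/to/working_dir/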

Another worthwhile improvement compared to my HiRISE processing scripts is the inclusion of a timeout method for each step. When it comes to CTX, if the correlation doesn’t finish in 2 hours, I’m not interested. This timeout is achieved through the run_cmd and process_timeout functions in the ctx_worker script. The Internet helped me a lot in figuring out how to make that a reality. Thanks Internet!
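
The core of the timeout trick is small enough to sketch. This is not the actual run_cmd/process_timeout code, just a minimal Python version of the same idea: launch the child process, arm a timer that kills it, and cancel the timer if the process finishes in time.

import subprocess
import threading

def run_cmd_with_timeout(cmd, timeout_sec):
    # Launch the command and kill it if it exceeds its time budget.
    proc = subprocess.Popen(cmd)
    timer = threading.Timer(timeout_sec, proc.kill)
    timer.start()
    try:
        return proc.wait()    # negative return code means it was killed
    finally:
        timer.cancel()

# Example: give correlation at most 2 hours.
# run_cmd_with_timeout(['stereo', left, right, 'out/out'], 2 * 3600)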

Results

These last few days I've roped 4 machines into doing my bidding. So far they've produced ~260 DEMs; 5 new DEMs completed just while I was writing this article. There are still some hiccups in the process, but when the stars align, I seem to produce over 50 new DEMs every day. I'm not going to show you all of them, as that felt like a lot of work to post to the blog. Instead I'm just going to show off a couple of screenshots of some interesting places in Google Mars. The color ramp is a little funky because someday I need to learn to reference everything against the Mars areoid and not just the sphere.

Not everything looks as great as those screenshots. Here are some places that failed to correlate. I’m not sure what went wrong and I unfortunately don’t have the time to investigate.

In conclusion, this is just another proof of concept of what is possible. I hope that someday someone else will attempt this and do a better job than me. ASP is not perfect, but it can achieve a lot of processing on its own that could be beneficial to the scientific community.

Related

Shean, D. E., et al. “MRO CTX Stereo Image Processing and Preliminary DEM Quality Assessment.” Lunar and Planetary Institute Science Conference Abstracts. Vol. 42. 2011.

David Shean, MSSS, and Larry Edwards actually already attempted this once before! In the above abstract you can see that they went above and beyond my little demo and processed 1180 stereo pairs. They also developed some initial steps for registering the output DEMs to MOLA, and plotted the relationship between convergence angle and the outcome of a stereo pair.

Automatic Production of HiRISE DEMs 4

The server is still running. ASP has changed during this run: we introduced a new IP filtering technique, sped up MPI Parabola, and added a faster point2dem. I don't remember exactly when during this run those changes landed on my server. I also added a timeout for stereo pairs that take longer than 3 days to process; this is to stop a bad pair from dominating the machine for 2 weeks. During this run, I also changed the settings so that two stereo pairs are always being processed at the same time, to keep my CPU pegged.

HiRISE Processing Log
Left Image       Time    What Happened
ESP_011543_1665  N/A     Timed out
ESP_011563_2200  Lost    Too narrow search
ESP_011582_1730  Lost    Too narrow search
ESP_011592_1595  1d 10h  I dunno
ESP_011608_1525  3h      Too narrow search
ESP_011661_1410  N/A     Timed out
ESP_011662_1750  1d 5h   Pretty good
ESP_011672_1395  1d 12h  Awesome
ESP_011675_1470  20h     Too narrow search
ESP_011676_1700  1d 10h  Awesome
ESP_011677_1655  2h      Too narrow search
ESP_011683_1540  2d      I dunno
ESP_011688_1760  N/A     Timed out
ESP_011701_1790  11h     Okay
ESP_011715_1800  15h     Pretty good
ESP_011717_1910  2d 12h  Pretty good
ESP_011720_1835  21h     I dunno
ESP_011722_1460  N/A     Timed out
ESP_011728_0985  N/A     Timed out
ESP_011740_1830  1d 7h   Too narrow search
ESP_011743_1725  5h      Pretty good
ESP_011761_1485  13h     Too narrow search
ESP_011765_1780  N/A     Failed to download
ESP_011767_1640  4h      Pretty good
ESP_011772_1790  2h      Too narrow search
ESP_011773_1460  21h     Bad search range
ESP_011780_1515  7h      Awesome
ESP_011785_1875  4h      Too narrow search
ESP_011818_1505  2h      Awesome

Images

Automatic Production of HiRISE DEMs 3

It’s still kinda cold in San Jose so I’m still processing HiRISE DEMs. Please look at this post to see how I’m creating these elevation models.

HiRISE Processing Log
Left Image       Time       What Happened
ESP_011377_1835  1h 30m     Good
ESP_011384_2185  7h 20m     Good
ESP_011385_1695  13h 10m    Goodish
ESP_011386_2065  6h 0m      Goodish
ESP_011392_1370  2h 0m      Good
ESP_011397_1535  2.5 weeks  Correlated Noise
ESP_011399_2045  N/A        Failed to make cube files
ESP_011406_0945  4h 0m      Too small search range
ESP_011417_1755  12h 40m    Good
ESP_011425_1775  3h 20m     Good
ESP_011440_1335  3h 20m     Good
ESP_011443_1380  18h 10m    Too small of search range
ESP_011447_0950  1h 40m     Good
ESP_011494_1535  2h 30m     Good
ESP_011504_1700  1 week     Not sure – Maybe poor texture?
ESP_011523_1695  7h 20m     Not sure

Images

Prettiest DEM Award of the bunch goes to user ‘jwray’ for picking out ESP_011494_1535 and ESP_011639_1535. Supposedly this is “Sirenum crater floor LTLD with Al-OH”. I don’t know what that means.

Automatic Production of HiRISE DEMs 2

I'm still heating my apartment with HiRISE processing. For details on how I'm creating these, please look at this previous post. Let's look at the recent results!

ESP_011265_1560 and ESP_011331_1560, requested by timparker. This stereo pair was a nightmare. D_sub (the low-resolution disparity) claimed that the southern part of the image could be correlated even though only one of the images actually saw that area. Then, in full-resolution integer correlation, the algorithms went into an aggressive search for a match that didn't exist. The end result was that stereo took 15 days to complete! Yikes!

ESP_011287_2165 and ESP_011564_2165, requested by mcewen. Stereo finished in 5 hours.

ESP_011290_1800 and ESP_011778_1800, requested by mcewen. Stereo finished in 12 hours. It looks like stereo mis-guessed the correlation parameters for the crater chain, thus the long processing time.

ESP_011293_1710 and ESP_019218_1710, requested by cweitz. Stereo finished in 5 days! This might be because there is a lot of disparity happening in this image. This is one of those problems where local homography transforms or epipolar alignment would make a major contribution to speed.

ESP_011310_1395 and ESP_011811_1395, requested by jwray. Stereo finished in 2 days. I’m disappointed that the mesa’s edges didn’t come out.

ESP_011314_1585 and ESP_011595_1585, requested by mcewen. Interest point detection failed in preprocessing. ASP just gave up and never completed anything. It is not clear to me why this didn’t work.

ESP_011319_2100 and ESP_011464_2100, requested by ltornabene. Stereo finished in 2 hours.

ESP_011339_1665 and ESP_011972_1665, requested by ltornabene. Stereo finished in 9 hours.

ESP_011350_0945 and ESP_011351_0945, requested by cjhansen. Stereo finished in 2 hours.

ESP_011365_1365 and ESP_011642_1365, requested by alxla. Stereo finished in 11 hours.

ESP_011371_1730 and ESP_011516_1730, requested by rbeyer. Stereo finished in 15 hours.

ESP_011372_1730 and ESP_012295_1730, requested by mcewen. Stereo finished in 9 hours.