Advances in LRO-NAC processing in ASP

Since I last wrote, we’ve hired a new full-time employee. His name is Scott and we assigned him the task of learning ASP and LROC. The first utilities he’ll be contributing back to ASP are lronacjitreg and lronac2mosaic. The first utility is very similar to the ISIS utility of a similar name designed for HiRISE. The second utility uses the first tool and can take LRO-NAC EDR imagery and make a non-projected image mosaic. What lronac2mosaic does internally is ‘noproj’ and then ‘handmos’ the images together. There is an offset between the images due to model imperfections. Finding the correct offset so the combined images are seamless is the task of lronacjitreg. All of this is just a streamlined version of what I wrote in a past blog post.

Previously users and our team only had the option to run all 4 combinations of the 4 LRO-NAC input files through ASP and then glue them together afterwards. Now with the use of lronac2mosaic, we can feed whole LRO-NAC observations into ASP and receive the full DTM in one go. No messy mosaicking of 4 files.
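A sketch of the new workflow, with hypothetical observation IDs and output names (check the tool’s usage message for the real interface):

```
> lronac2mosaic.py M123456789LE.IMG M123456789RE.IMG
> lronac2mosaic.py M987654321LE.IMG M987654321RE.IMG
> stereo <mosaic 1> <mosaic 2> results/run
```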

I’ve used Scott’s program successfully to recreate most DTMs that ASU has made via SOCET SET. Using my home server, I’ve been able to recreate 77 of their DTMs to date. We’ve been fixing bugs as we hit them. One of the biggest was in our search range guessing code. The next upcoming release of ASP will have the fruits of that labor. Previously ASP had a bad habit of ignoring elevation maxima in the image as it thought those interest point measurements were noise. Now we should have a better track record of getting measurements for the entire image.

One of the major criticisms I’m expecting from the large dump of LRO-NAC DTMs we expect to deliver next year concerns the quality of the placement of our DTMs in comparison to LOLA. Another engineer we have on staff, Oleg, has just the solution for this. He has developed an iterative closest point (ICP) program called pc_align which will be in the next release. It is built on top of ETH Zurich’s Autonomous Systems Lab’s libpointmatcher and has the ability to take DTMs and align them to other DTMs or LIDAR data. This enables us to create well-aligned products whose height values agree with LOLA to within tens of meters. Our rough testing shows us having a CE90 of 4 meters against LOLA after performing our corrections.
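As a rough sketch of what an invocation looks like (the filenames are placeholders; --max-displacement caps how far the alignment is allowed to move the DTM):

```
> pc_align --max-displacement 200 my-DEM.tif LOLA_RDR_points.csv -o align/run
```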

We’re not ready for the big production run yet. One problem still plaguing our process is that we can see the CCD boundaries in our output DTMs. We believe most of this problem is due to the fact that the angle between line of sight of the left and right CCDs changes with every observation. ISIS however only has one number programmed into it, the number provided by the FK. Scott is actively developing an automated system to determine this angle and to make a custom FK for every LRO-NAC observation. The second problem we’re tracking is that areas of high slope are missing from our DEMs. This is partially because we didn’t use Bayes EM for our test runs but it also seems like our disparity filtering is overly aggressive or just wrong. We’ll get on to that. That’s all for now!

Second Thought on LRO-NAC NoProj

Mark Rosiek from USGS Astrogeology expressed some doubt in my noproj recipe for LRO-NAC. This is completely reasonable, because if everything were perfect there would be no offset after the noproj step between the LE and RE CCDs. He requested 2 DEM samples of Tsiolkovsky Crater so he could compare to USGS work. I decided I’d try the same during free time throughout the day. Unfortunately I couldn’t find a trusted reference DEM to compare against. I can’t find ASU’s result of this area on their RDR webpage, and it is impossible to use the LMMP Portal. Can you even download the actual elevation values from LMMP? No, I don’t want your color render or greyscale!

The next best idea is to just difference the two DEMs I created and compare them. I didn’t bundle adjust these cameras at all, so their placement relative to each other is off by some ~150 meters. After ICPing them together and then differencing them, I can produce an error map. The gradient, or shape, of the error can help clue into how badly my fiddling in the ideal camera went. Below is the result between the stereo pairs M143778723-M143785506 and M167363261-M167370048.
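The align-then-difference step might look like the following sketch, assuming ASP’s pc_align and geodiff tools and placeholder filenames:

```
> pc_align --max-displacement 300 DEM-1.tif DEM-2.tif -o icp/run --save-transformed-source-points
> point2dem icp/run-trans_source.tif
> geodiff --absolute DEM-1.tif icp/run-trans_source-DEM.tif -o error_map
```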

You can definitely see the CCD boundary in this error map, and you can see it in the hillshade as well. It’s about a 1 meter jump when looking at a single DEM. The error map seems to agree with this, showing a peak error of 4 meters at the CCD boundary location.

So what is the cause of these errors? Well we have two sources, (1) the projection into the ideal camera model and (2) bad spacecraft ephemeris. I’d argue that most of the difference between these two DEMs is the spacecraft ephemeris. That’s top priority in my book to correct. However when I look at the disparity map for these stereo pairs, there is a definitive jump at the CCD boundary. The cause of that would be an improper model of the angle between LE and RE cameras in ISIS. This should be expected. They’re currently not modeling the change in camera angles with respect to temperature. Also when looking at the vertical disparity, it seems one side is always brighter than the other. This suggests that I didn’t quite have the correct values for input to handmos.

Trying to fix all of this now might be a mistake on my part. I know that ASU will eventually produce new SPICE data that contains corrections from LOLA and a USGS study. I also imagine that eventually the work of E. J. Speyerer et al. will be implemented in ISIS.

Jigsawing ISIS Ideal Cameras

The ideal cameras I’m talking about are images that have been noproj’d together, such as LRO-NAC and HiRISE observations. Previously I thought this was impossible, because Qnet requires unique serial numbers for all observations, and getsn on a noproj’d image just returned ‘Unknown’. Turns out ISIS can totally do this. You just need to make a couple of modifications.

  1. Download the Ideal data set.
  2. List Ideal as a dataset inside your $ISISROOT/IsisPreferences
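Step 2 amounts to adding an Ideal entry to the DataDirectory group; a sketch of the relevant $ISISROOT/IsisPreferences fragment (the exact layout may vary between ISIS versions):

```
Group = DataDirectory
  Ideal = $ISIS3DATA/ideal
End_Group
```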

After you’ve done this, you are now able to run getsn and qnet successfully on the noproj’d imagery!

> getsn from=<noproj'd LROC image>
IDEAL/IDEAL/322147421.58037

This greatly simplifies the workflow I described for HiRISE in this post.

Noproj’n LRO Imagery

Earlier this year I found out that I got my first proposal funded. I’ve had directed funding before thanks to NASA’s cryosphere program. I’ve also been a Co-I on numerous other funded proposals with co-workers and friends at USGS. However, my LASER proposal to perform mass DEM production from LROC-NA imagery was something I wrote as PI and was openly competed. Now that it is accepted, I have two years to follow through, and I’d like to share the whole process here on the blog.

The first problem I’m going to write about has bugged me for a while: each LROC-NA observation is actually 2 files, which makes using ASP awkward. In the past I’ve been telling people to perform 4 runs with stereo, doing the combinations of LE-LE, LE-RE, RE-LE, and RE-RE. However, UofA had the same problem with HiRISE, which comes down in 20 different files. They worked out an ISIS process chain that would noproj those files together, among other things, to make a single observation. I don’t know why such a step doesn’t exist for LROC-NA, but today I’ll show you what I’ve come up with.

If you try right now to noproj the RE cub file to the LE cub file, you’ll find that the program errors out because an ideal camera model for LROC has not been defined. Definitions of ideal noproj cameras for all ISIS cameras live in the file $ISIS3DATA/base/applications/noprojInstruments003.pvl. Looking at that file, we see that there are 5 elements that can be defined for the ideal camera model: TransY, ItransS, TransX, ItransL, and DetectorSamples. DetectorSamples is easy; it’s the width in pixels of the final output image. The Trans* variables are measured in millimeters and define a focal plane offset from the original camera model we are using. I plan to noproj with the match file being the LE file, thus Trans* references a shift from the original position of the LE sensors. The Itrans* variables are the pixel conversion of the millimeter measurements; ISIS uses them to run the math the other direction. If the Trans* and Itrans* variables don’t match, funny errors will occur where the CCDs noproj correctly but the triangulation in ASP is completely bonkers. Relating the two is easy: just divide by the pixel pitch, which for LROC-NA is 7 micrometers per pixel.

So how do we decide what to set the TransY and TransX values to? If those values are left at zero, the LE image will be centered on the new image and the RE will be butted up beside it but falling off the edge of the new image. A good initial guess would be to set TransY to a shift of half the CCD width. A better idea is to look at the FK kernel, identify the angle difference between the two cameras, and then use the instantaneous field of view measurement to convert that to a pixel offset between the two CCD origins. Use the pixel pitch to convert to millimeters; half of that is the shift we want. Below are the final noproj settings I used for LROC.

    TransY = 16.8833
    ItransS = -2411.9
    TransX = 0.6475
    ItransL = -92.5
    DetectorSamples = 10000
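As a sanity check, the Itrans* values above really are the Trans* values divided by the 7 µm pixel pitch, with the sign flipped since they run the math in the opposite direction. A quick check with awk:

```shell
# Verify ItransS/ItransL against TransY/TransX using the 7 um (0.007 mm) pixel pitch.
pitch=0.007
awk -v y=16.8833 -v x=0.6475 -v p="$pitch" \
  'BEGIN { printf "ItransS = %.1f px, ItransL = %.1f px\n", -y/p, -x/p }'
# Prints ItransS = -2411.9 px, ItransL = -92.5 px, matching the settings above.
```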

At this point we can noproj the imagery and then handmos them together. A naïve version would look something like the following.

> noproj from=originalRE.cub to=RE.cub match=originalLE.cub
> noproj from=originalLE.cub to=LE.cub match=originalLE.cub
> cp LE.cub LERE_mosaic.cub
> handmos from=RE.cub mosaic=LERE_mosaic.cub

Alas, the LE and RE imagery does not perfectly align. In the HiRISE processing world we would use hijitreg to determine a mean pixel offset, but there is no generic form of hijitreg that we can use for LROC-NA. There is the coreg application, but in all my tests this program failed to find any correct match points between the images. I’ve tried two other successful methods. One: I manually measured the offset using Qtie. Warning: sending these images to Qtie requires twiddling with how serial numbers are made for ideal cameras. This is something I should document at some point, as it would allow bundle adjusting ideal cameras like fully formed HiRISE and LROC images. My other solution was the example correlate program in Vision Workbench. I did the following to make processing quick (5 minutes of run time).

> crop from=LE.cub to=LE.centerline.cub sample=4900 nsamples=200
> crop from=RE.cub to=RE.centerline.cub sample=4900 nsamples=200
> parallel isis2std from={} to={.}.tif format=tiff ::: *centerline.cub
> correlate --h-corr-min -60 --h-corr-max 0 --v-corr-min 0 --v-corr-max 60 LE.centerline.tif RE.centerline.tif

That creates a disparity TIF file. The average of the valid pixels (third channel is 1) can then be used to solve for the mean translation. That translation can then be used during handmos by using the outsample and outline options. Ideally this would all be one program like hijitreg but that is for another time. Below is the final result.
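A toy sketch of that averaging step (the disparity values below are made up; real code would read the three channels of the disparity TIF with GDAL before handing the means to handmos):

```shell
# Average dx, dy over samples whose validity flag (3rd value) is 1.
printf '%s\n' "-31.8 12.1 1" "-32.2 11.9 1" "0 0 0" |
awk '$3 == 1 { sx += $1; sy += $2; n++ }
     END { printf "mean shift: %.1f samples, %.1f lines\n", sx/n, sy/n }'
```

The two means then offset the outsample and outline values given to handmos relative to the naïve mosaic.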

Hijitreg actually does more than just solve for the translation between CCDs on HiRISE. It correlates a line of pixels between the CCDs in hope of determining the jitter of the spacecraft. I can do the same!

From the above plots, it doesn’t look like there is much jitter, or the data is not in a form where I could determine it. The only interesting bit here is that the offset in the vertical direction changes with the line number. I think this might be disparity due to terrain. The imagery I used for my testing was M123514622 and M123521405, which happened to be focused on the wall of Slipher crater. The NA cameras are angled 0.106 degrees off from each other in the vertical direction. Ideal stereo geometry would be 30 degrees, or even 15 degrees, but 0.106 degrees should still allow some disparity given the massive elevation change into the crater. I wouldn’t recommend using this for 3D reconstruction, but it would explain the vertical offset signal. The horizontal signal has less amplitude but does seem like it might be seeing spacecraft jitter. However it looks aliased, making it impossible to determine the frequency.

Anyways, making a noproj version of the LROC-NA observation is a huge boon for processing with Ames Stereo Pipeline. Using affine epipolar alignment, no map projection, and simple parabola subpixel, it is possible to make a beautiful DEM/DTM in 2.5 hours. 70% of that time was just triangulation, because ISIS is single threaded. That would be faster with the application parallel_stereo (renamed from stereo_mpi in ASP 2.2.2).
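A sketch of the settings file those options correspond to (option names as in ASP 2.x; the input cube names are placeholders):

```shell
# Write a minimal stereo.default with the settings mentioned above.
cat > stereo.default <<'EOF'
alignment-method affine-epipolar
subpixel-mode 1
EOF
```

Then run something like `parallel_stereo LERE_mosaic_1.cub LERE_mosaic_2.cub results/run` with the unprojected mosaics as input.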

Making well registered DEMs with ISIS and Ames Stereo Pipeline

Measurements of a satellite’s position and pose are often not perfect. Those minor mistakes can cause gross offsets when producing DEMs from cameras with such long focal lengths. Correcting this is achieved with bundle adjustment. I’ve discussed this before, but I’d like to expand on my previous article by adding ground control points.

Ground control points are special measurements in an image that have a well-known geodetic position. On Earth, this might be achieved with a large target (an expedition’s tent) that can be observed by a satellite and has a GPS beacon. We don’t currently have such an analog on the Moon or Mars. The best bet is instead to compare an image against a good world map and then pick out individual features. For Mars, I think the best map to register against is the THEMIS global Day IR 100 m map. For the Moon, I would pick the 100 m LRO-WAC DTM and image mosaic.

In the following sections, I’m going to run through an example for how to register a pair of stereo CTX observations to the THEMIS global map. For those who are interested, my example images are of Hrad Vallis and use P17_007738_2162_XN_36N218W and P18_008028_2149_XN_34N218W.

Getting a THEMIS Global Map Tile into ISIS

The 11th version of the THEMIS Day IR map is available from the following ASU link:

Unfortunately the data you download is just a simple PGM file that says nothing about how it was georeferenced. That information can only be obtained through careful reading of the website. You just need to know that the data is in Simple Cylindrical projection, it’s geocentric, it’s sampled at 592.75 PPD, and that the files are indexed by their lower-left corner.

I’m going to leave it up to you to read the documentation to see what the following commands in ISIS do; that documentation is available from USGS’s website. The tile I’m processing is “lat30_lon120.pgm”, and I chose it specifically because it contains my example CTX images. I had to use an intermediate GDAL command because std2isis couldn’t read a PGM file.

gdal_translate -of GTiff lat30_lon120.pgm lat30_lon120.tif
std2isis from=lat30_lon120.tif to=lat30_lon120.cub
maptemplate map=map.template projection=SIMPLECYLINDRICAL resopt=ppd resolution=592.75 clon=0 targetname=Mars targopt=user lattype=planetocentric eqradius=3396190.0
maplab sample=0 line=0 coordinates=latlon lat=60 lon=120 from=lat30_lon120.cub map=map.template

Creating Image to Image Measurements

I’ve discussed this process before, and I’m not going to re-describe it. (I’m quite lazy). However, the following is a quick recap of the commands I used to do it.


autoseed.def:

Group = PolygonSeederAlgorithm
      Name = Grid
      MinimumThickness = 0.01
      MinimumArea = 1
      XSpacing = 4000
      YSpacing = 4000
End_Group


autoRegTemplate.def:

Object = AutoRegistration
   Group = Algorithm
     Name         = MaximumCorrelation
     Tolerance    = 0.7
   End_Group

   Group = PatternChip
     Samples = 19
     Lines   = 19
     MinimumZScore = 1.5
     ValidPercent = 80
   End_Group

   Group = SearchChip
     Samples = 75
     Lines   = 75
   End_Group
End_Object

The ISIS Commands:

parallel spiceinit from={} ::: *cub
parallel footprintinit from={} ::: *cub
echo *cub | xargs -n1 echo > cube.lis
findimageoverlaps from=cube.lis overlaplist=overlap.lis
autoseed fromlist=cube.lis overlaplist=overlap.lis deffile=autoseed.def networkid=ctx pointid=???? description=mars
pointreg fromlist=cube.lis deffile=autoRegTemplate.def

Then you’ll need to do cleanup in ISIS’s Qnet after you’ve completed the above commands. In my experience, pointreg will only correlate maybe 75% of your control measures correctly. To clean these up quickly, just go into qnet, filter your control points to show only those that have been “ignored”, work through them, and then save. I had an excess of control points and decided to just delete a lot of my ignored points. You can see the distribution of my control points in the following Qnet screenshot.

Creating Ground Control Measurements

Finally, we are breaking new ground here. We are going to measure some ground control points against the THEMIS map tile we downloaded earlier. You could do this in Qnet, however in practice I find this segfaults a lot (I’m using ISIS 3.3.1). So instead we are going to do this process with Qtie.

Open Qtie by simply typing “qtie” into your terminal. Then click the open button or press Command-O. It will ask you for a basemap; select your THEMIS tile that is in cube format. It will then ask you for a “cube to tie to base”; select the first CTX image of the stereo pair. When it asks you for a control network, click cancel. If you actually give it your pointreg’d control network, it will go into an infinite loop, because that control network has no GCPs.

You’ll then want to use the ‘tie’ tool to create a bunch of GCPs. If you don’t remember, you have to right click on your CTX image to create a new measurement. You’ll want to capture at least 3 GCPs if you are working with a frame camera. Do about 10 evenly distributed GCPs if you have a linescan camera. CTX is a linescan camera, so have fun clicking.

When matching a low-resolution image to a high-resolution image, I find it very helpful to turn on the affine alignment option in the “Tie Point Tool”. That is the “Geom” radio button on the right side of the window. I also find it very helpful to have the left window constantly flipping between the two input images at a high rate. You can achieve this by clicking the play button in the lower left and then lowering the number next to the play button. You’ll also want to zoom in on your basemap.

On the left is an example of a match I found. This process is difficult and takes some patience. You’ll want to look at your match from different scales just to make sure that you haven’t aligned to some pareidolia.

Once you finish, you oddly have to click the diskette in the “Tie Point Tool” to save your control network. I’ve saved mine as “”. If you were selecting ground control points against the LRO-WAC Global Image mosaic (for registering images of the Moon), you might want to take a detour here and use the “cnetnewradii” command. That way your GCPs can use radius measurements from the LRO-WAC Global DTM. Otherwise your radius measurements will come from ISIS’s default sources: an interpolated LOLA for the Moon or MOLA for Mars.

We now want to merge our GCP control network to our control network we created with pointreg. This is done with the following:

cnetmerge inputtype=cnets networkid=CTX_with_gcp description="against themis 100m"

Back in qnet, you’ll want to tie the other CTX image to your new GCPs. You can do this by opening each GCP point with a double click and then selecting “Add Measure(s) to Point” in the Qnet Tool window. A screenshot is shown left. You’ll then need to manually align the control measures. Be sure not to move your reference control measure from the first CTX image. You’ll also want to change each GCP’s PointType from “Fixed” to “Constrained”. This allows your bundle adjustment session to move the GCP to reduce its reprojection error; however, it will be rubber-banded to the measurement you made in Qtie. This is helpful because your basemap source will usually have mistakes.

After having registered all your GCPs to the other image, and also having set their type to “constrained”, you’ll want to do one last thing. Select all your ground control points in the Control Network Navigator and then press the “Set Apriori/Sigmas” button. I have a screenshot below. The sigmas are your uncertainty measurement for your GCP and basically tell bundle adjustment just how much it can ignore or trust your GCP measurements. Since our measurements were all made from the same source, we’re using the same sigma for all our GCPs. In my opinion a good sigma value is two times the pixel resolution of your basemap. So in this example, I’m putting the GCPs as being accurate to 200 meters in longitude, latitude, and radius directions.

Bundle Adjusting

Alright, it’s time to bundle adjust. This is almost exactly the same command we ran before; however, we can now turn on an option we couldn’t use before:

jigsaw fromlist=cube.lis radius=yes twist=no spsolve=position

This time we can solve for the spacecraft’s position, since we have GCPs! Do a test run without updating the SPICE. If it converges to a sigma0 of less than 2 px, then you can go ahead and add the “update=yes” option. If you are working with LRO-NAC images, you can likely converge to a sigma0 of less than 1 px. If you unfortunately didn’t converge, or your sigma0 has a high value, you likely have a messed-up measurement in your control network. This probably came from pointreg, where it managed to match to the wrong instance of a repeated texture. You can identify the mistaken control points by looking at the residuals.csv file created by jigsaw. Just go and manually align the control points in Qnet that show a high residual error. The error should be obvious in Qnet. If it is not, then jigsaw likely fitted to the outliers; in that case, the control points with low residual errors will show the obvious misregistration in Qnet.
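A hypothetical helper for that triage, assuming the residual magnitude is the last comma-separated field of residuals.csv and the point id the first (check your file’s header; the format may differ between jigsaw versions):

```shell
# Toy residuals.csv standing in for jigsaw's output (real columns may differ).
printf 'point_id,image,residual\np001,a.cub,0.5\np002,b.cub,3.2\np003,c.cub,1.1\n' > residuals.csv

# List the control points with the largest residual magnitudes.
tail -n +2 residuals.csv |        # skip the header line
  awk -F, '{ print $NF, $1 }' |   # magnitude first, then point id
  sort -rn | head -5
```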

My final DEM results

Creating the above measurements and performing jigsaw took me about 2 hours. Creating the DEMs in Ames Stereo Pipeline took another 3 hours. Here are my results with and without my jigsaw solution, rendered in Google Earth. I’ve overlaid a hillshaded render of my DEMs at 50% opacity on top of the THEMIS map I registered to. My DEMs also look a little crummy, since I just used parabola subpixel for speed in processing.

Bundle Adjusted DEM

Non Bundle Adjusted DEM