Automatic production of HiRISE DEMs

After scraping Dr. Shane Byrne’s website, I realized that there are only 3110 completed HiRISE stereo pairs. These stereo pairs seem to require about 150 GB of intermediate storage to make their way through ISIS and ASP. Afterwards, the 1 m/px projected results come to about 200 MB each, or 607 GB total. I have 3 TB free on my home server, so I decided to see how many I could process unattended. It’s interesting to me to see the ASP failure cases, and it gives me something to write about every 2 weeks or so. I also just really like pretty pictures.

I’m using the latest ASP development branch to do this processing. It has some critical improvements over ASP 2.0. We managed to fix the deadlock problem that was happening in our parallel tile cache system. We also fixed the segfault that sometimes happens in point2dem. Finally, we fixed a bug in our RANSAC implementation that had gone unnoticed since 2006. All of these fixes were critical for making sure the code wouldn’t kill itself. We also fixed our random seed so that users can always exactly reproduce their results: all ASP sessions should reproduce prior ASP sessions. These improvements will be available in the next release of ASP, along with RPC support.

Here’s my script. I’ll be clearing the backup archives made by pixz (parallel xz) manually every now and then once I’m sure I no longer have interest in the stereo pair. Also, all of this data is only processed with parabola subpixel. Bayes EM will always improve the result and never harm anything. Since my goal was just to process a lot of HiRISE and see where things broke, I decided to forego this extra refinement step. Anyways, at the resolution I’m sharing with all of you, Parabola still looks pretty sexy. Finally, at no point am I specifying the search range for these stereo pairs. They’re all being processed completely automatically.

#!/bin/bash

# expects ./hirise2dem_v2.sh <left prefix> <right prefix> <user name> <comment>
set -e
set -x

left_prefix=$1
right_prefix=$2
username=$3
comment=$4

hiedr2mosaic.py -h > /dev/null || { echo >&2 "I require ASP to be installed. Aborting."; exit 1; }

function write_w_conv_to_height {
    # Interpolate a height at the given fraction between min and max
    # and append a color-map line to the LUT file.
    percent=$1
    min=$2
    max=$3
    string=$4
    log=$5
    height=$(echo "$min + $percent*($max - $min)" | bc | awk '{ printf "%.1f\n", $1}')
    echo $height","$string","$height >> $log
}

function download_hirise_single_img {
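    # e.g. ESP_011277_1825_RED0_0.IMG -> phase ESP, orbit block 0112,
    # observation ESP_011277_1825 (per the HiRISE PDS directory layout)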
    phase_prefix=${1:0:3}
    orbit_prefix=${1:4:4}
    obsv_prefix=${1:0:15}
    if [ -e "$1" ]
    then
        echo "  Found " $1 >> ${left_prefix}.log
    else
        echo "  Downloading " http://hirise-pds.lpl.arizona.edu/PDS/EDR/${phase_prefix}/ORB_${orbit_prefix}00_${orbit_prefix}99/$obsv_prefix/$1 >> ${left_prefix}.log
        wget -O $1 http://hirise-pds.lpl.arizona.edu/PDS/EDR/${phase_prefix}/ORB_${orbit_prefix}00_${orbit_prefix}99/$obsv_prefix/$1
    fi
}

function download_hirise_cube {
    if [ -e "$1_IMGBackup.tpxz" ]
    then
        echo "  Found " $1_IMGBackup.tpxz >> ${left_prefix}.log
        xz -d -k -c $1_IMGBackup.tpxz | tar xv
        hiedr2mosaic.py $1*IMG
        rm $1*IMG
    else
        for i in {0..9}; do
            download_hirise_single_img $1_RED${i}_0.IMG
            download_hirise_single_img $1_RED${i}_1.IMG
        done
        hiedr2mosaic.py $1*IMG

        # Compress and backup the IMGs. This will reduce them to 1/3rd
        # their size.
        tar cvf $1_IMGBackup.tar $1*IMG
        pixz $1_IMGBackup.tar
        rm $1*IMG
    fi
}

# Get the files
mkdir -p $left_prefix
cd $left_prefix
echo "Start: " `date` > ${left_prefix}.log
echo ${username} >> ${left_prefix}.log
echo ${comment} >> ${left_prefix}.log
if [ ! -e "${left_prefix}_RED.mos_hijitreged.norm.cub" ]; then
    download_hirise_cube $left_prefix
    echo "Left Download Fin: " `date` >> ${left_prefix}.log
else
    echo "  Found " ${left_prefix}_RED.mos_hijitreged.norm.cub >> ${left_prefix}.log
fi
if [ ! -e "${right_prefix}_RED.mos_hijitreged.norm.cub" ]; then
    download_hirise_cube $right_prefix
    echo "Right Download Fin: " `date` >> ${left_prefix}.log
else
    echo "  Found " ${right_prefix}_RED.mos_hijitreged.norm.cub >> ${left_prefix}.log
fi

# Work out a nice latitude to project with
caminfo from= ${left_prefix}_RED.mos_hijitreged.norm.cub to=caminfo.txt
echo "Preprocessing Fin: " `date` >> ${left_prefix}.log
latitude=$(grep CenterLatitude caminfo.txt | awk -F "=" '{print $2}' | awk '{if ($1 > 0 ) {print int($1+0.5)} else {print int($1-0.5)}}')
srs_string="+proj=eqc +lat_ts=${latitude} +lat_0=0 +lon_0=0 +x_0=0 +y_0=0 +a=3396000 +b=3396000 +units=m +no_defs"

# Process the stereo
stereo ${left_prefix}_RED.mos_hijitreged.norm.cub ${right_prefix}_RED.mos_hijitreged.norm.cub ${left_prefix}/${left_prefix} --stereo ../stereo.default
echo "Stereo Fin: " `date` >> ${left_prefix}.log
point2dem ${left_prefix}/${left_prefix}-PC.tif --orthoimage ${left_prefix}/${left_prefix}-L.tif --tr 2 --t_srs "${srs_string}" --nodata -32767
echo "Point2dem Fin: " `date` >> ${left_prefix}.log
mv ${left_prefix}/${left_prefix}-D{RG,EM}.tif .

# Compress the cube files
pixz ${left_prefix}_RED.mos_hijitreged.norm.cub
echo "Compressed Left Fin: " `date` >> ${left_prefix}.log
pixz ${right_prefix}_RED.mos_hijitreged.norm.cub
echo "Compressed Right Fin: " `date` >> ${right_prefix}.log

# Compress the old Stereo file
tar cvf ${left_prefix}.tar ${left_prefix}
pixz ${left_prefix}.tar
echo "Final Compression Fin: " `date` >> ${left_prefix}.log

# Generate colormap and LUT for QGIS
gdalinfo -stats ${left_prefix}-DEM.tif > tmp.txt
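# Round the DEM min and max to the nearest 100 m so the LUT has clean bounds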
maximum=$(grep MAXIMUM tmp.txt | awk -F "=" '{n=$2/100; printf("%i\n",int(n+0.5))}' | awk '{n=$1*100.0; print n}')
minimum=$(grep MINIMUM tmp.txt | awk -F "=" '{n=$2/100; printf("%i\n",int(n-0.5))}' | awk '{n=$1*100.0; print n}')
rm tmp.txt

echo "# QGIS Generated Color Map Export File" > ${left_prefix}_LUT.txt
echo "INTERPOLATION:INTERPOLATED" >> ${left_prefix}_LUT.txt
write_w_conv_to_height 0.0 $minimum $maximum "255,120,255,255" ${left_prefix}_LUT.txt
write_w_conv_to_height 0.2 $minimum $maximum "120,120,255,255" ${left_prefix}_LUT.txt
write_w_conv_to_height 0.4 $minimum $maximum "120,255,255,255" ${left_prefix}_LUT.txt
write_w_conv_to_height 0.6 $minimum $maximum "120,255,120,255" ${left_prefix}_LUT.txt
write_w_conv_to_height 0.7 $minimum $maximum "255,255,120,255" ${left_prefix}_LUT.txt
write_w_conv_to_height 0.9 $minimum $maximum "255,120,120,255" ${left_prefix}_LUT.txt
write_w_conv_to_height 1.0 $minimum $maximum "255,255,255,255" ${left_prefix}_LUT.txt

# Render hillshade
gdaldem hillshade ${left_prefix}-DEM.tif ${left_prefix}_HILLSHADE.tif
colormap --lut /home/zmoratto/raid/asp_test_data/Mars/HiRISE/LMMP_color_medium.lut -s ${left_prefix}_HILLSHADE.tif --min $minimum --max $maximum ${left_prefix}-DEM.tif

# Render MOLA regional crop and create a difference map
/home/zmoratto/raid/asp_test_data/Mars/HiRISE/tools/generate_mola_secondary ${left_prefix}-DEM.tif

On top of producing a colormap and orthoimage, I decided to render a difference map against MOLA. I’m using the gridded MOLA product that is shipped in ISIS. I then bicubically interpolate it to 1/4th the resolution of my HiRISE DEM. I also render an over-crop to make a hillshaded render of MOLA for the background of all of my pictures. I think it shows how little information is available in MOLA at the resolution I’m rendering at. The code I used to do this is available below and is written against Vision Workbench.

#include <vw/Image.h>
#include <vw/FileIO.h>
#include <vw/Cartography.h>
#include <boost/scoped_ptr.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/filesystem.hpp>

using namespace vw;
using namespace vw::cartography;
namespace fs = boost::filesystem;

// Read the nodata value from an image file on disk
double taste_nodata_value( std::string const& filename ) {
  boost::scoped_ptr<DiskImageResource> rsrc( DiskImageResource::open( filename ) );
  return rsrc->nodata_read();
}

int main( int argc, char **argv ) {

  // expect ./generate_mola_secondary <dem>

  std::string input_dem( argv[1] );
  std::string isis_prefix( getenv("ISIS3DATA") );

  // Load georeference information
  GeoReference input_georef, mola_georef;
  read_georeference( input_georef, input_dem );
  read_georeference( mola_georef, isis_prefix+"/base/dems/molaMarsPlanetaryRadius0005.cub" );
  DiskImageView<float> input_image( input_dem );
  DiskImageView<float> mola_image( isis_prefix+"/base/dems/molaMarsPlanetaryRadius0005.cub" );

  // Decide on an input box to rasterize (preferably a 30% pad around the input dem)
  BBox2i output_bbox = bounding_box( input_image );
  output_bbox.expand( 0.15 * ( input_image.cols() + input_image.rows() ) );

  // Down sample by 4 since there is very little content here
  ImageViewRef<float> output_mola = resample( crop( geo_transform( mola_image, mola_georef, input_georef,
                                                                   PeriodicEdgeExtension(), BicubicInterpolation() ), output_bbox), 0.25 );
  GeoReference output_georef = resample( crop( input_georef, output_bbox ), 0.25 );

  // Render output MOLA DEM with same projection as input
  {
    boost::shared_ptr rsrc( new DiskImageResourceGDAL( fs::basename( input_dem ) + "_MOLA.tif", output_mola.format() ) );
    write_georeference( *rsrc, output_georef );
    block_write_image( *rsrc, output_mola,
                       TerminalProgressCallback("asp", "MOLA Context:") );
  }

  // Render difference map where nodata is -1.
  ImageViewRef< PixelMask<float> > input_mask =
    create_mask( input_image, taste_nodata_value( input_dem ) );
  ImageViewRef< float > difference_map =
    apply_mask(abs(output_mola - crop(subsample(crop(edge_extend(input_mask, ValueEdgeExtension< PixelMask<float> >( PixelMask<float>() )), output_bbox),4),bounding_box(output_mola))),-1);
  {
    boost::shared_ptr<DiskImageResource> rsrc( new DiskImageResourceGDAL( fs::basename( input_dem ) + "_MOLADiff.tif", difference_map.format() ) );
    rsrc->set_nodata_write( -1 );
    write_georeference( *rsrc, output_georef );
    block_write_image( *rsrc, difference_map,
                       TerminalProgressCallback("asp", "MOLA Difference:") );
  }

  return 0;
}

Unfortunately for this post, I didn’t correctly record the processing times since I was still debugging my epic shell script above. In the future, I hope to give better estimates of time. All of this processing is happening on my personal server, since it has finally gotten cold enough in San Jose that I want my heaters on. That server has a single AMD FX 8120 processor with 8 cores and a generous 16 GB of DDR3 1600 RAM. There are also three 2 TB hard drives in a software RAID5 that give me pretty epic read bandwidth. This machine cost me $1200 when I built it 6 months ago and is cheaper than my government-supplied laptop. I think the most important bit is that the FX 8120 supports the wide AVX instruction set. I compiled all the code with GCC 4.7 so that I could take advantage of this.
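For the curious, the compiler flags involved look something like the following. The -march value here is my assumption for GCC 4.7’s name for the FX 8120’s Bulldozer core; it isn’t copied from my build notes.

# Assumption: -march=bdver1 targets the FX 8120's Bulldozer core on GCC 4.7
# and enables AVX code generation.
export CFLAGS="-O3 -march=bdver1"
export CXXFLAGS="-O3 -march=bdver1"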

Here are ESP_011277_1825 and ESP_011910_1825, requested by user cweitz. This first pair also shows my first error. The search range solved inside D_sub wasn’t large enough to see the crater floor. This caused a trickle-down error where the integer correlation took many days, because its search range was defined by white noise and still didn’t contain values that matched the actual disparity. Since it couldn’t zero in, it did wide searches at every pyramid level. Ick.

ESP_014167_1300 and ESP_021947_1300, requested by mcewen. Flat enough that everything just worked perfectly. This pair finished very quickly (about 1.5 days).

ESP_018181_1730 and ESP_018893_1730, requested by rbeyer. Another pair that was pretty flat and just worked.

ESP_018959_1730 and ESP_019025_1730, requested by rbeyer. Again, another perfect stereo pair.

It seems like major improvements could be had if we focus on improving the results that come out of the generation of D_sub. Since D_sub only takes a few seconds to calculate, having a large search range on that step wouldn’t be too bad and would fix ESP_011277_1825’s problem. Then I just need to focus on filtering D_sub so noise doesn’t propagate to the integer correlator, which works at full resolution.
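In the meantime, a pair that falls into this trap can be rescued by hand-setting the search range in stereo.default. A sketch with made-up values (corr-search takes xmin ymin xmax ymax; check the settings documentation for your ASP version):

# Illustrative only; the correct range depends on the terrain relief
corr-search -500 -50 500 50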

Bundle Adjusting HiRISE

Last week, I showed off a method for processing LRO-NAC. Now I’m going to show an even more difficult process with HiRISE. Each LRO-NAC observation was 2 CCDs, and that already made for a lot of click work. HiRISE has a whopping 10 CCDs, so a stereo pair means 20 files. This would make bundle adjustment very tricky if we treated all 20 files as individual cameras. To get around this problem, we’ll deploy a new feature in Jigsaw (the Observation option) and we’ll have to modify the HiEDR2Mosaic script that is released with Ames Stereo Pipeline.

HiRISE Preparation

I recently found out that there is a nifty UofA site from Shane Byrne that details all the stereo pairs captured by HiRISE. It is available at this link. I then stalked the user ‘mcewen’ and processed a stereo pair he selected. If I were a nice guy I would process stuff selected by user ‘rbeyer’, but he images boring places. Loser! Anyways, for this demo I’ll be processing ESP_013660_1475 and ESP_013950_1475. UofA says it’s a “Gullied 35 Kilometer Diameter Impact Crater in Promethei Terra”. Whatever, I just want 3D.

At this point I would normally tell you to run HiEDR2Mosaic blindly. Unfortunately that won’t work, because that script will attempt to project the outer CCDs into the RED5 or RED4’s frame of reference. Any projection is bad because it requires using the spacecraft’s ephemeris, which we don’t trust and haven’t corrected yet. It also won’t work because noproj will drop some observation serials or whatnot. To fix this, we want to stop HiEDR2Mosaic right before it does noproj, perform our jigsaw, and afterwards resume HiEDR2Mosaic. Doing that required the modification that I checked into Github here. Just download the ‘py.in’ file and rename it to ‘py’. The ‘in’ suffix just means that our build system is going to burn the ASP version into the script at compile time.
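In other words, once you’ve downloaded the file from GitHub:

mv hiedr2mosaic.py.in hiedr2mosaic.py
chmod +x hiedr2mosaic.py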

Now the run looks like the following:

download all ESP_013660_1475 IMGs
download all ESP_013950_1475 IMGs
hiedr2mosaic.py --stop-at-no-proj ESP_013660*IMG
hiedr2mosaic.py --stop-at-no-proj ESP_013950*IMG

Bundle Adjusting

No magic here, it’s the same process as usual for creating a control network. I just picked a special XSpacing of 200 meters and YSpacing of 3 km for Autoseed. This was to make sure that there would be control points on all 10 CCDs; I was guessing that a single HiRISE CCD has a swath of ~500 meters. I also made the MinimumThickness a very small number (something like .0000001) since the HiRISE CCDs are very thin strips. After Autoseed, you proceed to manually clean up the control network, and it will take a very long while. Then you should perform a couple of jigsaw runs to identify control points that have mistakes and correct them in qnet. However, for HiRISE all your jigsaw runs should use the option observations=yes. This says that images from the same observation should be treated as a single camera, so CCDs 0 through 9 will be treated as one. Without this flag, jigsaw will never converge. A sketch of my autoseed.def follows.
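Reconstructed from the numbers above (the MinimumArea value is an assumption carried over from my other runs):

Group = PolygonSeederAlgorithm
      Name = Grid
      MinimumThickness = 0.0000001
      MinimumArea = 1
      XSpacing = 200
      YSpacing = 3000
End_Group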

You might be asking why I didn’t use this for LRO-NAC; that camera also has multiple CCDs. That’s because in the case of LRO-NAC, the CCDs are in two separate optical housings whose position and angle change noticeably with the thermal cycling of the spacecraft. On HiRISE, all the CCDs are on the same optical plane and they all use the same optics, so any relative motion is not noticeable at all. If you use the observation option for LRO-NAC, you’ll find that it is impossible to get a sigma0 value under 2 pixels.

In my last post, I tried an idea where I didn’t use any ground control points and instead made jigsaw auto-register to the default ISIS DEM, which is usually the best altimeter data available. I wanted to try that again; unfortunately HiRISE doesn’t have a big enough footprint to cover much detail in MOLA, so I added 2 CTX images to the jigsaw problem. Those images were B10_013660_1473_XN_32S256W and B11_013950_1473_XN_32S256W. I made their control network separately and then used cnetmerge to add them to the HiRISE network. I then proceeded to match a bunch of the control points between the two imagers. This would have been easier if I had just processed the images together from the beginning.
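The merge looked something like the following. These parameter names are from memory rather than my terminal history, so check cnetmerge’s documentation for your ISIS version.

cnetmerge inputtype=cnets base=hirise_control.net cnet2=ctx_control.net onet=control_comb.net networkid=comb description="HiRISE plus CTX"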

Another problem I noticed from the last post was that with every jigsaw run, all control points would start from the ‘apriori’ position. I was updating their radius with cnetnewradii, but their latitude and longitude kept being locked back to their original position. I wanted this to be like ICP, so I modified my script so that after every jigsaw run, ‘apriori’ latitude and longitude would be replaced by their ‘adjusted’ solution. Below is that code.

Adjusted2Apriori.py:

#!/usr/bin/env python
# Expect ./adjusted2apriori.py <input pvl> <output pvl>
import sys

inf = open(sys.argv[1], 'r')
outf = open(sys.argv[2], 'w')

delayed_write = False
delayed_lines = []
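
# Lines between AprioriX and AdjustedZ are buffered so that the Apriori X/Y/Z
# entries can be rebuilt from the Adjusted values before being flushed.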

for line in inf:
    # Search for Apriori X
    if 'AprioriX' in line:
        delayed_write = True

    if 'AdjustedX' in line:
        delayed_lines.insert(5,line.replace('Adjusted','Apriori'))
    if 'AdjustedY' in line:
        delayed_lines.insert(6,line.replace('Adjusted','Apriori'))
    if 'AdjustedZ' in line:
        delayed_lines.insert(7,line.replace('Adjusted','Apriori'))
        delayed_write = False
        for dline in delayed_lines:
            outf.write( dline )
        delayed_lines = []

    if not delayed_write:
        outf.write( line )
    else:
        if 'AprioriX' in line or 'AprioriY' in line or 'AprioriZ' in line:
            pass
        else:
            delayed_lines.append(line)

bundleadjust.sh:

#!/bin/bash

input_control=control_comb_pointreg_const.net
radius_source=/home/zmoratto/raid/isis/isis3.4.1_ubuntu1204/data/base/dems/molaMarsPlanetaryRadius0005.cub

cp $input_control control_loop.net

for i in `seq 1 50`; do
    echo Iteration $i

    # Convert point's apriori position to be adjusted position
    cnetbin2pvl from= control_loop.net to= control_loop.pvl
    ./adjusted2apriori.py control_loop.pvl control_loop2.pvl
    cnetpvl2bin from= control_loop2.pvl to= control_loop.net

    cnetnewradii cnet= control_loop.net onet= output.net model= $radius_source getlatlon= apriori
    mv output.net control_loop.net

    jigsaw fromlist=cube.lis radius=yes twist=yes cnet= control_loop.net  onet= output.net update=yes spsolve= position camsolve= velocities observations= yes maxits= 100
    mv output.net control_loop.net

    #Gathering statistics for user monitoring
    grep Sigma0: bundleout.txt
    list_length=`wc -l < bundleout.txt`
    interesting_part=`grep -n "POINTS DETAIL" bundleout.txt | awk -F ":" '{print $1}'`
    tail -n $(expr ${list_length} - ${interesting_part}) bundleout.txt | grep RADIUS --color=no | awk -F " " '{print $4}' | awk '{sum+=$1; sumsq+=$1*$1;} END {print "stdev = " sqrt(sumsq/NR - (sum/NR)**2) " meters";}'
    tail -n $(expr ${list_length} - ${interesting_part}) bundleout.txt | grep RADIUS --color=no | awk -F " " '{print $4}' | awk '{mean+=$1} END {print "mean = " mean/NR " meters";}'
done

Running jigsaw in this case was just running my script.

./bundleadjust.sh

Unfortunately, I never saw the standard deviation of the radius correction reduce. So possibly my whole idea of a jigsaw without control points is flawed, or I need a better height source. I really want to go back and redo LRO-NAC now.

With the bundle adjustment done, we can now finish HiEDR2Mosaic to create a single image of each HiRISE observation. This looks like the following with the modified script. Notice specifically that I’m giving the script the histitch cube files this time instead of the IMG files.

./hiedr2mosaic.py --resume-at-no-proj ESP_013660*histitch.cub
./hiedr2mosaic.py --resume-at-no-proj ESP_013950*histitch.cub

Processing in ASP

I ran the stereo command like this with a default stereo.default file. That just means everything is in full auto and I’m only using parabola subpixel. Thus, my output DEM looks a little ugly up close.

stereo ESP_013660_1475_RED.mos_hijitreged.norm.cub ESP_013950_1475_RED.mos_hijitreged.norm.cub HiRISE/HiRISE

Then I used the following point2dem options. Notice I’m using a custom latitude of true scale (lat_ts) so that the crater of interest will be circular in the projection.

point2dem --t_srs "+proj=eqc +lat_ts=-32 +lat_0=0 +lon_0=0 +x_0=0 +y_0=0 +a=3396000 +b=3396000 +units=m +no_defs" --orthoimage HiRISE-L.tif HiRISE-PC.tif --tr 2

Results

Here are the difference maps between MOLA and a non bundle adjusted ASP HiRISE DEM (Raw), a bundle adjusted DEM with no GCPs, and a bundle adjusted DEM with no GCPs but with the addition of the CTX imagery. We can see that the bundle adjustment definitely helps; it removes a 200-meter error.

Adding the CTX imagery definitely helped reduce the error against MOLA. However if we look at a DEM that can be created from the additional 2 CTX images we processed, we’ll see that there are still large pockets of error around the rim of the crater. When I flip back and forth between the MOLA hillshade and the CTX hillshade, I think I can definitely see a shift where the CTX DEM is too low and slightly shrunk.

So possibly my whole ‘no-gcps’ idea might be bunk. For the case of Mars, it is really easy to go get ground control points against the THEMIS mosaic or a processed HRSC observation. You’ll just want to chain THEMIS Mosaic to CTX and then to HiRISE since there is such a large resolution change. I’ll leave it as a homework assignment for someone to work out the exact commands you need to run. At least now you should understand how to bundle adjust HiRISE.

Creating a registered LRO-NAC DEM without GCPs

I’ve previously written a tutorial on performing a bundle adjustment, jigsaw in ISIS lingo, using CTX. Let’s try something a little bit more complex with LRO-NAC. If you are not already aware, LRO-NAC is actually two separate optical assemblies that are joined by the spacecraft bus. Ideally this spacecraft hardware would be solid as a rock and wouldn’t bend. Unfortunately that’s not the case, so what would have been a bundle adjustment between 2 cameras is now one between 4 in the case of LRO-NAC.

LRO-NAC Data Preparation

Picking a stereo pair has always been the most difficult part of the job for me. I honestly don’t know how to find stereo pairs. So instead I’m just going to recreate something ASU did in SOCET SET. If you are not aware, ASU has processed about 100 DEMs with LRO-NAC. I’m going to recreate their Moore F Crater DEM using M125713813 and M125720601. You can download the raw data via PDS.

I picked this stereo pair blindly, and it ended up being extremely difficult to register correctly. To help out, I added a fifth image to the bundle adjustment to bring another orbit’s worth of spacecraft ephemeris into the measurements. That image was M110383422LE; I picked it because it overlapped 2 of my previous images and had the same lighting conditions as my previous images.

Once you have downloaded all the images, prep the files in ISIS with the following:

parallel "lronac2isis from={} to={.}.cub; lronaccal from={.}.cub to={.}.cal.cub; spiceinit from={.}.cal.cub; rm {.}.cub" ::: *IMG

* The ‘parallel’ command is GNU Parallel, a non-standard system utility that I love dearly.

Attaching a custom Elevation Source

In a previous article, we saw that map projecting against the WAC Global DTM gave much better results than working against the sparse LOLA data that is the default for map projecting in ISIS. So I’m going to do that now. Once we have a successful jigsaw result, then we’ll map-project. If you forgot how to attach the WAC Global DTM, here are the commands I used. My image pair was at 37.3N and 185E, so the tile WAC_GLD100_E300N2250_100M.IMG from this link was what I needed.

pds2isis from=WAC_GLD100_E300N2250_100M.IMG to=WAC_GLD100_E300N2250_100M.cub
algebra from=WAC_GLD100_E300N2250_100M.cub to=WAC_GLD100_E300N2250_100M.lvl1.cub operator=unary A=1 C=1737400
demprep from=WAC_GLD100_E300N2250_100M.lvl1.cub to=WAC_GLD100_E300N2250_100M.lvl2.cub
rm WAC_GLD100_E300N2250_100M.cub WAC_GLD100_E300N2250_100M.lvl1.cub

parallel spiceinit from={} shape=USER model=$PATH_TO/WAC_GLD100_E300N2250_100M.lvl2.cub ::: *cal.cub

Making the control network

Making the control network is pretty straightforward and is just time consuming. I performed the commands I previously outlined here on this blog.

Autoseed.def:

Group = PolygonSeederAlgorithm
      Name = Grid
      MinimumThickness = 0.01
      MinimumArea = 1
      XSpacing = 600
      YSpacing = 1000
End_Group

AutoRegTemplate.def:

Object = AutoRegistration
   Group = Algorithm
     Name         = MaximumCorrelation
     Tolerance    = 0.7
   EndGroup

   Group = PatternChip
     Samples = 19
     Lines   = 19
     MinimumZScore = 1.5
     ValidPercent = 80
   EndGroup

   Group = SearchChip
     Samples = 120
     Lines   = 100
   EndGroup
 EndObject

Commands

parallel footprintinit from={} ::: *cal.cub
echo *.cub | xargs -n1 echo > cube.lis
findimageoverlaps from=cube.lis overlaplist=overlap.lis
autoseed fromlist=cube.lis overlaplist=overlap.lis onet=control.net deffile=autoseed.def networkid=lronac pointid=???? description=moon
pointreg fromlist=cube.lis cnet=control.net onet=control_pointreg.net deffile=autoRegTemplate.def
qnet (editing and fixing control_pointreg.net)

I had to spend a lot of time in Qnet for this stereo pair. About half of my automatic matches didn’t work (possibly my search range was too small?). To correct this, I manually edited the output control network in Qnet. I filtered the control points down to the ones marked ‘Ignored’ and then manually aligned the measurements. You just need to get the measures close to the right spot by clicking on the same feature. Afterwards, you can click the ‘Register’ button to get subpixel accurate alignment.

Autoseed and AutoReg also have a bad habit of only matching LE images to LE, and RE images to RE. It is a good idea to run along the edge of an LE image and try to find common points that are also seen in the RE images. These are going to be the few control points that have more than 2 control measures. I also had to do this between my 5th image and the other 2 images that it overlapped. This problem was likely because the overlap between these 3 images didn’t meet the MinimumThickness requirement in Autoseed.def.

Normally at this point I would then recommend to you to start picking out ground control points. I tried that. Then I went a little insane. I’m better now. Instead we’re going to do crazy stuff in the next section.

The reason we’re not using GCPs is that the only source to align against is the WAC image mosaic. I tried this, and I could easily get a bundle adjustment result that aligned well with the image mosaic. However, the output DEM would always show a massive error between my height results and the results of LOLA and the WAC Global DTM. After adding the 5th image and clamping the trust of my GCPs to 50 meters in radius, I got a result that matched LOLA and the WAC DTM in an okay manner but didn’t really align at all to the WAC image mosaic. Things made sense once I compared the image mosaic to a hillshade of the WAC DTM.

WAC’s Image mosaic doesn’t agree with WAC’s Global DTM! Ghaaaa! Madness! I was so incredibly disappointed. I didn’t know what to do at this point, so I just dropped the ground control points.

Bundle Adjusting

Alrighty then, time to start bundle adjusting with jigsaw. Likely on the first run things are going to blow up because bad measurements still exist in the control network. My solution was to first have jigsaw solve for only a few camera parameters and then check the output control network for the measures with the highest error. In Qnet you can filter control measures that have errors higher than a certain number of pixels. After I fixed those measurements manually, I resaved the control network as my input network and redid the jigsaw.

Each time I redo jigsaw, I let jigsaw solve for more parameters. My pattern was:

jigsaw fromlist=cube.lis radius=no twist=no cnet= control_pointreg_gcp.net onet=control_jigsaw.net update=no
qnet
jigsaw fromlist=cube.lis radius=no twist=yes cnet= control_pointreg.net onet=control_jigsaw.net observations=no update=no
qnet
jigsaw fromlist=cube.lis radius=yes twist=yes cnet= control_pointreg.net onet=control_jigsaw.net observations=no update=no
qnet
jigsaw fromlist=cube.lis radius=yes twist=yes cnet= control_pointreg.net onet=control_jigsaw.net observations=no update=no camsolve=velocities
qnet

After iterating many times, I eventually produced a control network and jigsaw result that had a sigma0 of 0.34 pixels. However, if we were to write this camera solution to file and then make a DEM, we would see worse alignment with LOLA than if we hadn’t bundle adjusted at all.

So how can we get a good bundle adjustment result that aligns well with LOLA? One idea would be to pick GCPs directly against LOLA using their gridded data product. I know folks in the USGS who have done exactly this. However, I don’t even see how they can determine correct alignment, so I’m going to nix that idea. Another idea would be to have jigsaw not solve for radius (radius=no) and instead use only the radius values provided by LOLA. This keeps the problem in the ballpark, but the resulting jigsaw solution will have a sigma0 around 15 pixels! Not solving for radius essentially ties our hands behind our back in the case of LRO-NAC, where our radius source is so incredibly low-res.

What we need instead is to simply constrain / rubber-band our control points to have radius values that are similar to LOLA. Unfortunately, how do we determine the correct lat-lon at which to sample the LOLA radius for each control point? My solution was to not define that. I did two things: (1) I changed all my control points from ‘free’ to ‘constrained’ with a sigma of 200,200,50. This can be performed with cneteditor. (2) I then wrote and ran the following bash script.

#!/usr/bin/env bash

input_control=control_pointreg_all_constrained.net
radius_source=/Users/zmoratto/Data/Moon/LROWAC/DTM/WAC_GLD100_E300N2250_100M.lvl2.cub

cp $input_control control_loop.net

for i in {0..100}; do
    echo Iteration $i
    cnetnewradii cnet= control_loop.net onet= output.net model= $radius_source getlatlon= adjusted
    mv output.net control_loop.net
    jigsaw fromlist=cube.lis radius=yes twist=yes cnet= control_loop.net  onet= output.net update=yes spsolve= position camsolve= velocities
    mv output.net control_loop.net
done

Your mind should be blown at this point. I’ve essentially written an iterative closest point algorithm using the ISIS utilities jigsaw and cnetnewradii. This process is only able to work because we have a perfect control network that we vetted with the previous jigsaw runs. It also works because our LOLA data here is not flat and has plenty of texture. Finally, LRO-NAC was close to the correct solution to begin with. Images with horrible attitude data, like Apollo’s, would just blow up with this technique.

On each iteration, the sigma0 shouldn’t change, or if it does, the change should be extremely small. That’s because a translation or scale change of the control points and camera locations has no effect on the reprojection error. What should reduce on each iteration is the standard deviation of the radius correction for each control point, as listed in bundleout.txt. The sparse 3D model of the problem in jigsaw is slowly fitting itself to the LOLA DEM, constrained only by the camera geometry. The standard deviation of the radius correction will never go to zero and will eventually plateau. This is because there will always be some error against LOLA, since large portions of the measurements are interpolations between orbits.

Disclaimer: I’ve only tried this technique for this LRO-NAC stereo pair. I don’t know if it will work for others. However this was the only technique I could come up with that would produce results that best aligned with LOLA. I couldn’t get anything good to come out of using WAC Image Mosaic GCPs.

Map projecting for Stereo

Now that we have an updated ephemeris, we’re ready to perform a map projection to speed up stereo. Be sure to not accidentally spiceinit your files, which would wipe your updated ephemeris. Notice that I’m map projecting everything at a low resolution. This is because I’m in a hurry for this demo. In a production environment for LRO-NAC, we would likely map project at 1 MPP or 0.5 MPP. This step and ASP’s triangulation are usually the longest parts of this entire process.

parallel cam2map pixres=mpp resolution=5 from={} to={/.}.map.cub ::: ../*cal.cub

Running ASP

Nothing special here, I’m just processing the LE-LE, RE-RE, and LE-RE combinations. An RE-LE stereo pair wouldn’t resolve anything at 5 m/px. Afterwards, I gdalbuildvrt the outputs together. The massive lines separating the individual DEMs that make up this stereo pair may concern you. However, that effect would diminish if I had map projected my inputs at a higher resolution. The reason is that correlation is much like convolution, in that we erode the input image by half the kernel size along the image border. That border gets perceptibly smaller if the image is of higher resolution.

parallel -n 2 stereo M125713813{1}.cal.map.cub M125720601{2}.cal.map.cub {1}{2}/{1}{2} ::: LE LE RE RE LE RE
parallel -n2 point2dem -r moon --t_srs \"+proj=eqc +lat_ts=37 +lat_0=0 +lon_0=180 +x_0=0 +y_0=0 +a=1737400 +b=1737400 +units=m +no_defs\" --tr 5 --orthoimage {1}{2}/{1}{2}-L.tif {1}{2}/{1}{2}-PC.tif --nodata -32767 ::: LE LE RE RE LE RE
gdalbuildvrt aspba_drg.vrt */*-DRG.tif
gdalbuildvrt aspba_dem.vrt */*-DEM.tif
parallel gdal_translate -co TILED=yes -co COMPRESS=LZW -of GTiff {} {.}.tif ::: *.vrt

Conclusion and Comparison

Now the moment you’ve been waiting for: the point where I have to prove I’m not crazy. I have three versions of this stereo pair. There is the result produced by ASU using SOCET SET, and then there’s ASP with and without bundle adjustment. The look of ASP’s result will be a little shoddy because I map projected my input at 5 m/px, which prevented me from getting the most detail out. It did however save me a lot of processing time while I experimented with this method.

Here are the hillshades for those 3 datasets overlaid on a hillshade of the LRO-WAC Global DTM, which is 100 m/px. There’s not much to tell, but you can at least see that everyone seems to have decent horizontal alignment.

Here are the difference maps between those 3 datasets and the LRO-WAC Global DTM.

Here are the difference maps between those 3 datasets and the LOLA Gridded Data Product, which I resampled to 100 m/px. I think this shows straightforwardly the improvement bundle adjustment made to the output DEM.

Just for giggles, I also did a difference between the ASU DEM and the bundle adjusted ASP DEM. They’re pretty close to each other. There seems to just be a simple tilt between the two, or the ASP DEM is shifted to a slightly lower latitude. Which is more correct is anyone’s guess for now.


Ideally at this point I would compare the ASU DEM and the BA’d ASP DEM against the raw LOLA shot points. This is where my curiosity ran out and where I moved on to the next personal project. Maybe someone else would be interested in a more rigorous review.

Also, I’m not sure if the 5th image is required.

Winging a DEM for a mission using World View 1

The group I work for at NASA has a big robot that likes to drive in a quarry at speed. Doing this is risky, as we could easily put the robot in a position to hurt itself or hurt others. One of the things we do to mitigate the risk is having a prior DEM of the test area. The path planning software can then use the DEM to determine where it is and what terrain is too difficult to cross.

Since ASP recently gained the ability to process Digital Globe and GeoEye imagery (more about that in a later post), I was given a request to make a DEM from some World View 1 imagery they purchased. The location was Basalt Hills, a quarry at the south end of the San Luis Reservoir. To process this imagery with any speed, you must map project the imagery onto some prior DEM. My choices were SRTM or NED. In my runs, both DEMs had problems. SRTM has holes in the middle of it that needed to be filled so ASP would work correctly. NED had linear jumps in it that ASP couldn’t entirely reverse in its math.
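Filling the SRTM voids can be done with GDAL before handing the DEM to ASP. Something like this (the -md flag is the max search distance in pixels; double-check against gdal_fillnodata.py’s help) is what produces the filled_srtm_dem.tif used below:

gdal_fillnodata.py -md 100 srtm_dem.tif filled_srtm_dem.tif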

I ended up using SRTM as a seed to create my final DEM of the quarry. If you haven’t seen this before, the process looks like the commands below in ASP 2.0+. What’s happening is that ASP uses an RPC map projection to overlay the imagery on SRTM. When it comes time for ASP to triangulate, it reverses the math it used to map project, and then in the case of Digital Globe it triangulates using the full camera model. Another thing worth noting is that ASP needs control over how the interpolation is performed when doing RPC map projection. This forces us to not use the GDAL utilities during this step and instead use our own custom utility.

parallel rpc_mapproject --tr 0.5 \
      --t_srs '"+proj=utm +zone=10 +datum=WGS84 +units=m +no_defs"' \
      filled_srtm_dem.tif {} {.}.XML {.}.srtm.crop.tif ::: left.TIF right.TIF
stereo left.srtm.crop.tif right.srtm.crop.tif left.XML right.XML \
      r1c1_srtm_crop/r1c1_srtm_crop filled_srtm_dem.tif

Afterwards we got a pretty spiffy result that definitely shows more detail than the prior DEM sources. Unfortunately, the result was shifted from the NED DEM source that my crew had previously been using. Ideally this would be fixed by bundle adjusting the World View camera locations; it was clearly needed, as most of our projected rays only came within 3 meters of each other. ASP doesn’t have that implemented yet.

EDIT: If I had paid closer attention to my data, I would have noticed that a large part of the difference I was seeing between my DEM and USGS’s NED was because the NED data uses a vertical datum. My ASP DEMs are referenced against the WGS84 ellipsoid; NED data is referenced against WGS84 plus the NAVD88 geoid offset. This would account for a large part of the 30 meter error I was seeing. (11.19.12)

My “I’m-single-with-nothing-going-on-tonight” solution was the Point Cloud Library. It has promising iterative closest point (ICP) implementations inside it and will eventually have the normal distributions transform algorithm as well. It also has the benefit of having its libraries designed with some forethought, compared to the hideous symbol mess that is OpenCV.

PCL's pcd_viewer looking at the quarry.

I achieved ICP with PCL by converting my PC (point cloud) file from ASP into a PCL PCD file [1]. I also converted the NED DEM into a PCD file [2]. I then subsampled my ASP point cloud to something more manageable by PCL’s all-in-memory tactics [3]. Then I performed ICP to solve for the translation offset between the two clouds [4]. My offset ended up being about a 40 meter shift in the north and vertical directions. I then applied this translation back to the ASP PC file [5] so that the DEM and DRG could be re-rendered together using point2dem like normal. A sketch of the whole chain is shown after the list below.

I wrote this code in the middle of the night using a lot of C++ because I’m that guy. Here’s the code I used just for reference in the event that it might help someone. Likely some of the stuff I performed could have been done in Python using GDAL.

1. convert_pc_to_pcd.cc
2. convert_dem_to_pcd.cc
3. pcl_random_subsample.cc
4. pcl_icp_align.cc
5. apply_pc_offset.cc
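
Chained together, the workflow looked roughly like this. The argument lists are hypothetical, so see each .cc file for the real interfaces:

./convert_pc_to_pcd run-PC.tif asp_cloud.pcd             # [1]
./convert_dem_to_pcd ned_dem.tif ned_cloud.pcd           # [2]
./pcl_random_subsample asp_cloud.pcd asp_cloud_sub.pcd   # [3]
./pcl_icp_align asp_cloud_sub.pcd ned_cloud.pcd          # [4] solves the offset
./apply_pc_offset run-PC.tif shifted-PC.tif              # [5] then re-run point2dem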

After rendering a new DEM of the shifted point cloud, I used MDenoise to clean up the DEM a bit. This tool is well documented at its own site (http://personalpages.manchester.ac.uk/staff/neil.mitchell/mdenoise/).
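A typical invocation looks something like this; the flags are from memory, so consult that site for the real interface.

mdenoise -i dem.asc -t 0.99 -n 5 -o dem_smooth.asc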

I’ve also been learning some QGIS. Here are some screenshots where you can see the improved difference map between NED and my result after ICP. Generally, this whole process was very easy. It leads me to believe that with some polish this could make a nice automated way to build DEMs and register them against a trusted source. Ideally bundle adjustment would be performed, but I have a hunch that the satellite positioning for Earth targets is so good that very little shape distortion has happened in our DEM triangulations. I hope this has been of interest to some of you out there!

Difference map between the USGS NED map and ASP's WV01 result.

Interesting GDAL Usage Examples

Despite first appearances, GDAL is way more than an image I/O library. I routinely use it to resize images, transcode images, and reproject images. But did you know that it can also draw contour lines, write KML, and orthorectify RPC images? Even if you did, I think it’s good to document here because I keep forgetting.

Render Contour Lines

Publishing DEMs created by ASP is tricky because it is easiest to visualize height change through a color gradient. That loses its usefulness when publishing to a conference where they print in grayscale. A snazzier option would be a hillshaded render of a DEM with contour lines plotted on top. You can see an example on the right and the commands below:

gdaldem hillshade <DEM> output.tif -z 0.000012
gdal_contour -i 50 <DEM> contour
gdal_rasterize -burn 0 -l contour contour output.tif

The “-z” value on gdaldem hillshade requires some playing around to figure out what is best for your image. The “-i” value on gdal_contour is the interval between lines in units of height defined by the DEM (for ASP products, it’s meters).

KML Outline of a Cube File’s footprint

Rendering of KML output

Ever wanted to provide context for your ISIS cube file image on Google Earth? These commands allow you to visualize the image on the right. The first two lines are ISIS commands.

footprintinit from=<ISIS Cube File>
isis2gml from=<ISIS Cube File> to=output.gml label=footprint
ogr2ogr -f KML output.kml output.gml

Orthorectify an RPC image on a DEM

This one doesn’t apply to planetary imagery, but most Earth images tend to have an RPC camera model attached to them. Download a DEM from USGS’s NED, and then with GDAL you can drape your source imagery over the DEM.

 gdalwarp -of GTiff -co TILED=yes -co COMPRESS=lzw -r cubicspline \
    -rpc -to RPC_DEM=<your DEM> <RPC image> output.tif -dstnodata 0

Creating Control Networks and Bundle Adjusting with ISIS3

Bundle adjustment is the process of back-solving for a camera’s trajectory and pose. This process needs to be performed for most satellite images at some point or another because there is always some error in the camera location. Satellite position and velocity are usually found through radio communications with the Deep Space Network, using methods such as measuring the antenna direction, time of flight, and Doppler effects on the signal. The spacecraft’s pose is usually measured with outward-facing cameras called star trackers. All of this is surprisingly accurate considering that it’s a measurement made from at least one planet away, but it doesn’t meet the demands of photogrammetrists, who wish to register images to meter-level precision.

In this article, I’ll run through an example of producing a control network and running jigsaw with USGS’s ISIS3 software. I’ll be playing today with two CTX images (P02_001918_1735_XI_06S076W and P02_001984_1735_XI_06S076W) that looked at the southwest end of Candor Chasma on Mars. Everything practiced here applies equally to other missions, with some additional number fiddling.

You probably don’t need this reminder, but before you can do anything, these files need to be ingested into ISIS. I’m going to also take this time to radiometrically calibrate and attach SPICE data. The parallel you see in my examples is GNU Parallel; it’s not usually installed on systems by default. I strongly recommend that everyone get it and learn to use it, as it is a time-saving utility.

parallel mroctx2isis from={} to={.}.cub ::: *.IMG
parallel spiceinit from={} ::: *.cub
parallel ctxcal from={} to={.}.cal.cub ::: *.cub

Now that you have beautiful images of Mars, let’s break into the new stuff. We are going to start by building a control network. Control networks are databases of image measurements that are used during a bundle adjustment. They define locations in 3D space, called Control Points, and the pixel locations that each point projects into, called Control Measures. Before we go too far, we need to build some metadata so that the control network code knows how the images overlap.

parallel footprintinit from={} ::: *cal.cub
echo *cal.cub | xargs -n1 echo > cube.lis
findimageoverlaps from=cube.lis overlaplist=overlap.lis

In the commands above, we have created 2 files, cube.lis and overlap.lis, that we’ll be using repeatedly. An interesting side note: footprintinit has created a vector layer inside each of the cube files that shows the lat-lon outline of the image. If one is so inclined, that vector layer can be extracted with an “isis2gml label=Footprint” call. That GML can then be rendered with the gdal_rasterize tool or converted to KML with the ogr2ogr tool.
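For example, with one of the calibrated cubes from above:

isis2gml from=P02_001918_1735_XI_06S076W.cal.cub label=Footprint to=footprint.gml
ogr2ogr -f KML footprint.kml footprint.gml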

Since most of the pretty NASA cameras are linescan, we are trying to bundle adjust a trajectory and thus need many control points. About 20-30 points are required. Ideally these control points would be distributed evenly across the surface of each image. ISIS has provided the autoseed command to help with that.

autoseed fromlist=cube.lis overlaplist=overlap.lis onet=control.net deffile=autoseed.def networkid=ctx pointid=???? description=mars

The settings for how autoseed works are defined in the definition file, autoseed.def. I haven’t given you this file, so let’s take a look at what should be inside it.

Group = PolygonSeederAlgorithm
      Name = Grid
      MinimumThickness = 0.1
      MinimumArea = 1
      XSpacing = 8000
      YSpacing = 8000
End_Group

The minimum thickness defines the minimum ratio between the sides of the region that can have points applied to it. A choice of 1 would define a square and anything less defines thinner and thinner rectangles. The minimum area argument defines the minimum square meters that must be in an overlap region. The last two are the spacing in meters between control points. I played with those two values so that I could get about 50 control points out of autoseed. Having more control points just makes for more work later on in this process.

After the autoseed command, we finally have a control network that we can view with ISIS’s qnet utility. Run qnet in the terminal and a window should pop up. You’ll then have to click ‘open’ and select the cube.lis file and then the control.net file that we created earlier. In the control network navigator window, select the drop down menu so that you can select ‘cubes’. Highlight the names that show up on the left side and then press the ‘view cubes’ button.

You can now see the location of the control points in the two images that we have been playing with. However the alignment between the control measures is imperfect at this point. We can see this visually by requesting that qnet show us a single control point. In the control network navigator window, select the drop down menu and select points. Then in the left side, double click point ‘0001’. A new window should have popped up called ‘Qnet Tool’. You should click on the ‘geom’ option in the bottom right of the window. The two pictures of this window show the control point projected into the reference image (left) and then the second image on right.

You can click the right image or use the arrow keys to reposition the control measure. You can also click the play button on the bottom left of the window so that the reference image keeps flipping between images. I prefer to operate with that window playing, as it clearly shows the misalignment between measures. An example is shown on the left if you click on the picture.

We could at this point fix these 50 or so control points by hand using qnet. There is instead a better option. ISIS’s pointreg is an automatic control measure registration tool that tends to get about 75% of the points correct. The command to use it looks like the following:

pointreg fromlist=cube.lis cnet=control.net onet=control_pointreg.net deffile=autoRegTemplate.def

Again all the settings are in the definition file. Here are the contents of autoRegTemplate.def.

Object = AutoRegistration
   Group = Algorithm
     Name         = MaximumCorrelation
     Tolerance    = 0.7
   EndGroup

   Group = PatternChip
     Samples = 19
     Lines   = 19
     MinimumZScore = 1.5
     ValidPercent = 80
   EndGroup

   Group = SearchChip
     Samples = 75
     Lines   = 75
   EndGroup
 EndObject

If you are a user of Ames Stereo Pipeline, these settings should look pretty familiar. The search chip defines the search range in which pointreg will look for matching imagery. The pattern chip is simply the kernel size of the matching template. You will likely have to redefine the search range when you are working with new imagery. Use qnet to get an idea of what your pixel shifts and search ranges should be. A search range of 75 pixels in all directions happened to work for me with these specific CTX images.

With those settings, I was able to register 38/47 existing control points! The rest I’ll have to register manually in qnet. Using qnet to quickly register points is a bit of a fine art that I don’t think I can describe here. Maybe when I have free time I could make a video.

After you’ve cleaned up the last 9 control points in qnet, we should have a complete control network in the control_pointreg.net file. We are ready to run jigsaw and update the camera pointing. Here’s the command:

jigsaw fromlist=cube.lis update=yes twist=no radius=yes cnet=control_pointreg.net onet=control_ba.net

From my jigsaw’s terminal output, I can see that the starting projection error for this control network was 58.9 pixels on average; this is the sigma0 value under iteration 1. After jigsaw converged by rotating the cameras’ pose and moving the control points, the sigma0 dropped to 1.2 pixels. This is quite an improvement that should help DTM quality. If you happen to mess up and write an incorrect camera solution to your input files, you can simply rerun spiceinit to revert your changes.

In the next version of Ames Stereo Pipeline (ver 1.0.6 or 2.0), we’ll probably be providing the ability to render the triangulation error of a DTM. Triangulation error is simply how close the projected rays of the images came to intersecting. It is one of many measurements that can be used to judge the quality of a DTM. I’ve gone ahead and rendered DTMs from both the jigsaw’d and non-jigsaw’d versions of these CTX images. On the upper right is their hillshaded colormap output. Visually, there’s not a noticeable difference between the two. However, the height range has changed drastically. The original data produced heights between 0.6 km and 4.9 km, while the bundle adjusted data produces a range between -8.9 km and -4.4 km. The colormap figure I’ve produced uses two different color scales for those DTMs simply to show that the DTM hasn’t pivoted any; the only change was a height drop.

I’ve also produced colormap output of the triangulation error. Here we can really see that jigsaw has made a difference for good. The color green represents a triangulation error of 1 meter, blue 0.1 meters, and yellow 10 meters. From the figure on the left, it’s clear that, for the most part, bundle adjustment improved every pixel in the image.

I’m sorry that this has been a run-on article. Writing this was also an experiment for me. I hope I’ve shown you how to use ISIS’s control network tools, and I’ve managed to show myself that fixed ground control points in jigsaw seem to be required. I have very little trust in the absolute height values in the DTM. I think the relative measurements are correct, but I was definitely not expecting the 10 km drop in height between the non and bundle adjusted solutions.

Ames Stereo Pipeline and Earth

I’ve been working on Ames Stereo Pipeline for about 4 years now. During that time, the other developers and I have always focused on what we could do with data that has come from NASA satellites. So we’ve produced 3D models of the Moon, Mars, and the moons of Saturn. Recent work, however, has allowed us to produce 3D models of data much closer to home.

Earth example from ASP

In the next ASP release, I’m happy to announce that we will be providing limited support for processing images from Digital Globe satellites. Digital Globe sells images of Earth from its World View and Quick Bird satellites. Their imagery is commonly seen in the base layer of Google Earth. ASP was recently awarded a grant to add support for those images to allow mapping of Earth’s polar regions.

Just last week I was able to produce ASP’s first 3D model of Earth using World View data. The image is a screenshot of the colormapped DEM output draped in Google Earth. The area is just west of Denver in the Rockies, and the imagery represented a huge challenge for ASP. There were lots of trees that made template correlation difficult. The mountains also made for a massive search range, which allowed outliers to make their way into the final result. Even so, we were still able to produce a result. I’m very proud of it, as it leaves me with confidence that ASP will do well in the polar regions, where the terrain is not as aggressive.