3D Modelling with an R/C Quadcopter

Aaron Racicot talks at FOSS4G 2014 about using his DJI Phantom 2 to model his community in several software packages, including Agisoft Photoscan and VisualSFM.

* I want to add that VisualSFM can be run from the command line for batch processing, and I believe it is an improvement over Bundler SFM.

Patch Match Stereo

3 months ago, in my Semi Global Matching post, I mentioned I would have a follow-up post about another algorithm I was interested in. That algorithm is PatchMatch, and it is in my opinion a better fit for satellite processing. It was first written for hole filling [1] and later applied to stereo [2]. Unlike Ames Stereo Pipeline’s currently implemented hierarchical discrete correlator, PatchMatch extends easily to high-dimensionality searching, and its run time corresponds to the number of iterations and the image size, not the image content, which I hope translates into predictable run times. Unfortunately my current implementation is really slow and has several faults. However, my hope is that eventually we’ll make it fast, and that its predictability will let our users budget for the CPU cost of creating a DEM while still improving quality.

How it Works

The algorithm for PatchMatch is actually very simple. The disparity image is seeded with uniform noise; that noise is a set of random guesses for what the disparity should be. Since every pixel is another guess, and the image is quite a bit larger than the search range we are looking over, a handful of these guesses are bound to be close to correct. We then propagate disparities with low cost to the neighboring pixels. The cycle repeats by adding more noise to the current propagated disparity, but at half the previous amplitude, and continues until everything converges. Surprisingly, a recognizable disparity appears after the first iteration.
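If it helps to see the loop structure, here is a minimal, unoptimized sketch in Python/numpy, written from the description above rather than from my research code. It does horizontal-only disparity with a SAD cost and rounds to integer pixel samples for brevity:

import numpy as np

def sad(left, right, r, c, d, k=3):
    # SAD between the (2k+1)^2 patch at (r,c) in left and its
    # horizontally shifted match in right; inf when out of bounds.
    # left/right are float grayscale arrays.
    h, w = left.shape
    rc = int(round(c + d))
    if r - k < 0 or r + k >= h or c - k < 0 or c + k >= w \
       or rc - k < 0 or rc + k >= w:
        return np.inf
    return np.abs(left[r-k:r+k+1, c-k:c+k+1] -
                  right[r-k:r+k+1, rc-k:rc+k+1]).sum()

def patchmatch(left, right, dmin, dmax, iters=5):
    h, w = left.shape
    disp = np.random.uniform(dmin, dmax, (h, w))  # seed with uniform noise
    cost = np.array([[sad(left, right, r, c, disp[r, c])
                      for c in range(w)] for r in range(h)])
    amp = (dmax - dmin) / 2.0
    for _ in range(iters):
        for r in range(h):
            for c in range(w):
                # propagate low-cost disparities from the top/left neighbors
                for rr, cc in ((r, c - 1), (r - 1, c)):
                    if rr >= 0 and cc >= 0:
                        cand = disp[rr, cc]
                        cst = sad(left, right, r, c, cand)
                        if cst < cost[r, c]:
                            disp[r, c], cost[r, c] = cand, cst
                # then try a random perturbation of the current guess
                cand = np.clip(disp[r, c] + np.random.uniform(-amp, amp),
                               dmin, dmax)
                cst = sad(left, right, r, c, cand)
                if cst < cost[r, c]:
                    disp[r, c], cost[r, c] = cand, cst
        amp /= 2.0  # halve the noise amplitude each iteration
    return disp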

If you are having a hard time visualizing this, here is a video from the original authors of Patch Match Stereo.

PatchMatch can be quick in the hole-filling application and probably for stereo from video, where propagating between sequential frames can greatly help convergence. Unfortunately it is quite slow when applied to pairwise stereo. When performing stereo correlation in ASP 2.3, we group kernel evaluations that share the same disparity value (sometimes called a disparity plane in the literature), which means overlapping kernel evaluations reuse pixel comparisons. That reuse is a huge speed boost; my implementation of PatchMatch has none of it. My implementation also solves for disparity at floating-point precision. While this gives me very detailed disparity maps, the downside is that my implementation spends 75% of its time performing interpolation. I think for this algorithm to become useful to researchers, I’ll need to discretize the disparities and prerender the input supersampled to avoid repeated interpolation calculations.

I have one final statement to make about the PatchMatch algorithm. In practice, PatchMatch produces results very similar to performing a brute-force search over the entire search range, where the winner (lowest cost) takes all. That is different from ASP’s hierarchical search, which at each pyramid level falls into local minima; only the hierarchical structure gives it any chance at the global minimum. What this means for PatchMatch is that we need a cost metric whose global minimum is the correct match. For narrow-baseline stereo in a controlled lighting environment, the fast Sum of Absolute Differences (SAD) fits the bill. But in the cruel realities of satellite imagery, the only solution is Normalized Cross Correlation (NCC). Unfortunately, NCC is quite a bit slower to evaluate than SAD. Also, in cases where SAD worked, NCC performs worse (probably due to being sensitive to the calculation of the mean?).
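For reference, here is what those two metrics look like on a pair of equally sized patches (plain numpy, nothing ASP-specific). NCC is a similarity in [-1, 1], so its negation serves as a cost:

import numpy as np

def sad_cost(a, b):
    return np.abs(a - b).sum()

def ncc_cost(a, b):
    a0 = a - a.mean()  # the mean subtraction that makes NCC lighting
    b0 = b - b.mean()  # invariant, and possibly what makes it twitchy
                       # in places where SAD did fine
    denom = np.sqrt((a0 * a0).sum() * (b0 * b0).sum())
    if denom == 0:
        return np.inf
    return -(a0 * b0).sum() / denom  # negate: lower cost = better match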

Where PatchMatch does poorly

I’ve already mentioned my primary concern, which is the speed of PatchMatch. However, if we view PatchMatch as a replacement for both correlation and BayesEM, it already appears cost effective. A different concern is what happens when the search range’s area is larger than the area of the image being correlated. A real-world example would be rasterizing a 256² tile for a WV2 image where the search range is 1400×100: the tile covers about 65k pixels while the search covers 140k possibilities, so the number of close valid guesses seeded by the initial uniform noise drops dramatically. It might be a good idea to take the interest points that ASP finds and dump them into the initial random disparity guesses that PatchMatch evaluates, as sketched below. This would ensure there is a correct disparity to propagate from.
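That seeding tweak would be tiny; something like this, where ip_matches is assumed to be a list of ((row, col), disparity) pairs derived from ASP’s interest point matching:

import numpy as np

def seed_disparity(shape, dmin, dmax, ip_matches):
    disp = np.random.uniform(dmin, dmax, shape)
    for (r, c), d in ip_matches:
        disp[r, c] = d  # a guaranteed-good guess to propagate from
    return disp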

Previously I mentioned that PatchMatch, in my experiments, seems to behave like a brute-force winner-takes-all algorithm. This means that increasing the search size also decreases quality, because the odds of finding an outlier with lower cost than the actual correct match have gone up. So maybe abandoning hierarchical search entirely is a bad idea. Another option might be enforcing a smoothness constraint that heavily penalizes the random jumping that characterizes outliers; enforcing a smoothness constraint was the driving force behind the power of Semi Global Matching.

Expanding the Algorithm

Since kernel evaluations in PatchMatch are individual problems, with no shared arithmetic like there is in ASP’s stereo correlation, it is easy to bolt advanced algorithms onto PatchMatch. The original PatchMatch Stereo paper [2] mentioned adding parameters to the disparity map so that an affine window could be used for matching. This would improve results on slopes, and I believe it would be a better alternative to map projecting the input imagery beforehand.

Another idea mentioned in the paper was adding Adaptive Support Weights (ASW) [3] to each kernel evaluation. ASW up-weights pixels that match the color of the kernel’s center, in addition to weighting central pixels more heavily than pixels at the edge. The idea is that pixels of the same color are likely to be at the same disparity. While not always correct, it definitely applies to scenarios where the top of a building is a different color than its sides. The implementations I show in my figures operate only on grayscale imagery, not color as the original paper proposed.
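Here is a sketch of the weight computation for a single grayscale patch, following the form of Yoon and Kweon [3]; the gamma values are typical choices from the literature, not tuned ones:

import numpy as np

def asw_weights(patch, gamma_c=10.0, gamma_p=7.0):
    k = patch.shape[0] // 2
    center = patch[k, k]
    ys, xs = np.mgrid[-k:k+1, -k:k+1]
    color_diff = np.abs(patch - center)  # similarity to the center pixel
    dist = np.sqrt(ys**2 + xs**2)        # spatial falloff from the center
    return np.exp(-color_diff / gamma_c - dist / gamma_p)

# In the correlator, the per-pixel costs inside a kernel would be
# multiplied by weights from both images' patches before summing.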

In practice, this additional weighting does a good job of preserving edges at depth discontinuities, which is important for cliffs and for buildings. A proposed improvement is geodesic adaptive support weights, which heavily weights same-color pixels only when they are connected to the central pixel. This fixes problems like a picture of blades of grass, where the blades have the same color but are actually at different disparities.

Wrapping back around to the idea that PatchMatch needs a cost metric whose global minimum is the correct match: it might be worthwhile to explore different cost metrics such as Mutual Information or, if I remember correctly, the fast-evaluating Matching by Tone Mapping (MTM) [4]. A nice side effect would be that correlation becomes completely lighting invariant and could even correlate between infrared and visible imagery.

Additional Comparisons

Here’s a disparity image of NASA Ames Research Center captured from a Pleiades satellite. Whatever they do to pre-process their imagery, I couldn’t epipolar rectify it. The search range given to both ASP and PatchMatch was [-40,-20,70,40]. Despite being noisy, I like how PatchMatch with ASW preserved straight edges. The specularity of some of the roof lights on a hangar, however, really threw ASW for a loop.

I also processed a crop from a World View 2 image of somewhere in the McMurdo Dry Valleys. This really shot a hole in my argument that PatchMatch would be completely invariant to search range. The search range was [-700,-50,820,50]. I did, however, have to hand-tune the search range to get this excellent result from ASP; the automatic detection initially missed the tops of the mountains.

Source Code

I’m being messy because this is my research code and not something production-ready. You’ll need to understand VW (Vision Workbench) and Makefiles if you want to compile and modify it.

https://github.com/zmoratto/PatchMatch

Further Research

I’m still diving through papers during my free evenings. I don’t do this at work because I have another project that is taking my soul. However, there appears to be a good paper called Patch Match Filter that tries to improve speed through superpixel calculation [5]. There is also an implementation of PatchMatch that adds a smoothness constraint and performs better than the original PatchMatch Stereo paper [6]. When it came out, it was the best-performing algorithm on the Middlebury dataset; recently another graph-cut algorithm dethroned it. I will also look at applying PatchMatch as a refinement to a noisy disparity result from ASP 2.3 using a kernel size of 3. Having a small kernel still evaluates extremely quickly even when the search range is huge.

I’m sorry this post doesn’t end in a definitive conclusion. However I hope you found it interesting.

References

[1] Barnes, Connelly, et al. “PatchMatch: A randomized correspondence algorithm for structural image editing.” ACM Transactions on Graphics (TOG) 28.3 (2009): 24.
[2] Bleyer, Michael, Christoph Rhemann, and Carsten Rother. “PatchMatch Stereo: Stereo Matching with Slanted Support Windows.” BMVC. Vol. 11. 2011.
[3] Yoon, Kuk-Jin, and In So Kweon. “Adaptive support-weight approach for correspondence search.” Pattern Analysis and Machine Intelligence, IEEE Transactions on 28.4 (2006): 650-656.
[4] Hel-Or, Yacov, Hagit Hel-Or, and Eyal David. “Fast template matching in non-linear tone-mapped images.” Computer Vision (ICCV), 2011 IEEE International Conference on. IEEE, 2011.
[5] Lu, Jiangbo, et al. “Patch Match Filter: Efficient Edge-Aware Filtering Meets Randomized Search for Fast Correspondence Field Estimation.” Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on. IEEE, 2013.
[6] Heise, Philipp, et al. “PM-Huber: PatchMatch with Huber Regularization for Stereo Matching.” Computer Vision (ICCV), 2013 IEEE International Conference on. IEEE, 2013.

Mass CTX Processing

A few weeks back, Ross Beyer presented my blog posts on autonomous HiRISE DEM processing at the HiRISE team meeting in Tucson, AZ. This raised the question of whether the same could be done for CTX. Of course! ASP can do bulk processing for any of the missions it supports: Earth, or any place for which ISIS and CSpice have a defined coordinate system. Just put some run-time safeties into the processing scripts, because ASP still occasionally goes mad and eats a whole bunch of processing time for no output. (We’re working on it!)

Processing CTX stereo pairs, however, is in fact a little more difficult than the HiRISE processing I was doing before. HiRISE lists all their stereo pairs on their website and on Dr. Shane Byrne’s website; there’s no equivalent for CTX. Luckily for me, some folks at UofA wrote PairendipityCTX (Chris Schaller?). They provided Ross and me a detailed report of overlapping files and other statistics. I cut out everything but the filenames for my own use, and you can get a copy of the list with its 1,542 stereo pairs here.

Another difference is how these two missions store their data on PDS. I can look at a HiRISE filename and work out its download path in PDS with no trouble. CTX, on the other hand, seems to have arbitrary volume breaks, so the download URL is not predictable. My solution is a bad solution, but a quick one: I wrote a Python script that scraped PDS’s servers and identified all the CTX images it has and what their URLs are. I then just ‘grep’ for the URL in the processing scripts. Here’s the resulting text file that lists all 50,708 CTX images that existed at the time of my scraping. This is a mean trick because my script can make HTTP requests much faster than a human can; in a sense, I’m DOS’ing the PDS servers. So please copy my results rather than my methods.
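The scraper itself is only a few lines. Here is its general shape rather than my actual script; the base URL and the volume naming are my assumptions about PDS’s directory layout, so verify them before pointing this at anything:

import re
try:
    from urllib2 import urlopen          # Python 2, the era of these scripts
except ImportError:
    from urllib.request import urlopen   # Python 3

# Assumed layout of the PDS imaging node; check before use.
BASE = "http://pds-imaging.jpl.nasa.gov/data/mro/mars_reconnaissance_orbiter/ctx/"

def links(url):
    html = urlopen(url).read().decode("utf-8", "ignore")
    return re.findall(r'href="([^"]+)"', html)

urls = []
for vol in links(BASE):
    if not vol.startswith("mrox_"):      # hypothetical volume naming
        continue
    data_dir = BASE + vol.rstrip("/") + "/data/"
    for f in links(data_dir):
        if f.upper().endswith(".IMG"):
            urls.append(data_dir + f)

# Write the lookup file the workers later grep against.
open("ctx_url_lookup.txt", "w").write("\n".join(urls))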

Processing scripts

Previously, for my autonomous HiRISE processing efforts, I just wrote a Bash script and spawned it multiple times on my server using GNU Parallel. That’s great, but looking around my office I saw a lot of unused computers that I’d like to do my bidding. What I wanted was a job management system like PBS, which is what they use on the supercomputer. But PBS is a little too heavy and sometimes costs money. So I instead found two alternative projects that seemed to fit the bill: Gearman and Celery. Gearman is the minimalist; Celery requires a database backing and multiple open ports for each slave worker. I decided to use Gearman since it seemed simpler to learn.

My code for this little project is available on Github in my Mars3DGearman project. I hope to eventually add HiRISE support. Here’s how it plays out. All machines make a working directory that contains the CTX stereo list, the CTX URL lookup list, and the folders DEM and DRG. One machine is designated the server; in my case, my workstation at home. It starts the only instance of ctx_processor.py with the help of a backing ‘gearmand’ executable (the Gearman daemon). All of the slaves then SSH back to my server and forward port 4730, the port used by Gearman for communication. Then all the slave machines can start one or more instances of ctx_worker.py.
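For flavor, the worker side with the python-gearman bindings looks roughly like this (API from memory and the stereo work elided, so treat it as a sketch rather than the contents of ctx_worker.py):

import gearman

def process_stereo_pair(gearman_worker, gearman_job):
    # Job payload assumed here to be two CTX image IDs separated by a
    # space; the real script's format is likely different.
    left, right = gearman_job.data.split()
    # ... look up URLs, download, run ASP, drop results in DEM/ and DRG/ ...
    return "done"

worker = gearman.GearmanWorker(["localhost:4730"])  # forwarded to the server
worker.register_task("ctx_stereo", process_stereo_pair)
worker.work()                                       # block and take jobs

The server side is the mirror image: a gearman.GearmanClient connected to the same port submits one "ctx_stereo" job per line of the stereo pair list.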

The part I haven’t worked out is how to send the completed results home to my main server. The ctx_worker script produces a DEM and orthophoto and then just dumps them locally into the DEM and DRG folders. Gearman does allow sending binary strings back to the main server, but I’m betting a 100 MB string would break something. I chose a different route: since I already have all the slaves SSH’ing back to my main server, I simply rsync all the slaves’ DEM and DRG folders back home. I don’t have to re-enter my password, as I’ve enabled SSH ControlMaster, which reuses previous connections. For now, I just put that rsync in a watch command that fires every 2 hours; alternatively it could live inside the ctx_worker script. A better bet would be to use SSH keys.
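As a sketch, the push-home loop on each slave could be as dumb as this (hostname and paths are made up, and ControlMaster or keys must already make rsync password-free):

import subprocess
import time

SERVER = "myserver.example.com"  # hypothetical home server

while True:
    for folder in ("DEM", "DRG"):
        subprocess.call(["rsync", "-av", "%s/" % folder,
                         "%s:work/%s/" % (SERVER, folder)])
    time.sleep(2 * 60 * 60)  # roughly what the watch command was doing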

Another worthwhile improvement over my HiRISE processing scripts is the inclusion of a timeout for each step. When it comes to CTX, if the correlation doesn’t finish in 2 hours, I’m not interested. This timeout is achieved through the run_cmd and process_timeout functions in the ctx_worker script. The Internet helped me a lot in figuring out how to make that a reality. Thanks, Internet!
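The core of that idea is just polling the child process against a deadline; something along these lines, though not the exact code from ctx_worker.py:

import subprocess
import time

def run_cmd(cmd, timeout=2 * 60 * 60):
    # Launch the command, then poll; kill and bail if the deadline passes.
    p = subprocess.Popen(cmd, shell=True)
    start = time.time()
    while p.poll() is None:
        if time.time() - start > timeout:
            p.kill()          # give up on this stereo pair
            return None
        time.sleep(10)
    return p.returncode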

Results

These last few days I’ve roped 4 machines into doing my bidding. So far they’ve produced ~260 DEMs; 5 new DEMs completed just while I was writing this article. There are still some hiccups in the process, but when the stars align, I seem to produce over 50 new DEMs every day. I’m not going to show you all of them, as that felt like a lot of work to post to the blog. Instead I’m just going to show off a couple of screenshots of some interesting places in Google Mars. The color ramp is a little funky because someday I need to learn to reference everything against the Mars areoid and not just the sphere.

Not everything looks as great as those screenshots. Here are some places that failed to correlate. I’m not sure what went wrong, and I unfortunately don’t have the time to investigate.

In conclusion, this is just another proof of concept of what is possible. I hope that someday someone else will attempt this and do a better job than me. ASP is not perfect, but it can achieve a lot of processing on its own that could be beneficial to the scientific community.

Related

Shean, D. E., et al. “MRO CTX Stereo Image Processing and Preliminary DEM Quality Assessment.” Lunar and Planetary Institute Science Conference Abstracts. Vol. 42. 2011.

David Shean, MSSS, and Larry Edwards actually already attempted this once before! In the abstract above, you can see that they went above and beyond my little demo and processed 1,180 stereo pairs. They also developed some initial steps for registering the output DEMs to MOLA, and plotted the effect convergence angle has on the outcome of a stereo pair.

Processing Antarctica

I’ve been sick all last week, but that hasn’t stopped me from trying to process World View imagery in bulk on NASA’s Pleiades supercomputer. Right now I’m just trying to characterize, for an upcoming proposal, how big a challenge it is to process this large satellite data on a limited-memory system. I’m not pulling out all the tricks we have to ensure that all parts of the image correlate. Still, that hasn’t stopped ASP from producing this interesting elevation model of a section of Antarctica’s coastline, just off of Ross Island. Supposedly Marble Point Heliport is in this picture (QGIS told me it was the blue dot at the bottom of the coastline).

I’m using homography alignment, auto search range, parabola subpixel, and no hole filling. The output DEMs were rasterized at 5 meters per pixel. The crosses or fiducials in the image are posted 5 km apart. This is a composite of 10 pairs of WV01 stereo imagery from 2009 to 2011; no bundle adjustment or registration has been applied. The image itself is just a render in QGIS, where the colorized DEM has a hillshade render of the same DEM overlaid at 75% transparency.

I haven’t investigated why more of the mountains didn’t come out. When it looks like a whole elevation contour has been dropped, that’s likely because the auto search range didn’t guess correctly. When it looks like a side of a mountain didn’t resolve, that’s likely because there was shadow or highlight saturation in the image. It could also be that ASP couldn’t correlate correctly on such a steep slope.

Winging a DEM for a mission using World View 1

The group I work for at NASA has a big robot that likes to drive in a quarry at speed. Doing this is risky, as we could easily put the robot in a position to hurt itself or others. One of the things we do to mitigate the risk is to have a prior DEM of the test area. The path-planning software can then use the DEM to determine where the robot is and what terrain is too difficult to cross.

Since ASP recently gained the ability to process Digital Globe and GeoEye imagery (more about that in a later post), I was asked to make a DEM from some World View 1 imagery they purchased. The location was Basalt Hills, a quarry at the south end of the San Luis Reservoir. To process this imagery with any speed, the imagery must be map projected onto some prior DEM. My choices were SRTM or NED, and in my runs, both DEMs had problems. SRTM has holes in the middle of it that needed to be filled so ASP would work correctly. NED had linear jumps in it that ASP couldn’t entirely reverse in its math.

I ended up using SRTM as a seed to create my final DEM of the quarry. If you haven’t seen this before, the process looks like the commands below in ASP 2.0+. What’s happening is that ASP uses an RPC map projection to overlay the imagery on SRTM. When it comes time to triangulate, ASP reverses the math it used to map project, and then, in the case of Digital Globe, triangulates using the full camera model. Another thing worth noting is that ASP needs control over how the interpolation is performed during RPC map projection, which forces us to skip the GDAL utilities for this step and instead use our own custom utility.

parallel rpc_mapproject --tr 0.5 \
      --t_srs '"+proj=utm +zone=10 +datum=WGS84 +units=m +no_defs"' \
      filled_srtm_dem.tif {} {.}.XML {.}.srtm.crop.tif ::: left.TIF right.TIF
stereo left.srtm.crop.tif right.srtm.crop.tif left.XML right.XML \
      r1c1_srtm_crop/r1c1_srtm_crop filled_srtm_dem.tif

Afterwards we got a pretty spiffy result that definitely shows more detail than the prior DEM sources. Unfortunately, the result was shifted from the NED DEM that my crew had previously been using. Ideally this would be fixed by bundle adjusting the World View camera locations; it was clearly needed, as most of our projected rays only came within 3 meters of each other. Unfortunately ASP doesn’t have that implemented.

EDIT: If I had paid closer attention to my data, I would have noticed that a large part of the difference I was seeing between my DEM and USGS’s NED was because the NED data uses a vertical datum. My ASP DEMs are referenced against the WGS84 ellipsoid; NED data is referenced against WGS84 plus NAVD88. This would account for a large part of the 30 meter error I was seeing. (11.19.12)

My “I’m-single-with-nothing-going-on-tonight” solution was the Point Cloud Library (PCL). It has promising iterative closest point (ICP) implementations and will eventually include the Normal Distributions Transform algorithm. It also has the benefit of having its libraries designed with some forethought, compared to the hideous symbol mess that is OpenCV.

PCL's pcd_viewer looking at the quarry.

I achieved ICP with PCL by converting my PC (point cloud) file from ASP into a PCL PCD file [1]. I also converted the NED DEM into a PCD file [2]. I then subsampled my ASP point cloud to something more manageable by PCL’s all-in-memory tactics [3]. Then I performed ICP to solve for the translation offset between the two clouds [4]. My offset ended up being about a 40 meter shift in the north and vertical directions. I then applied this translation back to the ASP PC file [5] so that the DEM and DRG could be re-rendered together using point2dem like normal.

I wrote this code in the middle of the night using a lot of C++, because I’m that guy. Here’s the code I used, just for reference in the event that it might help someone. Likely some of it could have been done in Python using GDAL. (A toy Python version of the ICP step follows the file list below.)

1. convert_pc_to_pcd.cc
2. convert_dem_to_pcd.cc
3. pcl_random_subsample.cc
4. pcl_icp_align.cc
5. apply_pc_offset.cc
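For those who would rather see the idea than read PCL code, here is a toy Python/numpy version of the translation-only ICP in step 4, standing in for PCL’s implementation; src and ref are Nx3 arrays of points:

import numpy as np
from scipy.spatial import cKDTree

def icp_translation(src, ref, iters=20):
    # Solve only for a 3-vector offset: repeatedly match each shifted
    # source point to its nearest reference point, then move by the
    # mean residual. No rotation, unlike full ICP.
    tree = cKDTree(ref)
    offset = np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src + offset)
        offset += (ref[idx] - (src + offset)).mean(axis=0)
    return offset  # the translation to apply back to the ASP PC file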

After rendering a new DEM of the shifted point cloud, I used MDenoise to clean up the DEM a bit. This tool is well documented at its own site (http://personalpages.manchester.ac.uk/staff/neil.mitchell/mdenoise/).

I’ve also been learning some QGIS. Here are some screenshots where you can see the improved difference map between NED and my result after ICP. Generally this whole process was very easy, which leads me to believe that with some polish this could become a nice automated way to build DEMs and register them against a trusted source. Ideally bundle adjustment would be performed, but I have a hunch that satellite positioning for Earth targets is so good that very little shape distortion has happened in our DEM triangulations. I hope this has been of interest to some of you out there!

Difference map between the USGS NED map and ASP's WV01 result.

Fly by Night application of ASP to Video

This is a video of horizontal disparity from a video stereo rig onboard the International Space Station; specifically, it was this camera. I used ffmpeg to split the data into individual frames and then applied ASP to the individual frames. I attempted solving for the focal length and convergence angle of this set, but unfortunately I didn’t properly constrain focal length. (My algorithm cheated and brought its error to zero by focusing at infinity.) Regardless, I’m pretty happy with the result for a Tuesday night at home.
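The plumbing is nothing special; schematically it was something like the following, with the file names and pinhole camera files made up for illustration:

import glob
import os
import subprocess

for d in ("left", "right", "out"):
    if not os.path.isdir(d):
        os.makedirs(d)

# Split each video stream into numbered frames.
subprocess.call(["ffmpeg", "-i", "left.mov", "left/frame_%04d.tif"])
subprocess.call(["ffmpeg", "-i", "right.mov", "right/frame_%04d.tif"])

# Run ASP's stereo on each frame pair with hypothetical pinhole cameras.
for lf in sorted(glob.glob("left/frame_*.tif")):
    rf = lf.replace("left/", "right/")
    prefix = "out/" + os.path.basename(lf).replace(".tif", "")
    subprocess.call(["stereo", lf, rf, "left.tsai", "right.tsai", prefix])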

Creating Control Networks and Bundle Adjusting with ISIS3

Bundle adjustment is the process of back-solving for a camera’s trajectory and pose. This process needs to be performed for most satellite images at some point, because there is always error in the camera location. Satellite position and velocity are usually found through radio communications with the Deep Space Network, using methods such as measuring antenna direction, time of flight, and Doppler effects on the signal. The spacecraft’s pose is usually measured with outward-facing cameras called star trackers. All of this is surprisingly accurate considering that it’s a measurement made from at least one planet away, but it doesn’t meet the demands of photogrammetrists, who wish to register images to meter-level precision.

In this article, I’ll run through an example of producing a control network and running jigsaw with USGS’s ISIS3 software. I’ll be playing today with two CTX images (P02_001918_1735_XI_06S076W and P02_001984_1735_XI_06S076W.IMG) that look at the southwest end of Candor Chasma on Mars. Everything practiced here applies equally to other missions, with some additional number fiddling.

You probably don’t need this reminder, but before you can do anything, these files need to be ingested into ISIS. I’m also going to take this time to radiometrically calibrate and attach SPICE data. The parallel you see in my examples is GNU Parallel; it’s not usually installed on systems by default. I strongly recommend that everyone get it and learn to use it, as it is a time-saving utility.

parallel mroctx2isis from={} to={.}.cub ::: *.IMG
parallel spiceinit from={} ::: *.cub
parallel ctxcal from={} to={.}.cal.cub ::: *.cub

Now that you have beautiful images of Mars, let’s break into the new stuff. We are going to start by building a control network. Control networks are databases of image measurements used during bundle adjustment. They define locations in 3D space, called Control Points, and the pixel locations that each point projects into, called Control Measures. Before we go too far, we need to build some metadata so that the control network code knows how the images overlap.

parallel footprintinit from={} ::: *cal.cub
echo *cal.cub | xargs -n1 echo > cube.lis
findimageoverlaps from=cube.lis overlaplist=overlap.lis

In the commands above, we created two files, cube.lis and overlap.lis, that we’ll be using repeatedly. An interesting side note: footprintinit has created a vector layer inside each of the cube files that shows the lat-lon outline of the image. If one is so inclined, that vector layer can be extracted with an “isis2gml label=Footprint” call. That GML can then be rendered with the gdal_rasterize tool or converted to KML with the ogr2ogr tool.
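For instance, that footprint-to-KML chain might look like the following. I’m assuming isis2gml follows ISIS’s usual from=/to= parameter convention, so check its actual help before trusting this; the ogr2ogr usage is standard:

import subprocess

cub = "P02_001918_1735_XI_06S076W.cal.cub"
gml = cub.replace(".cal.cub", ".footprint.gml")
# Hypothetical isis2gml invocation; verify the parameter names.
subprocess.call("isis2gml from=%s to=%s label=Footprint" % (cub, gml),
                shell=True)
subprocess.call(["ogr2ogr", "-f", "KML", gml.replace(".gml", ".kml"), gml])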

Since most of the pretty NASA cameras are linescan, we are trying to bundle adjust a trajectory, and thus need many control points; about 20-30 points are required. Ideally these control points would be distributed evenly across the surface of each image. ISIS provides the autoseed command to help with that.

autoseed fromlist=cube.lis overlaplist=overlap.lis onet=control.net deffile=autoseed.def networkid=ctx pointid=???? description=mars

How autoseed works is defined by the definitions file, autoseed.def. I haven’t given you this file, so let’s take a look at what should be inside it.

Group = PolygonSeederAlgorithm
      Name = Grid
      MinimumThickness = 0.1
      MinimumArea = 1
      XSpacing = 8000
      YSpacing = 8000
End_Group

The minimum thickness defines the minimum ratio between the sides of a region that can have points applied to it; a value of 1 would require a square, and anything less allows thinner and thinner rectangles. The minimum area argument defines the minimum square meters that must be in an overlap region. The last two are the spacing in meters between control points. I played with those two values until I got about 50 control points out of autoseed. Having more control points just makes for more work later in this process.

After the autoseed command, we finally have a control network that we can view with ISIS’s qnet utility. Run qnet in the terminal and a window should pop up. You’ll then have to click ‘open’ and select the cube.lis file and then the control.net file that we created earlier. In the Control Network Navigator window, select the drop-down menu and choose ‘Cubes’. Highlight the names that show up on the left side and then press the ‘View Cubes’ button.

You can now see the locations of the control points in the two images we have been playing with. However, the alignment between the control measures is imperfect at this point. We can see this visually by asking qnet to show us a single control point: in the Control Network Navigator window, select the drop-down menu and choose ‘Points’, then double-click point ‘0001’ on the left side. A new window called ‘Qnet Tool’ should pop up; click the ‘geom’ option in its bottom right. The two pictures in this window show the control point projected into the reference image (left) and into the second image (right).

You can click the right image or use the arrow keys to reposition the control measure. You can also click the play button on the bottom left of the window so that the view keeps flipping between images. I prefer to operate with that window playing, as it clearly shows the misalignment between measures. An example is shown left if you click on the picture.

We could at this point fix these 50 or so control points by hand in qnet. There is a better option: ISIS’s pointreg, an automatic control measure registration tool that tends to get about 75% of the points correct. The command looks like the following:

pointreg fromlist=cube.lis cnet=control.net onet=control_pointreg.net deffile=autoRegTemplate.def

Again all the settings are in the definition file. Here are the contents of autoRegTemplate.def.

Object = AutoRegistration
   Group = Algorithm
     Name         = MaximumCorrelation
     Tolerance    = 0.7
   EndGroup

   Group = PatternChip
     Samples = 19
     Lines   = 19
     MinimumZScore = 1.5
     ValidPercent = 80
   EndGroup

   Group = SearchChip
     Samples = 75
     Lines   = 75
   EndGroup
 EndObject

If you are a user of Ames Stereo Pipeline, these settings should look pretty familiar. The search chip defines the range over which pointreg will look for matching imagery, and the pattern chip is simply the kernel size of the matching template. You will likely have to redefine the search range when working with new imagery; use qnet to get an idea of what your pixel shifts are and what the search range should be. A search range of 75 pixels in all directions happened to work for me with these specific CTX images.

With those settings, I was able to register 38/47 existing control points! The rest I’ll have to register manually in qnet. Using qnet to quickly register points is a bit of a fine art that I don’t think I can describe here; maybe when I have free time I could make a video.

After cleaning up the last 9 control points in qnet, we have a complete control network in the control_pointreg.net file. We are ready to run jigsaw and update the camera pointing. Here’s the command:

jigsaw fromlist=cube.lis update=yes twist=no radius=yes cnet=control_pointreg.net onet=control_ba.net

From jigsaw’s terminal output, I can see that the starting projection error for this control network was 58.9 pixels on average; this is the sigma0 value under iteration 1. After jigsaw converged, by rotating the cameras’ pose and by moving the control points, sigma0 dropped to 1.2 pixels. This is quite an improvement and should help DTM quality. If you happen to mess up and write an incorrect camera solution to your input files, you can simply run spiceinit to revert your changes.

In the next version of Ames Stereo Pipeline (version 1.0.6 or 2.0), we’ll probably provide the ability to render the triangulation error of a DTM. Triangulation error is simply how close the projected rays of the images came to intersecting; it is one of many measurements that can be used to judge the quality of a DTM. I’ve gone ahead and rendered DTMs using both the jigsaw’d and non-jigsaw’d versions of these CTX images. On the upper right is their hillshaded colormap output. Visually, there’s not a noticeable difference between the two. However, the height range has changed drastically: the original data produced heights between 0.6 km and 4.9 km, while the bundle-adjusted data produces a range between -8.9 km and -4.4 km. The colormap figure I’ve produced uses two different color scales for those DTMs simply to show that the DTM hasn’t pivoted any; the only change was a height drop.

I’ve also produced colormap output of the triangulation error. Here we can really see that jigsaw has made a difference for the better. The color green represents a triangulation error of 1 meter, blue 0.1 meters, and yellow 10 meters. From the figure on the left, it’s clear that, for the most part, bundle adjustment improved every pixel in the image.

I’m sorry that this has been a run-on article. Writing it was also an experiment for me. I hope I’ve shown you how to use ISIS’s control network tools, and I’ve managed to show myself that fixed ground control points seem to be required in jigsaw. I have very little trust in the absolute height values in the DTM. I think the relative measurements are correct, but I was definitely not expecting the 10 km drop in height between the non-adjusted and bundle-adjusted solutions.