ASP v2 Status Update

If you watch your email or ISIS’s website like a hawk, you’ll notice ISIS 3.4.0 just dropped this week. So where’s Ames Stereo Pipeline in all of this?

The current plan is to get Ames Stereo Pipeline 2.0 out before the Planetary Data Workshop on June 25th in Flagstaff. At this meeting I hope to be giving a tutorial and showing off all the cool stuff the community and the team have been developing for ASP.

http://astrogeology.usgs.gov/groups/Planetary-Data-Workshop

I currently have two limiting factors that I hope will get resolved in the next 2 weeks:

  • ASP and Binary Builder need to be updated to agree with the changes in ISIS 3.4.0.
  • The NASA Ames lawyers need to re-license Vision Workbench under the Apache 2 license. ASP has already been re-licensed under Apache 2; however, one of its prime dependencies, Vision Workbench, has not. This is a legal block against releasing binaries.

Thank you all for your patience and for doing neat projects with Ames Stereo Pipeline. I enjoy seeing the pretty pictures at science conferences.

Things to look forward to in this next release:

  • DigitalGlobe image support.
  • A faster and more memory-efficient integer correlator.
  • A GDAL projection interface for point2dem.
  • Parameter settings from the command line and configuration files.

Getting better results with LRO-NAC in Ames Stereo Pipeline

The Lunar Reconnaissance Orbiter (LRO) has been in space now for more than 2 years. It produces extremely high-resolution images of the Moon, the likes of which the world hasn’t seen in half a century. This is both a blessing and a curse for the Stereo Pipeline. Its high resolution means that we can easily get meter-level accuracy. The downside is that even the smallest hills in LRO-NAC imagery can cause huge disparities, and large disparities mean long processing times.

LRONAC Example Raw Stereo Input Image

Take for example this pair of LRO-NAC images of the Apollo 15 landing site, observations M111571816 and M111578606. For simplicity, let’s only discuss the left frame (the LE files). The bright impact crater, seen at the top of the frame, moves right 2500 pixels between the two images. However, the hills at the bottom of the frame move left 500 pixels. This means that in the horizontal direction alone, the Stereo Pipeline must search ~3000 pixels! This is not only slow, but makes the correlation process very susceptible to matching noise. When we correlate these images directly, the whole process fails because the integer correlator can’t tell the difference between noise and the correct solution.

LRONAC Images Mapprojected to LOLA

If we map project the images, the results get a little better. The images below were created by running ISIS3’s spiceinit and cam2map. Now the disparity is more reasonable and ASP can easily correlate the top of the frame. However, the search at the bottom of the frame is still pretty extreme: there is a ~500 pixel shift in the hills. This is less than before; unfortunately, ASP still correlates to noise. The automatic settings recommended a search of 1400 by 100 pixels. This is probably larger than required, since the automatic routine pads its numbers to catch parts of the image it couldn’t get readings from.
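If the automatic guess is too generous, you can also clamp the search range yourself. As a rough sketch only, using the pre-2.0 stereo.default keywords (the numbers below are illustrative guesses for a roughly 500 pixel leftward shift; measure your own disparity and check your version’s documentation before trusting the signs):

H_CORR_MIN  -600
H_CORR_MAX   100
V_CORR_MIN   -50
V_CORR_MAX    50

A tighter window both speeds up the integer correlator and gives noise fewer chances to masquerade as the correct match.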

The big question is: why didn’t the disparity completely disappear after map projection? This is a fault of the 3D data used for map projection. By default, ISIS map projects against LOLA data. At the equator the LOLA data is very sparse and doesn’t do a good job of modeling the mountains and hills. Luckily, a new 3D data source has become available: the LRO-WAC Global DTM created by the DLR. This is a dense 100 meters/px 3D model and makes a much better product to map project imagery on. It’s still not perfect, since it is about two orders of magnitude lower resolution than our LRO-NAC imagery, but it is still better than the LOLA data and will help reduce our disparity search range.

The first trick is getting ISIS to map project against new DEM data. This is not entirely easy and requires some planning.

Download and prepare a WAC DTM tile

The example images I’m using for this article occur at 3.64E, 26N, so I’ve only downloaded the northern tile of the global DTM that covers the first 90 degrees of longitude. This link is where you can download WAC DTM tiles yourself. You then need to convert the tile to a cube format that ISIS can understand. This involves a file conversion, a conversion to radial measurements, and finally attaching some statistical information to the DTM. Here are the commands I used. I strongly recommend reading the ISIS documentation to learn exactly what each command is doing.

> <download WAC_GLD100_E300N0450_100M.IMG>
> pds2isis from=WAC_GLD100_E300N0450_100M.IMG to=WAC_GLD100_E300N0450_100M.cub
> algebra from=WAC_GLD100_E300N0450_100M.cub to=WAC_GLD100_E300N0450_100M.lvl1.cub operator=unary A=1 C=1737400
> demprep from=WAC_GLD100_E300N0450_100M.lvl1.cub to=WAC_GLD100_E300N0450_100M.lvl2.cub

I’m using the algebra command to convert the values in the WAC DTM tile from offset-from-datum values (where zero equals the datum radius of 1,737.4 km) to radius values (where zero equals the center of the Moon), which are the kind of values ISIS needs for a shape model it will project images onto. There is probably a better way of doing this by just editing a metadata offset somewhere; unfortunately, I don’t know how. The demprep step adds a statistics table. You can open the result in qview to check that the positioning is correct.
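For reference, my reading of ISIS’s algebra documentation is that the unary operator computes out = A * from + C for each pixel, so the command above is just a per-pixel linear conversion:

radius_m = 1 * height_m + 1737400

In other words, each height above the 1,737.4 km datum, in meters, becomes a radius from the Moon’s center, in meters.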

Attaching and projecting against a custom shape model

When users ‘spiceinit’ a file, they are attaching a lot of metadata to the image. Everyone knows that they are adding the camera model and the spacecraft’s ephemeris data. Yet they are also attaching what ISIS calls a shape model, which is actually a DEM/DTM. Map projecting against a new shape model requires redoing the ‘spiceinit’ step. Here’s an example for both frames, assuming you’ve already completed the ‘lronac2isis’ step.

> spiceinit from=M111571816LE.cub shape=USER model=WAC_GLD100_E300N0450_100M.lvl2.cub
> spiceinit from=M111578606LE.cub shape=USER model=WAC_GLD100_E300N0450_100M.lvl2.cub

At that point you can run cam2map yourself, but I strongly recommend just using the cam2map4stereo.py script provided by ASP to map project your input for the stereo session. On the left is an example of LRO-NAC imagery draped over the WAC DTM. The differences are subtle; you might want to toggle back and forth between the WAC-projected imagery and the LOLA-projected imagery.
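For reference, the invocation is a one-liner; something like the following (check the script’s --help for the exact output naming, which I believe defaults to .map.cub files):

> cam2map4stereo.py M111571816LE.cub M111578606LE.cub

The script runs ISIS’s cam2map on both images so that they end up in the same projection, at the same resolution, and cropped to their common overlap, which is exactly what the stereo correlator wants.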

Stereo Pipeline Results

Once you’ve map projected against the LRO-WAC DTM, you’ve created images that are much better aligned to each other. If you run everything in full automatic mode, ASP will attempt a search of about 600 by 100 pixels. That’s half the search range of the previous cam2map results against LOLA! This means faster processing times and less memory required during the integer correlation step. On the right is my final result. Oddly, this image exposes a problem I’ve never seen before: the top half of the image exhibits ‘stair step’ like features. Anyway, this is one interesting idea for making LRO-NAC processing faster. I hope it allows more people to process these images for 3D.
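For completeness, the correlation run itself was nothing special; against the map projected cubes it is along these lines (the output prefix is arbitrary, and stereo picks up its settings from the stereo.default file in the current directory):

> stereo M111571816LE.map.cub M111578606LE.map.cub results/apollo15
> point2dem results/apollo15-PC.tif

The -PC.tif point cloud naming is ASP’s convention; point2dem grids that cloud into a DEM.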

Finished 3D from Apollo

Render of a DIM and DEM map from Apollo Metric Images

It’s been a long 3 years in the making, but today I can finally say that I have finished my 3D reconstruction from the Apollo Metric cameras. After tens of thousands of CPU hours and several hundred liters of soda, the mapmakers at the Intelligent Robotics Group have managed to produce an image mosaic and a digital elevation map. The final data products are going up on LMMP’s website for scientists to use. I encourage everyone else to instead take a look at the KML link provided below.

IRG’s Apollo Metric/Mapping Mosaic

It’s so pretty! But don’t be sad: IRG’s adventure with Apollo images doesn’t end here. Next year we’ll be working on a new and fancier image mosaic called an albedo map. Immediately after that, our group will be working with the folks at USGS’s Astrogeology Science Center to fold more images into the Apollo mosaic. In that project we’ll include not only the images looking straight down at the Moon, but also the images that look off toward the horizon.

All of the above work was produced using our open source libraries Vision Workbench and Ames Stereo Pipeline. Check them out if you find yourself producing 3D models of terrain. At the very least, our open source license allows you to look under the hood and see how we did things so that you may improve upon them!

Apollo 15 Mosaic Completed

Image of the Moon with images from Apollo 15 projected onto it.

Let me secretly show you a cool project I just finished. During the later Apollo missions in the ’70s, NASA came to understand that its funding would be cut back. In an attempt to extract as much science as possible from the last few missions to the Moon, it increased the astronauts’ time on the surface, gave them a car, and added a science bay to the orbiting spacecraft. Inside that science bay (called the SIM bay) were two repurposed spy cameras. One was the Apollo Metric Camera, whose 1400 images from Apollo 15 are seen projected above. Recently ASU has been digitally scanning this imagery, which has allowed my colleagues and me to create a 3D model of a large section of the near side of the Moon and a beautifully stitched mosaic.

3D model of Tsiolkovsky Crater

Besides these being pretty pictures, I’m proud to say that all of this work was created with open source software that NASA has produced and that is currently available on GitHub. Vision Workbench and Stereo Pipeline are the two projects that have made this all possible. The process is computationally expensive and not recreatable at home, but a university or company with access to a cluster could easily recreate our results. So what does the process look like? Roughly this (a command-level sketch follows the list):

  1. Collect imagery and interest points (using ipfind and ipmatch).
  2. Perform bundle adjustment to solve for the correct camera locations (using isis_adjust).
  3. Create 3D models from stereo pairs using the corrected camera models (with stereo).
  4. Create terrain-rectified imagery from the original images (with orthoproject).
  5. Mosaic the imagery and solve for exposure times (using PhotometryTK).
  6. Export the imagery into tiles or KML (with plate2dem or image2qtree).
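At the command level, that flow looks roughly like the sketch below. This is a sketch, not a recipe: flags are omitted because they vary per dataset, the file names are hypothetical, some of these tools (PhotometryTK in particular) were unreleased at the time, and I’ve slipped point2dem in where our internal plate-based tools would otherwise sit. Consult each tool’s --help before trusting my argument order.

> ipfind cube1.cub cube2.cub
> ipmatch cube1.cub cube2.cub
> isis_adjust cube1.cub cube2.cub
> stereo cube1.cub cube2.cub out/out
> point2dem out/out-PC.tif
> orthoproject out/out-DEM.tif cube1.cub ortho1.tif
> <mosaic the orthoimages and solve exposures with PhotometryTK>
> image2qtree mosaic.tif

For the Apollo work, all of this was farmed out across a cluster, roughly one stereo pair per job.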

The long process above is not entirely documented yet, and some tools have not yet been released in the binary version of Stereo Pipeline. Still, for the ambitious, the tools are already there. Better yet, we’ll keep improving those tools, as IRG is chock-full of ideas for new algorithms and places to apply them.