Ames Stereo Pipeline and Earth

I’ve been working on Ames Stereo Pipeline for about 4 years now. During that time the developers and I have always focused on what we could do with data from NASA satellites. So we’ve produced 3D models of the Moon, Mars, and the moons of Saturn. Recent work, however, has allowed us to produce 3D models from data much closer to home.

Earth example from ASP

In the next ASP release, I’m happy to announce that we will be providing limited support for processing images from DigitalGlobe satellites. DigitalGlobe sells images of Earth from its WorldView and QuickBird satellites; their imagery is commonly seen in the base layer of Google Earth. ASP was recently awarded a grant to add support for these images to allow mapping of Earth’s polar regions.

Just last week I was able to produce ASP’s first 3D model of Earth using WorldView data. The image is a screenshot of the colormapped DEM output draped in Google Earth. The area is just west of Denver in the Rockies, and the imagery presented a huge challenge for ASP. The many trees make template correlation difficult, and the mountains forced a massive search range, which allowed outliers to make their way into the final result. Even so, we were still able to produce a result. I’m very proud of it, as it gives me confidence that ASP will do well in the polar regions, where the terrain is not as aggressive.

Getting better results with LRO-NAC in Ames Stereo Pipeline

The Lunar Reconnaissance Orbiter (LRO) has been in space now for more than 2 years. It produces extremely high-resolution images of the Moon, the likes of which the world hasn’t seen in half a century. This is both a blessing and a curse for the Stereo Pipeline. Its high resolution means that we can easily get meter-level accuracy. The downside is that even the smallest hills in LRO-NAC imagery can cause huge disparities, and large disparities mean long processing times.

LRONAC Example Raw Stereo Input Image

Take for example this pair of LRO-NAC imagery of the Apollo 15 landing site. These images are observations M111571816 and M111578606. For simplicity, let’s only discuss the left frame (the LE files). The bright impact crater, seen at the top of the frame, moves right 2500 pixels between frames. However, the hills at the bottom of the frame move left 500 pixels. This means that in the horizontal direction alone, the Stereo Pipeline must search ~3000 pixels! This is not only slow, but it makes the correlation process very susceptible to matching noise. When we correlated these images directly, the whole process failed because the integer correlator couldn’t tell the difference between noise and the correct solution.
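To make that search-range arithmetic explicit, here is a tiny sketch using the disparities quoted above, with rightward motion taken as positive (the sign convention is my own, for illustration):

```shell
# Disparities quoted above, rightward motion positive (pixels):
crater_dx=2500    # bright crater near the top of the frame
hills_dx=-500     # hills at the bottom of the frame

# The correlator's horizontal search window must span both extremes:
search_width=$((crater_dx - hills_dx))
echo "$search_width"   # prints 3000
```

Every pixel of that window is a candidate match the integer correlator must evaluate, which is why a 3000-pixel span is both slow and noise-prone.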

LRONAC Images Mapprojected to LOLA

If we map project the images, the results get a little better. The images below were created by running ISIS3’s spiceinit and cam2map. Now the disparity is more reasonable and ASP can easily correlate the top of the frame. However, the search at the bottom of the frame is still pretty extreme: there is about a ~500 pixel shift in the hills. This is less than before; unfortunately, ASP still correlates against noise. The automatic settings recommended a search of 1400 by 100 pixels. This is probably larger than required, since the automatic settings pad their numbers to catch parts of the image they couldn’t get readings from.

The big question is, why didn’t the disparity completely disappear after map projection? It is a fault of the 3D data used during map projection. By default, ISIS map projects against LOLA data. At the equator the LOLA data is very sparse and doesn’t do a good job of modeling the mountains and hills. Luckily, a new 3D data source has become available: the LRO-WAC Global DTM created by the DLR. This is a dense 100 meters/px 3D model and makes a much better surface to map project imagery onto. It’s still not perfect, since it is about two orders of magnitude lower resolution than our LRO-NAC imagery, but it is still better than LOLA data and will help reduce our disparity search range.

The first trick is getting ISIS to map project against new DEM data. This is not entirely easy and requires some planning.

Download and prepare a WAC DTM tile

The example images I’m using for this article are centered at 3.64E, 26N, so I’ve only downloaded the single tile of the global DTM that covers that location. This link is where you can download WAC DTM tiles yourself. You then need to convert the tile to a cube format that ISIS can understand. This involves a file conversion, a conversion to radial measurements, and finally attaching some statistical information to the DTM. Here are the commands I used; I strongly recommend reading the ISIS documentation to learn exactly what each command does.

> <download WAC_GLD100_E300N0450_100M.IMG>
> pds2isis from=WAC_GLD100_E300N0450_100M.IMG to=WAC_GLD100_E300N0450_100M.cub
> algebra from=WAC_GLD100_E300N0450_100M.cub to=WAC_GLD100_E300N0450_100M.lvl1.cub operator=unary A=1 C=1737400
> demprep from=WAC_GLD100_E300N0450_100M.lvl1.cub to=WAC_GLD100_E300N0450_100M.lvl2.cub

I’m using the algebra command to convert the values in the WAC DTM tile from offset-from-datum values (where zero equals the datum radius of 1,737.4 km) to radius values (where zero equals the center of the Moon), which is what ISIS needs in a shape model that it will project images onto. There is probably a better way of doing this by just editing a metadata offset somewhere; unfortunately, I don’t know it. The demprep command adds a statistics table. You can open the result in qview to check that the positioning is correct.
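For the curious, the per-pixel operation that algebra call applies is simply output = A * input + C, with A=1 and C=1737400 meters. The example height below is invented for illustration:

```shell
# What `algebra operator=unary A=1 C=1737400` computes per pixel:
datum_radius=1737400   # meters: the 1,737.4 km lunar reference radius
height=1200            # example pixel: 1200 m above the datum

radius=$((1 * height + datum_radius))
echo "$radius"   # prints 1738600
```

So a point on the datum itself (height 0) comes out as 1737400 m, exactly the radius value ISIS expects in a shape model.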

Attaching and projecting against a custom shape model

When users ‘spiceinit’ a file, they attach a lot of metadata to the image. Everyone knows that this adds the camera model and the spacecraft’s ephemeris data, yet it also attaches what ISIS calls a shape model, which is actually a DEM/DTM. Map projecting against a new shape model requires redoing the ‘spiceinit’ step. Here’s an example, assuming you’ve already run ‘lronac2isis’.

> spiceinit from=M111571816LE.cub shape=USER model=WAC_GLD100_E300N0450_100M.lvl2.cub

At that point you can run cam2map yourself, though I strongly recommend just using the script provided by ASP to map project your input for the stereo session. On the left is an example of LRO-NAC imagery draped over the WAC DTM. The differences are subtle; you may want to toggle back and forth between the WAC-projected and LOLA-projected imagery.
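If you do run cam2map by hand, the usual ISIS pattern is to project the first image and then force the second onto the identical grid by reusing the first result as the map template. The exact parameters here are a sketch of that pattern, not a tested recipe; check the cam2map documentation for your ISIS version:

> cam2map from=M111571816LE.cub to=M111571816LE.map.cub
> cam2map from=M111578606LE.cub map=M111571816LE.map.cub matchmap=true to=M111578606LE.map.cub

Matching the map files keeps both images on the same projection and pixel grid, which is what keeps the remaining disparity search small.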

Stereo Pipeline Results

Once you’ve map projected against the LRO-WAC DTM, you’ve created images that are much better aligned to each other. If you run everything in fully automatic mode, ASP will attempt a search of about 600 px by 100 px. That’s less than half the search range of the previous cam2map results against LOLA, which means faster processing and less memory required during the integer correlation step. On the right is my final result. Oddly, this image exposes a problem I’ve never seen before: the top half of the image exhibits ‘stair step’ features. Anyway, this is one interesting idea for making LRO-NAC processing faster, and I hope it allows more people to process these images for 3D.
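A quick back-of-the-envelope comparison of the two automatic search ranges quoted above shows the saving, under the rough assumption that integer correlation cost scales with search area:

```shell
# Automatic search ranges quoted above (width x height, pixels):
lola_area=$((1400 * 100))   # projected against LOLA
wac_area=$((600 * 100))     # projected against the WAC DTM

# Rough cost ratio, assuming correlation work scales with search area:
echo "$((100 * wac_area / lola_area))"   # prints 42 (percent)
```

So the WAC-projected run explores well under half of the search space of the LOLA-projected run, with correspondingly lower runtime and memory.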

Building Ames Stereo Pipeline against ISIS on Ubuntu 10.04

This is a guide for advanced bearded users. If you don’t have a beard, don’t try this at home! These instructions will also work for OSX minus the package manager.

Ames Stereo Pipeline is an open source collection of tools for building 3D models from NASA’s planetary satellites. Our software is able to do this by depending on USGS’s ISIS for all the camera models. That saves me a lot of time because I no longer have to implement custom models for the many cameras out there (MOC, HiRISE, MDIS, LROC, ISS, etc.). The downside is that building against ISIS is next to impossible, as they expect you to use their binary releases. This means that to compile against their binaries, we must recreate their development environment, down to every third-party library.

There are some base tools that you need installed on your Ubuntu Box.

sudo apt-get install g++ gfortran libtool autoconf   \
   automake ccache git-core subversion cmake chrpath \
   xserver-xorg-dev xorg-dev libx11-dev libgl1-mesa-dev \
   libglu1-mesa-dev libglut3-dev

Building an ISIS environment is incredibly difficult to do by hand, never mind the difficulty of sanitizing the bash shell so that it doesn’t find libraries in odd places. So a co-worker of mine created an awesome collection of scripts to make this easier: it’s called Binary Builder, and it’s available on GitHub. The commands below check out the scripts from GitHub and then run them. What BB does in this first step is download and compile all of the dependencies of Vision Workbench and Ames Stereo Pipeline. This means we’re building Boost, GDAL, Zip, OpenSceneGraph, LAPACK, and many others. As you can imagine, this step takes a long time.

cd ~; mkdir projects; cd projects
git clone
cd BinaryBuilder
./ --dev-env

Most likely things will die at this point. Here is where your bearded powers are to be applied! Good luck. When you’ve fixed the bug, or think you’ve worked it out, use the following command to quickly resume:

./ --dev-env --resume

You’ll know you’ve had a completely successful run when the script prints “All done!” and gives you a list of the environment variables used. Next, let’s clean up by making a BaseSystem tarball.

./ --include all --set-name BaseSystem last-completed-run/install

This tarball houses all the headers, libraries, and the copy of ISIS that you need to proceed. It will be your lighthouse when everything else fails. You can also share this tarball with other users who have similar systems. Anyway, it’s time to deploy this BaseSystem tarball to a permanent location.

mkdir ~/projects/base_system
./ BaseSystem-*.tar.gz ~/projects/base_system

Installing Vision Workbench

You’re ready for step 2. This is all pretty straightforward, but you should notice that the deploy-base script produced config.options files for both Vision Workbench and Stereo Pipeline. A config.options file is just another way to feed arguments to ./configure. When we install Vision Workbench, the base options in config.options.vw should be fine for us.

cd ~/projects
git clone
cd visionworkbench
cp ~/projects/base_system/config.options.vw config.options
./autogen && ./configure
make -j <N Processors>
make install
make check -j <N Processors>

All unit tests should pass at this point. If not, bearded you knows what to do.

Installing Ames Stereo Pipeline

cd ~/projects
git clone
cd StereoPipeline
cp ~/projects/base_system/config.options.asp config.options

We’re going to take a moment to deviate here. At this point you will need to make some modifications to your copy of ‘config.options’ for Ames Stereo Pipeline. You need to set the variable ‘VW’ to the install (prefix) path that you used; in this example it should be ‘~/projects/visionworkbench/build’. You can also take this time to flip other bits that you find interesting. For example, there’s an ENABLE_MODULE_CONTROLNETTK variable that you can set to ‘yes’, which enables prototype utilities for manipulating control networks. Once you’re done playing around, finish your build of ASP.
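As a concrete sketch of that edit, the snippet below builds a stand-in config.options (its contents are invented for illustration; your real file will have many more lines) and rewrites the two variables with sed:

```shell
# Stand-in for the real config.options; contents invented for illustration.
printf 'VW=/placeholder/path\nENABLE_MODULE_CONTROLNETTK=no\n' > config.options

# Point VW at the Vision Workbench install prefix built earlier:
sed -i "s|^VW=.*|VW=$HOME/projects/visionworkbench/build|" config.options

# Optionally enable the prototype control network utilities:
sed -i "s|^ENABLE_MODULE_CONTROLNETTK=.*|ENABLE_MODULE_CONTROLNETTK=yes|" config.options

cat config.options
```

Note the use of $HOME rather than ‘~’: configure scripts don’t expand a tilde inside a variable value, so spelling out the absolute path is safer.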

cd ~/projects/StereoPipeline
./autogen && ./configure
make -j <N processors>
make install

You can also run ‘make check’; you just need to have your ISIS3DATA set up. You can fall back to your own install of ISIS and everything should work fine. If it wasn’t clear before, you’ll find the executables in “~/projects/visionworkbench/build/bin” and “~/projects/StereoPipeline/build/bin”. That’s all, folks. I hope everything worked out okay.

Finished 3D from Apollo

Render of a DIM and DEM map from Apollo Metric Images

It’s been a long 3 years in the making, but today I can finally say that I have finished my 3D reconstruction from the Apollo Metric cameras. After tens of thousands of CPU hours and several hundred liters of soda, the mapmakers at the Intelligent Robotics Group have managed to produce an image mosaic and Digital Elevation Map. The final data products are going up on LMMP’s website for scientists to use. I encourage everyone else to take a look at the KML link I’ve provided below instead.

IRG’s Apollo Metric/Mapping Mosaic

It’s so pretty! But don’t be sad: IRG’s adventure with Apollo images doesn’t end here. Next year we’ll be working on a new and fancier image mosaic called an Albedo Map. Immediately after that, our group will work with the folks at USGS’s Astrogeology Science Center to include more images in the Apollo mosaic. In that project we’ll include not only the images looking straight down on the Moon, but also those that look off toward the horizon.

All of the above work was produced using our open source libraries Vision Workbench and Ames Stereo Pipeline. Check them out if you find yourself producing 3D models of terrain. At the very least, our open source license allows you to look under the hood and see how we did things so that you may improve upon them!

Apollo 15 Mosaic Completed

Image of the Moon with images from Apollo 15 projected onto it.

Let me secretly show you a cool project I just finished. During the later Apollo missions in the ’70s, NASA came to understand that its funding would be cut back. In an attempt to extract as much science as possible from the last few missions to the Moon, it increased the astronauts’ time on the surface, gave them a car, and added a science bay to the orbiting spacecraft. Inside that science bay (called the SIM) were two repurposed spy cameras. One was the Apollo Metric Camera, whose 1400 images from Apollo 15 are seen projected above. Recently ASU has been digitally scanning this imagery, which has allowed my colleagues and me to create a 3D model of a large section of the near side of the Moon and a beautifully stitched mosaic.

3D model of Tsiolkovsky Crater

Besides these being pretty pictures, I’m proud to say that all of this work was created with open source software that NASA has produced and that is currently available on GitHub. Vision Workbench and Stereo Pipeline are the two projects that made this all possible. The process is computationally expensive and not recreatable at home, but a university or company with access to a cluster could easily reproduce our results. So what does the process look like?

  1. Collect Imagery and Interest Points (using ipfind and ipmatch).
  2. Perform Bundle Adjustment to solve for correct location of cameras (using isis_adjust).
  3. Create 3D models from stereo pairs using correct camera models (with stereo).
  4. Create terrain-rectified imagery from original images (with orthoproject).
  5. Mosaic imagery and solve for exposure times (using PhotometryTK).
  6. Export imagery into tiles or KML (with plate2dem or image2qtree).
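The steps above roughly correspond to command invocations like the following. Treat these purely as an illustrative sketch: the arguments and file names are assumptions on my part and vary between releases, so consult each tool’s documentation before running anything.

> ipfind left.cub right.cub
> ipmatch left.cub right.cub
> isis_adjust left.cub right.cub left__right.match
> stereo left.cub right.cub run/run
> orthoproject run/run-DEM.tif left.cub left.ortho.tif
> image2qtree -m kml run/run-DEM.tif

(The PhotometryTK mosaicking step is omitted here, as its tooling was still in flux at the time of writing.)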

This long process is not entirely documented yet, and some tools have not yet been released in the binary version of Stereo Pipeline. Still, for the ambitious, the tools are already there. Better yet, we’ll keep working on those tools to improve them, as IRG is chock-full of ideas for new algorithms and places to apply them.