Fly by Night application of ASP to Video

This is a video of horizontal disparity from a video stereo rig onboard the International Space Station. Specifically it was this camera. I used ffmpeg to split the footage into individual frames and then applied ASP to those frames. I attempted solving for the focal length and convergence angle of this set, but unfortunately I didn't properly constrain focal length. (My algorithm cheated and brought its error to zero by focusing at infinity.) Regardless, I'm pretty happy with the result for a Tuesday night at home.
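For the curious, splitting a video into individual frames with ffmpeg is a one-liner; something like the following, where the file names are just placeholders:

> ffmpeg -i stereo_video.mp4 frame_%04d.png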

Getting better results with LRO-NAC in Ames Stereo Pipeline

The Lunar Reconnaissance Orbiter (LRO) has been in space now for more than 2 years. It produces extremely high-resolution images of the Moon, the likes of which the world hasn't seen in half a century. This is both a blessing and a curse for the Stereo Pipeline. Its high resolution means that we can easily get meter-level accuracy. The downside is that even the smallest hills in LRO-NAC imagery can cause huge disparities, and large disparities mean long processing times.

LRONAC Example Raw Stereo Input Image

Take for example this pair of LRO-NAC images of the Apollo 15 landing site, observations M111571816 and M111578606. For simplicity, let's only discuss the left frames (the LE files). The bright impact crater, seen at the top of the frame, moves right 2500 pixels between frames. However, the hills at the bottom of the frame move left 500 pixels. This means that in the horizontal direction alone, the Stereo Pipeline must search ~3000 pixels! This is not only slow, but makes the correlation process very susceptible to matching noise. When we correlate these images directly, the whole process fails because the integer correlator can't tell the difference between noise and the correct solution.

LRONAC Images Mapprojected to LOLA

If we map project the images, the results get a little better. The images below were created by running ISIS3's spiceinit and cam2map. Now the disparity is more reasonable and ASP can easily correlate the top of the frame. However, the search at the bottom of the frame is still pretty extreme: there is about a 500 pixel shift in the hills. This is less than before; unfortunately, ASP still correlates to noise. The automatic settings recommended a search of 1400 by 100 pixels. This is probably larger than what is required, since the automatic settings pad their numbers to catch parts of the image they couldn't get readings from.
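For reference, this LOLA-based map projection is just the standard ISIS workflow. A rough sketch for one frame (repeat for the other; file names are from this example):

> lronac2isis from=M111571816LE.IMG to=M111571816LE.cub
> spiceinit from=M111571816LE.cub
> cam2map from=M111571816LE.cub to=M111571816LE.map.cub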

The big question is: why didn't the disparity completely disappear after map projection? This is the fault of the 3D data used during map projection. By default, ISIS map projects against LOLA data. At the equator the LOLA data is very sparse and doesn't do a good job of modeling the mountains and hills. Luckily, a new 3D data source has become available: the LRO-WAC Global DTM created by the DLR. This is a dense 100 meters/px 3D model and makes a much better product to map project imagery on. It's still not perfect, since it is about two orders of magnitude lower resolution than our LRO-NAC imagery, but it is still better than LOLA data and will help reduce our disparity search range.

The first trick is getting ISIS to map project against new DEM data. This is not entirely easy and requires some planning.

Download and prepare a WAC DTM tile

The example images I'm using for this article occur at 3.64E, 26N, so I've only downloaded the tile of the global DTM covering the northern latitudes and the first 90 degrees of longitude. This link is where you can download WAC DTM tiles yourself. You then need to convert the tile into a cube format that ISIS can understand. This involves a file conversion, a conversion to radial measurements, and finally attaching some statistical information to the DTM. Here are the commands I used; I strongly recommend reading the ISIS documentation to learn exactly what each command is doing.

> <download WAC_GLD100_E300N0450_100M.IMG>
> pds2isis from=WAC_GLD100_E300N0450_100M.IMG to=WAC_GLD100_E300N0450_100M.cub
> algebra from=WAC_GLD100_E300N0450_100M.cub to=WAC_GLD100_E300N0450_100M.lvl1.cub operator=unary A=1 C=1737400
> demprep from=WAC_GLD100_E300N0450_100M.lvl1.cub to=WAC_GLD100_E300N0450_100M.lvl2.cub

I'm using the algebra command to convert the values in the WAC DTM tile from offset-from-datum values (where zero equals the datum radius of 1,737.4 km) to radius values (where zero equals the center of the Moon), which are the kind of values ISIS needs for a shape model it will project images onto. There is probably a better way of doing this by just editing a metadata offset somewhere; unfortunately I don't know how. The demprep step adds a statistics table. You can open the result in qview to check that the positioning is correct.
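To make the algebra step concrete: with operator=unary, the output is, as I understand it, A * input + C. So a pixel sitting 1500 meters below the datum (value -1500) becomes 1 * -1500 + 1737400 = 1735900 meters, its radius from the Moon's center.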

Attaching and projecting against a custom shape model

When users 'spiceinit' a file, they are attaching a lot of metadata to the image. Everyone knows that they are adding the camera model and the spacecraft's ephemeris data, yet they are also attaching what ISIS calls a shape model, which is actually a DEM/DTM. Map projecting against a new shape model requires redoing the 'spiceinit' step. Here's an example, assuming you've already completed the 'lronac2isis' step.

> spiceinit from=M111571816LE.cub shape=USER model=WAC_GLD100_E300N0450_100M.lvl2.cub

At that point you can run cam2map yourself, but I strongly recommend just using the cam2map4stereo.py script provided by ASP to map project your input for the stereo session. On the left is an example of LRO-NAC imagery draped over the WAC DTM. The differences are very fine; you might want to toggle back and forth between the WAC-projected imagery and the LOLA-projected imagery.
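A minimal invocation, assuming both frames have been spiceinit'd against the WAC shape model as shown above, would look something like this:

> cam2map4stereo.py M111571816LE.cub M111578606LE.cub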

Stereo Pipeline Results

Once you've map projected against the LRO-WAC DTM, you've created images that are much better aligned to each other. If you run everything in full automatic mode, it will attempt a search of about 600 px by 100 px. That's half the search range of the previous cam2map results against LOLA! This means faster processing time and less required memory during the integer correlation step. On the right is my final result. Oddly, this image exposes a problem I've never seen before: the top half of the image exhibits 'stair step' like features. Anyway, this is one interesting idea for making LRO-NAC processing faster, and I hope it allows more people to process these images for 3D.
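For completeness, the correlation and gridding steps after map projection are the usual ASP commands. A sketch, assuming cam2map4stereo.py produced the .map.cub files and using an output prefix of my own choosing:

> stereo M111571816LE.map.cub M111578606LE.map.cub apollo15/apollo15
> point2dem apollo15/apollo15-PC.tif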

Finished 3D from Apollo

Render of a DIM and DEM map from Apollo Metric Images

It's been a long 3 years in the making, but today I can finally say that I have finished my 3D reconstruction from the Apollo Metric cameras. After tens of thousands of CPU hours and several hundred liters of soda, the mapmakers at the Intelligent Robotics Group have managed to produce an image mosaic and Digital Elevation Map. The final data products are going up on LMMP's website for scientists to use. I encourage everyone else to take a look at the KML link below instead.

IRG’s Apollo Metric/Mapping Mosaic

It's so pretty! But don't be sad; IRG's adventure with Apollo images doesn't end here. Next year we'll be working on a new and fancier image mosaic called an Albedo Map. Immediately after that, our group will be working with the folks at USGS's Astrogeology Science Center to include more images in the Apollo mosaic. In that project we'll include not only the images looking straight down at the Moon, but also the images that look off toward the horizon.

All of the above work was produced using our open source libraries Vision Workbench and Ames Stereo Pipeline. Check them out if you find yourself producing 3D models of terrain. At the very least, our open source license allows you to look under the hood and see how we did things so that you may improve upon them!