Let me give you a sneak peek at a cool project I just finished. During the later Apollo missions of the 1970s, NASA came to understand that its funding would be cut back. In an attempt to extract as much science as possible from the last few missions to the Moon, it increased the astronauts' time on the surface, gave them a car, and added a science bay to the orbiting spacecraft. Inside that science bay (called the SIM) were two repurposed spy cameras. One was the Apollo Metric Camera, whose 1400 images from Apollo 15 are seen projected above. Recently ASU has been digitally scanning this imagery, which has allowed my colleagues and me to create a 3D model of a large section of the near side of the Moon and a beautifully stitched mosaic.
Besides being pretty pictures, I'm proud to say that all of this work was created with open source software that NASA has produced and that is currently available on GitHub. Vision Workbench and Stereo Pipeline are the two projects that made this possible. The process is computationally expensive and not reproducible at home, but a university or company with access to a cluster could easily recreate our results. So what does the process look like?
- Collect Imagery and Interest Points (using ipfind and ipmatch).
- Perform bundle adjustment to solve for the correct locations of the cameras (using isis_adjust).
- Create 3D models from stereo pairs using correct camera models (with stereo).
- Create terrain-rectified imagery from original images (with orthoproject).
- Mosaic imagery and solve for exposure times (using PhotometryTK).
- Export imagery into tiles or KML (with plate2dem or image2qtree).
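The steps above can be sketched as a short shell session. This is only an illustration of the order of operations, not a working recipe: the cube file names are placeholders, the commands assume ISIS-ingested Apollo Metric Camera images, and the exact flags and output names vary between Stereo Pipeline releases.

```shell
#!/bin/sh
# Hypothetical sketch of the pipeline; file names are placeholders and
# command options may differ between Vision Workbench / Stereo Pipeline versions.

# 1. Detect interest points in each frame, then match them across overlaps.
ipfind  AS15-M-1134.cub AS15-M-1135.cub
ipmatch AS15-M-1134.cub AS15-M-1135.cub

# 2. Bundle-adjust the camera models so they agree with the matched points.
isis_adjust AS15-M-1134.cub AS15-M-1135.cub

# 3. Triangulate a 3D model from the stereo pair with the adjusted cameras.
stereo AS15-M-1134.cub AS15-M-1135.cub results/pair

# 4. Drape the original image over the resulting terrain model.
orthoproject results/pair-DEM.tif AS15-M-1134.cub ortho-1134.tif

# 5. Mosaic the orthoimages and solve for exposures with PhotometryTK
#    (tools not yet in the binary release), then...

# 6. ...export the mosaic as a KML hierarchy for viewing in Google Earth.
image2qtree -m kml ortho-1134.tif
```

The same pattern repeats for every overlapping pair in the 1400-image set, which is where the cluster comes in.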
This long process is not entirely documented yet, and some of the tools have not been released in the binary version of Stereo Pipeline. Still, for the ambitious, the tools are already there. Better yet, we'll keep working to improve them, as IRG is chock-full of ideas for new algorithms and new places to apply these tools.