Interesting GDAL Usage Examples

Despite first appearances, GDAL is way more than an image I/O library. I routinely use it to resize, transcode, and reproject images. But did you know that it can also draw contour lines, write KML, and orthorectify RPC images? Even if you did, I think it’s worth documenting here because I keep forgetting.
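For those everyday tasks, a minimal sketch looks something like the following (the filenames are made up; substitute your own):

# shrink to 50% and transcode a GeoTIFF to JPEG
gdal_translate -outsize 50% 50% -of JPEG input.tif preview.jpg
# reproject to plain lat/lon (EPSG:4326)
gdalwarp -t_srs EPSG:4326 input.tif reprojected.tif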

Render Contour Lines

Publishing DEMs created by ASP is tricky because the easiest way to visualize height change is through a color gradient, and that loses its usefulness when publishing to a conference where everything prints in gray scale. A snazzier option is a hill-shaded render of the DEM with contour lines plotted on top. You can see an example on the right and the commands below:

gdaldem hillshade <DEM> output.tif -z 0.000012
gdal_contour -i 50 <DEM> contour
gdal_rasterize -burn 0 -l contour contour output.tif

The “-z” value on gdaldem hillshade requires some playing around to figure out what is best for your image. The “-i” value on gdal_contour is the interval between lines in units of height defined by the DEM (for ASP products, it’s meters).
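If you’re not sure what interval or z-factor makes sense, one quick sanity check is to look at the DEM’s height range first; a hypothetical-filename sketch:

# print band statistics so you can pick a sensible contour interval
gdalinfo -stats my-DEM.tif | grep -i minimum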

KML Outline of a Cube File’s footprint

Ever wanted to provide context for your ISIS cube file image on Google Earth? These commands produce the KML rendering shown on the right. The first two lines are ISIS commands.

footprintinit from=<ISIS Cube File>
isis2gml from=<ISIS Cube File> to=output.gml label=footprint
ogr2ogr -f KML output.kml output.gml

Orthorectify an RPC image on a DEM

This one doesn’t apply to planetary imagery, but most Earth images tend to have an RPC camera model attached to them. Download a DEM from USGS’s NED, and then with GDAL you can drape your source imagery over the DEM.

 gdalwarp -of GTiff -co TILED=yes -co COMPRESS=lzw -r cubicspline \
    -rpc -to RPC_DEM=<your DEM> <RPC image> output.tif -dstnodata 0

Building Ames Stereo Pipeline against ISIS on Ubuntu 10.04

This is a guide for advanced bearded users. If you don’t have a beard, don’t try this at home! These instructions will also work for OS X, minus the package manager.

Ames Stereo Pipeline is an open source collection of tools for building 3D models from NASA’s planetary satellites. Our software is able to do this by depending on USGS’s ISIS for all the camera models. That saves me a lot of time because I no longer have to program custom models for the many cameras that are out there (MOC, HiRISE, MDIS, LROC, ISS, etc.). The downside is that building ISIS yourself is next to impossible, as they expect you to use their binary releases. This means that to compile against their binaries, we must recreate their development environment, down to every third-party library.

There are some base tools that you need installed on your Ubuntu Box.

sudo apt-get install g++ gfortran libtool autoconf   \
   automake ccache git-core subversion cmake chrpath \
   xserver-xorg-dev xorg-dev libx11-dev libgl1-mesa-dev \
   libglu1-mesa-dev libglut3-dev

Building an ISIS environment is incredibly difficult to do by hand, never mind the difficulty of sanitizing the bash shell so that it doesn’t find libraries in odd places. So a co-worker of mine created an awesome collection of scripts to make this easier. It’s called Binary Builder and it’s available on GitHub. The commands below check out the scripts from GitHub and then run them. What BB is doing in this first step is downloading and compiling all of the dependencies of Vision Workbench and Ames Stereo Pipeline. This means we’re building Boost, GDAL, Zip, OpenSceneGraph, LAPACK, and many others. As you can imagine, this step takes a long time.

cd ~; mkdir projects; cd projects
git clone https://github.com/NeoGeographyToolkit/BinaryBuilder.git
cd BinaryBuilder
./build.py --dev-env

Most likely things will die at this point. Here is where your bearded powers are to be applied! Good luck. When you fix the bug, or think you’ve worked it out, use the following command to quickly restart.

./build.py --dev-env --resume

You’ll know you’ve had a completely successful ./build.py session when it prints “All done!” and gives you a list of the environment variables used. Next, let’s clean up by making a BaseSystem tarball.

./make-dist.py --include all --set-name BaseSystem last-completed-run/install

This tarball will house all the headers, libraries, and a copy of ISIS that you need to proceed. It will be your lighthouse when everything else fails. You can also share this tarball with other users who have similar systems. Anyway, it’s time to deploy this BaseSystem tarball into a permanent position.

mkdir ~/projects/base_system
./deploy-base.py BaseSystem-*.tar.gz ~/projects/base_system

Installing Vision Workbench

You’re ready for step 2. This is all pretty straightforward. However, you should notice that the deploy-base script produced config.options files for both Vision Workbench and Stereo Pipeline. A config.options file is just another way to feed arguments to ./configure. When we install Vision Workbench, the base options in config.options.vw should be fine for us.

cd ~/projects
git clone https://github.com/visionworkbench/visionworkbench.git
cd visionworkbench
cp ~/projects/base_system/config.options.vw config.options
./autogen && ./configure
make -j <N Processors>
make install
make check -j <N Processors>

All unit tests should pass at this point. If not, bearded you knows what to do.

Installing Ames Stereo Pipeline

cd ~/projects
git clone https://github.com/NeoGeographyToolkit/StereoPipeline.git
cd StereoPipeline
cp ~/projects/base_system/config.options.asp config.options

We’re going to take a moment to deviate here. At this point you will need to make some modifications to your copy of ‘config.options’ for Ames Stereo Pipeline. You need to set the variable ‘VW’ equal to the install (prefix) path that you used. In this example, it should be set to ‘~/projects/visionworkbench/build’. You can also take this time to flip other bits that you find interesting. For example, there’s an ENABLE_MODULE_CONTROLNETTK variable that you can set to ‘yes’, which enables prototype utilities for manipulating control networks. A sketch of the edits is shown below. Once you’re done playing around, finish your build of ASP.
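The relevant lines in config.options would end up looking something like this (the paths follow this example’s layout; adjust to wherever you actually installed Vision Workbench):

# point ASP at the Vision Workbench install prefix from the previous step
VW=~/projects/visionworkbench/build
# optional: enable the prototype control network utilities
ENABLE_MODULE_CONTROLNETTK=yes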

cd ~/projects/StereoPipeline
./autogen && ./configure
make -j <N processors>
make install

You can also run ‘make check’; you just need to have your ISIS3DATA set up (a minimal sketch is below). You can fall back to your own install of ISIS and everything should work fine. If it wasn’t clear before, you’ll find the executables in “~/projects/visionworkbench/build/bin” and “~/projects/StereoPipeline/build/bin”. That’s all folks, I hope everything worked out okay.
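For that ISIS3DATA bit, something like the following works (the path is just a stand-in for wherever your ISIS data actually lives):

# point the test suite at your ISIS data area before running the checks
export ISIS3DATA=~/projects/isis3data
make check -j <N processors>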

MER Data Introduction

Getting data from NASA’s Planetary Data System (PDS) can be a little intimidating. The most important thing to understand about the MER data is that it is sectioned into individual sites that the rovers visited during their trip. These sites are centered on interesting features like a bay in a large crater or a particularly large boulder. The sites are further subdivided into sols (Martian days) spent at the site. When we search for images, we limit them by both site and sol.

Gathering Data

It is now time for us to select a site that we want to render in 3D. I find MER’s Analyst’s Notebook to be a good tool for this task. Here’s what I did:

  1. Go to: http://an.rsl.wustl.edu/mer/
  2. Click on “Opportunity”.
  3. To find a site: click on the “Map” icon on the toolbar, and then click on “Traverse Map”.

Each one of the black dots represents a site location. The number after the slash is a sol number. I compared this map to Google Earth (GE) and decided that I wanted to render “Cape Agulhas”. You can do that too in GE if you click on the flag icon for a MER and select “load rover way points”. Just note that the map on GE doesn’t show as much information as the traverse map on Analyst’s Notebook.

  1. On the “Traverse Map”, click “91/1673”.
  2. Select only Navcam data products on the left.
  3. In the Data Products drop-down box, select “Show for sol 1674”.
  4. Click “Redraw Map”.

From here we can see that the Navcam did a full panorama. We are now ready to start downloading data from PDS. You can download from Analyst’s Notebook; however, I was only able to download a single image at a time. On NASA PDS we can instead download a ‘wget’ script that will fetch all the Navcam images for us in one go.

  1. Go to NASA PDS’s image search for MER, http://pds-imaging.jpl.nasa.gov/search/search.html#QuickSearch
  2. On the left, under “Select Instrument(s):”, select “NAVCAM”.
  3. Next to “Instrument Host ID”, select “MERB/MER1/Opportunity”.
  4. Next to “Product Type”, select “EDR”.
  5. Next to “Image Type”, select “Regular”.
  6. Next to “Eye”, select “LEFT”.
  7. Next to “Planet Day Number”, type 1674 for both the Min and Max text boxes.
  8. On the left, click “Get Results”.

You now have a listing of all the left-eye NAVCAM EDRs. We want all of these, so on the left select ‘WGET’ under download products, then click download. Move your downloaded ‘atlas_wget_script’ to your work directory. Here is how you start the download in the terminal:

cd $YOUR_WORK_DIR
source atlas_wget_script

Later on I can show how to produce 3D maps from this imagery. However, this seems like a good start.

Fly by night Cassini ISS processing

Anytime I want to process Cassini ISS data, I always run into problems. This is largely because the documentation is lacking. Today is the day that is going to change. Here are my notes on how to extract ISS data of Iapetus, all for the low cost of free. I assume you have USGS’s ISIS3 installed.

Iapetus, file N1483152391_1

Downloading from NASA PDS

As usual, all NASA data can be pulled for free from the PDS (Planetary Data System). Since I’m looking for Iapetus, I click ‘Saturn’ and then ‘Cassini Image Search’. This brings you to the Cassini Image Quick Search page. On the left side of the page, select the ISS instrument. Then select EDR under product type; EDR products are raw images, while RDR products are processed ones like maps. Finally, under target name, select Iapetus. Click ‘Get Results’ on the far right. At the time of writing, there were 4025 records, or images, available.

Here’s where things get a little messy. A lot of these images are of Iapetus from very far away, so far away that they wouldn’t be good for my projects. I’d like to sort these images by their distance. On the far right, under SELECT PARAMETERS FOR REPORT OR TABLE COLUMNS, I can add the column ‘Target Distance’. On the new column, I can click the up arrow and hopefully it will sort the images from closest to farthest. However, it doesn’t quite work: in the middle of the page, PDS says “508 of 4025” products, so the page is only sorting about an eighth of the total images of Iapetus. To correct this, keep clicking the ‘Get More’ button until it says “4025 of 4025” products.

At this point we are ready to start downloading products. On the far right of the page, download the CSV file of the records that were found, along with the WGET product. When you open up the CSV file you’ll notice that the images aren’t kept ordered by distance from target. Lame, I know. However, we use Linux and we can pull ourselves out of this. Here are the bash commands I used to sort the file. Gurus can probably do a better job of this than me.

cat atlas_report.csv | awk -F "," '{print $3 $4}' | sort -n -k 2 > sorted_files

If you open up sorted_files in your favorite text editor, you can clean up the stray lines that were mis-sorted. I chose to keep the files with distances marked ‘-1.0e3’. I also hand-deleted the lines where the distances were greater than 300 km out from the moon. This left me with 758 names of images that are pretty close to Iapetus. Now we need to pull these 758 names from atlas_wget_script so that I only download the images I want. However, when you look in that script, you’ll notice that the images are named slightly differently from what we saw in atlas_report.csv. Here’s an example:

1_N1466586085.118 versus N1466586085_1.IMG

Here’s how I filtered sorted_files down to just that mantissa-like bit I wanted, and then pulled the corresponding lines out of atlas_wget_script.

cat sorted_files | awk '{print substr($1,3,11)}' > wanted_keywords
cat atlas_wget_script | grep -f wanted_keywords > wanted_get_script

Hooray! Finally we are ready to really download imagery files! If you look inside wanted_get_script, you’ll see the download commands for all the images of Iapetus that I want. To start downloading:

source wanted_get_script

Processing with ISIS 3

You probably think it’s all kisses and hugs from here. You have a directory full of IMG and LBL_label files, do ya? Well, the command we want to use is ciss2isis. Don’t let me hold you back: read their instructions and give it a shot on one of the images you downloaded.

ciss2isis from=N1483150900_1.LBL_label to=test.cub

What, it broke?

**I/O ERROR** Unable to read PVL file [/Volumes/Data/Saturn/CASSINI_data/Iapetus/temp/N1483150900_1.LBL_label]
**PVL ERROR** Error in pvl on line [166]
**PVL ERROR** Unable to read keyword [DESCRIPTION    = "For the packet from which this telemetry header was Extracted:  the first line of this packet is a continuation of a line begun in the previous packet. OBJECT = COLUMN NAME = NULL_PADDING DATA_TYPE = MSB_UNSIGNED_INTEGER START_BYTE = 61 BYTES = 475 END_OBJECT = COLUMN END_OBJECT = TELEMETRY_TABLE OBJECT = LINE_PREFIX_TABLE INTERCHANGE_FORMAT = BINARY ROWS = 256 COLUMNS = 7 ROW_BYTES = 24 ROW_SUFFIX_BYTES = 512 OBJECT             = COLUMN NAME               = LINE_NUMBER DATA_TYPE          = LSB_UNSIGNED_INTEGER START_BYTE         = 1 BYTES              = 2 DESCRIPTION        = "The image line number of this record.  Maintained at]
**PVL ERROR** Keyword has extraneous data [The image line number of this record.  Maintained at] at the end

This looks darn scary. However, if you download the original LBL files from the PDS manually, without using atlas_wget_script’s version, the problem becomes clear. The download script was accessing a new label file that had a poorly inserted ‘EXTENDED_ISS_SCIENCE_HEADER’. This was probably done by some batch-processing job at PDS or the Cassini imaging lab, and it managed to delete some of the lines defining the ends of objects. That causes the ISIS3 command ciss2isis to freak out and die.

Don’t worry, though; this can be corrected. I have written a patch for these LBL_label files that will make them usable. In a new file, write the following contents.

--- N1483150900_1.LBL_label     2010-10-19 19:46:20.000000000 -0700
+++ N1483150900_1.LBL_corr      2010-10-19 19:45:01.000000000 -0700
@@ -99,6 +99,7 @@
BIT_DATA_TYPE  = BINARY
START_BYTE     = 1
BYTES          = 60
+END_OBJECT     = COLUMN

 OBJECT          = BIT_COLUMN
NAME           = CAMERA
@@ -144,7 +145,8 @@
BITS           = 1
DESCRIPTION    = "For the packet from which this telemetry header was
Extracted:  the first line of this packet is a
-                    continuation of a line begun in the previous packet.
+                    continuation of a line begun in the previous packet."
+ END_OBJECT   = BIT_COLUMN
OBJECT = COLUMN
NAME = NULL_PADDING
DATA_TYPE = MSB_UNSIGNED_INTEGER
@@ -209,6 +211,8 @@
DATA_TYPE          = LSB_UNSIGNED_INTEGER
START_BYTE         = 9
BYTES              = 2
+END_OBJECT         = COLUMN
+
END_OBJECT = LINE_PREFIX_TABLE
OBJECT = IMAGE
      LINES = 256

I saved the above as correction.patch. It can then be applied to all the LBL_label files to make them work.

echo *label | xargs -n1 echo | xargs -n1 -I{} patch {} correction.patch

You will now find that ciss2isis works. Here’s how I do that with multiple processors:

echo *label | xargs -n1 echo | awk -F "." '{print $1}' | xargs -n1 -I{} -P8 ciss2isis from={}.LBL_label to={}.cub

You now have a directory of cub files. You can view them with the qview command or export them with isis2std (see the sketch below). There’s still more that needs to be done in order to clean up this imagery. I won’t tell you today, but I can hint at what the next few steps will be. Have fun!
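For example, a quick look and an export might go something like this (using one of the cubes from this batch):

# interactively inspect a cube
qview N1483150900_1.cub
# export it to a plain PNG for quick sharing
isis2std from=N1483150900_1.cub to=N1483150900_1.png format=png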

Iapetus plus stars

Installing Vision Workbench on Ubuntu

It seems everyone runs into trouble when installing Vision Workbench. Here’s a quick outline of how to get my favorite software onto my second favorite platform. We’ll start by downloading all the dependencies through Ubuntu’s package manager, APT.

sudo apt-get install build-essential automake libtool \
             libboost1.42-all-dev libproj-dev git \
             git-completion liblapack-dev ccache

The above command gets you most of the way. There is, however, one more dependency that we require: GDAL, a wonderful library for interfacing with many different file types and their geographic information. Ubuntu only provides an old 1.6 version, whereas Vision Workbench requires the latest and greatest 1.7.

Getting the newer version of GDAL requires an additional repo provided by UbuntuGIS. This can be achieved easily by doing the following:

sudo apt-get install python-software-properties
sudo add-apt-repository ppa:ubuntugis/ubuntugis-unstable
sudo apt-get update
sudo apt-get install libgdal1-dev gdal-bin

If you happen to be one of my unfortunate interns (Muhawahaha) working from a laptop and stuck behind a government firewall, here’s what Josh has to say about that:

However, if you are on the ARC-WLAN-GUEST network, port 11371 will most likely be blocked, which will cause the above command (and programs like ping) to time out. If this is the case, first add the line “http://ppa.launchpad.net/ubuntugis/ubuntugis-unstable/ubuntu lucid main” to your repository. Then go to the website http://wwwkeys.eu.pgp.net/ and search for the key 0x314DF160. Copy the text that appears into a local file, then run apt-key add filename-here. Now you should be able to get the latest GDAL: run apt-get update and then install libgdal1-dev.

Alright! We are now ready to get Vision Workbench, which as of May is available on GitHub. If you want the last stable version, download the source tarball from the official site. But instead I’ll show you how to download the current development code with Git. If you don’t have any plans of ever trying to modify Vision Workbench, you can do the following:
git clone git://github.com/visionworkbench/visionworkbench.git \
          VisionWorkbench

Others with ambitions of contributing back to the software will need to create a GitHub account and then fork Vision Workbench. Before you create a GitHub account, if you are unfamiliar with Git, you should read the first two chapters of Pro Git. Proceed to GitHub and follow all the instructions until you have your SSH keys set up. Then follow their instructions for how to fork a project. In the future, when you want to contribute back to the group, you will do so with a pull request; a rough sketch of the fork workflow is below.
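Roughly, working against a fork looks like this (the remote name and the <your-username> placeholder are things you’d substitute yourself):

# add your fork as a remote alongside the main repository
git remote add myfork git@github.com:<your-username>/visionworkbench.git
# push your work to your fork, then open a pull request on GitHub
git push myfork master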

From inside your checked-out copy of Vision Workbench, in the top directory, create a config.options file with the following contents.

ENABLE_DEBUG=yes
ENABLE_OPTIMIZE=yes
PREFIX=`pwd`/build/
ENABLE_CCACHE=yes
ENABLE_RPATH=yes
ENABLE_MODULE_MOSAIC=yes
ENABLE_MODULE_CAMERA=yes
ENABLE_MODULE_CARTOGRAPHY=yes
HAVE_PKG_GDAL=/usr
PKG_GDAL_CPPFLAGS=`gdal-config --cflags`
PKG_GDAL_LIBS=`gdal-config --libs`

Finally you are ready to compile and install Vision Workbench.

./autogen
./configure
make install

Vision Workbench is now installed in the build directory inside where the source code is checked out. You’ll probably want to add the build/bin directory to your PATH environment variable, as sketched below.
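A one-liner for that, assuming the source was cloned into ~/VisionWorkbench (adjust to match your checkout):

# make the freshly built tools visible on the command line
export PATH=$HOME/VisionWorkbench/build/bin:$PATH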

If you still have spare cycles, you should run the test suite to find out if Vision Workbench is installed correctly. This is achieved with:

make check

You are now finished. It is now time to party.