Patch Match Stereo

Three months ago, in my Semi Global Matching post, I mentioned I would have a follow-up post about another algorithm I was interested in. That algorithm is PatchMatch, which is, in my opinion, a better fit for satellite processing. It was first written for hole filling [1] and later applied to stereo [2]. Unlike Ames Stereo Pipeline’s currently implemented hierarchical discrete correlator, PatchMatch easily extends to high-dimensional searches, and its run time depends on the number of iterations and the image size, not on the image content, which I hope translates into predictable run times. Unfortunately my current implementation is really slow and has several faults. However, my hope is that eventually we’ll make it fast and that its predictability will let our users budget for the CPU cost of creating a DEM while still improving quality.

How it Works

The PatchMatch algorithm is actually very simple. The disparity image is seeded with uniform noise: random guesses for what the disparity should be. Since every pixel is another guess, and the image is quite a bit larger than the search range we are looking over, a handful of these guesses are bound to be close to correct. We then propagate disparities with low cost to the neighboring pixels. The cycle repeats by adding more noise to the current propagated disparity, but at half the previous amplitude, and continues until everything converges. Surprisingly, a recognizable disparity appears after the first iteration.
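To make this concrete, here is a minimal sketch of that loop in plain C++. This is my own illustration, not my actual implementation or code from the papers: it uses a tiny synthetic image pair, plain SAD, and disparities rounded to whole pixels to stay short.

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

const int W = 64, H = 64, K = 3;            // image size and kernel half-width
std::vector<float> left_img(W * H), right_img(W * H);

float sample(const std::vector<float>& img, int x, int y) {
    x = std::max(0, std::min(W - 1, x));    // clamped edge extension
    y = std::max(0, std::min(H - 1, y));
    return img[y * W + x];
}

// SAD between a kernel in the left image and a displaced kernel in the right.
float eval_sad(int x, int y, float dx, float dy) {
    int rx = (int)std::lround(dx), ry = (int)std::lround(dy);
    float sum = 0;
    for (int j = -K; j <= K; ++j)
        for (int i = -K; i <= K; ++i)
            sum += std::fabs(sample(left_img, x + i, y + j) -
                             sample(right_img, x + i + rx, y + j + ry));
    return sum;
}

struct Disp { float dx, dy, cost; };

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> u01(0.f, 1.f);
    for (float& v : left_img) v = u01(rng); // synthetic texture
    right_img = left_img;                   // toy pair with zero true disparity

    const float search_x = 8, search_y = 8; // half of the search range
    std::vector<Disp> d(W * H);
    for (int y = 0; y < H; ++y)             // 1. seed with uniform noise
        for (int x = 0; x < W; ++x) {
            float dx = (2 * u01(rng) - 1) * search_x;
            float dy = (2 * u01(rng) - 1) * search_y;
            d[y * W + x] = {dx, dy, eval_sad(x, y, dx, dy)};
        }

    float amp_x = search_x, amp_y = search_y;
    for (int iter = 0; iter < 6; ++iter) {
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                Disp& c = d[y * W + x];
                if (x > 0) {                // 2. propagate from the left neighbor
                    const Disp& n = d[y * W + x - 1];
                    float cst = eval_sad(x, y, n.dx, n.dy);
                    if (cst < c.cost) c = {n.dx, n.dy, cst};
                }
                if (y > 0) {                //    ... and from the top neighbor
                    const Disp& n = d[(y - 1) * W + x];
                    float cst = eval_sad(x, y, n.dx, n.dy);
                    if (cst < c.cost) c = {n.dx, n.dy, cst};
                }
                float rx = c.dx + (2 * u01(rng) - 1) * amp_x;  // 3. random search
                float ry = c.dy + (2 * u01(rng) - 1) * amp_y;
                float cst = eval_sad(x, y, rx, ry);
                if (cst < c.cost) c = {rx, ry, cst};
            }
        amp_x *= 0.5f; amp_y *= 0.5f;       // 4. halve the noise amplitude
    }
    printf("disparity at center: (%.2f, %.2f)\n",
           d[(H / 2) * W + W / 2].dx, d[(H / 2) * W + W / 2].dy);
    return 0;
}

A real implementation also alternates the raster scan direction on each iteration so that good guesses can propagate both ways across the image.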

If you are having a hard time visualizing this, here is a video from the original authors of PatchMatch Stereo.

PatchMatch can be quick in the hole-filling application and probably for stereo from video, where propagating between sequential frames can greatly help convergence. Unfortunately, it is quite slow when applied to pairwise stereo. When performing stereo correlation in ASP 2.3, we group kernel evaluations that share the same disparity value (sometimes called a disparity plane in the literature). This means that overlapping kernel evaluations reuse pixel comparisons, which is a huge speed boost. My implementation of PatchMatch has none of that. It also solves for disparity at floating-point precision. While this gives me very detailed disparity maps, the downside is that my implementation spends 75% of its time performing interpolation. I think for this algorithm to become useful for researchers, I’ll need to discretize the disparities and prerender the input supersampled to avoid repeated interpolation calculations. The sketch below illustrates the disparity-plane reuse that PatchMatch gives up.
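Again my own illustration rather than ASP’s actual code: with the disparity fixed, the per-pixel absolute differences are computed once, and every overlapping kernel sum then comes out of an integral image in constant time.

#include <cmath>
#include <vector>

// SAD cost of a (2K+1)^2 kernel at every pixel, for one fixed disparity
// (dx, dy). The integral image lets each kernel reuse its neighbors' pixel
// comparisons, which is the speed trick PatchMatch gives up.
std::vector<float> plane_costs(const std::vector<float>& left,
                               const std::vector<float>& right,
                               int cols, int rows, int dx, int dy, int K) {
    auto clamp = [](int v, int lo, int hi) {
        return v < lo ? lo : (v > hi ? hi : v);
    };
    // Integral image of per-pixel absolute differences at this disparity.
    std::vector<double> I((cols + 1) * (rows + 1), 0.0);
    for (int y = 0; y < rows; ++y)
        for (int x = 0; x < cols; ++x) {
            int sx = clamp(x + dx, 0, cols - 1), sy = clamp(y + dy, 0, rows - 1);
            double ad = std::fabs(left[y * cols + x] - right[sy * cols + sx]);
            I[(y + 1) * (cols + 1) + (x + 1)] = ad
                + I[y * (cols + 1) + (x + 1)]
                + I[(y + 1) * (cols + 1) + x]
                - I[y * (cols + 1) + x];
        }
    // Each kernel sum is now four lookups instead of (2K+1)^2 operations.
    std::vector<float> cost(cols * rows);
    for (int y = 0; y < rows; ++y)
        for (int x = 0; x < cols; ++x) {
            int x0 = clamp(x - K, 0, cols - 1), y0 = clamp(y - K, 0, rows - 1);
            int x1 = clamp(x + K, 0, cols - 1), y1 = clamp(y + K, 0, rows - 1);
            cost[y * cols + x] = float(
                  I[(y1 + 1) * (cols + 1) + (x1 + 1)]
                - I[y0 * (cols + 1) + (x1 + 1)]
                - I[(y1 + 1) * (cols + 1) + x0]
                + I[y0 * (cols + 1) + x0]);
        }
    return cost;
}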

I have one final observation to make about the PatchMatch algorithm. In practice, PatchMatch produces results that are very similar to performing a brute-force search over the entire search range, where the winner (lowest cost) takes all. That is different from ASP’s hierarchical search, which at each pyramid level falls into local minima; only the hierarchical structure gives it any chance of finding the global minimum. What this means for PatchMatch is that we need a cost metric whose global minimum is the correct match. For narrow-baseline stereo in a controlled lighting environment, the fast Sum of Absolute Differences (SAD) fits the bill. But in the cruel realities of satellite imagery, the only solution is Normalized Cross Correlation (NCC). Unfortunately, NCC is quite a bit slower to evaluate than SAD. Also, in cases where SAD worked, NCC performs worse (probably due to being sensitive to the calculation of the mean?).
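For reference, here is a minimal NCC kernel evaluation. This is my own sketch, not ASP’s implementation. Returning 1 - NCC keeps the lower-is-better convention of SAD; the mean and variance normalization is what buys invariance to local gain and bias, and also what makes it more expensive.

#include <cmath>
#include <vector>

// l and r hold the kernel pixels gathered from the left image and the
// displaced right image. Returns 1 - NCC so that lower cost is better.
float eval_ncc(const std::vector<float>& l, const std::vector<float>& r) {
    float ml = 0, mr = 0;
    for (size_t i = 0; i < l.size(); ++i) { ml += l[i]; mr += r[i]; }
    ml /= l.size(); mr /= r.size();        // kernel means
    float num = 0, dl = 0, dr = 0;
    for (size_t i = 0; i < l.size(); ++i) {
        num += (l[i] - ml) * (r[i] - mr);  // covariance
        dl  += (l[i] - ml) * (l[i] - ml);  // left variance
        dr  += (r[i] - mr) * (r[i] - mr);  // right variance
    }
    return 1.f - num / (std::sqrt(dl * dr) + 1e-9f);
}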

Where PatchMatch does poorly

I’ve already mentioned my primary concern, which is the speed of PatchMatch. However, if we view PatchMatch as a replacement for both correlation and BayesEM, it already appears cost effective. A different concern is what happens when the search range’s area is larger than the area of the image being correlated. A real-world example would be rasterizing a 256^2 tile for a WV2 image where the search range is 1400×100: the area of the tile is less than that of the search, so the number of close valid guesses seeded by the initial uniform noise drops dramatically. It might be a good idea to take the interest points that ASP finds and insert them into the initial random disparity guesses that PatchMatch evaluates. This would ensure there is always a good disparity to propagate from.

Previously I mentioned that, in my experiments, PatchMatch seems to behave like a brute-force winner-takes-all algorithm. This means that increasing the search size also decreases quality, because the odds of finding an outlier with lower cost than the actual correct match go up. So maybe abandoning hierarchical search entirely is a bad idea. Another option would be enforcing a smoothness constraint that heavily penalizes the random jumping that characterizes outliers. The enforcement of a smoothness constraint was the driving force behind the power of Semi Global Matching.

Expanding the Algorithm

Since kernel evaluations in PatchMatch are individual problems, with no shared arithmetic like there is in ASP’s stereo correlation, it is easy to bolt advanced algorithms onto PatchMatch. The original PatchMatch Stereo paper [2] mentioned adding parameters to the disparity map so that an affine window could be used for matching. This would improve results on slopes, and I believe it would be a better alternative to map projecting the input imagery beforehand.
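Here is a hedged sketch of that idea, following the plane parameterization described in [2]; the image buffers, sizes, and bilinear sampler are my own scaffolding for illustration.

#include <algorithm>
#include <cmath>
#include <vector>

const int W = 32, H = 32;
std::vector<float> left_img(W * H, 0.f), right_img(W * H, 0.f);

float at(const std::vector<float>& img, int x, int y) {
    x = std::max(0, std::min(W - 1, x));   // clamped edge extension
    y = std::max(0, std::min(H - 1, y));
    return img[y * W + x];
}

// Bilinear sample, needed because planar disparities are fractional.
float bilinear(const std::vector<float>& img, float x, float y) {
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;
    return (1 - fx) * (1 - fy) * at(img, x0, y0)
         + fx * (1 - fy) * at(img, x0 + 1, y0)
         + (1 - fx) * fy * at(img, x0, y0 + 1)
         + fx * fy * at(img, x0 + 1, y0 + 1);
}

struct Plane { float a, b, c; };  // horizontal disparity d(x, y) = a*x + b*y + c

// SAD over a slanted window of half-width K centered at (x, y): every pixel
// q in the window is matched at its own planar disparity instead of one
// shared scalar, so sloped surfaces stay in correspondence.
float eval_slanted(int x, int y, Plane p, int K) {
    float sum = 0;
    for (int j = -K; j <= K; ++j)
        for (int i = -K; i <= K; ++i) {
            float d = p.a * (x + i) + p.b * (y + j) + p.c;
            sum += std::fabs(at(left_img, x + i, y + j) -
                             bilinear(right_img, x + i - d, y + j));
        }
    return sum;
}

PatchMatch then seeds, propagates, and refines the three plane parameters per pixel exactly as it does scalar disparities.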

Another idea mentioned in the paper was adding Adaptive Support Weights (ASW) [3] to each kernel evaluation. ASW weights pixels that match the color of the kernel’s center more heavily, in addition to weighting central pixels more than pixels at the edge. The idea is that pixels of the same color are likely to be at the same disparity. While not always correct, it definitely applies to scenarios where the top of a building is a different color than its sides. The implementations I show in my figures operate only on grayscale imagery, not color like the original paper proposed.
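A minimal version of the weighting, on grayscale values as in my figures (the original paper works in CIELab color), might look like this; gamma_c and gamma_s are illustrative constants I chose, not values from the paper.

#include <cmath>

// Weight of a kernel pixel: high when its intensity is close to the kernel
// center's (same-color pixels likely share a disparity) and when it sits
// spatially near the center. (di, dj) is the offset from the kernel center.
float asw_weight(float center_val, float pixel_val, int di, int dj,
                 float gamma_c = 10.f, float gamma_s = 10.f) {
    float dc = std::fabs(center_val - pixel_val);    // intensity similarity
    float ds = std::sqrt(float(di * di + dj * dj));  // spatial proximity
    return std::exp(-(dc / gamma_c + ds / gamma_s));
}

The kernel cost then becomes the weighted sum of per-pixel differences divided by the sum of weights; the original paper combines weights computed from both the left and the right window.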

In practice, this additional weighting does a good job of preserving edges at depth discontinuities, which is important for cliffs and for buildings. A proposed improvement is geodesic adaptive support weights, which heavily weighs same-color pixels only when they are connected to the central pixel. This fixes problems like a picture of blades of grass, where the blades have the same color but are actually at different disparities.

Wrapping back around to the idea that PatchMatch needs a cost metric that is globally minimal: it might be worthwhile exploring different cost metrics, such as Mutual Information or, if I remember correctly, the fast-evaluating Matching by Tone Mapping (MTM) [4]. A nice side effect would be that correlation becomes completely lighting invariant and could even correlate between infrared and visible imagery.

Additional Comparisons

Here’s a disparity image of NASA Ames Research Center captured from a Pleiades satellite. Whatever they do to pre-process their imagery, I couldn’t epipolar rectify it. The search range given to both ASP and PatchMatch was [-40,-20,70,40]. Despite being noisy, I like how PatchMatch w/ ASW preserved straight edges. The specularity of some of the roof lights on a hangar, however, really threw ASW for a loop.

I also processed a crop from a WorldView-2 image of somewhere in the McMurdo Dry Valleys. This really shot a hole in my argument that PatchMatch would be completely invariant to search range. The search range was [-700,-50,820,50]. I did, however, have to hand tune the search range to get this excellent result from ASP; the automatic detection initially missed the tops of the mountains.

Source Code

I’m being messy because this is my research code, not something production ready. You’ll need to understand VW and Makefiles if you want to compile and modify it.

https://github.com/zmoratto/PatchMatch

Further Research

I’m still diving through papers during my free evenings. I don’t do this at work because I have another project that is taking my soul. There appears to be a good paper called PatchMatch Filter that tries to improve speed through superpixel calculation [5]. There is also an implementation of PatchMatch that adds a smoothness constraint and performs better than the original PatchMatch paper [6]. When it came out, it was the best performing algorithm on the Middlebury dataset; recently, another graph-cut algorithm dethroned it. I will also look at applying PatchMatch as a refinement to a noisy disparity result from ASP 2.3 using a kernel size of 3. A small kernel still evaluates extremely quickly even when the search range is huge.

I’m sorry this post doesn’t end in a definitive conclusion. However, I hope you found it interesting.

References

[1] Barnes, Connelly, et al. “PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing.” ACM Transactions on Graphics 28.3 (2009): 24.
[2] Bleyer, Michael, Christoph Rhemann, and Carsten Rother. “PatchMatch Stereo: Stereo Matching with Slanted Support Windows.” BMVC. Vol. 11. 2011.
[3] Yoon, Kuk-Jin, and In So Kweon. “Adaptive Support-Weight Approach for Correspondence Search.” IEEE Transactions on Pattern Analysis and Machine Intelligence 28.4 (2006): 650-656.
[4] Hel-Or, Yacov, Hagit Hel-Or, and Eyal David. “Fast Template Matching in Non-Linear Tone-Mapped Images.” 2011 IEEE International Conference on Computer Vision (ICCV). IEEE, 2011.
[5] Lu, Jiangbo, et al. “PatchMatch Filter: Efficient Edge-Aware Filtering Meets Randomized Search for Fast Correspondence Field Estimation.” 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2013.
[6] Heise, Philipp, et al. “PM-Huber: PatchMatch with Huber Regularization for Stereo Matching.” 2013 IEEE International Conference on Computer Vision (ICCV). IEEE, 2013.

Code to Semi Global Matching

I’ve received a few emails asking for the code to my implementation of Semi Global Matching, so here it is, in the state I last touched it. This is my icky dev code and I have no plans for maintaining it; in another month I’ll probably forget how it works. Before you look at the code, take to heart that this is not a full implementation of what Hirschmüller described in his papers. I didn’t implement the 16-path integration or the mutual information cost metric.

TestSemiGlobalMatching is my main function. I developed this code inside a GTest framework. [source]

The core of the algorithm is inside the following header and source file links, titled SemiGlobalMatching. As you can see, it is all in VW-ese. Most of the math looks Eigen-like, and there’s a lot of STL in action that I hope you can still read along with. [header] [source]
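For readers new to SGM, here is a sketch of the per-path cost update from Hirschmüller’s paper, written as plain C++ rather than the VW-ese in the links above. This is my own illustration, not an excerpt of the linked code.

#include <algorithm>
#include <vector>

// One step of SGM cost aggregation along a single path r:
//   L_r(p,d) = C(p,d) + min( L_r(p-r,d),
//                            L_r(p-r,d-1) + P1, L_r(p-r,d+1) + P1,
//                            min_k L_r(p-r,k) + P2 ) - min_k L_r(p-r,k)
// prev holds L_r(p-r, *) for the previous pixel on the path, cost holds the
// raw matching costs C(p, *). P1 penalizes one-pixel disparity changes and
// P2 penalizes larger jumps; this is the smoothness constraint that gives
// SGM its power.
std::vector<float> sgm_path_update(const std::vector<float>& prev,
                                   const std::vector<float>& cost,
                                   float P1, float P2) {
    float prev_min = *std::min_element(prev.begin(), prev.end());
    std::vector<float> out(cost.size());
    for (size_t d = 0; d < cost.size(); ++d) {
        float best = prev[d];                                 // same disparity
        if (d > 0)               best = std::min(best, prev[d - 1] + P1);
        if (d + 1 < prev.size()) best = std::min(best, prev[d + 1] + P1);
        best = std::min(best, prev_min + P2);                 // large jump
        out[d] = cost[d] + best - prev_min;  // subtract to keep costs bounded
    }
    return out;
}

The full algorithm runs this recurrence along 8 or 16 paths per pixel and sums the results before picking the winning disparity.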

Also, I haven’t forgotten that I promised to write another article on what I thought was a cool correlator idea. I’m still working on that code base so that I can show off cool urban 3D reconstruction using the satellite imagery I have of NASA Ames Research Center and Mountain View. (I think I have Google’s HQ in the shot.)

Update 3/3/2014

Someone emailed me asking what the MOC imagery is that the code keeps referring to. In this case MOC stands for the Mars Orbiter Camera on board the 1996 Mars Global Surveyor mission. The stereo pair consists of epipolar rectified images of Hrad Vallis that I use regularly; it represents a classic problem for me when performing satellite stereo correlation. A copy of the input imagery is now available here [epi-L.tif][epi-R.tif].

Automatic production of HiRISE DEMs

After scraping Dr. Shane Byrne’s website, I realized that there are only 3110 completed HiRISE stereo pairs. These stereo pairs seem to require about 150 GB of intermediate storage to make their way through ISIS and ASP. Afterwards, the 1 m/px projected results come to about 200 MB each, or 607 GB total. I have 3 TB free on my home server, so I decided to see how many I could process unattended. It’s interesting to see the ASP failure cases, it gives me something to write about every two weeks or so, and I also just really like pretty pictures.

I’m using the latest ASP development branch for this processing. It has some critical improvements over ASP 2.0. We managed to fix the deadlock that was happening in our parallel tile cache system. We also fixed the segfault that sometimes happens in point2dem. Finally, we fixed a bug in our RANSAC implementation that had gone unnoticed since 2006. All of these fixes were critical for making sure the code wouldn’t kill itself. We also fixed our random seed, so that users can always exactly reproduce their results: all ASP sessions should reproduce prior ASP sessions. These improvements will be available in the next release of ASP, along with RPC support.

Here’s my script. I’ll be clearing the backup archives made by pixz (parallel xz) manually every now and then, once I’m sure I no longer have interest in a stereo pair. Also, all of this data is only processed with the parabola subpixel mode. Bayes EM would always improve the result and never harm anything, but since my goal was just to process a lot of HiRISE and see where things broke, I decided to forgo that extra refinement step. Anyways, at the resolution I’m sharing with all of you, parabola still looks pretty sexy. Finally, at no point am I specifying the search range for these stereo pairs; they’re all being processed completely automatically.

#!/bin/bash

# expects ./hirise2dem_v2.sh <left prefix> <right prefix> <user name> <comment>
set -e
set -x

left_prefix=$1
right_prefix=$2
username=$3
comment=$4

hiedr2mosaic.py -h > /dev/null || { echo >&2 "I require ASP to be installed. Aborting."; exit 1; }

function write_w_conv_to_height {
    # Map a fractional position within [min, max] to an absolute height and
    # append a "height,R,G,B,A,label" entry to the QGIS color LUT.
    percent=$1
    min=$2
    max=$3
    string=$4
    log=$5
    height=$(echo "$min + $percent*($max - $min)" | bc | awk '{ printf "%.1f\n", $1}')
    echo $height","$string","$height >> $log
}

function download_hirise_single_img {
    # e.g. ESP_011277_1825_RED0_0.IMG -> phase ESP, orbit block ORB_011200_011299,
    # observation ESP_011277_1825
    phase_prefix=${1:0:3}
    orbit_prefix=${1:4:4}
    obsv_prefix=${1:0:15}
    if [ -e "$1" ]
    then
        echo "  Found " $1 >> ${left_prefix}.log
    else
        echo "  Downloading " http://hirise-pds.lpl.arizona.edu/PDS/EDR/${phase_prefix}/ORB_${orbit_prefix}00_${orbit_prefix}99/$obsv_prefix/$1 >> ${left_prefix}.log
        wget -O $1 http://hirise-pds.lpl.arizona.edu/PDS/EDR/${phase_prefix}/ORB_${orbit_prefix}00_${orbit_prefix}99/$obsv_prefix/$1
    fi
}

function download_hirise_cube {
    if [ -e "$1_IMGBackup.tpxz" ]
    then
        echo "  Found " $1_IMGBackup.tpxz >> ${left_prefix}.log
        xz -d -k -c $1_IMGBackup.tpxz | tar xv
        hiedr2mosaic.py $1*IMG
        rm $1*IMG
    else
        for i in {0..9}; do
            download_hirise_single_img $1_RED${i}_0.IMG
            download_hirise_single_img $1_RED${i}_1.IMG
        done
        hiedr2mosaic.py $1*IMG

        # Compress and backup the IMGs. This will reduce them to 1/3rd
        # their size.
        tar cvf $1_IMGBackup.tar $1*IMG
        pixz $1_IMGBackup.tar
        rm $1*IMG
    fi
}

# Get the files
mkdir -p $left_prefix
cd $left_prefix
# Start the log inside the work directory so all later appends land in one file
echo "Start: " `date` > ${left_prefix}.log
echo ${username} >> ${left_prefix}.log
echo ${comment} >> ${left_prefix}.log
if [ ! -e "${left_prefix}_RED.mos_hijitreged.norm.cub" ]; then
    download_hirise_cube $left_prefix
    echo "Left Download Fin: " `date` >> ${left_prefix}.log
else
    echo "  Found " ${left_prefix}_RED.mos_hijitreged.norm.cub >> ${left_prefix}.log
fi
if [ ! -e "${right_prefix}_RED.mos_hijitreged.norm.cub" ]; then
    download_hirise_cube $right_prefix
    echo "Right Download Fin: " `date` >> ${left_prefix}.log
else
    echo "  Found " ${right_prefix}_RED.mos_hijitreged.norm.cub >> ${left_prefix}.log
fi

# Work out a nice latitude to project with
caminfo from= ${left_prefix}_RED.mos_hijitreged.norm.cub to=caminfo.txt
echo "Preprocessing Fin: " `date` >> ${left_prefix}.log
latitude=$(grep CenterLatitude caminfo.txt | awk -F "=" '{print $2}' | awk '{if ($1 > 0 ) {print int($1+0.5)} else {print int($1-0.5)}}')
srs_string="+proj=eqc +lat_ts=${latitude} +lat_0=0 +lon_0=0 +x_0=0 +y_0=0 +a=3396000 +b=3396000 +units=m +no_defs"

# Process the stereo
stereo ${left_prefix}_RED.mos_hijitreged.norm.cub ${right_prefix}_RED.mos_hijitreged.norm.cub ${left_prefix}/${left_prefix} --stereo ../stereo.default
echo "Stereo Fin: " `date` >> ${left_prefix}.log
point2dem ${left_prefix}/${left_prefix}-PC.tif --orthoimage ${left_prefix}/${left_prefix}-L.tif --tr 2 --t_srs "${srs_string}" --nodata -32767
echo "Point2dem Fin: " `date` >> ${left_prefix}.log
mv ${left_prefix}/${left_prefix}-D{RG,EM}.tif .

# Compress the cube files
pixz ${left_prefix}_RED.mos_hijitreged.norm.cub
echo "Compressed Left Fin: " `date` >> ${left_prefix}.log
pixz ${right_prefix}_RED.mos_hijitreged.norm.cub
echo "Compressed Right Fin: " `date` >> ${right_prefix}.log

# Compress the old Stereo file
tar cvf ${left_prefix}.tar ${left_prefix}
pixz ${left_prefix}.tar
echo "Final Compression Fin: " `date` >> ${left_prefix}.log

# Generate colormap and LUT for QGIS
gdalinfo -stats ${left_prefix}-DEM.tif > tmp.txt
# Snap the DEM min and max to roughly the nearest 100 m for the color LUT
maximum=$(grep MAXIMUM tmp.txt | awk -F "=" '{n=$2/100; printf("%i\n",int(n+0.5))}' | awk '{n=$1*100.0; print n}')
minimum=$(grep MINIMUM tmp.txt | awk -F "=" '{n=$2/100; printf("%i\n",int(n-0.5))}' | awk '{n=$1*100.0; print n}')
rm tmp.txt

echo "# QGIS Generated Color Map Export File" > ${left_prefix}_LUT.txt
echo "INTERPOLATION:INTERPOLATED" >> ${left_prefix}_LUT.txt
write_w_conv_to_height 0.0 $minimum $maximum "255,120,255,255" ${left_prefix}_LUT.txt
write_w_conv_to_height 0.2 $minimum $maximum "120,120,255,255" ${left_prefix}_LUT.txt
write_w_conv_to_height 0.4 $minimum $maximum "120,255,255,255" ${left_prefix}_LUT.txt
write_w_conv_to_height 0.6 $minimum $maximum "120,255,120,255" ${left_prefix}_LUT.txt
write_w_conv_to_height 0.7 $minimum $maximum "255,255,120,255" ${left_prefix}_LUT.txt
write_w_conv_to_height 0.9 $minimum $maximum "255,120,120,255" ${left_prefix}_LUT.txt
write_w_conv_to_height 1.0 $minimum $maximum "255,255,255,255" ${left_prefix}_LUT.txt

# Render hillshade
gdaldem hillshade ${left_prefix}-DEM.tif ${left_prefix}_HILLSHADE.tif
colormap --lut /home/zmoratto/raid/asp_test_data/Mars/HiRISE/LMMP_color_medium.lut -s ${left_prefix}_HILLSHADE.tif --min $minimum --max $maximum ${left_prefix}-DEM.tif

# Render MOLA regional crop and create a difference map
/home/zmoratto/raid/asp_test_data/Mars/HiRISE/tools/generate_mola_secondary ${left_prefix}-DEM.tif

On top of producing a colormap and orthoimage, I decided to render a difference map against MOLA. I’m using the gridded MOLA product that ships with ISIS, bicubically interpolated to 1/4th the resolution of my HiRISE DEM. I also render an over-crop to make a hillshaded render of MOLA for the background of all of my pictures; I think it shows how little information is available in MOLA at the resolution I’m rendering at. The code I used to do this is below and is written against Vision Workbench.

#include <cstdlib>
#include <boost/filesystem.hpp>
#include <boost/scoped_ptr.hpp>
#include <boost/shared_ptr.hpp>
#include <vw/Image.h>
#include <vw/FileIO.h>
#include <vw/Cartography.h>

using namespace vw;
using namespace vw::cartography;
namespace fs = boost::filesystem;

// Read the nodata value recorded in an image on disk.
double taste_nodata_value( std::string const& filename ) {
  boost::scoped_ptr<DiskImageResource> rsrc( DiskImageResource::open( filename ) );
  return rsrc->nodata_read();
}

int main( int argc, char **argv ) {

  // expect ./generate_mola_secondary <dem>

  std::string input_dem( argv[1] );
  std::string isis_prefix( getenv("ISIS3DATA") );

  // Load georeference information
  GeoReference input_georef, mola_georef;
  read_georeference( input_georef, input_dem );
  read_georeference( mola_georef, isis_prefix+"/base/dems/molaMarsPlanetaryRadius0005.cub" );
  DiskImageView<float> input_image( input_dem );
  DiskImageView<float> mola_image( isis_prefix+"/base/dems/molaMarsPlanetaryRadius0005.cub" );

  // Decide on an input box to rasterize (preferably a 30% pad around the input DEM)
  BBox2i output_bbox = bounding_box( input_image );
  output_bbox.expand( 0.15 * ( input_image.cols() + input_image.rows() ) );

  // Downsample by 4 since there is very little content here
  ImageViewRef<float> output_mola = resample( crop( geo_transform( mola_image, mola_georef, input_georef,
                                                                   PeriodicEdgeExtension(), BicubicInterpolation() ), output_bbox), 0.25 );
  GeoReference output_georef = resample( crop( input_georef, output_bbox ), 0.25 );

  // Render output MOLA DEM with same projection as input
  {
    boost::shared_ptr rsrc( new DiskImageResourceGDAL( fs::basename( input_dem ) + "_MOLA.tif", output_mola.format() ) );
    write_georeference( *rsrc, output_georef );
    block_write_image( *rsrc, output_mola,
                       TerminalProgressCallback("asp", "MOLA Context:") );
  }

  // Render difference map where nodata is -1.
  ImageViewRef< PixelMask<float> > input_mask =
    create_mask( input_image, taste_nodata_value( input_dem ) );
  ImageViewRef< float > difference_map =
    apply_mask(abs(output_mola - crop(subsample(crop(edge_extend(input_mask, ValueEdgeExtension<PixelMask<float> >( PixelMask<float>() )), output_bbox),4),bounding_box(output_mola))),-1);
  {
    boost::shared_ptr<DiskImageResource> rsrc( new DiskImageResourceGDAL( fs::basename( input_dem ) + "_MOLADiff.tif", difference_map.format() ) );
    rsrc->set_nodata_write( -1 );
    write_georeference( *rsrc, output_georef );
    block_write_image( *rsrc, difference_map,
                       TerminalProgressCallback("asp", "MOLA Difference:") );
  }

  return 0;
}

Unfortunately for this post, I didn’t correctly record the processing times, since I was still debugging my epic shell script above. In the future, I hope to give better estimates of time. All of this processing is happening on my personal server, since it has finally gotten cold enough in San Jose that I want my heaters on. That server has a single AMD FX 8120 processor with 8 cores and a generous 16 GB of DDR3 1600 RAM. There are also three 2 TB hard drives in a software RAID5 that seem to give me pretty epic read bandwidth. This machine cost me $1200 when I built it 6 months ago and is cheaper than my government supplied laptop. I think the most important bit is that the FX 8120 supports the wide AVX instruction set. I compiled all the code with GCC 4.7 so that I could take advantage of this.

Here’s ESP_011277_1825 and ESP_011910_1825, requested by user cweitz. This first pair also shows my first error: the search range solved inside D_sub wasn’t large enough to see the crater floor. This caused a trickle-down error where the integer correlation took many days, because its search range was defined by white noise and still didn’t contain values that matched the actual disparity. Since it couldn’t zero in, it did wide searches at every pyramid level. Ick.

ESP_014167_1300 and ESP_021947_1300, requested by mcewen. Flat enough that everything just worked perfectly. This pair finished very quickly (about 1.5 days).

ESP_018181_1730 and ESP_018893_1730, requested by rbeyer. Another pair that was pretty flat and just worked.

ESP_018959_1730 and ESP_019025_1730, requested by rbeyer. Again, another perfect stereo pair.

It seems like major improvements could be had if we focus on improving the results generated in D_sub. Since D_sub only takes a few seconds to calculate, using a large search range at that step wouldn’t be too costly, and it would fix ESP_011277_1825’s problem. Then I just need to focus on filtering D_sub so that noise doesn’t propagate to the integer correlator, which works on the full-resolution imagery.

Moar Speed Please!

The number one complaint from ASP users: make it faster. The number two complaint: what is the LE90 of their DEM? I’m only going to take a stab at answering the first one in this post. However, we’ll look at it from the lazy perspective of just changing the compiler, not implementing any new crazy algorithms, because algorithms are hard.

When we build ASP’s binaries, we use an Apple variant of GCC 4.2 on OSX. When we build our Linux binaries, we use GCC 4.4.6 from RHEL6. Those compilers are relatively old; the newest, GCC 4.4.6, was released back in 2010. Since then, new versions of GCC have come out and Clang++ has been maturing. There have even been new processor instruction sets, like the 256-bit-wide AVX.

The first test I performed was simply recording the run time of our unit tests for the Bayes EM subpixel refinement algorithm. I tested on an OSX 10.7.5 system with a Core i7-2675QM and on an Ubuntu 12.04 system with an AMD FX 8120. Both systems support the AVX instruction set. I was able to get newer compilers on the OSX system through MacPorts. For the Ubuntu box, I installed almost everything through Aptitude; the Clang 3.1 binaries came directly from the LLVM website.

Bayes EM timings for Ubuntu 12.04
(cells are Sum Px Error / Avg Time in seconds / Std Dev; a dash marks combinations that were not reported)

Compiler      -O3 -mno-avx               -O3 -mavx                  -O3 -mavx -funsafe         -Ofast -mavx -funsafe
g++-4.4       0.871589 / 2.714 / 0.146   0.871589 / 2.629 / 0.142   0.887567 / 2.629 / 0.172   -
g++-4.5       0.871589 / 2.621 / 0.05    0.871589 / 2.587 / 0.034   0.887566 / 2.669 / 0.183   -
g++-4.6       0.871589 / 2.493 / 0.009   0.871589 / 2.743 / 0.1     0.88774  / 2.542 / 0.173   0.88774  / 2.285 / 0.125
g++-4.7       0.871589 / 2.439 / 0.017   0.871589 / 2.62  / 0.127   0.887566 / 2.581 / 0.111   0.887566 / 2.36  / 0.202
clang++-2.9   segfaulted
clang++-3.0   0.871589 / 2.29  / 0.195   14.2007  / 2.475 / 0.159   14.2007  / 2.44  / 0.102   -
clang++-3.1   0.871589 / 2.434 / 0.215   0.871589 / 2.492 / 0.238   0.87157  / 2.309 / 0.225   -
Bayes EM timings for OSX 10.7.5
(cells are Sum Px Error / Avg Time in seconds / Std Dev; a dash marks combinations that were not reported)

Compiler       -O3                        -O3 -funsafe               -Ofast
g++-4.2        0.871582 / 2.59  / 0.103   0.887563 / 2.52  / 0.111   -
g++-4.4.7      0.871582 / 2.48  / 0.212   0.887563 / 2.27  / 0.027   -
g++-4.5.4      0.871582 / 2.265 / 0.03    0.887564 / 2.187 / 0.032   -
g++-4.7.1      0.871582 / 2.122 / 0.036   0.887777 / 2.005 / 0.02    0.887777 / 1.905 / 0.011
clang++-2.1    0.871582 / 2.193 / 0.021   0.871582 / 2.485 / 0.313   -
clang++-2.9    0.871582 / 2.273 / 0.014   0.871582 / 2.247 / 0.039   -
clang++-3.1    0.871582 / 1.996 / 0.013   0.871586 / 1.91  / 0.014   -
llvm-g++-4.2   0.871582 / 2.149 / 0.008   0.871582 / 2.19  / 0.027   -

I tested Clang 2.9 on Ubuntu, but unfortunately every compile operation resulted in an internal segfault. Clang 3.0 worked most of the time, until I manually turned on ‘-mavx’; that caused no improvement in Bayes EM performance, but it did cause the test code to return bad results. Overall, GCC 4.7 and Clang 3.1 showed about a 20% improvement in speed over GCC 4.4.

I also tested the performance of our integer correlator under different compilers.

Integer Correlator timings for Ubuntu 12.04
(cells are Avg Time in seconds / Std Dev; a dash marks combinations that were not reported)

Compiler      -O3 -mno-avx    -O3 -mavx       -O3 -mavx -funsafe   -Ofast -mavx -funsafe
g++-4.4       8.288 / 0.037   8.136 / 0.03    8.127 / 0.032        -
g++-4.5       8.396 / 0.014   8.267 / 0.024   8.326 / 0.022        -
g++-4.6       5.168 / 0.078   5.094 / 0.022   5.102 / 0.019        5.11  / 0.022
g++-4.7       4.525 / 0.019   4.624 / 0.014   4.669 / 0.012        4.638 / 0.017
clang++-2.9   segfaulted
clang++-3.0   5.147 / 0.053   5.079 / 0.094   5.06  / 0.012        -
clang++-3.1   5.119 / 0.012   5.059 / 0.32    4.949 / 0.016        -
Integer Correlator timings for OSX 10.7.5
(cells are Avg Time in seconds / Std Dev; a dash marks combinations that were not reported)

Compiler       -O3             -O3 -funsafe    -Ofast
g++-4.2        8.973 / 0.096   8.654 / 0.047   -
g++-4.4.7      8.61  / 0.034   8.654 / 0.181   -
g++-4.5.4      8.131 / 0.083   7.67  / 0.033   -
g++-4.7.1      4.044 / 0.024   4.084 / 0.03    3.9 / 0.023
clang++-2.1    5.077 / 0.023   5.072 / 0.029   -
clang++-2.9    5.211 / 0.032   5.192 / 0.013   -
clang++-3.1    4.966 / 0.018   4.973 / 0.027   -
llvm-g++-4.2   5.097 / 0.023   5.113 / 0.021   -

Here, the newer compilers showed significant performance gains. GCC 4.7 and Clang 3.1 both showed a 100% speed improvement over GCC 4.4. Clang also managed to compile the code correctly every time, unlike in the Bayes EM tests. However, I would still recommend sticking with the safe and stable GCC: the 4.7 release got as much or more performance than the Clang compilers, and GCC provides the comfort of knowing that it has always been able to compile VW correctly. Clang still has me on edge, since it has burned me several times by producing segfaulting assembly, likely because it is so aggressive with SIMD.

New CTX Layer in Google Earth

The Google community silently released a few new features for their Mars mode in Google Earth. MER-B, Opportunity, now has an updated traverse path, thanks to a fellow at the Unmanned Spaceflight forum, along with a new base map that Ross created. However, I’m really excited about a new CTX Global Map that is available. Below is a screenshot:

From this high view you can see that CTX hasn’t imaged all of Mars. This is expected: CTX isn’t trying to image all of Mars, it is simply the context imager for HiRISE, meaning that CTX tends to roll tape only when HiRISE does. Nevertheless, the imagery is still beautiful and, in my opinion, shows more detail than the default base layer made from an HRSC composite.

This mosaic was created by simply downloading all CTX data from NASA’s PDS servers and then calibrating and map projecting the imagery with USGS’s ISIS software. The composite was then made with in-house software from our team at NASA Ames Research Center: a Vision Workbench combination plus our Plate Filesystem. It is the same software we used to make global MOC-NA and HiRISE mosaics for Microsoft’s WorldWide Telescope. Unfortunately, I believe those servers have bitten the dust. Regardless, if you have free time, I encourage you to check out the CTX Mosaic and Opportunity traverse in Google Earth. Click the planet, select “Mars”, and then in the bottom left select “CTX Mosaic” under “Global Maps”.

MacPorts Portfiles available for VW and ASP

As of just a minute ago, I’ve committed a Portfile for VW and for ASP in their respective code repositories. The ASP one doesn’t support ISIS or point2mesh; it’s only good for performing stereo on pinhole sessions (MER/Personal Robots) or DG sessions (DigitalGlobe). I hope that eventually MacPorts will accept them into their distribution as vw-devel and asp-devel. Until that day, you can use these Portfiles manually using these instructions.