Satellite Epipolar Rectification

Whenever I meet anyone remotely familiar with stereo algorithms, they tend to ask me why ASP doesn’t have an epipolar rectification step.

“It does, but for pinhole camera models.”
“Why not satellites?”
“Well, they don’t really have epipolar lines but rather epipolar curves, which makes rectification very difficult and not reversible!”

This has been my default response for the last few years. But it has been a long time since I first tried to understand this topic. Back then I thought the solution would require some difficult-to-implement polynomial mapping or rubber-sheet transform [1]. I’ve been with NASA a while now; maybe this time I can better grasp what is required.

Well, I’ve gone back through the papers. Surprisingly, epipolar rectification is still a popular subject to publish on in the photogrammetry and remote sensing world. The new papers share a lot of my complaints, in that proper resampling using the rigorous camera model is hard. However, they show that for the height ranges stereo modeling is interested in, the epipolar curves can be treated as linear. Even more interesting, the papers note that all the epipolar lines are parallel [2][3]. They then attempt to solve for an affine transform for each of the left and right images that reduces Y disparity and minimizes image distortion. Solving for the best affine transforms that don’t warp the images too much is the primary trick of this task, and a few master’s theses have been written on the topic.

Some of the students like to introduce a new camera model, such as the parallel projection model. I’d like to connect the dots and say that this is not a novel idea at all: in the computer science community this is known as the affine camera model. My favorite book has a whole chapter on the topic [4]. Hartley even spends a few pages on solving for the affine fundamental matrix and then gives a teaser image about its application to stereo rectification of high-altitude aerial imagery.


Hartley never goes into how to perform affine epipolar rectification in his book. I’d like to document my attempt at this topic, as I believe it is a lot simpler than the remote sensing papers make it out to be. It is also simpler than some of the computer science papers make it out to be [5].

The general plan of affine epipolar image rectification is best described in three steps.

1.)  Identify the slope of the epipolar lines and rotate them so they are horizontal. I’ll be using image correspondences to solve for the affine fundamental matrix, which will give me the epipolar lines and then the rotation matrices.
2.)  Align and scale the Y axis of the right image to match the left image.
3.)  Align, scale, and solve for an appropriate skew to minimize the amount of searching a stereo algorithm would have to do in the X axis.

Solving for the affine fundamental matrix is an algorithm I took from [4]. An affine fundamental matrix differs from a regular fundamental matrix in that its epipoles are at infinity. This makes the top-left 2×2 block of the matrix zero, so only the remaining 5 elements contain any information. Calling those elements a, b, c, d, and e, we solve for them by forcing the constraint that every correspondence (x, y) ↔ (x′, y′) satisfies:

            [ 0 0 a ]   [ x ]
[x′ y′ 1] · [ 0 0 b ] · [ y ] = 0
            [ c d e ]   [ 1 ]
When you expand that equation out, you’ll see that the bottom-right element ‘e’ is the only one not multiplied by any of the image correspondence measurements. So we solve for the other elements first; ‘e’ could then be recovered by back substitution, but I’m never going to use that element … so let’s forget ‘e’ exists. Solving for the remaining parameters is achieved by subtracting the centroid from the measurements and taking the singular vector with the smallest singular value.
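Here is a minimal NumPy sketch of that solve, following the centering approach from [4]. The actual ASP implementation is C++; the function and variable names here are my own.

```python
import numpy as np

def affine_fundamental(left_pts, right_pts):
    """Solve for the affine fundamental matrix [[0,0,a],[0,0,b],[c,d,e]]
    from N matched points (left_pts, right_pts are Nx2 arrays of (x, y))."""
    # Stack each correspondence as a row [x', y', x, y] and subtract the
    # centroid so that 'e' drops out of the homogeneous system.
    measurements = np.hstack([right_pts, left_pts])
    centroid = measurements.mean(axis=0)

    # a*x' + b*y' + c*x + d*y + e = 0 becomes, after centering, a homogeneous
    # system whose solution is the right singular vector associated with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(measurements - centroid)
    a, b, c, d = vt[-1]

    # Back substitute for 'e' (never used later, but it's one dot product).
    e = -vt[-1].dot(centroid)
    return np.array([[0.0, 0.0, a], [0.0, 0.0, b], [c, d, e]])
```

The result is defined only up to scale, which is fine: every use downstream depends only on ratios of the elements.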

The last row and column of the affine fundamental matrix encode the epipoles of the left and right images: the left epipole lies at infinity in the direction (−d, c), and the right in the direction (−b, a).

Let’s use them to solve for the rotation matrices that flatten those epipolar lines. Here is my derivation, in which I attempted to avoid any trigonometric functions. In hindsight this makes the algorithm harder to read, but at least we avoided a CPU table lookup!
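The trick is that the epipole directions already supply the cosine and sine up to a common scale, so a vector normalization replaces the trig calls. A NumPy sketch of this step (names are my own):

```python
import numpy as np

def epipolar_rotations(F):
    """Given an affine fundamental matrix [[0,0,a],[0,0,b],[c,d,e]], build the
    2x2 rotations that make the epipolar lines of each image horizontal."""
    def flatten(nx, ny):
        # (nx, ny) is the epipolar line normal; the line direction is (-ny, nx).
        # Dividing by the norm stands in for cos/sin -- no trig needed.
        norm = np.hypot(nx, ny)
        return np.array([[-ny, nx], [-nx, -ny]]) / norm

    R_left = flatten(F[2, 0], F[2, 1])   # left lines have normal (c, d)
    R_right = flatten(F[0, 2], F[1, 2])  # right lines have normal (a, b)
    return R_left, R_right
```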

Now we are finally at step 2, where we need to find a scaling and offset for the Y axis. This can be performed in a single blow with a LAPACK least-squares routine if we just arrange the problem correctly.
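One such arrangement is the model y_left ≈ s·y_right + t over the rotated correspondences. A NumPy sketch, with numpy.linalg.lstsq standing in for the raw LAPACK call:

```python
import numpy as np

def solve_y_alignment(left_y, right_y):
    """Fit y_left ~ s * y_right + t in one least-squares call.
    left_y, right_y: Y coordinates of the correspondences after rotation.
    The resulting (s, t) is applied to the right image's transform only."""
    A = np.column_stack([right_y, np.ones_like(right_y)])
    (s, t), *_ = np.linalg.lstsq(A, left_y, rcond=None)
    return s, t
```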

Step 3 is solving for the scaling, skew, and offset of the X axis. This too can be performed in a single blow using least squares.

Again, we’ll apply the X scaling and offset only to the right affine transform. However, I thought the skew should be applied equally to both images so as to minimize distortion. Splitting the skew evenly between both image transforms, and deciding how the X scaling folds in, takes only a little algebra.
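The exact folding isn’t spelled out above, but under the assumption that the Y axes are already aligned (so y_left ≈ y_right), one plausible arrangement is x_left ≈ s·x_right + k·y_right + t, after which half of k moves to each image. A hedged NumPy sketch of that scheme:

```python
import numpy as np

def solve_x_alignment(left_pts, right_pts):
    """Fit x_left ~ s * x_right + k * y_right + t in one least-squares call,
    then split the skew k evenly between the two images.

    Assumes the images were already rotated and Y-aligned, so y_left ~ y_right.
    Returns (left_skew, right_scale, right_skew, right_offset)."""
    A = np.column_stack([right_pts[:, 0], right_pts[:, 1],
                         np.ones(len(right_pts))])
    (s, k, t), *_ = np.linalg.lstsq(A, left_pts[:, 0], rcond=None)

    # Because y_left ~ y_right, half the skew can move to the left image:
    #   x_left - (k/2) * y_left  ~  s * x_right + (k/2) * y_right + t
    # which warps each image half as much as putting all of k on the right.
    return -k / 2.0, s, k / 2.0, t
```

Note the split leaves the X disparity model unchanged; it only redistributes the distortion between the two output images.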

Application to Mars Imagery

I wrote a C++ implementation of the above algorithm and applied it to the example stereo pairs ASP supplies for the MOC, CTX, and HiRISE missions. I collected image correspondences for each stereo pair to work out what search range would be needed to create a DEM. The area of this search range is proportional to how much work the stereo correlator has to do. (Please ignore that in ASP we use a hierarchical method that provides some optimization, making processing time not linearly related to the area of the search range.) I then compared the expected search range under alignment options of NONE, HOMOGRAPHY, and the new algorithm. Below are the results.

Image Size          1024×8064                5000×30720                  20000×80000
NONE’s Range        [-389 -729]x[-104 604]   [-488 -1320]x[468 -1295]    [-15532 2112]x[18325 8145]
NONE’s Area         379905                   23900                       204259281
HOMOGRAPHY’s Range  [-7 -2]x[7 3]            [-56 -10]x[57 11]           [-16176 -342]x[18164 402]
HOMOGRAPHY’s Area   70                       2373                        25548960
EPIPOLE’s Range     [-7 -2]x[7 3]            [-58 -2]x[57 1]             [-16228 -53]x[18128 287]
EPIPOLE’s Area      70                       345                         11681040

In the MOC case, homography alignment produced the same results as epipolar alignment. But as the images get bigger, it becomes clear that epipolar alignment is the winning strategy. The next test is to see whether the stereo correlator still works on this imagery; we might be introducing skew or scaling errors that the correlator can’t recover from. I have yet to finish that part though.

Another fun note: epipolar rectification is how you produce anaglyphs! Here are a few examples of what I was able to do with GIMP. I followed the guide linked here.

So … it looks like I need to put this in ASP 2.2 now. That’s what I’m going to be doing after this blog post.


[1] Oh, Jaehong. Novel Approach to Epipolar Resampling of HRSI and Satellite Stereo Imagery-based Georeferencing of Aerial Images. Diss. The Ohio State University, 2011.
[2] Morgan, Michel, et al. “Epipolar resampling of space-borne linear array scanner scenes using parallel projection.” Photogrammetric Engineering & Remote Sensing 72.11 (2006): 1255–1263.
[3] Wang, Mi, Fen Hu, and Jonathan Li. “Epipolar resampling of linear pushbroom satellite imagery by a new epipolarity model.” ISPRS Journal of Photogrammetry and Remote Sensing 66.3 (2011): 347–355.
[4] Hartley, Richard, and Andrew Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2000.
[5] Liansheng, Sui, Zhang Jiulong, and Cui Duwu. “Image rectification using affine epipolar geometric constraint.” Computer Science and Computational Technology, 2008. ISCSCT ’08. International Symposium on. Vol. 2. IEEE, 2008.

Fly by Night application of ASP to Video

This is a video of horizontal disparity from a stereo video rig onboard the International Space Station. Specifically, it was this camera. I used ffmpeg to split the data into individual frames and then applied ASP to the individual frames. I attempted to solve for the focal length and convergence angle of this set, but unfortunately I didn’t properly constrain the focal length. (My algorithm cheated and brought its error to zero by focusing at infinity.) Regardless, I’m pretty happy with the result for a Tuesday night at home.