Publicity video for our Zero G testing of SPHERE + Tango.
I’ve been really busy working with Google’s Project Tango. I encourage you to watch the video if you haven’t already.
What is NASA doing with Project Tango? Well, currently there is only a rather vague article available here. The plan, however, is to apply Tango to the SPHERES project to perform visual navigation. Lately I’ve been overwhelmed trying to meet the schedule of 0-g testing and all the hoops involved in getting hardware and software onboard the ISS. This has left very little time to write, let alone sleep. In a few weeks NASA export control will have gone over our collected data and I’ll be able to share it here.
In the short term, Project Tango represents an amazing opportunity to perform local mapping. The current hardware has little application to the large-scale satellite mapping that I usually discuss. However, I think the ideas present in Project Tango will have application to low-cost UAV mapping, something David Shean of the University of Washington has been pursuing. In the more immediate term, I think the Tango hardware would be useful to scientists wanting to perform local surveys of a glacial wall, a cave, or anything else you can walk all over. Its ability to export its observations as a 3D model makes it perfect for sharing with others and for performing long-term temporal studies. Yes, the 3D sensor won’t work outdoors; however, stereo observations and post-processing with tools like Photoscan are still possible with the daylight imagery. Tango is then reduced to providing an amazing 6-DOF measurement of where each picture was taken. If this sounds interesting to you, I encourage you to apply for a prototype device! I’d be interested in helping you tackle a scientific objective with Project Tango.
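If you did want to glue those 6-DOF measurements to your daylight imagery for later Photoscan processing, the bookkeeping could be as simple as the sketch below. To be clear, this is not the Tango API; the struct and the CSV layout are hypothetical, made up just to show the kind of per-picture record you would keep.

// Hypothetical bookkeeping: one 6-DOF pose record per captured picture,
// dumped to CSV so Photoscan (or anything else) can ingest it later.
// This is NOT the Tango API; the struct and field names are made up.
#include <cstdio>
#include <string>
#include <vector>

struct ImagePose {
  std::string image;       // filename of the daylight picture
  double x, y, z;          // position in some local frame, meters
  double qw, qx, qy, qz;   // orientation as a unit quaternion
};

void write_pose_log(const std::vector<ImagePose>& poses, const char* path) {
  std::FILE* f = std::fopen(path, "w");
  if (!f) return;
  std::fprintf(f, "image,x,y,z,qw,qx,qy,qz\n");
  for (const ImagePose& p : poses)
    std::fprintf(f, "%s,%.4f,%.4f,%.4f,%.6f,%.6f,%.6f,%.6f\n",
                 p.image.c_str(), p.x, p.y, p.z, p.qw, p.qx, p.qy, p.qz);
  std::fclose(f);
}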
This picture is of Mark and me dealing with our preflight jitters about being onboard the “Vomit Comet” while 0-g testing the space-rated version of Project Tango. It captures my current state of mind. Also, there aren’t enough pictures of my ugly mug on this blog. I’m the guy on the right.
I’ve received a few emails asking me for the code to my implementation of Semi Global Matching. So here it is in the state that I last touched it. This is my icky dev code and I have no plans for maintaining it. In another month I’ll probably forget how it works. Before you look at the code, take to heart that this is not a full implementation of what Hirschmuller described in his papers. I didn’t implement 16-path integration or the mutual information cost metric.
TestSemiGlobalMatching is my main function. I developed it inside a GTest framework while I was writing this code. [source]
The core of the algorithm is inside the following header and source file links, titled SemiGlobalMatching. As you can see, it is all in VW-ese. Most of the math looks Eigen-like, and there’s a lot of STL in action, but I hope you can still read along. [header] [source]
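If you’d rather see the idea before wading into the VW-ese, here is a rough sketch of Hirschmuller’s cost-aggregation recurrence for a single path direction. This is illustrative C++ written for this post, not an excerpt from the linked files, and the names are mine. A real SGM implementation repeats this along many path directions (8, or 16 in the full paper) and sums the results before picking the winning disparity, which is exactly the part my code only does partially.

// Sketch of the SGM cost-aggregation recurrence for one path direction
// (left-to-right along a single image row). Not the VW-based code above.
#include <algorithm>
#include <vector>

// costs[x][d] holds the per-pixel matching cost C(p,d) along one row.
// P1 penalizes disparity changes of 1; P2 penalizes larger jumps (P2 >= P1).
// Returns the aggregated path cost L_r(p,d) for this single direction.
std::vector<std::vector<int>>
aggregate_left_to_right(const std::vector<std::vector<int>>& costs,
                        int P1, int P2) {
  std::vector<std::vector<int>> L(costs);   // L_r at the first pixel = C
  const size_t width = costs.size();
  const size_t dmax  = width ? costs[0].size() : 0;

  for (size_t x = 1; x < width; ++x) {
    // Minimum path cost over all disparities at the previous pixel.
    int prev_min = *std::min_element(L[x-1].begin(), L[x-1].end());
    for (size_t d = 0; d < dmax; ++d) {
      int best = L[x-1][d];                                // same disparity
      if (d > 0)        best = std::min(best, L[x-1][d-1] + P1);
      if (d + 1 < dmax) best = std::min(best, L[x-1][d+1] + P1);
      best = std::min(best, prev_min + P2);                // large jump
      // Subtracting prev_min keeps L from growing without bound.
      L[x][d] = costs[x][d] + best - prev_min;
    }
  }
  return L;
}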
Also, I haven’t forgotten that I promised to write another article on what I thought was a cool correlator idea. I’m still working on that code base so that I can show off cool urban 3D reconstruction using the satellite imagery I have of NASA Ames Research Center and Mountain View. (I think I have Google’s HQ in the shot.)
Someone emailed me asking what the MOC imagery that the code keeps referring to is. In this case, MOC stands for the Mars Orbiter Camera on board the 1996 Mars Global Surveyor mission. The stereo pair consists of epipolar-rectified images of Hrad Vallis that I use regularly; they represent a classic test case for me when performing satellite stereo correlation. A copy of the input imagery is now available here [epi-L.tif][epi-R.tif].
I do other things besides developing Ames Stereo Pipeline. In fact, I have to this month, because my projects’ budgets are being used to pay for other developers. This is a good thing, because it gets dug-in developers like me out of the way for a while so that new ideas can come in. One of the projects I occasionally get to help out on is HET SPHERES.
This is a picture of that robot. The orange thingy is the SPHERE robot designed by MIT. The blue puck is an air carriage so we can do frictionless testing in 1G. There have been 3 SPHERES robots onboard the ISS for quite some time now, and they’ve been hugely successful. However, we wanted to upgrade the processing power available on the SPHERES. We also wanted better wireless networking, cameras, additional sensors, and a display to interact with the astronauts. While our manager, lord, and savior Mark listed off all these requirements, we attentively played Angry Birds. That’s when it suddenly became clear that all we ever wanted was already available in our palms. We’ll use cellphones! So, though crude, we glued a cellphone to the SPHERE and called it a day.
Actually, a lot more work happened than that, and you can hear about it in Mark’s Google Tech Talk. I also wasn’t involved in any of that work; I tend to do other stuff related to SPHERES development. But I spent all last week essentially auditing the console-side code and the internal SPHERE GSP code. I remembered why I don’t like Java and Eclipse. (I have to type slower so Eclipse will autocomplete. :/) This all culminated in the following video of a test where the SPHERE flies around a stuffed robot. We ran out of CO2 and our PD gains for orientation control are still out of whack, but it worked!
Congrats to Cosmogia and the PhoneSat team for hitching a ride on the first Antares flight into space!
I’ve been sick all last week. That hasn’t stopped me from trying to process WorldView imagery in bulk on NASA’s Pleiades supercomputer. Right now I’m just trying to characterize how big a challenge it is to process this large satellite data on a limited-memory system for an upcoming proposal. I’m not pulling out all the tricks we have to ensure that all parts of the image correlate. Still, that hasn’t stopped ASP from producing this interesting elevation model of a section of Antarctica’s coastline, just off of Ross Island. Supposedly Marble Point Heliport is in this picture (QGIS told me it was the blue dot at the bottom of the coastline).
I’m using homography alignment, auto search range, parabola subpixel refinement, and no hole filling. The output DEMs were rasterized at 5 meters per pixel. The crosses, or fiducials, in the image are spaced 5 km apart. This represents a composite of 10 pairs of WV01 stereo imagery from 2009 to 2011, and no bundle adjustment or registration has been applied. The image itself is just a render in QGIS where the colorized DEM has had a hillshade render of the same DEM overlaid at 75% transparency.
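For reference, those settings correspond to a stereo.default roughly like the sketch below. I’m writing this from memory, so treat the exact key names as an assumption and check the ASP documentation for your release before copying anything.

# Illustrative stereo.default -- key names may vary between ASP releases.
alignment-method homography    # homography pre-alignment of the images
subpixel-mode 1                # parabola subpixel refinement
# No corr-search line, so the disparity search range is estimated
# automatically. Hole filling was left off, and the resulting point
# clouds were gridded at 5 m per pixel with point2dem.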
I haven’t investigated why more of the mountains didn’t come out. When it looks like a whole elevation contour has been dropped, that’s likely because the auto search range didn’t guess correctly. When it looks like a side of a mountain didn’t resolve, that’s likely because there was shadow or highlight saturation in the image. It could also be that ASP couldn’t correlate correctly on such a steep slope.
Ross Beyer, my co-worker at NASA, gave a Google Tech Talk earlier this month. It is now available online and you can watch it above. I don’t want to give any spoilers, but a certain stereo something is mentioned in this talk.