Tango, Take the Wheel

Johnny Lee, my boss a couple of links up the chain, recently showed off a reel of recent Tango research improvements at Google I/O. It's possibly not as exciting as the other material in the reel, but at the 1:19 mark you'll see some research by me and my coworkers on how well Tango works on a car. As you can see, it works really well: we even drove it 8 km through downtown San Francisco, tourist-infested areas included. Surprisingly or not, neither the mass of people nor the sun managed to blind our tracking.

How did we do it? Well, we took a Tango phone and quick-clamped it to my car. Seriously. Here's a picture of a Lenovo Phab 2 Pro and an Asus ZenFone AR attached to my car, plus me in my driving glasses. We ran the current release of Tango with motion tracking only, and it just worked! … As long as you shut off the safeties that reset tracking once you exceed a certain velocity. Unfortunately, users outside of Google can't access this ability; in a way, these velocity restrictions are our own COCOM limits.

Also, this is something really only achievable with the new commercial phones. The original Tango Development Kit didn't include a good IMU intrinsics calibration. The newly produced cellphones are calibrated at the factory for IMU scale, bias, and axis misalignment: at the end of each factory line, a worker places the phone in a robot that runs it through a calibration dance. Having this calibration is required for achieving low drift rates. Remember, our IMUs are cheap 50-cent parts with a lot of wonkiness that the filter needs to sort out.
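To make the scale/bias/misalignment idea concrete, here's a minimal sketch of the standard IMU intrinsics correction model. This is not Tango's actual code; the function name and the numbers are invented for illustration, and the model shown (a correction matrix folding scale and misalignment, plus an additive bias) is just the textbook form.

```python
def correct_imu(raw, M, b):
    """Apply the usual IMU intrinsics model: corrected = M @ (raw - b),
    where M folds per-axis scale factors and axis misalignment together,
    and b is the additive bias. All vectors are 3-axis samples."""
    centered = [raw[i] - b[i] for i in range(3)]
    return [sum(M[i][j] * centered[j] for j in range(3)) for i in range(3)]

# Hypothetical calibration: slight scale errors, slight cross-axis skew.
M = [[1.02, 0.001, 0.0],
     [0.0,  0.99,  0.002],
     [0.0,  0.0,   1.01]]
b = [0.05, -0.03, 0.10]

# An accelerometer at rest reads gravity plus its bias; after correction
# the bias is gone and the scale errors are compensated.
print(correct_imu([0.05, -0.03, 9.81 + 0.10], M, b))
```

Solving for the nine entries of M and the three of b is what the factory calibration dance is for; without it, those terms get absorbed into drift.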

Computer Vision on a Cellphone

I’m applying to graduate school again. To help advertise myself, I decided to take a stab at reproducing a student’s work from one of the labs I want to join. Here’s a short video demonstrating the work:

The original author's thesis was about the application of a new style of Kalman filter; the computer vision measurements were just inputs to get his research going. Thus, this fiducial work is extremely simple and fun to implement. The only novel bit was that everything ran on a cellphone, and I didn't use any Matlab.

This entire project works by segmenting the circles from the image via a threshold operation. Then two passes of connected-component blob search are applied, one for white and one for black. The code looks for blobs with aligned centers; if it finds exactly 4 in the scene, it calls it a success. Yes, there are many ways to break these assumptions.
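The blob step can be sketched in a few lines on a toy binary image. My real implementation uses OpenCV on the phone; the flood-fill version below, and the name `find_blobs`, are just for illustration. The "aligned centers" test falls out naturally: a ring blob and its hole blob share a centroid.

```python
from collections import deque

def find_blobs(img, value):
    """4-connected component search over a 2D list of 0/1 pixels.
    Returns a list of (area, (cy, cx)) for each blob of the given value."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if img[y][x] != value or seen[y][x]:
                continue
            # Flood fill one connected component.
            q = deque([(y, x)])
            seen[y][x] = True
            pixels = []
            while q:
                cy, cx = q.popleft()
                pixels.append((cy, cx))
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and img[ny][nx] == value:
                        seen[ny][nx] = True
                        q.append((ny, nx))
            area = len(pixels)
            centroid = (sum(p[0] for p in pixels) / area,
                        sum(p[1] for p in pixels) / area)
            blobs.append((area, centroid))
    return blobs

# A thresholded white ring (1s) with a black hole (0s) in the middle:
img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 0, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
white = find_blobs(img, 1)  # one pass for white...
black = find_blobs(img, 0)  # ...one pass for black
print(white)  # ring centroid matches the hole centroid: a fiducial candidate
```

Matching each white ring's centroid to a black blob's centroid is the alignment check; the background black blob has no matching white centroid and drops out.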

After locating the 4 circles, the code identifies them by their ratio of black to white area. The boldest circle is defined as the origin. The identified image locations of these circles, their measured locations on the paper, and the focal length of the camera are then fed to an implementation of Haralick's iterative solution for exterior orientation. That algorithm solves for the translation and rotation of the camera by reducing the projection error through the camera matrix.
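The identification step is simple enough to sketch directly: each fiducial circle is printed with a different black-to-white area ratio, so sorting by ratio gives a stable labeling regardless of viewing angle (the ratio is roughly invariant under projection). The function name and the areas below are made up for illustration, not taken from my actual code.

```python
def identify_circles(circles):
    """circles: list of (black_area, white_area) tuples per detected fiducial.
    Returns the circle indices ordered from boldest (highest black-to-white
    ratio) to lightest; the boldest circle is taken as the origin marker."""
    ratios = [(black / white, i) for i, (black, white) in enumerate(circles)]
    ratios.sort(reverse=True)
    return [i for _, i in ratios]

# Hypothetical measured areas for the 4 detected circles:
detected = [(120, 400), (300, 220), (60, 460), (200, 320)]
order = identify_circles(detected)
origin = order[0]  # index of the boldest circle, used as the origin
print(order)
```

With the circles identified, each image centroid can be paired with its known position on the printed paper, which is exactly the 2D-3D correspondence set the exterior orientation solver consumes.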

Code for the blob detection and exterior orientation solving is available here. The Java user interface is a straight rip from the demos that come with Android OpenCV, so it shouldn't take long to reproduce my work.

To see what inspired me, please investigate the following papers:

[1] B. E. Tweddle, “Relative Computer Vision Based Navigation for Small Inspection Spacecraft,” presented at the AIAA Guidance, Navigation and Control Conference and Exhibition, 2011.
[2] B. E. Tweddle, “Computer Vision Based Proximity Operations for Spacecraft Relative Navigation,” Master of Science Thesis, Massachusetts Institute of Technology, 2010.