Sunday, November 28, 2010



In the last few days Kinect, a product developed by Microsoft and aimed at the gaming industry, has been making quite an impression in the robotics world.

This device was announced by Microsoft as the gadget that would change the history of the gaming industry. I don't know whether that will be true; only time will tell. But along the way they created a tool with practically endless possibilities for robotic applications. I wonder if Microsoft saw it coming...

But what makes it so appealing for robotics?

The device offers an RGB camera and an IR camera, both with a frame size of 640x480 pixels and a frame rate of 30 FPS. But the really interesting feature is its depth camera: it lets you build a 3D view of the environment surrounding the Kinect, which opens the door to many applications in robotics.
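To give an idea of what the depth camera provides, here is a minimal sketch that back-projects a depth image into a 3D point cloud using the standard pinhole camera model. The intrinsic parameters below are rough assumed values for the Kinect depth camera, not calibrated ones, and the flat-wall input is synthetic:

```python
import numpy as np

# Approximate intrinsics for the Kinect depth camera (assumed values;
# a real application should calibrate its own device).
FX, FY = 580.0, 580.0   # focal lengths in pixels
CX, CY = 320.0, 240.0   # principal point

def depth_to_point_cloud(depth_m):
    """Back-project a 640x480 depth image (in meters) to 3D points.

    Pinhole model: X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy.
    Returns an (N, 3) array of [X, Y, Z] points, dropping invalid
    (zero-depth) pixels.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[points[:, 2] > 0]   # keep only pixels with a reading

# Example: a synthetic flat wall 2 m in front of the sensor
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth)
print(cloud.shape)  # (307200, 3)
```

Every valid pixel becomes one 3D point, which is exactly the kind of data a robot can feed into obstacle avoidance or mapping.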

Here is an example:

Another very interesting fact is its price: around $199. Other similar sensors are at least 10 times more expensive; for example, a Point Grey Bumblebee2 stereo camera costs around $1900. Not everyone can afford to spend almost $2000 on a sensor, but almost every hobbyist can buy a Kinect (and when you get tired of playing with your robot you can use it to play with the Xbox too :P)

Of course, a lot of people quickly realized the immense potential of this device, and several initiatives appeared to build an open-source driver that lets people use Kinect on a PC. OpenKinect is one of the most active projects at the moment.

Recently, the people at ROS (Robot Operating System) integrated OpenKinect into their system, and as a result anyone using ROS can now easily take advantage of Kinect:

Since Kinect is a device engineered for indoor environments, its behavior outdoors is predictably poor: its depth sensing relies on projecting an infrared pattern, which direct sunlight washes out. As you can see in the experiment carried out by the researchers at Meiji University's AMSL Racing group, the results outdoors in a sunny environment are quite poor, though the results in a shady environment are not too bad.

If I were Microsoft I would integrate Kinect into Microsoft Robotics Developer Studio right away.

Tuesday, November 16, 2010



Today I visited AIST JRL (Advanced Institute of Science and Technology - Joint Japanese-French Robotics Laboratory), where my fellow Pablo F. Alcantarilla and his supervisor, Dr. Olivier Stasse, kindly introduced me to this guy:

Its name is HRP2 Promet: a 1.6-meter-tall, state-of-the-art humanoid robot. It was built by Kawada Industries, Inc. and is used at AIST as a research platform. I had seen this robot on the internet before, but nothing compares to seeing it in action in real life.

During the visit the robot performed several demos, including walking in a constrained environment (see next picture) and moving to a goal position using computer vision.

At that same laboratory they are developing HRP4-C, the little sister of HRP2. As she is the youngest in the family she wants to be an artist. Unfortunately I have no pictures of HRP4-C; it is still kept under a lot of secrecy. But you can see this video taken at "Digital Contents Expo 2010":

Anyway, meeting HRP2 was fascinating.

Saturday, November 6, 2010



Japan is a country well known for being the leader in the robotics field, but... how does a country get to be at the top in developing new robots and applications?

The answer is rather simple: challenging talented people all around the country.

This is the aim of the Tsukuba Real World Robot Challenge (Tsukuba Challenge for short), a competition organized by the New Technology Foundation in Tsukuba City. During this competition, teams of university students from all around Japan gather with their robots to test the robots' skills at autonomous navigation in a real environment.

The robots' goal is to autonomously complete a circuit through a real environment with real obstacles, such as humans walking or riding bikes. The challenge usually takes place at Tsukuba's Central Park. For us humans the task is very simple, but for a robot it is a Herculean effort.

Today a trial session was held and here you can see some pictures of the robots that participated:

Please do not be fooled by the simple appearance of these robots; the real deal is inside them. Any robot with wheels can move; what matters is the control algorithms. The teams combine GPS receivers, laser range finders and cameras in order to solve the challenge.
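To illustrate the kind of sensor fusion involved, here is a minimal, purely illustrative sketch: a 1D Kalman filter that blends drifting wheel odometry with a noisy GPS fix. The noise figures are made-up assumptions, and real competition robots fuse many more sensors in 2D or 3D, but the principle is the same:

```python
import random

def kalman_step(x, p, odom_delta, q, gps_pos, r):
    """One predict/update cycle of a 1D Kalman filter.
    x, p        -- current position estimate and its variance
    odom_delta  -- distance moved according to odometry
    q           -- odometry noise variance added per step
    gps_pos, r  -- GPS position fix and its noise variance
    """
    # Predict: advance by odometry, grow the uncertainty
    x, p = x + odom_delta, p + q
    # Update: blend in the GPS fix, weighted by the Kalman gain
    k = p / (p + r)
    x = x + k * (gps_pos - x)
    p = (1.0 - k) * p
    return x, p

random.seed(0)
x, p = 0.0, 1.0
true_pos = 0.0
for _ in range(100):
    true_pos += 1.0                          # robot moves 1 m per step
    odom = 1.0 + random.gauss(0.0, 0.05)     # slightly drifting odometry
    gps = true_pos + random.gauss(0.0, 2.0)  # noisy GPS fix (~2 m error)
    x, p = kalman_step(x, p, odom, 0.05 ** 2, gps, 2.0 ** 2)

print(round(abs(x - true_pos), 2))  # residual error, well under raw GPS noise
```

The filter trusts the precise-but-drifting odometry over short distances and uses the coarse GPS fixes to keep the drift bounded, which is why the fused estimate beats either sensor alone.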

Next you can see a short video showing a couple of robots at the start line; as you can see, some of the robots perform better than others :)

Next Friday, November 19th, is the final. Good luck to all the teams!!