Friday, June 17, 2011

Cloud robotics and MicroCV

It all started a couple of years ago, when hackers began putting smartphones on augmentable Roombas (the iRobot Create). I guess they really just wanted a powerful processor light enough for the robot to carry around. And then they started using the phone's camera, GPS, and wireless radios...

Meanwhile, Google made Goggles. In the eternal war of Telephone vs. Camera, Goggles captures the "Object Recognition" flag for the Telephone. Images are sent to the "cloud," where (relatively) simple algorithms produce amazing results by throwing huge amounts of data and processing power at the problem.

Cloud robotics was born when Goggles started being used as the vision system for these smartphone-equipped robots. Here is a semi-recent article that explains Google's strategy, with links to Google's cellbot as well as other cloud robotics projects.

So where does that leave MicroCV? Could we outsource all vision tasks to the cloud?

It seems obvious to me that certain real-time control tasks would always need to be done on-board, especially if the device is robotic in nature. Non-critical tasks, or those running on a static platform, could be handled through the cloud; the sketch below shows roughly what that split looks like.
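To make the split concrete, here is a minimal sketch in Python. Every name in it (grab_frame, avoid_obstacles, cloud_recognize) is a made-up placeholder, not a real robot or Google API: a fast local loop owns the time-critical control, while a background thread ships frames to a slow cloud recognizer and simply drops frames when the network can't keep up.

import queue
import threading
import time

frame_queue = queue.Queue(maxsize=1)    # keep only the freshest frame

def grab_frame():
    return b"jpeg bytes"                # stand-in for a camera read

def avoid_obstacles(frame):
    pass                                # stand-in for on-board, real-time control

def cloud_recognize(frame):
    time.sleep(0.5)                     # stand-in for a slow network round trip
    return ["chair"]                    # pretend labels from the cloud

def cloud_worker():
    while True:
        labels = cloud_recognize(frame_queue.get())
        print("cloud says:", labels)    # cloud latency never blocks control

threading.Thread(target=cloud_worker, daemon=True).start()

for _ in range(100):                    # ~30 Hz on-board loop
    frame = grab_frame()
    avoid_obstacles(frame)              # time-critical: always local
    try:
        frame_queue.put_nowait(frame)   # non-critical: offload when convenient
    except queue.Full:
        pass                            # drop the frame; the cloud can wait
    time.sleep(0.03)

The point of the single-slot queue is that the cloud only ever sees the latest frame, so however laggy the network gets, the control loop never stalls.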

Additionally, there is the issue of power. Even a static device would need to consume energy to transmit large volumes of data over the network. How painful would the energy cost be at reasonable bit rates? I'm not sure, but my gut feeling is that shipping the whole pipeline's input to the cloud would be infeasible; the back-of-envelope calculation below suggests why. Alternatively, on-board vision systems could pre-process the image data. Such systems would complement a cloud robotics framework, since they would minimize the power spent on data transmission.
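For a rough sense of scale, here is a back-of-envelope calculation. The numbers are my guesses, not measured figures: a Wi-Fi radio drawing about 0.5 W while transmitting at 10 Mbit/s (roughly 50 nJ per bit), and a 50:1 JPEG compression ratio.

# Back-of-envelope transmit power, using assumed radio and compression
# figures (~0.5 W at 10 Mbit/s, i.e. ~50 nJ/bit; 50:1 JPEG).
J_PER_BIT = 0.5 / 10e6                    # joules per transmitted bit (assumed)
raw_bps = 640 * 480 * 3 * 8 * 30          # raw VGA RGB at 30 fps: ~221 Mbit/s
jpeg_bps = raw_bps / 50                   # after on-board compression (assumed)
print("raw stream:  %.1f W" % (raw_bps * J_PER_BIT))   # ~11 W
print("jpeg stream: %.2f W" % (jpeg_bps * J_PER_BIT))  # ~0.22 W

Under those guesses, streaming raw video would burn more power than most small robots have to spare, while even basic on-board compression brings the radio cost down to something tolerable, which is exactly why local pre-processing complements the cloud.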

Either way, cloud robotics seems set to affect vision at the smallest scales.
