A COLLECTIVE OF BOFFINS from the Massachusetts Institute of Technology (MIT) have developed a robot that can teach itself the complex physics of Jenga.

According to the researchers, the robot's manipulator arm uses a machine-learning algorithm together with visual data and tactile feedback to teach itself how to move blocks correctly in the classic block-stacking game.

Playing Jenga requires a soft touch to prevent the tower from tumbling down. It also demands mastery of other skills, such as pushing, pulling, probing, placing, and aligning the blocks in the tower.

In its latest research, the team equipped an industrial ABB IRB 120 robotic arm with a soft-pronged gripper, an external camera, and a force-sensing wrist cuff. To train the machine, the researchers directed it to select a block in the Jenga tower at random, choose a location on that block, and push it.

Each time the arm pushed the block, a connected computer recorded the force and visual measurements and compared those to previous moves before categorising the attempt as successful or unsuccessful.
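To make that training loop concrete, here is a minimal Python sketch of how one push attempt might be logged and labelled. The `Attempt` record, the `label_attempt` helper, and the 1 mm movement threshold are illustrative assumptions; the article does not describe the study's actual data format.

```python
import random
from dataclasses import dataclass

@dataclass
class Attempt:
    block_id: int        # which block in the tower was pushed
    push_point: float    # where along the block it was pushed (0.0 to 1.0)
    force: float         # peak force measured by the wrist cuff, in newtons
    displacement: float  # block movement seen by the camera, in millimetres
    success: bool        # True if the block moved and the tower stayed upright

def label_attempt(block_id: int, push_point: float, force: float,
                  displacement: float, tower_intact: bool) -> Attempt:
    """Categorise one push as successful or unsuccessful.

    The 1 mm minimum-movement threshold is an assumption for illustration.
    """
    moved = displacement > 1.0
    return Attempt(block_id, push_point, force, displacement,
                   success=moved and tower_intact)

# One training step: pick a random block (a standard tower has 54)
# and a random push point, then log the measured outcome.
attempt = label_attempt(block_id=random.randrange(54),
                        push_point=random.random(),
                        force=2.3, displacement=4.1, tower_intact=True)
print(attempt.success)  # True
```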

Rather than requiring thousands of such attempts, the robot was trained on just 300 moves. The team grouped attempts with similar measurements and outcomes into clusters, each signifying a specific block behaviour. From those roughly 300 attempts, the robot learned to anticipate its moves, predicting which blocks would be harder to move than others and which might cause the tower to collapse.
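The article does not name the clustering model the team used, but the grouping step could look something like the following sketch, which clusters force-and-displacement measurements with scikit-learn's GaussianMixture (a stand-in choice) and scores a prospective move by the failure rate of the clusters it resembles. The features, cluster count, and synthetic data are all assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for ~300 logged attempts: [peak force (N), displacement (mm)]
# plus a success flag; real data would come from the wrist cuff and camera.
rng = np.random.default_rng(0)
X = rng.normal(loc=[2.0, 4.0], scale=[0.8, 2.0], size=(300, 2))
y = X[:, 1] > 2.0  # toy label: the push counts as a success if the block moved

# Group attempts with similar measurements into clusters, each standing in
# for one kind of block behaviour (stuck, loose, tower-threatening, ...).
gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
labels = gmm.predict(X)

# Success rate per cluster, used as a proxy for how safe that behaviour is.
success_rate = {k: y[labels == k].mean() for k in range(gmm.n_components)}

def failure_risk(force: float, displacement: float) -> float:
    """Estimated probability that a push with these measurements goes wrong."""
    probs = gmm.predict_proba(np.array([[force, displacement]]))[0]
    return float(sum(p * (1.0 - success_rate[k]) for k, p in enumerate(probs)))

print(failure_risk(2.5, 0.5))  # a barely-moving block looks riskier
```

Scoring candidate moves this way would let the robot steer away from pushes that resemble past tower-destabilising attempts, matching the anticipation behaviour described above.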

The performance of the robotic arm was also compared with the performance of human players. The team found the success rate of the robot in keeping the tower upright while removing the wooden blocks was almost on a par with that of human players.

The team believes this technology could be useful in production settings where a delicate touch and careful vision are needed, for example assembling cellphones or other small parts on a production line, or separating recyclable items from waste.

“In a cellphone assembly line, in almost every single step, the feeling of a snap-fit, or a threaded screw, is coming from force and touch rather than vision,” says Alberto Rodriguez, Assistant Professor in the Department of Mechanical Engineering at MIT, and the lead researcher of the study.

“Learning models for those actions is prime real-estate for this kind of technology.”

The findings of the research are published in the journal Science Robotics.

 


Source: The Inquirer (Inquirer Staff)

