
This AI startup wants to solve the hard problem of robots picking things up


If there’s one simple skill roboticists would love to steal from humans, it’s our ability to pick things up. Apples or eggs, pens or power drills; it doesn’t matter to us. Our hands are dexterous, able to grasp a range of shapes, and — more importantly — we can figure out in a flash how to handle objects we’ve never seen before.

Robots, by comparison, are slow and clumsy butterfingers. Their hardware is capable of doing the actual grasping, but they get confused by even tiny changes in size, shape, or position. This is a big problem if we ever want robots to be helpful around the house, or if we want to improve their utility in warehouses, factories, and other industrial settings.

A startup named Embodied Intelligence is one of a crop of new ventures tackling this problem with the latest AI. The company has been operating in stealth for a while, but this week announced itself to the world. It was founded by researchers from the Elon Musk-backed AI lab OpenAI and the University of California, Berkeley, and has $7 million in venture capital funding to get it started. The company’s goal? Simply put, helping robots get a grip.


Peter Chen of Embodied Intelligence with a test robot and VR headset.
Image: Embodied Intelligence.

As Peter Chen, a former OpenAI researcher and Embodied Intelligence’s CEO, explains to The Verge, the key challenge in this domain is making robots that are adaptable and able to quickly learn new tasks. “If you go to trade shows you will see robots doing lots of very fancy things, but it’ll just be one thing. We want robots that can do a range of things,” Chen says. “The conviction we had when we started this company was that in the next 10 to 20 years, all physical goods could be manufactured and processed autonomously … but in terms of the amount of work that still needs to be done to get there, it’s enormous.”

At the moment, if you want to make a robot that can pick up and manipulate objects, you have two main options. One is hard-coding everything: working out exactly where and how the robot needs to move, and programming each step by hand. This works for most tasks, but is expensive to engineer, and means that a robot’s environment needs to be precise and unvarying. If a component is even a few millimeters out of place, it can throw an entire production line out of whack. And if you’re dealing with deformable objects, like fabric or wire, it’s pretty much impossible.
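To give a sense of why hard-coding is so brittle, here’s a minimal sketch in Python. The `Gripper` class and every coordinate are hypothetical, not a real robotics API; the point is that each pose is baked in, so the routine has no way to adapt if the part isn’t exactly where the programmer assumed.

```python
# Illustrative sketch of the "hard-coding" approach: every pose is fixed
# in advance. The Gripper class and all coordinates are hypothetical.

class Gripper:
    def __init__(self):
        self.log = []

    def move_to(self, x, y, z):
        self.log.append(("move", x, y, z))

    def close(self):
        self.log.append(("close",))

    def open(self):
        self.log.append(("open",))

def pick_and_place(gripper):
    # Each step assumes the part sits at EXACTLY (120.0, 45.0, 10.0) mm.
    gripper.move_to(120.0, 45.0, 50.0)   # hover above the part
    gripper.move_to(120.0, 45.0, 10.0)   # descend to grasp height
    gripper.close()                       # grasp
    gripper.move_to(300.0, 200.0, 50.0)  # carry to the drop-off point
    gripper.open()                        # release

g = Gripper()
pick_and_place(g)
print(len(g.log))  # five fixed steps, none of which can adapt
```

Shift the part a few millimeters and the descent at step two misses it entirely — there is no sensing in the loop at all.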

A new approach (with a different set of limitations) uses an AI method known as reinforcement learning. This essentially lets robots teach themselves using trial and error. The programmer doesn’t explicitly tell a bot how to solve a problem, but gives it incentives to figure it out for itself. This requires less human effort but is time-consuming when applied to physical robots, with machines often taking weeks to figure out a single maneuver. Plus, like hard-coding, it’s not very flexible when faced with unseen and surprising objects.
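The trial-and-error loop can be sketched in a few lines. This toy example is purely illustrative (the task, reward, and learning rate are all made up, and real robot learning is vastly harder): an agent tries grasp positions, receives a reward only on success, and slowly raises its value estimate for the action that works.

```python
import random

# Toy sketch of reinforcement learning by trial and error. The "robot"
# samples grasp positions, is rewarded only when it hits the right one,
# and updates a per-action value estimate. Everything here is illustrative.

random.seed(0)
N_POSITIONS = 5
TARGET = 3                     # the correct grasp, unknown to the agent
values = [0.0] * N_POSITIONS   # the agent's value estimate per action
ALPHA = 0.1                    # learning rate

for episode in range(2000):
    # Explore at random 20% of the time, otherwise exploit the best guess.
    if random.random() < 0.2:
        action = random.randrange(N_POSITIONS)
    else:
        action = max(range(N_POSITIONS), key=lambda a: values[a])
    reward = 1.0 if action == TARGET else 0.0
    values[action] += ALPHA * (reward - values[action])

best = max(range(N_POSITIONS), key=lambda a: values[a])
print(best)  # the agent converges on the rewarding grasp position
```

Note how many episodes even this five-action toy needs — on physical hardware, where each attempt takes real seconds, that exploration cost is exactly why the article says single maneuvers can take weeks.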

So, the team at Embodied Intelligence is turning to another method known as imitation learning. This is where robots watch humans completing a task, and then learn to copy them. Crucially, they don’t mimic exact movements, but try to generalize what they’ve seen, turning it into an abstract set of instructions that still works with different variations of the same task.
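The “generalize, don’t mimic” distinction can be illustrated with a tiny behavioral-cloning sketch. The demonstrations, the task, and the linear policy below are all hypothetical simplifications (real systems learn far richer policies from vision), but they show the idea: instead of replaying one recorded motion, the robot fits a policy to several demonstrations and can then handle object positions it never saw.

```python
# Toy sketch of imitation learning via behavioral cloning. Demonstrations
# pair an object position with the grasp position a human chose; fitting
# a policy to them lets the robot generalize. All data is illustrative.

# Hypothetical demos: the human grasps 2 cm short of the object's center.
demos = [(10.0, 8.0), (20.0, 18.0), (30.0, 28.0), (40.0, 38.0)]

# Fit grasp = w * obj + b by one-dimensional least squares.
n = len(demos)
mean_x = sum(x for x, _ in demos) / n
mean_y = sum(y for _, y in demos) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in demos) / \
    sum((x - mean_x) ** 2 for x, _ in demos)
b = mean_y - w * mean_x

def policy(obj_position):
    """Returns a grasp position even for positions never demonstrated."""
    return w * obj_position + b

print(policy(25.0))  # an unseen object position -> 23.0
```

A robot that merely replayed a recorded trajectory would fail at 25.0; the fitted policy handles it because it captured the relationship underlying the demonstrations rather than any single motion.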

There’s been solid progress in this area in recent years, particularly in the Berkeley lab of professor Pieter Abbeel, a former OpenAI researcher, and now Embodied Intelligence’s president and chief scientist. In the video above from 2013 you can see a robot in Abbeel’s lab learning to tie a rope. The instructor in the video has to physically move the robot around himself, but the machine doesn’t just mimic these motions; it can apply them to variations of the same task.

Embodied Intelligence wants to use this same learning-by-demonstration method, but with human operators controlling the machines in virtual reality. Chen describes this approach as “surprisingly effective,” and says there’s a “rich learning signal that comes from humans” that robots can learn from.

“We can essentially teach a wide range of skills from just under thirty minutes of demonstration,” he says, referring to a recent paper published by the startup on the pre-print server arXiv. “This is not just teaching the robot a fixed trajectory. It’s teaching it to recognize where a ball is, pick it up, and place it in a location — in different scenarios.” The goal is to have robots that can be brought into pretty much any assembly or production environment, and taught how to complete a job in just a few hours by someone with minimal technical experience. Crucially, Embodied Intelligence would only build one version of its learning software which could then be applied in all sorts of scenarios.

This would be a transformative change for how robots are used in manufacturing and assembly, allowing companies to automate tasks with ease. Consequently, lots of companies are looking into this. Amazon holds a yearly “picking challenge” to try and find new approaches for its own warehouses. Google has experimented with chaining together robot arms to see if they can learn from one another. And another robotics startup, named Kindred, is tackling the problem by having robots hand off to remote VR operators whenever they get stuck.


A robotic arm owned by startup Kindred picks up some tinned goods.
Photo by Vjeran Pavic / The Verge

There’s still a lot of work to be done. Chen says Embodied Intelligence’s current methods are only 90 percent reliable, and doing something right nine times out of ten isn’t good enough for a factory. It would lead to too many shut-downs and bottlenecks. Chen says the startup needs to be closer to 99.99 percent reliability before it starts selling its services, but that could take a while. Any engineer will tell you it’s that last 10 percent of improvements that tend to be the trickiest.

The company is confident it will make swift gains, though, especially with the recent improvements in AI. Chen compares the field to the self-driving industry a few years ago, saying the basic principles are in place and it’s a case of doing the hard part (the very hard part) of making things work in real life.

“In the last couple of years we’ve seen a huge accumulation of robotic capabilities, that are just … working,” he says. “You need that last push to make things reliable and fast, so they’re actually useful. Someone needs to do that. And we are doing that.”


