A humanoid robot developed by a six-person all-Chinese team shows off street tricks and dances alongside humans

Watch closely: when "Xiaoshuai" turns around, the sweater and hat are actually empty inside:

Okay, this is not a thriller; it is the latest research result from an all-Chinese team at UCSD (University of California, San Diego).


They proposed a whole-body control strategy for humanoid robots called ExBody. The strategy mainly trains the upper body to be expressive, while the lower body focuses on maintaining stability.

A humanoid robot trained this way achieves both robust locomotion and dynamic motion tracking. In short, it can do a lot, and it is highly expressive.

For example, dancing with humans to enhance the relationship between humanoid robots and humans:


Put on a fluorescent vest and it can head straight to the street to direct pedestrians and vehicles:

According to the paper, the research team has six members, more than half of whom are doctoral students at UCSD.

Why train humanoid robots like this? Xuxin Cheng, co-first author of the paper, offered an explanation while promoting the work on Twitter.

Robots are always being asked to become workers in every industry! We just want to explore a different direction with them~

When humanoid robots become “expressive”

The team's research is called “Expressive Whole-Body Control for Humanoid Robots” and its research goal is to enable humanoid robots to produce rich, diverse and expressive movements in the real world.

So what can the humanoid robot do after the team's training?

Meeting a friend on the street and giving a high five is a piece of cake.

I can imagine it shouting “Hey Man”…

It's a loyal buddy, too: meet a brother on the road, and it gives a hug:

Amusingly, whether it is high-fiving or hugging, the robot's lower body never stops stepping in place; it only slows down slightly.

Sharp-eyed friends may have noticed that the above high-five experiments were conducted in different environments and on different surfaces.

The team also stated clearly that a humanoid robot trained with the new method can walk briskly across a variety of terrains.

Besides the grass and stone paths shown above, a beach is also a piece of cake for it:

Flat office floors are also easily handled:

The team also showed more demos of the robot moving freely while being disturbed by external forces.

Pull it hard:

Hit it with a big ball:

It also knows how to raise a hand to signal: "Hey, could you help me carry my little backpack?"

These moves left onlookers stunned for a moment.

An assistant professor of computer science at New York University tweeted in support, calling it "unbelievable" that such a strong result on control and expressiveness came from an academic team of just six people.

More netizens chose to use “Cool” to describe this work:

"Nothing fancy: just learn from humans"

So how do you get a robot to gesture as vividly as in the clips above and achieve human-like expressiveness? There is no secret: just learn from humans.

The learning material includes various human motion-capture datasets, as well as simulated data produced by generative models and a video2pose model.

Through large-scale whole-body control training in a reinforcement learning framework, the robot can then generalize its actions to the real world.

However, this Sim2Real approach still ran into problems.

According to the authors, the human model in a typical dataset has 69 degrees of freedom, while the robot they used has only 19. Beyond that, the theoretical and practical torque limits differ.
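To make the scale of that gap concrete, here is a minimal hypothetical sketch of the dimensionality mismatch: a 69-DoF human pose has to be squeezed onto a 19-joint robot by keeping only the joints that have a counterpart and clipping them to the robot's limits. The index map and joint limits below are made up for illustration; they are not the paper's actual retargeting code.

```python
import numpy as np

# Hypothetical example: a human mocap frame has 69 joint angles,
# but the robot has only 19 actuated joints. A naive retargeting step
# keeps only the joints that have a counterpart on the robot and clips
# them to the robot's joint limits. All numbers here are illustrative.

HUMAN_DOF = 69
ROBOT_DOF = 19

# Which human joint angle each robot joint copies (illustrative indices).
HUMAN_TO_ROBOT_IDX = np.array([1, 2, 4, 7, 8, 10, 13, 14, 16, 19,
                               20, 22, 25, 26, 28, 31, 32, 34, 37])

# Per-joint position limits of the robot in radians (illustrative).
ROBOT_LOWER = -np.full(ROBOT_DOF, 1.5)
ROBOT_UPPER = np.full(ROBOT_DOF, 1.5)

def retarget_frame(human_pose: np.ndarray) -> np.ndarray:
    """Map one 69-DoF human pose to a 19-DoF robot pose."""
    assert human_pose.shape == (HUMAN_DOF,)
    robot_pose = human_pose[HUMAN_TO_ROBOT_IDX]           # keep matching joints
    return np.clip(robot_pose, ROBOT_LOWER, ROBOT_UPPER)  # respect joint limits

# Example: retarget a random mocap frame.
frame = np.random.uniform(-2.0, 2.0, size=HUMAN_DOF)
print(retarget_frame(frame).shape)  # (19,)
```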

This is awkward: it effectively means the motions learned from human data cannot be used directly.

What to do?

So they made a small change of approach:

Only the upper body imitates the reference motions and handles the various expressions, while the lower body is responsible only for keeping the two legs stable while moving at any commanded speed.
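To make this division of labor concrete, here is a minimal sketch (my own simplification, not the authors' code) of what a decoupled objective of this kind can look like: the upper-body joints are rewarded for tracking the retargeted reference motion, while the lower body is rewarded only for tracking a commanded root velocity. The joint-index split and weights are assumptions for illustration.

```python
import numpy as np

# Illustrative split of the 19 robot joints into an upper body (imitation)
# and a lower body (locomotion); the exact indices are assumptions.
UPPER_IDX = np.arange(10, 19)   # e.g. arms + torso
LOWER_IDX = np.arange(0, 10)    # e.g. legs

def decoupled_reward(q, q_ref, root_vel, root_vel_cmd, w_expr=1.0, w_vel=1.0):
    """Simplified, decoupled reward: the upper body imitates the reference
    motion, the lower body only tracks the commanded root velocity."""
    # Upper-body expression term: how closely do upper joints match the clip?
    expr_err = np.sum((q[UPPER_IDX] - q_ref[UPPER_IDX]) ** 2)
    r_expr = np.exp(-expr_err)

    # Root-motion term: how well is the velocity command being tracked?
    vel_err = np.sum((root_vel - root_vel_cmd) ** 2)
    r_vel = np.exp(-vel_err)

    return w_expr * r_expr + w_vel * r_vel

# Example call with dummy values (perfect tracking gives 2.0).
q = np.zeros(19); q_ref = np.zeros(19)
print(decoupled_reward(q, q_ref, np.array([0.5, 0.0]), np.array([0.5, 0.0])))
```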

The authors call this method "Expressive Whole-Body Control" (ExBody).

The overall framework then looks like this:

First, the various datasets go through motion retargeting to produce a series of motion clips that conform to the robot's kinematic structure.

Then, expression goals and root-motion goals are extracted from these clips, the ExBody policy is trained with reinforcement learning, and the result is finally deployed on the real robot.

The expression goal is handled by the robot's upper body, while the root-motion goal belongs to the lower body (this part can, of course, also be given directly via remote-control commands).
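Putting the steps together, the flow can be paraphrased as the sketch below. Every function and array shape here is a placeholder stub chosen to show the data flow; none of it is the paper's API or actual training code.

```python
import numpy as np

# Schematic, runnable paraphrase of the pipeline described above.
# All names, shapes, and stub implementations are illustrative placeholders.

ROBOT_DOF, UPPER_DOF = 19, 9

def retarget_to_robot(human_clip):
    """Stub: map a (T, 69) human clip to a (T, 19) robot clip."""
    return human_clip[:, :ROBOT_DOF]

def extract_upper_body_targets(clip):
    """Stub: the expression goal is the upper-body joint trajectory."""
    return clip[:, -UPPER_DOF:]

def extract_root_velocity(clip):
    """Stub: a (T, 2) planar root-velocity target derived from the clip."""
    return np.zeros((clip.shape[0], 2))

def train_with_rl(expr_goals, root_goals):
    """Stub standing in for large-scale RL training in simulation."""
    def policy(state, expr_goal, root_cmd):
        return np.zeros(ROBOT_DOF)  # a trained policy would output joint targets
    return policy

def pipeline(human_datasets, joystick=None):
    clips = [retarget_to_robot(c) for c in human_datasets]        # 1) retarget
    expr_goals = [extract_upper_body_targets(c) for c in clips]   # 2) extract goals
    root_goals = [extract_root_velocity(c) for c in clips]
    policy = train_with_rl(expr_goals, root_goals)                # 3) RL training

    def control_step(state, clip_idx, t):                         # 4) deployment
        expr_goal = expr_goals[clip_idx][t]
        # Root command from the clip, or directly from a remote control.
        root_cmd = joystick() if joystick else root_goals[clip_idx][t]
        return policy(state, expr_goal, root_cmd)
    return control_step

# Example: one fake 100-frame, 69-DoF clip, root command from a "joystick".
step = pipeline([np.zeros((100, 69))], joystick=lambda: np.array([0.3, 0.0]))
print(step(state=np.zeros(48), clip_idx=0, t=0).shape)  # (19,)
```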

▲Dataset used

In the end, compared with various baseline methods, the robot achieved the following results: several metrics stand out, and the overall performance is solid.

(MELV: mean episode linear velocity tracking reward; MEK: mean episode key body tracking reward)
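For a feel of what these two numbers measure, here is a small hypothetical sketch (my own, not the paper's evaluation code) of how mean-episode tracking rewards of this kind are typically accumulated: a per-step reward for linear-velocity tracking and one for key-body position tracking, summed over each episode and averaged across episodes.

```python
import numpy as np

def episode_tracking_rewards(root_vels, root_vel_cmds, key_pos, key_pos_ref):
    """Accumulate per-step tracking rewards over one episode.
    Illustrative shapes: root_vels/root_vel_cmds (T, 2),
    key_pos/key_pos_ref (T, K, 3) for K tracked key bodies."""
    vel_reward = np.exp(-np.sum((root_vels - root_vel_cmds) ** 2, axis=1))
    key_err = np.mean(np.sum((key_pos - key_pos_ref) ** 2, axis=2), axis=1)
    key_reward = np.exp(-key_err)
    return vel_reward.sum(), key_reward.sum()

def melv_mek(episodes):
    """Mean episode linear-velocity (MELV) and key-body (MEK) tracking rewards."""
    totals = np.array([episode_tracking_rewards(*ep) for ep in episodes])
    return totals.mean(axis=0)  # [MELV, MEK]

# Example with two fake 50-step episodes and 4 tracked key bodies.
fake = lambda: (np.zeros((50, 2)), np.zeros((50, 2)),
                np.zeros((50, 4, 3)), np.zeros((50, 4, 3)))
print(melv_mek([fake(), fake()]))  # perfect tracking -> [50. 50.]
```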

As the figure below shows, the ExBody strategy also makes the robot bend its knees more during performances (such as a high five) and lift its feet higher off the ground. In other words, the movements are more vigorous and expressive, and of course more stable.

Produced by an all-Chinese team

There are 6 authors in this study, all of whom are Chinese and all from the University of California, San Diego (UCSD).

There are two co-first authors:

Xuxin Cheng is a PhD student at UCSD, with a master's degree in robotics from CMU and a bachelor's degree in automation from Beijing Institute of Technology.

Yandong Ji is a first-year PhD student at UCSD. He holds a master's degree in mechanical engineering from UC Berkeley and a bachelor's degree in electronic computer engineering from Nankai University.

The corresponding author is their advisor, Xiaolong Wang, an assistant professor in the Department of Electrical Engineering at UCSD.

He received his PhD from CMU, and his current research focuses on computer vision and robotics. Google Scholar shows more than 23,000 citations of his papers.

Oh, and finally, the "team" also includes the robot used in this research: the Unitree H1 from Unitree Robotics (Yushu Technology).

One More Thing

There have been quite a few recent advances in robotics.

First, Figure, the humanoid-robot startup backed by OpenAI and Microsoft, just announced a new funding round of approximately US$675 million at a pre-money valuation of approximately US$2 billion.

Immediately afterwards, it released a video introducing the latest progress of its humanoid robot Figure 01, stating that "everything is autonomous":

Then there is Ameca, the robot with extremely rich facial expressions, sometimes stunning and sometimes terrifying, which was recently announced to have gained visual capabilities.

She can take in the whole room she is in and then describe it to you vividly in a variety of voices and tones (including, but not limited to, Musk and SpongeBob).

It's genuinely fun, hahaha.

Reference links:

  • (1) https://expressive-humanoid.github.io/resources/Expressive_Whole-Body_Control_for_Humanoid_Robots.pdf
  • (2) https://expressive-humanoid.github.io/
  • (3) https://twitter.com/xiaolonw/status/1762528106001379369
