
How dogs are helping robots understand what humans really want



According to a recent study by researchers at Brown University, robots are being trained to locate items in messy spaces by imitating dogs. When we want someone to find an object, we simply point to it. A robot given an exact coordinate can reach it easily, but when a person points at one object among many nearby, the robot can become confused. To help a robot interpret its owner’s hand movement correctly and locate the intended item, the researchers developed a robotic system called LEGS-POMDP that calculates a ‘pointing cone’: a region of likely targets estimated from the person’s line of sight and the alignment of their wrist and elbow. Compared with prior approaches, the LEGS-POMDP system located the indicated item with a success rate of nearly 89 per cent. Ultimately, the researchers found that dogs’ capacity to understand human intention offers a useful model for helping robots complete their programmed tasks.

How dogs map human intent

The researchers found that robots usually cannot accurately identify what a human is pointing to because a single outstretched finger provides too little geometric data. To overcome this, they worked with the Brown Dog Lab to analyse how dogs interpret human intention. Dogs infer a ‘pointing cone’ from the relationship between a person’s eye gaze, elbow angle and wrist alignment, and the team encoded this geometric relationship in the robot’s software, allowing it to treat the gesture as a distribution of positional probabilities rather than a single, often inaccurate, coordinate.
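The pointing-cone idea can be sketched in a few lines of geometry. The following is a minimal illustration, not the paper’s actual model: the function name, the Gaussian falloff, and the cone-width parameter are all assumptions made for clarity. It scores each candidate object by how close it lies to the ray running from the elbow through the wrist, then normalises the scores into a probability distribution.

```python
import numpy as np

def pointing_probabilities(elbow, wrist, objects, cone_half_angle_deg=20.0):
    """Score candidate objects by their angular distance from the
    forearm's pointing ray, returning a probability distribution
    rather than a single coordinate.  Illustrative geometry only."""
    elbow = np.asarray(elbow, dtype=float)
    wrist = np.asarray(wrist, dtype=float)
    direction = wrist - elbow
    direction /= np.linalg.norm(direction)          # unit pointing ray
    sigma = np.radians(cone_half_angle_deg)         # assumed cone width
    scores = []
    for obj in objects:
        v = np.asarray(obj, dtype=float) - wrist
        v /= np.linalg.norm(v)
        angle = np.arccos(np.clip(direction @ v, -1.0, 1.0))
        scores.append(np.exp(-0.5 * (angle / sigma) ** 2))  # Gaussian falloff
    scores = np.array(scores)
    return scores / scores.sum()                    # normalise to probabilities
```

An object lying directly along the pointing ray receives the highest probability, while objects off to the side receive progressively less, mirroring how a wide gesture narrows down, but does not uniquely pick, a target in clutter.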

The AI framework that navigates like a dog

Significant strides were made in the study with the design of the LEGS-POMDP framework. According to Ivy He, the lead researcher, robots in disorganised or cluttered environments frequently face ‘partially observable’ challenges: they cannot clearly see everything in their surroundings at all times. LEGS-POMDP allows the robot to reason despite this uncertainty. If it is ambiguous which object the human is indicating, or the robot cannot work out where the object sits relative to everything else, the robot instead uses the human’s gaze and limb positions to choose a better vantage point to observe from. Using a multimodal approach, in which participants gave verbal commands while simultaneously gesturing, the robots completed 89 per cent of the tasks.
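The core of this kind of reasoning under uncertainty is a belief, a probability distribution over which object the human means, that is sharpened each time a new cue arrives. The sketch below shows one Bayesian update step and how fusing a gesture cue with a verbal cue is just two successive updates. This is a simplified illustration of POMDP-style belief tracking under assumed likelihood values, not the actual LEGS-POMDP update.

```python
import numpy as np

def update_belief(belief, likelihood):
    """One Bayesian filtering step: multiply the current belief over
    candidate target objects by the likelihood of a new observation
    (e.g. a gesture-cone score or a match to a spoken description),
    then renormalise.  Illustrative only."""
    posterior = np.asarray(belief, dtype=float) * np.asarray(likelihood, dtype=float)
    return posterior / posterior.sum()

# Fusing modalities means applying the update once per cue.
# The likelihood vectors below are made-up example values.
belief = np.ones(3) / 3                            # uniform prior over 3 objects
belief = update_belief(belief, [0.7, 0.2, 0.1])    # gesture cue
belief = update_belief(belief, [0.5, 0.4, 0.1])    # verbal cue
```

After both cues the belief concentrates on the object favoured by both modalities, which is why combining speech and gesture resolves ambiguities that either channel alone would leave open.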

From lab to living room, bridging the gap with canine collaboration

The study is being presented today, Tuesday, March 17, at the International Conference on Human-Robot Interaction in Edinburgh, Scotland. The project aims to move robots from strictly ordered, controlled laboratory environments into typical residential and clinical settings such as homes and hospitals, with their ‘messy’, disorganised surroundings. The researchers hypothesise that by imitating the naturally collaborative way humans work with dogs, they can develop an effective robotic assistant capable of accurately retrieving objects in intricate, unstructured environments.
