
Researchers at the University of Michigan have recently developed a bi-directional machine-learning model that can predict whether humans and robots can be trusted with a given task during human-robot collaboration. The model, presented in a paper published in IEEE Robotics and Automation Letters, could help allocate tasks to humans or robots depending on their reliability and efficiency.

Herbert Azevedo-Sa, one of the researchers behind the study, said that while much research has looked at why certain tasks should be done by humans rather than robots, we still know very little about it. In collaborative work, trust needs to run in both directions. With this in mind, the team wanted to build robots that can interact with humans and build trust in them as collaborating co-workers, he added.

When humans collaborate on a task, they first observe their collaborators and try to understand what needs to be done and how. They also consider whether their collaborators are capable enough to complete the task efficiently. By getting to know one another and learning through the collaborative process, they establish a rapport that lets them work in coordination.

Azevedo-Sa further explains that this is where trust comes into play: one builds trust in a co-worker for some kinds of tasks but not for others. The same happens in the other direction, with co-workers building trust in you for some tasks but not for others.

As part of their study, Azevedo-Sa and his colleagues tried to replicate, with a computational model, the process through which humans learn which tasks their collaborators can or cannot be trusted with. The model they developed can represent both a human's and a robot's trust, so it can predict both how much humans could trust robots and how much robots could trust humans.

“One of the big differences between trusting a human vs a robot is that humans can have the ability to perform a task well but lack the integrity and benevolence to perform the task. For example, a human co-worker could be capable of performing a task well, but fail to show up for work because they simply don't care about the job. A robot should thus incorporate this into its estimation of trust in the human, while a human only needs to consider whether the robot can perform the task well”, said Azevedo-Sa.

The model designed by the researchers represents an agent's capabilities in terms of abilities, integrity, and other similar factors. This representation of the agent's capabilities is then compared with the requirements of the task to be executed.

The representation of an agent's capabilities can also change over time, depending on how well the agent has executed tasks. These representations of capabilities and task requirements are advantageous because they make it possible to compare, for the same task, how much different agents can be trusted.
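To make this concrete, below is a minimal illustrative sketch in Python of that idea: an agent's capabilities (for a human collaborator, dimensions such as ability, integrity, and benevolence) are compared against a task's requirement levels to produce a trust score, and the estimates are nudged after each observed outcome. The dimension names, scoring rule, and update rule are assumptions made for illustration, not the researchers' implementation.

```python
import numpy as np

class TrustEstimator:
    """Illustrative sketch of requirement-based trust, not the authors' code.

    An agent's capabilities are held as a vector of 0-1 estimates (for a
    human collaborator, dimensions such as ability, integrity and
    benevolence; for a robot, only ability-like dimensions). Trust in the
    agent for a task is how far those estimates sit above the task's
    requirement levels, and the estimates are nudged after each outcome.
    """

    def __init__(self, n_dims, learning_rate=0.2):
        self.capabilities = np.full(n_dims, 0.5)  # start at a neutral 0.5
        self.learning_rate = learning_rate

    def trust(self, task_requirements):
        # Average margin of capability over requirement, mapped to [0, 1].
        margin = self.capabilities - np.asarray(task_requirements, dtype=float)
        return float(np.clip(0.5 + margin.mean(), 0.0, 1.0))

    def update(self, task_requirements, success):
        # Assumed toy update rule: success pulls estimates up toward the
        # task's requirement levels, failure pushes them down below them.
        req = np.asarray(task_requirements, dtype=float)
        target = req if success else req - 0.2
        step = self.learning_rate * (target - self.capabilities)
        step = np.maximum(step, 0.0) if success else np.minimum(step, 0.0)
        self.capabilities = np.clip(self.capabilities + step, 0.0, 1.0)


# Hypothetical example: a robot's trust in a human co-worker along
# [ability, integrity, benevolence], for a task with those requirement levels.
robot_trust_in_human = TrustEstimator(n_dims=3)
inspection_task = [0.6, 0.4, 0.3]
print(robot_trust_in_human.trust(inspection_task))          # initial estimate
robot_trust_in_human.update(inspection_task, success=True)
print(robot_trust_in_human.trust(inspection_task))          # slightly higher after a success
```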

In contrast with earlier models of how much agents can be trusted, the model developed by the team applies to both humans and robots. When Azevedo-Sa and his colleagues evaluated it, they found that it predicted trust far more reliably than existing models.

“Previous approaches tried to predict trust transfer by assessing how similar tasks were, based on their verbal description”, says Azevedo-Sa in this context. Representing tasks in terms of their requirements instead avoids the errors that arise from verbal descriptions.
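As a rough illustration of that point, the toy function below transfers trust from a known task to a new one by comparing their requirement vectors rather than their verbal descriptions. The transfer rule and the requirement dimensions are assumptions for the sake of the example, not the paper's formula.

```python
import numpy as np

def transfer_trust(known_trust, known_req, new_req):
    """Toy requirement-based trust transfer (an assumed rule, not the paper's):
    trust earned on a known task carries over to a new task, reduced on every
    dimension where the new task demands more than the known one did."""
    known_req = np.asarray(known_req, dtype=float)
    new_req = np.asarray(new_req, dtype=float)
    extra_demand = np.maximum(new_req - known_req, 0.0)
    return float(np.clip(known_trust - extra_demand.sum(), 0.0, 1.0))

# Hypothetical requirement dimensions: [navigation, manipulation precision].
# "Fetch a tool" was done well (trust 0.9); "fetch a fragile part" needs far
# more precision, so the transferred trust drops.
print(transfer_trust(0.9, known_req=[0.5, 0.3], new_req=[0.5, 0.7]))  # -> 0.5
```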

In the future, the new bi-directional model could be used to enhance human-robot collaboration in a variety of settings. It could, for example, help allocate tasks more reliably within teams of humans and robots.

Finally, Azevedo-Sa added: “If a robot and a human are working together executing the same tasks, the agents can probably also negotiate which tasks should be assigned to each of them, but their opinions will depend on their levels of trust in each other. And so we want to know how we can build upon our trust model to allocate tasks among humans and robots”.
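As a rough sketch of how such a model might feed into task allocation, the toy function below assigns each shared task to whichever agent the other side trusts more to handle it. The decision rule and the example trust scores are purely illustrative assumptions, not the researchers' method.

```python
def allocate_tasks(tasks, human_trust_in_robot, robot_trust_in_human):
    """Toy allocation rule built on the article's idea, not the authors' method:
    give each shared task to whichever agent the other side trusts more."""
    assignment = {}
    for task in tasks:
        trust_in_robot = human_trust_in_robot[task]  # human's trust that the robot can do it
        trust_in_human = robot_trust_in_human[task]  # robot's trust that the human will do it
        assignment[task] = "robot" if trust_in_robot >= trust_in_human else "human"
    return assignment


# Hypothetical trust scores for two shared tasks.
print(allocate_tasks(
    ["sort parts", "inspect weld"],
    human_trust_in_robot={"sort parts": 0.9, "inspect weld": 0.4},
    robot_trust_in_human={"sort parts": 0.6, "inspect weld": 0.8},
))
# -> {'sort parts': 'robot', 'inspect weld': 'human'}
```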