Presentation
Compromise in Human-Robot Collaboration for Threat Assessment
Description
Advancements in AI are rapidly increasing the capabilities of autonomous robots. However, increasing robot intelligence can generate conflict between human and AI-powered collaborators as machines take over functions previously reserved for humans. Humans and machines may arrive at differing reasoned evaluations because of their differing analytic competencies and data sources, especially in dynamic, unpredictable environments. There is a need to understand the design and person factors that enable humans and machines to resolve “reasonable disagreements” constructively.
The current study supports a project funded by the AFOSR Trust and Influence Program to investigate factors that influence compromise when human and robot disagree, in the context of threat assessment. We addressed scenarios in which a human-robot dyad was tasked with determining whether each of a series of urban scenes was threatening or safe, based on initial mission briefs, visual inspection of the scene, and analysis of information from robot sensors. Study goals included:
• Investigating scene characteristics that influenced trust in the robot.
• Determining the optimal form of transparency for promoting compromise and optimizing trust.
• Identifying individual difference measures that predicted compromise and trust in the robot.
Previous work (Matthews et al., 2019) suggested that trust in a security robot partner reflected whether the operator’s mental model defined the robot as an advanced tool or a humanlike collaborator. Figure 1 shows the conceptual model in which the mental model influences operator willingness to engage in dialogue with the robot, which in turn affects trust and compromise. Specific research questions were as follows:
How does trust vary across differing threat scenarios? We manipulated (1) whether the human would initially judge the scenario as safe or threatening, (2) whether the scenario would promote conflicting evaluations by the human and robot, and (3) whether the robot was willing to compromise by changing its threat evaluation in cases of disagreement. It was hypothesized that disagreement and lack of compromise by the robot would reduce trust, in both safe and threatening scenarios.
How do different types of transparency influence trust and compromise? We hypothesized that trust and willingness to compromise would be elevated by transparency information that created a sense of the robot being a humanlike mentalizing agent. To this end, we manipulated whether the participant could engage in dialogue with the robot to understand the reasoning supporting its evaluations. We also included a condition in which the robot’s dialogue was accompanied by inner speech that demonstrated its reasoning in real time. Previous research (Pipitone & Chella, 2021) has shown that robot inner speech promotes attributions of a Theory of Mind (ToM) to the robot, along with elevated trust. We hypothesized that dialogue and inner speech would elevate trust and compromise, relative to control conditions (one-time transparency report, expression of benevolence).
What individual difference factors predict trust and compromise? People vary in their tendencies to treat machines as humanlike. Scales for trust in intelligent machines (Lin et al., 2022; Matthews et al., 2019) were administered to test whether dispositional trust influenced compromise.
Event Type
Lecture
Time
Wednesday, September 11th, 8:30am - 8:59am MST
Location
Flagstaff
Human AI Robot Teaming (AI)