S&T researcher examines whether AIs have a mind of their own
- Written by: Tyler O'Neal, Staff Editor
Most people encounter artificial intelligence (AI) every day in their personal and professional lives. Without giving it a second thought, people ask Alexa to add soda to a shopping list, drive with Google Maps and add filters to their Snapchat – all examples of AI use. But a Missouri University of Science and Technology researcher is examining what people consider evidence that an AI has a “mind” and when they perceive an AI’s actions as morally wrong.
Dr. Daniel Shank, an assistant professor of psychological science at Missouri S&T, is building on a theory that when people perceive an entity to have a mind, that perception determines the moral rights and responsibilities they attribute to it. His research aims to show when a person perceives AI actions as morally wrong, and it could help reduce rejection of smart devices and improve their design.
“I want to understand the social interactions in which people perceive a machine to have mind and the situations they perceive it to be a moral agent or victim,” says Shank.
Shank’s behavioral science work applies the theory to advanced machines such as AI agents and robots.
“The times when we do perceive a mind behind the machine tell us something about the technologies, their capacities and their behaviors, but they ultimately reveal more about us as humans,” Shank explains. “In these encounters, we emotionally process the gap between nonhuman technologies and having a mind, essentially feeling our way to machine minds.”
Shank is in the middle of a three-year project, funded by the Army Research Office (ARO), to better understand people’s perception of AI. ARO is an element of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory.
In his first year of research, he collected qualitative descriptions of personal interactions people had with AIs that either involved a moral wrong or involved the person perceiving the AI to have “a lot of mind.” Shank’s research found that 31 percent of respondents reported having personal information exposed and 20 percent reported exposure to undesirable content – both of which, Shank argues, are reported because they occur so frequently on personal and home devices.
“Dr. Shank’s work is generating new understandings of human-agent teaming by systematically integrating longstanding social psychological theories of cognition and emotion with research on human-agent interaction,” says Dr. Lisa Troyer, program manager for social and behavioral sciences at the ARO. “His research is already generating scientific insights on the role of moral perceptions of autonomous agents and how those perceptions impact effective human-agent teaming.”
Now in his second year of the research, he is conducting controlled experiments in which the AI’s apparent level of mind is varied and the AI is cast as either the perpetrator or the victim of a moral act. Shank hopes this will allow him to draw more direct comparisons between AIs and humans. So far, his research finds that while some AIs, such as social robots, can assume greater social roles, human acceptance of an AI in those roles enhances both the perception of mind and emotional reactions.
The final phase of his research will use surveys and simulations to understand whether the level of morality people attribute to an AI can be predicted from the impressions they have of it.
“Technologies connected with the web, trained on big data and operating across social networking platforms are now commonplace in our culture,” says Shank. “These technologies, whether they are proper artificial intelligence or not, are routine in people's personal lives, but not every use of these technologies causes us to see them as having a mind.”
The question of whether virtue or vice can be attributed to AI still depends on whether humans are willing to judge machines as possessing moral character. And as research into AI ethics and psychology continues, new subjects such as AI rights and AI morality are being considered.