A team led by USF’s John Licato studied how AI systems respond when they are assigned a belief and then encounter opposing views. The team assigned each agent a specific belief and a confidence level. For example, one agent might argue with high confidence that solar energy is the most reliable renewable power source, while another argues with lower confidence that wind energy is more reliable.
The team observed that agents assigned lower confidence levels were more open to revising their beliefs, while those with higher confidence tended to be more persuasive. When several agents disagreed with a single agent, that agent was more likely to change its position. These human-like behaviors emerged without retraining. “As AI systems are increasingly used to support planning, analysis and decision-making, understanding how beliefs form and change becomes critical,” Licato said. “If we want AI systems to reason together reliably, we need to think beyond surface-level prompts.”
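The article does not include the team's code, but the setup it describes, assigning each agent a belief and a confidence level through its prompt and then letting the agents exchange arguments, can be sketched roughly as below. Everything here is illustrative: the `Agent` class, the `query_llm` function, the prompt wording, and the number of rounds are all assumptions, and `query_llm` is a stub standing in for a call to a real language model.

```python
# Minimal sketch of prompt-based belief assignment in a two-agent debate.
# Not the researchers' code; names and prompts are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    belief: str
    confidence: float  # 0.0-1.0, stated explicitly in the system prompt
    transcript: list = field(default_factory=list)

    def system_prompt(self) -> str:
        # The belief and confidence level are assigned purely via prompting,
        # with no retraining of the underlying model.
        return (
            f"You believe: {self.belief} "
            f"Your confidence in this belief is {self.confidence:.0%}. "
            "Argue for your position, but revise it if the counterarguments "
            "you hear outweigh your confidence."
        )

def query_llm(system_prompt: str, opposing_arguments: list) -> str:
    # Placeholder for a real model API call; returns a canned reply so the
    # sketch runs end to end.
    return f"[model reply conditioned on: {system_prompt[:40]}...]"

def debate_round(a: Agent, b: Agent) -> None:
    # Each agent responds in character to the other's arguments so far.
    for speaker, opponent in ((a, b), (b, a)):
        reply = query_llm(speaker.system_prompt(), opponent.transcript)
        speaker.transcript.append(reply)

solar = Agent("solar", "Solar energy is the most reliable renewable source.", 0.9)
wind = Agent("wind", "Wind energy is the most reliable renewable source.", 0.4)

for _ in range(3):  # a few rounds of argument exchange
    debate_round(solar, wind)
```

Under a setup like this, the behaviors the team reports, low-confidence agents revising their positions more readily and majority pressure swaying a lone dissenter, would emerge from the stated confidence in the prompt alone, consistent with the article's point that no retraining was involved.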