UNIVERSITY PARK, Pa. — A $299,992 National Science Foundation grant to Penn State, together with a $299,176 grant to Iowa State, could lead to an interactive, computer-aided, decision-support tool that can help groups of people make better choices.
Computer programs for individual decision-making are already on the market, but sorting out the preferences and priorities of even a small group of people — multi-stakeholder decision-making — adds layers of complexity, according to Vasant Honavar, professor of Information Sciences and Technology; Huck Chair in Biomedical Data Sciences and Artificial Intelligence; director of the Center for Artificial Intelligence Foundations and Scientific Applications; and associate director of the Institute for Computational and Data Sciences, who is the Penn State lead investigator for the project.
Honavar and his collaborators aim to develop computer languages and tools for reasoning with the complex qualitative preferences of multiple stakeholders and for identifying the most preferred alternatives. They note that the system could be useful across a broad range of applications, including product design, public policy, health care, and information security and privacy.
“Classic work on preferences has focused on settings where preferences are quantitative. However, in many applications, people find it more natural to express their preferences in qualitative terms. Our work is aimed at developing computational tools for working with qualitative preferences,” said Honavar. “Consider, for example, the task of deciding on a care plan for a critically ill patient. The stakeholders in this case may include the patient, concerned with their health outcome; the physician, with deep knowledge of the benefits and drawbacks of alternative care plans; family members with an interest in the patient’s well-being; and perhaps an insurance provider seeking to minimize the cost of care. The challenge is to identify viable alternatives and justify them in the context of the stakeholders’ preferences.”
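To make the idea of qualitative preferences concrete, the following sketch (a hypothetical illustration, not the project's formalism or code) treats statements of the form "A is preferred to B" as edges in a directed graph and checks whether one alternative dominates another by following chains of such statements.

    # Hypothetical sketch only; not the project's formalism or code.
    # Qualitative statements of the form "A is preferred to B" are stored as edges
    # in a directed graph, and one alternative "dominates" another if a chain of
    # such statements leads from the first to the second.
    from collections import defaultdict, deque

    class QualitativePreferences:
        def __init__(self):
            self.better_than = defaultdict(set)  # edge a -> b means "a is preferred to b"

        def prefer(self, better, worse):
            self.better_than[better].add(worse)

        def dominates(self, a, b):
            """True if a is (transitively) preferred to b."""
            if a == b:
                return False
            seen, queue = {a}, deque([a])
            while queue:
                current = queue.popleft()
                for worse in self.better_than[current]:
                    if worse == b:
                        return True
                    if worse not in seen:
                        seen.add(worse)
                        queue.append(worse)
            return False

    # A physician's (hypothetical) qualitative preferences over care plans.
    physician = QualitativePreferences()
    physician.prefer("surgery", "medication")
    physician.prefer("medication", "watchful waiting")
    print(physician.dominates("surgery", "watchful waiting"))  # True

Note that no numeric scores are involved: the physician only states which plan is preferred to which, and the chain of statements does the rest.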
According to the researchers, the tool will be able to represent and reason with the complex, often conflicting preferences of multiple stakeholders, including in settings where organizational or social structures, or legal and regulatory environments, give precedence to the preferences of some stakeholders over those of others. It will also be able to explain why some alternatives are preferred to others and help stakeholders understand the implications of their preferences, refine them collaboratively where needed, and reach a consensus on the most preferred alternative.
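One simple way such precedence could be modeled, again purely as an illustrative sketch rather than the project's chosen method, is lexicographic aggregation: alternatives are ordered by the highest-priority stakeholder's ranking, with lower-priority stakeholders consulted only to break ties.

    # Hypothetical sketch of giving precedence to some stakeholders over others;
    # one of many possible aggregation schemes, not necessarily the project's.
    def rank_alternatives(alternatives, stakeholder_rankings):
        """stakeholder_rankings: list of dicts, highest priority first, each
        mapping every alternative to that stakeholder's rank (1 = best)."""
        return sorted(alternatives,
                      key=lambda alt: tuple(r[alt] for r in stakeholder_rankings))

    plans = ["surgery", "medication", "watchful waiting"]
    patient   = {"surgery": 2, "medication": 1, "watchful waiting": 3}
    physician = {"surgery": 1, "medication": 2, "watchful waiting": 3}
    insurer   = {"surgery": 3, "medication": 2, "watchful waiting": 1}

    # Suppose regulations require the patient's preferences to take precedence.
    print(rank_alternatives(plans, [patient, physician, insurer]))
    # ['medication', 'surgery', 'watchful waiting']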
“We see it as an interactive sort of decision-making tool that helps a group of stakeholders wrestle with the implications of their preferences,” said Honavar. “It will be able to detect when people have preferences that are just inconsistent or even irrational. Pointing out such inconsistencies, as well as showing the interplay between one’s own preferences and those of others, can help individuals not only understand the rationale for particular choices but also, when possible, revise individual preferences to achieve outcomes that serve the social good.
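Detecting the kind of inconsistency Honavar describes can be illustrated with a small sketch (hypothetical, not the project's implementation): a set of "A is preferred to B" statements is contradictory when the statements chain into a cycle, which a simple search can surface.

    # Hypothetical sketch, not the project's implementation: a set of
    # "A is preferred to B" statements is inconsistent when they chain into a
    # cycle; a simple depth-first search can surface the offending chain.
    def find_inconsistency(preferences):
        """preferences: iterable of (better, worse) pairs.
        Returns a cyclic chain of alternatives if the statements contradict
        one another, otherwise None."""
        graph = {}
        for better, worse in preferences:
            graph.setdefault(better, []).append(worse)

        def search(node, path):
            if node in path:  # the chain loops back on itself
                return path[path.index(node):] + [node]
            for nxt in graph.get(node, []):
                cycle = search(nxt, path + [node])
                if cycle:
                    return cycle
            return None

        for start in graph:
            cycle = search(start, [])
            if cycle:
                return cycle
        return None

    stated = [("surgery", "medication"),
              ("medication", "watchful waiting"),
              ("watchful waiting", "surgery")]
    print(find_inconsistency(stated))
    # ['surgery', 'medication', 'watchful waiting', 'surgery']

Surfacing the offending chain, rather than simply rejecting the input, is what lets the tool show stakeholders exactly where their stated preferences collide.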
“It is important to note that the tool is not intended to automate decisions, but rather, to assist and empower diverse stakeholders to collaboratively arrive at alternatives that optimally accommodate their expressed preferences,” Honavar added. “A key benefit of such a tool is increased transparency and hence accountability of decision making in settings that impact multiple stakeholders.”
The project brings together a team of researchers with expertise in artificial intelligence, knowledge representation, formal methods, and preference reasoning at Penn State, Iowa State and Lafayette College.
The open-source implementations of the multi-stakeholder decision-support tools resulting from the project will significantly lower the barrier to applying AI to multi-stakeholder decision-making across a range of domains. The project will also provide research-based training opportunities in AI for graduate and undergraduate students.