Keynote Speakers
Louise Dennis
University of Manchester
Agents, Ethics and Explanations
Abstract: Moral uncertainty is the problem of deciding upon the correct way to determine an ethical course of action. Many well-known ethical problems (e.g., the trolley problems) are decided differently depending upon the ethical theory or preference order over values that is used. In many situations there is uncertainty over which is the best decision mechanism, and an interest in accepting any outcome that is consistent with some appropriate theory. This has led to interest in the link between ethical and explainable reasoning. A decision is not ethical because it complies with some benchmark set, but is instead considered ethical because it can be justified in relation to some ethical theory. In this talk, I will survey some of the work I have done on both ethical reasoning in agents and agent explainability.
Munindar P. Singh
North Carolina State University
Norms and Ethics in Large Language Models
[Joint work with Jiaqing Yuan, Pradeep Murukannaiah, Fardin Saad]
Abstract: Generative AI in the shape of Large Language Models (LLMs) has shown a surprising, though surprisingly brittle, ability to deal with high-level concepts.
This talk will summarize some of our recent work on the relationship between LLMs, norms, and ethics.
In one study, we evaluated some leading LLMs on their ability to deal with “right versus right” dilemmas, including which moral postures they seem to embody and how readily they adapt to a user’s prompt.
In another study, we considered how LLMs can engage in cooperative dialogue in a task-oriented setting.
We found that LLMs are competitive with humans in theory of mind (ToM) reasoning; proficiency in ToM reasoning is a prerequisite for an agent to be socially intelligent, including identifying norms and adopting or deviating from them.
We close with a discussion of the prospects for developing agents based on generative AI.