Informing AI Accountability with Public Perceptions
By studying perceptions of risk, benefit, and moral alignment, we can design policies that reflect collective values and assign responsibility in a legitimate way.
Defining the norms and standards of behavior for AI systems is a core facet of the AI Accountability Problem. One important way to understand expectations for AI system use and behavior is to ask the public. This is especially critical for calls for accountability in social or media forums, since those forums are most exposed to a plurality of opinions about the appropriateness or acceptability of behavior. In a democratic system we should also expect that standards for legal, political, and administrative forums be institutionalized downstream of public perspectives. Public perception of AI acceptance is therefore a valuable input for policymakers, helping to prioritize areas for intervention and to shape the formalization of expectations.
A growing number of surveys consider public perception and acceptance of AI across different use cases, such as health care, surveillance, and automation (Eom et al., 2024), personal health and labor replacement (Mun et al., 2025), tax fraud detection (Kieslich et al., 2022), and media, health, and justice domains (Araujo et al., 2020), among others. One study showed that overall judgments of the value of AI across a wide range of use cases are strongly shaped by perceived benefits, with perceived risks also playing a significant role (Brauner et al., 2025).
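To make the shape of such a finding concrete, the sketch below regresses an overall value judgment on perceived benefit and perceived risk ratings. The data and variable names are entirely invented for illustration and are not taken from Brauner et al. (2025); a larger coefficient on benefit than on risk would correspond to the pattern described above.

```python
# Minimal sketch: regress an overall "value" judgment on perceived benefit
# and perceived risk. All ratings below are hypothetical (1-5 scale), not
# data from any cited study.

import numpy as np

benefit = np.array([4.2, 3.1, 2.5, 4.8, 3.6, 2.0])
risk    = np.array([2.1, 3.5, 4.0, 1.8, 2.9, 4.4])
value   = np.array([4.0, 2.8, 2.0, 4.6, 3.4, 1.7])

# Design matrix with an intercept column
X = np.column_stack([np.ones_like(benefit), benefit, risk])

# Ordinary least squares: value ~ b0 + b1*benefit + b2*risk
coeffs, *_ = np.linalg.lstsq(X, value, rcond=None)
print(dict(zip(["intercept", "benefit", "risk"], np.round(coeffs, 2))))
```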
A recurring result in many of these survey studies is that acceptance of AI varies across people of different backgrounds. Factors such as the knowledge, literacy, education, or even political orientation of respondents, as well as their age and gender, can play a role in the perception of risk, benefit, and acceptance of AI. For instance, younger respondents often view AI as less risky and more beneficial than older respondents (Brauner et al., 2025). A critical factor in individual perception is the level of AI knowledge a person has (and their confidence in that knowledge), where higher knowledge can lead to lower perceived risk, i.e., “risk blindness” (Said et al., 2023). Because of these differences, policy should ideally be informed by representative population samples, or perhaps by samples weighted toward those who might bear the greatest risk.
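As a minimal illustration of what such weighting could look like, the sketch below applies simple post-stratification weights to invented acceptance ratings. The age groups, population shares, and scores are hypothetical and chosen only to show the mechanics.

```python
# Minimal sketch of post-stratification weighting for survey responses.
# Strata, population shares, and acceptance scores (1-5) are all invented.

from collections import Counter

# (age_group, acceptance_score) for each hypothetical respondent
responses = [
    ("18-34", 4), ("18-34", 5), ("18-34", 4), ("18-34", 3),
    ("35-54", 3), ("35-54", 4), ("55+", 2), ("55+", 3),
]

# Assumed population shares for each stratum (e.g., from census data)
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Observed sample shares
counts = Counter(group for group, _ in responses)
n = len(responses)
sample_share = {g: c / n for g, c in counts.items()}

# Post-stratification weight: population share / sample share
weights = {g: population_share[g] / sample_share[g] for g in counts}

# Weighted mean acceptance, so over-sampled groups count proportionally less
weighted_sum = sum(weights[g] * score for g, score in responses)
total_weight = sum(weights[g] for g, _ in responses)
print(round(weighted_sum / total_weight, 2))
```

The same idea extends to weighting by exposure to harm rather than by demographics, e.g., giving greater weight to strata that would bear more of the risk of a given use case.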
Kieslich et al. (2022) take the perspective that we also need to understand public perception of the principles underlying AI systems. This is in effect a measure of whether the system is “aligned” with the perspectives and values of the person evaluating it. They measure perceptions of principles such as explainability, fairness, security, accountability, accuracy, privacy, and limited machine autonomy for a scenario involving the use of AI in tax fraud detection. For their representative sample of respondents from Germany, they find that accountability was perceived as the most important principle. This underscores the idea that accountability is a critical property of AI systems that the public cares about.
Mun et al. (2025) pair a quantitative survey of various AI use cases with open-ended follow-up questions in which respondents elaborate on why they think a use case should or shouldn’t be developed, and what would need to change for them to switch their opinion. As with Brauner et al. (2025), they find that cost-benefit reasoning dominates, but that in some cases virtue-based reasoning is somewhat more prevalent, such as for the Elementary School Teacher or Digital Medical Advice scenarios. They further analyze these rationales through the lens of Moral Foundations Theory and find that Care (i.e., aversion to the pain of others, and feelings of empathy and compassion) was the most prevalent reason mentioned overall, though Fairness dominated some use cases (e.g., Lawyer). This finding, that a moral foundation such as care shapes acceptance, aligns with one of the surveys reported by Eom et al. (2024), in which 64% of respondents thought it was a bad idea to have “robotic nurses for bedridden patients that can diagnose situations and decide when to administer medicine.” In other words, use cases where care is an underlying moral proposition seem to make people less accepting of the use of AI. In terms of accountability, then, we need to consider not only perceived risk, but also whether some underlying societal value is being violated.
One of the gaps identified by Araujo et al. (2020) is that public perception of AI acceptance in a use case doesn’t necessarily tell us whether people would personally accept a specific AI decision, or reject it and instead call for accountability. Important work remains to be done to understand this ego-centric, retrospective case. For prospective accountability, on the other hand, research has begun to explore public perceptions of which stakeholders should be responsible for taking action to prevent negative outcomes (Barnett et al., 2025). This research uses written scenarios depicting harm from AI in the media ecosystem as the basis for a survey gathering public input on which stakeholders are in a position to take action to prevent the harm. Participants assigned responsibility to any of 12 different stakeholders that emerged from the data: government, tech companies, news publishers, schools, social media platforms, independent third parties, local communities, public health officials, media companies, NGOs, employers, and unions. Specific actions that these stakeholders could take were then rated in terms of whether they should be taken and whether they should be prioritized. The result is rich data that could inform policy on how to assign responsibility for prevention, though ideally this process would be re-run with a representative sample.
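To illustrate the shape of such data, the sketch below aggregates hypothetical stakeholder-action ratings into an endorsement rate and a mean priority per pair. The stakeholders, actions, and ratings are invented for illustration and do not reproduce Barnett et al.'s actual instrument or analysis.

```python
# Hypothetical illustration of aggregating stakeholder-action ratings.
# Each record: (stakeholder, action, should_take 0/1, priority 1-5); all invented.

from collections import defaultdict

ratings = [
    ("government", "require provenance labels", 1, 5),
    ("government", "require provenance labels", 1, 4),
    ("tech companies", "watermark AI outputs", 1, 3),
    ("tech companies", "watermark AI outputs", 0, 2),
    ("schools", "teach media literacy", 1, 4),
]

# Aggregate counts per stakeholder-action pair
agg = defaultdict(lambda: {"n": 0, "should": 0, "priority": 0})
for stakeholder, action, should, priority in ratings:
    key = (stakeholder, action)
    agg[key]["n"] += 1
    agg[key]["should"] += should
    agg[key]["priority"] += priority

# Report endorsement rate and mean priority for each pair
for (stakeholder, action), a in agg.items():
    endorsement = a["should"] / a["n"]
    mean_priority = a["priority"] / a["n"]
    print(f"{stakeholder} -> {action}: endorsement={endorsement:.2f}, priority={mean_priority:.1f}")
```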
Public opinion plays a critical role in shaping legitimate norms and standards for AI behavior. Policymakers should recognize that expectations of AI systems, including what is considered “acceptable,” are rooted in social perceptions. Surveys show that these perceptions vary with demographic and other individual factors such as knowledge, and that they vary across use case contexts. Policy should therefore be grounded in representative and inclusive data tailored to the specific use case contexts to be governed. Although cost-benefit reasoning dominates rationales for AI acceptance, value-based reasoning also needs to be considered. Finally, much open research remains in drilling further into perceptions of who is responsible for what across a variety of situations.
References
Araujo T, Helberger N, Kruikemeier S, et al. (2020) In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & SOCIETY 35(3): 611–623.
Barnett J, Kieslich K, Helberger N, et al. (2025) Envisioning Stakeholder-Action Pairs to Mitigate Negative Impacts of AI: A Participatory Approach to Inform Policy Making. Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency: 1424–1449.
Brauner P, Glawe F, Liehner GL, et al. (2025) Mapping public perception of artificial intelligence: Expectations, risk–benefit tradeoffs, and value as determinants for societal acceptance. Technological Forecasting and Social Change 220: 124304.
Eom D, Newman T, Brossard D, et al. (2024) Societal guardrails for AI? Perspectives on what we know about public opinion on artificial intelligence. Science and Public Policy 51(5): 1004–1013.
Kieslich K, Keller B and Starke C (2022) Artificial intelligence ethics by design. Evaluating public perception on the importance of ethical design principles of artificial intelligence. Big Data & Society 9(1): 20539517221092956.
Mun J, Yeong WBA, Deng WH, et al. (2025) Why (Not) Use AI? Analyzing People’s Reasoning and Conditions for AI Acceptability. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 8(2): 1771–1784.
Said N, et al. (2023) An Artificial Intelligence Perspective: How Knowledge and Confidence Shape Risk and Benefit Perception. Computers in Human Behavior 149: 107855.
