Prospective Accountability
A focus on prevention rather than blame offers a prudent alternative to classic conceptions of accountability
Discourse on AI accountability often focuses on accountability as a retrospective activity of identifying and assigning blame. It’s reactive and backward-looking: How can we find someone to blame for something that already happened and have them explain it? It traces cause and assigns fault after a failure, whether that’s a biased hiring algorithm or an autonomous vehicle crash.
Retrospective accountability is certainly important for achieving justice for impacted individuals. If someone was harmed, we ideally want to be able to identify who is to blame and have them explain what happened and face the consequences. But identifying causality in complex networks is no easy task, especially given information gaps around AI system behavior stemming from limited access or system opacity. It may not always even be possible.
In contrast, prospective accountability is proactive. Instead of asking “Who caused this?” after the fact, we ask “Who is responsible for preventing this?” before deployment. It’s about assigning forward-looking responsibilities to stakeholders so that they take steps to avoid undesirable outcomes [1]. Informed by past events, it looks ahead to set clear expectations for behavior and performance, so that actors have explicit duties to prevent or mitigate anticipated risks and harms. The quality of an actor’s plan for how to act in the future is what that actor might have to answer for and explain in an accountability forum. We may also expect actors to explain how their plans are adaptive and responsive, so that those plans improve as new methods and technologies for preventing harms become available.
A prospective approach to AI accountability would identify actors in the system and assign them specific responsibilities for avoiding bad outcomes and ensuring good ones. A central challenge is then normative: What are the bad outcomes we want to avoid? What are the prescribed plans for preventing those bad outcomes? Are these forward-looking responsibility assignments fair? To address these questions we need reliable methods for anticipatory risk and impact assessment [2], as well as robust stakeholder maps detailing which actors are best positioned, and have the capacity and resources, to reliably see to particular outcomes.
The distinction between retrospective and prospective accountability resembles that between outcome and process accountability [3]. We can hold someone accountable for an outcome (i.e., retrospective blame for something that happened), or we can hold them accountable to a standard for how they acted to bring about an outcome (i.e., prospective accountability for their plan to achieve it). If ChatGPT produces unsafe outputs that negatively impact users’ health, we can and should hold OpenAI accountable for that bad output, but we can also hold them accountable for the content moderation processes they implement to try to prevent it. Failure to implement an accepted process to protect users might indicate negligence. A harm event could trigger a call for an explanation of that process, which might then feed into prospection on how to improve the plan in the future.
For AI policymakers and analysts, shifting from retrospective to prospective accountability means embedding forward-looking responsibilities (not necessarily only for AI system developers) into enforceable governance frameworks. But both types of accountability rely on addressing many of the same underlying questions, such as: How do we set the standards for preventative plans? And how do we monitor whether those plans are actually implemented? If we move away from identifying and assigning blame as the goal of policy, perhaps because doing so is sometimes impossible given sociotechnical complexity, butts up against jurisdictional issues, or triggers fears of user surveillance, prospective accountability offers a useful alternative.
References
[1] Johnson, D.G. “Software Agents, Anticipatory Ethics, and Accountability.” In Marchant, G.E. et al. (Eds.), The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight. The International Library of Ethics, Law and Technology 7. Springer, 2011. doi:10.1007/978-94-007-1356-7_5
[2] Kieslich, K., Diakopoulos, N., and Helberger, N. “Anticipating impacts: using large-scale scenario-writing to explore diverse implications of generative AI in the news environment.” AI and Ethics, 1–23 (2024). doi:10.1007/s43681-024-00497-4
[3] Patil, S., Vieider, F., and Tetlock, P. “Process versus Outcome Accountability.” In Bovens, M., Goodin, R.E., and Schillemans, T. (Eds.), The Oxford Handbook of Public Accountability. Oxford University Press, 2014.