Introducing the AI Accountability Review
The AI Accountability Review translates research into practical guidance, bridging the gap between knowledge and practice for AI policymakers, practitioners, and fellow researchers.
Research on AI Accountability has proliferated over the past decade. Thousands of articles on Google Scholar mention “AI Accountability” or “Algorithmic Accountability,” and the pace is accelerating: the 2025 edition of the ACM Conference on Fairness, Accountability, and Transparency (FAccT) published more than 200 papers selected from more than 800 submissions. Beyond the flagship conferences, research is scattered across a range of disciplinary and interdisciplinary technical, social science, and legal journals. Yet as the research base builds, an important question looms: what does it all mean for what we should actually do to govern AI in society?
Inspired by the idea of a “living” literature review, the AI Accountability Review (AIAR) aims to address this question by translating research into up-to-date guidance on what to do about the problem(s) of AI Accountability. I hope to help bridge the gap between the knowledge produced on the topic and the practice of governance confronted by both public and private sector policymakers. The end goal is to support evidence-based policy and well-informed critical interpretations of policy proposals, grounded in the latest knowledge. Researchers can also benefit, using the review to understand and map the landscape, identify where systematic reviews or additional research are needed, and gain exposure to areas of the interdisciplinary literature outside their usual focus.
AI Accountability is a multi-faceted issue with many related problems and proposed solutions. A few of the topics I’m looking forward to writing about include: defining AI Accountability as a problem, assigning responsibility in distributed systems, the differences between prospective and retrospective accountability, transparency and explainability, auditing and red teaming, the role of norms in setting expectations for AI behavior, risk and impact assessment, responsibilities throughout the AI supply chain, how levels of agency relate to accountability, the role of different accountability forums, and more. There’s a lot of terrain to cover!
As a living review, the AIAR will track and synthesize the latest research, but I’ll also go back to some of the foundational texts on the topic to examine the deep context of how technologists, ethicists, and others have grappled with accountability over the years. As I write more, learn more, and revise posts, I’ll also maintain a continuously updated, academically citable version of the review. For more details on the project, see the about page.
If you want to follow along, please subscribe to the AI Accountability Review.