About The AI Accountability Review

Inspired by the idea of a “living” literature review [1], the goal of the AI Accountability Review (AIAR) is to translate research into actionable, up-to-date guidance on the problems of, and potential solutions for, AI Accountability, bridging the gap between knowledge and practice for AI policymakers, practitioners, and fellow researchers.

I’m Nick Diakopoulos, a Professor at Northwestern University who studies AI, Ethics, and Society, often at the intersection with online news media. I’ve been studying AI Accountability (which I originally referred to as “algorithmic” accountability) since 2013. In the last decade, research on the topic has exploded, with many thousands of academic articles using those terms. This project steps back from the mass of relevant studies to synthesize them in a deep and ongoing way that can inform policy initiatives, both for public policy and for internal organizational policies around AI.

This newsletter will include both posts about individual articles and posts about thematic clusters of articles. Individual article posts will describe a new or foundational paper with relevance to AI Accountability, discussing the implications of the research for policy problems or solutions. Thematic posts will draw on several papers that address a specific policy problem or solution, synthesizing across the research. As per the “living” premise, thematic posts will be updated and revised over time as new research is found and integrated, including on the basis of relevant individual posts. As more thematic posts are established, I will also link to a citable version of the review, likely on arXiv, which will be similarly updated.

AI Accountability is a sociotechnical issue and so demands a multi-faceted treatment that synthesizes across the technical, social science, legal, and ethics literatures. I’ll follow the research where it leads and will feature relevant, high-quality papers wherever they’re found. At a minimum I plan to track research from leading conferences and journals including AIES, FAccT, NeurIPS, AI and Ethics, AI and Society, CHI, and Nature Machine Intelligence. I’ll also keep an eye on an extended set of sources including Data & Policy, Ethics and Information Technology, Minds and Machines, Science and Engineering Ethics, the Journal of Responsible Technology, ICLR, ICML, AAAI, arXiv, and other “gray” literature I come across.

Given my other interests in AI-driven media, I’ll also be exploring ways to leverage AI to support my effort here, particularly to help with monitoring and finding new research as well as assessing the relevance of research for the intended audience. From time to time I may write posts on AI-supported methodologies and tools, or reflect on how the experience of writing this review could inform the design of new tools that might enable others to write analogous reviews in their own fields.

References

  1. Elliott, J. et al. Decision makers need constantly updated evidence synthesis. Nature 600, 383–385 (2021). https://www.nature.com/articles/d41586-021-03690-1

Disclosure

The AI Accountability Review is supported by a grant from Open Philanthropy.
