<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[AI Accountability Review]]></title><description><![CDATA[The AI Accountability Review translates research to help bridge the gap between knowledge and practice for AI policymakers, practitioners, and researchers.]]></description><link>https://www.ai-accountability-review.com</link><image><url>https://substackcdn.com/image/fetch/$s_!mocI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1ac346b-0173-43bc-a79a-09cea34ea61a_288x288.png</url><title>AI Accountability Review</title><link>https://www.ai-accountability-review.com</link></image><generator>Substack</generator><lastBuildDate>Sun, 17 May 2026 04:18:17 GMT</lastBuildDate><atom:link href="https://www.ai-accountability-review.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Nicholas Diakopoulos]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[ndiakopoulos@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[ndiakopoulos@substack.com]]></itunes:email><itunes:name><![CDATA[Nick Diakopoulos]]></itunes:name></itunes:owner><itunes:author><![CDATA[Nick Diakopoulos]]></itunes:author><googleplay:owner><![CDATA[ndiakopoulos@substack.com]]></googleplay:owner><googleplay:email><![CDATA[ndiakopoulos@substack.com]]></googleplay:email><googleplay:author><![CDATA[Nick Diakopoulos]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Designing an AI Whistleblower Office]]></title><description><![CDATA[One of the recurring puzzles for AI governance is how regulators will ever learn about noncompliance inside firms whose behavior is difficult to observe from the outside. A new empirical report published on arXiv by Beri and Baker (2026) argues that a dedicated whistleblower office could be a &#8220;force multiplier&#8221; for AI regulation, and offers a set of concrete design recommendations grounded in a dataset of 30 historical whistleblower case studies spanning 1978&#8211;2020 across 15 industries.]]></description><link>https://www.ai-accountability-review.com/p/designing-an-ai-whistleblower-office</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/designing-an-ai-whistleblower-office</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Mon, 27 Apr 2026 06:01:03 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mocI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1ac346b-0173-43bc-a79a-09cea34ea61a_288x288.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the recurring puzzles for AI governance is how regulators will ever learn about noncompliance inside firms whose behavior is<a href="https://www.ai-accountability-review.com/p/closing-information-gaps-via-ai-transparency"> difficult to observe from the outside</a>. 
A<a href="https://arxiv.org/abs/2603.01245"> new empirical report published on arXiv</a> by Beri and Baker (2026) argues that a dedicated whistleblower office could be a &#8220;force multiplier&#8221; for AI regulation, and offers a set of concrete design recommendations grounded in a dataset of 30 historical whistleblower case studies spanning 1978&#8211;2020 across 15 industries.</p><p>From the 30 cases analyzed the authors report that in about 87% of cases the whistleblowers were motivated, at least in part, by moral considerations, with 27% indicating some kind of financial motivation. At least 90% were insiders at the offending organization and roughly 80% were mid-level employees or executives. But stepping forward was costly: 57&#8211;67% faced retaliation (e.g. harassment or unjust termination), 43&#8211;57% suffered negative career consequences, and 13% received death threats. Only 13% sought anonymity&#8212;although the authors caution that this likely reflects sampling bias toward famous cases. In short, whistleblowers in the dataset tended to be morally motivated insiders who paid a steep personal price.</p><p>Based on these patterns and their own observations the authors develop a design sketch for an AI whistleblower office. They claim it should: (1) financially reward tipsters with a percentage of sanctions, in the spirit of the SEC and CFTC programs, given that this can be a motivator ; (2) prohibit retaliation and offer witness protection (plus S visas for international tipsters); (3) enable anonymous tipping via lawyers or a secure online platform; (4) be adequately staffed and funded for effective &#8220;tip-sifting&#8221;; and (5) invest in messaging to raise awareness for the office and an advisory body to help would-be whistleblowers determine whether they have reasonable cause.</p><p>For AI accountability, this work adds a new dimension to transparency. Mandated disclosures and external audits will always leave gaps and insider reporting is one of the few channels likely to surface willful concealment. The recommendations align with a <a href="https://www.ai-accountability-review.com/p/prospective-accountability">prospective accountability</a> frame: supporting protection, anonymity, and an advice body for potential whistleblowers are forward-looking responsibilities that might make insider reporting a more viable option before harms have occurred. 
A sample of 30 is small, and the cases skew U.S., famous, and successfully-tipped&#8212;but as a starting point for thinking about policy the empirical grounding is valuable.</p><p><strong>Note:</strong> <em>This post was drafted by Claude Opus 4.7 under the prompting, supervision, and further editing by the author.</em></p><p></p>]]></content:encoded></item><item><title><![CDATA[From Explanation to Accountability]]></title><description><![CDATA[A decade of explainable AI research has produced important techniques for understanding AI models, but less clarity on who those explanations are for and what accountability goals they actually serve]]></description><link>https://www.ai-accountability-review.com/p/from-explanation-to-accountability</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/from-explanation-to-accountability</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Mon, 20 Apr 2026 06:01:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mocI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1ac346b-0173-43bc-a79a-09cea34ea61a_288x288.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the event of an incident where an AI system causes harm, the people responsible for that system may be expected to render an explanation to various accountability forums, such as the media, administrative bodies, or in a courtroom. This relationship between an actor and a forum where the actor is obliged to explain their conduct to the forum is a <a href="https://www.ai-accountability-review.com/p/the-problem-of-ai-accountability">key aspect of accountability</a>.</p><p>There is an array of research on the topic of explainable AI (i.e. XAI) extending back for nearly a decade (Lipton, 2018; Mittelstadt et al, 2018; Wachter et al, 2017), with a focus on making the technical components of AI systems more understandable. Explanation in this context has been defined as &#8220;<em>the ability to articulate why a model produced a given output in a way that is accessible to human users</em>&#8221; (Dhar et al, 2025). But recent research (Dhar et al, 2025; Alpsancar et al, 2025) has raised the critique that the literature on XAI hasn&#8217;t always been clear about the goals of AI explanations. Who and what are they for, really? If explanations are meant to support accountability, there is still somewhat of a gap in the literature showing exactly how, particularly if we parse out the different goals of <a href="https://www.ai-accountability-review.com/p/prospective-accountability">retrospective vs. prospective accountability</a>.</p><p>Dhar et al present a framework for thinking about the goals of AI explanation in terms of <em>who </em>explanations are designed for, <em>what</em> information is conveyed, and <em>how</em> an explanation presents that information (2025). Different stakeholders such as AI system developers, operators, validators, and subjects may have different needs for explanations, including different modalities of presentation (e.g. visual, textual, interactive) that match information needs. Each stakeholder here might need something a bit different from an AI system explanation in order to contribute to accountability. For instance, a system developer might benefit from a highly technical interactive explanation that helps them debug an issue with the model and so help prevent future bias or fairness issues in decisions. 
A decision-subject might need something more accessible to help them understand why they got the outcome they did and potentially contest it if they think it&#8217;s wrong. A validator (e.g. an auditor) may need to verify input features used by the model to ensure they are accurate and appropriate. And an operator needs to be informed about how their actions lead to probable consequences with the system in order to be a responsible human-in-the-loop (Baum et al, 2022).</p><p>Besides who explanations are for, there are important dimensions about <em>what</em> should be explained (Dhar et al, 2025). <em>Local</em> explanations focus on individual outputs and are well-aligned to the goals of retrospective accountability, which focus on identifying and assigning blame for a specific individual decision. On the other hand <em>global</em> explanations, which orient towards overall patterns of output from a model across a range of inputs, are better suited to supporting goals of prospective accountability where a birds-eye view is needed to inform how to prevent anticipated harms at the system level. Post-hoc explanations of system behavior which track how inputs influence outputs are the key for both retrospective and prospective accountability, while mechanistic explanations that trace functional model internals are more narrowly useful for informing developers towards preventing unintended outcomes. In other words, while the classic view of accountability as retrospective doesn&#8217;t hinge on explaining model internals, a prospective view could additionally benefit from explanations of those internals to debug model failures and ensure better outcomes in the future.</p><p>An early paper to make a connection between AI explanation and the goals of accountability comes from Doshi-Velez et al (2017). The authors appropriately point out the potential for explanations to prevent or rectify errors in AI systems, helping to discern the appropriate or inappropriate use of criteria by a system. They note two key types of explanation that can play an important role in supporting accountability: f<em>eature importance</em> and <em>counterfactuals</em>.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/p/from-explanation-to-accountability?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/p/from-explanation-to-accountability?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.ai-accountability-review.com/p/from-explanation-to-accountability?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p>Feature importance/relevance explanations provide information about the weighting and priority of inputs to specific outputs, or to the overall distribution of outputs. Should some features be unacceptably correlated to outputs (e.g. race) overall, this can inform prospective accountability so that the model or its training data can be rectified. 
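</p>

<p>To make the feature relevance idea concrete, a minimal sketch of this kind of check could use permutation importance from scikit-learn on a synthetic lending dataset. The data, feature names, and the protected attribute below are purely illustrative and are not drawn from any of the cited papers; the point is only the shape of the diagnostic.</p>

<pre><code># Minimal sketch: does a protected attribute carry weight in a black-box model?
# The data, feature names, and coefficients are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(50_000, 15_000, n),   # income
    rng.normal(650, 80, n),          # credit score
    rng.integers(0, 2, n),           # protected attribute (e.g. a race proxy)
])
feature_names = ["income", "credit_score", "protected_attr"]

# The synthetic outcome partly leaks the protected attribute: the situation an
# auditor or developer would want a feature relevance explanation to surface.
logit = 0.02 * (X[:, 1] - 650) + 1.5 * X[:, 2] + 0.00002 * (X[:, 0] - 50_000)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
black_box = GradientBoostingClassifier().fit(X_tr, y_tr)

# Permutation importance: how much does shuffling each feature hurt held-out accuracy?
result = permutation_importance(black_box, X_te, y_te, n_repeats=20, random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.4f}")
</code></pre>

<p>Because the scores summarize behavior over the whole evaluation set rather than any single decision, this is the global, post-hoc kind of signal that, per the distinction above, feeds prospective rather than retrospective accountability.</p>

<p>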
If features are correlated to outputs in ways that contradict a scientific causal account of what <em>should</em> be predictive of outcomes, this could be grounds for prospective accountability to align the model with scientific expectations. If the scientific account is well-established such a contradiction could also contribute to retrospective accountability for negligence.</p><p>Counterfactual explanations include a &#8220;statement of how the world would have to be different for a desirable outcome to occur&#8221; and &#8220;describe a dependency on the external facts that led to that decision&#8221; (Wachter et al, 2017) and have also been framed as a form of feature relevance explanation (Speith, 2022). If a decision-subject had their mortgage application denied and the counterfactual explanation indicated that they would have been approved if their race had been different, that would be clear grounds for that individual to contest the output.</p><p>While Doshi-Velez got it mostly right when it comes to supporting retrospective accountability, another explanation type elaborated in the literature (Speith, 2022)&#8212;<em>model surrogates</em> (e.g. linear approximations)&#8212;may also be narrowly useful for prospective accountability. What a model surrogate explanation can offer is a clear and interpretable feature importance explanation of a more complex model (e.g. a neural net, or other black-box model). If that feature importance explanation indicates an inappropriate bias this could be grounds for a developer to be prospectively responsible for addressing the apparent behavioral bias of their model. Even if the model itself doesn&#8217;t use inappropriate data, if its behavior appears to be inappropriate that might be grounds to call for it to change. Where model surrogates are not so useful is for retrospective accountability as they don&#8217;t reflect the actual decision-logic impacting a specific individual.</p><p>A recent paper from Alpsancar et al (2025) makes an explicit connection between AI explanation and the needs of assigning responsibility to support AI governance. The authors recount the classical model of moral responsibility which hinges on fulfilling three criteria to hold someone responsible for their actions: (1) <em>causality</em> (i.e. the person influenced the outcome), (2) <em>freedom</em> (i.e. the person was not coerced in their action), and (3) <em>epistemic</em> (i.e. the person is aware of the consequences of their actions). The authors also review what they term the trans-classical model of responsibility which is a systemic view of responsibility that helps cope with unintended and unforeseen consequences. In this view, the epistemic condition instead relates to knowledge of the potential for and probability of various outcomes in the system (i.e. risk) and responsibility is assigned for managing that risk.</p><p>In the classical view, the goal of explanation for accountability is clear: to help fulfill the three conditions so that responsibility can be assessed. AI system explanations should indicate causality, including who (or what) took what actions that were critical to the outcome. They should indicate the autonomy of entities and their actions, including how individuals may be influenced by AI systems in their judgements. And they should show whether individuals in the system were appropriately informed about the consequences of their actions. 
On the other hand, in the trans-classical view the goal of explanation should be to support the understanding of the risk (i.e. severity and prevalence) of outcomes. But it could also be important for explanations to show that there is <em>not</em> a direct causal actor responsible in the system, since otherwise we might revert to the classical model. Regardless of view there is a need for a sociotechnical approach to AI explanation. Explanations of technical models as discussed above are important for supporting the knowledge needs for either view.</p><p>There are several policy-relevant implications that can be derived here. First, explanation requirements for AI systems should specify the audience for the explanation. A disclosure rule that works for a decision-subject contesting a denied loan looks very different from one aimed at auditors verifying model inputs or developers debugging bias. Second, any explanation requirements should tie back to the accountability purpose being served. Retrospective accountability calls for local, post-hoc explanations including counterfactuals and feature importance explanations, while prospective accountability calls for global explanations about patterns across outputs. Third, policymakers should consider both the classical and trans-classical view of responsibility and how and whether they may want to blend or distinguish the two in assigning responsibility. Finally, standards bodies should resist technical definitions of explainability and consider sociotechnical elements related to the human use of AI systems and their explanations.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><h4><strong>References</strong></h4><p>Alpsancar S, Buhl HM, Matzner T, et al. (2025) Explanation needs and ethical demands: unpacking the instrumental value of XAI. AI and Ethics 5(3): 3015&#8211;3033.</p><p>Baum, K., Mantel, S., Schmidt, E. &amp; Speith, T. From Responsibility to Reason-Giving Explainable Artificial Intelligence. Philos. Technol. 35, 12 (2022).</p><p>Dhar R, Brandl S, Oldenburg N, et al. (2025) Beyond Technocratic XAI: The Who, What &amp; How in Explanation Design. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 8(1): 745&#8211;759.</p><p>Doshi-Velez F, Kortz M, Budish R, et al. (2017) Accountability of AI Under the Law: The Role of Explanation. arXiv. DOI: 10.48550/arxiv.1711.01134.</p><p>Lipton, Z. C. 2018. The mythos of model interpretability:In machine learning, the concept of interpretability is both important and slippery. Queue, 16(3): 31&#8211;57</p><p>Mittelstadt B, Russell C and Wachter S (2019) Explaining Explanations in AI. 
Proceedings of the Conference on Fairness, Accountability, and Transparency: 279&#8211;288.</p><p>Speith T (2022) A Review of Taxonomies of Explainable Artificial Intelligence (XAI) Methods. 2022 ACM Conference on Fairness Accountability and Transparency: 2239&#8211;2250.</p><p>Wachter, S.; Mittelstadt, B.; and Russell, C. 2017. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harv. JL &amp; Tech., 31: 841</p><p></p>]]></content:encoded></item><item><title><![CDATA[LLMs Can’t Provide Faithful Explanations Needed for AI Accountability]]></title><description><![CDATA[A growing array of research points out that the explanations produced by LLMs are not accurate.]]></description><link>https://www.ai-accountability-review.com/p/llms-cant-provide-faithful-explanations</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/llms-cant-provide-faithful-explanations</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Tue, 24 Mar 2026 12:17:01 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mocI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1ac346b-0173-43bc-a79a-09cea34ea61a_288x288.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A growing array of research points out that the explanations produced by LLMs are not accurate. In the literature this is referred to as <em>explanation</em> <em>faithfulness </em>(Agarwal et al, 2024; Jacovi and Goldberg, 2020) and accurately measuring it is an area of active research (Lyu et al, 2024). Agarwal and colleagues (2024) articulate it as: &#8220;An explanation is considered faithful if it accurately represents the reasoning of the underlying model.&#8221; A less anthropomorphic way of talking about &#8220;reasoning&#8221; here would be to say that an explanation is faithful if it accurately describes how the system or model processes an input into an output. Some explanations may be more faithful than others (Jacovi and Goldberg, 2020), with certain interpretable models able to produce more faithful explanations than black-box models (Rudin, 2019).</p><p>Explanations rendered by and about AI systems need to be as faithful as epistemically possible in order to support accountability. Buijsman describes the role of explanation in supporting accountability: &#8220;when a mistake has been made, the challenge is to find a reason why that mistake happened and the people responsible for fixing it.&#8221; (2026). A faithful explanation might help understand whether there may be an issue with faulty data, missing information, or incorrect reasoning, and ultimately help improve the system over time. Explanations that are not faithful could misdirect decision-making about how to assign blame or prevent future harms, frustrate attempts to contest a decision or diagnose mistakes and logical errors so they can be corrected, and ultimately to appropriately sanction actors if the explanation is unacceptable.</p><p>Faithfulness is especially relevant to questions of <a href="https://www.ai-accountability-review.com/p/prospective-accountability">process accountability</a>, where the goal is to hold an actor in the AI system accountable for <em>how</em> an outcome was computed. Explanations are a diagnostic tool for accountability, describing how inputs lead to the outcome and helping to trace instances of potential negligence or faulty logic in the system. 
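</p>

<p>One way to see what faithfulness demands is a simple behavioral probe: if an explanation cites a feature as the reason for an output, then changing that feature should change the output, and changing features the explanation never mentions should not. The sketch below illustrates the idea with a toy model and hypothetical feature names; it is not a benchmark from the cited literature.</p>

<pre><code># Behavioral probe of a single-feature explanation. Everything here is hypothetical:
# the toy model, the applicant record, and the feature names.

def toy_mortgage_model(applicant):
    # Stand-in for a deployed black box that quietly keys on zip code.
    return "approve" if applicant["zip_code"] in {"60201", "60202"} else "deny"

def probe_explanation(predict, applicant, cited_feature, cited_value,
                      uncited_feature, uncited_value):
    """Does the cited feature, rather than an uncited one, actually drive the output?"""
    baseline = predict(applicant)
    cited_changed = {**applicant, cited_feature: cited_value}
    uncited_changed = {**applicant, uncited_feature: uncited_value}
    return {
        "baseline": baseline,
        "flips_when_cited_feature_changes": predict(cited_changed) != baseline,
        "flips_when_uncited_feature_changes": predict(uncited_changed) != baseline,
    }

# Explanation on record: "denied because income is too low".
report = probe_explanation(
    predict=toy_mortgage_model,
    applicant={"income": 42_000, "credit_score": 700, "zip_code": "60601"},
    cited_feature="income", cited_value=120_000,
    uncited_feature="zip_code", uncited_value="60201",
)
print(report)  # the uncited zip code flips the decision; the cited income does not
</code></pre>

<p>Because the probe treats the model purely as a function of its inputs, a validator could run it without any access to model internals.</p>

<p>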
If an unfaithful explanation of a mortgage decision says that you were rejected because your income is too low but the model decision was actually influenced by your race or zip code this undermines your ability to challenge the decision as unacceptably including protected characteristics.</p><p>LLMs are not able to provide faithful explanations, such as self-explanations generated by the model to render the &#8220;reasoning&#8221; behind their output in human-understandable language (Madsen et al, 2024; Mayne et al, 2025; Mutton et al, 2025). Madsen and colleagues (2024) show that larger models with more parameters generally produce more faithful explanations but that there is high variance across tasks. Mayne and colleagues (2025) focus on self-generated counterfactual explanations (SCEs) and indicate that their findings &#8220;suggest that SCEs are, at best, an ineffective explainability tool and, at worst, can provide misleading insights into model behaviour.&#8221; While models may be able to provide counterfactual explanations (e.g. if you change variables X and Y it will flip the decision outcome), these may be trivially true rather than articulating minimal changes to the input that would actually shed light on the decision.</p><p>The main implication here is that when accountability matters, such as for high-stakes situations where there is potential for severe impacts, faithful explanations are critical, but LLMs cannot provide such explanations. Policymakers may consider when AI providers need to demonstrate faithfulness of model explanations and establish thresholds around when models can be used in high-stakes contexts. Administrative bodies will also need to develop standardized benchmarks and measurements for faithfulness to support such policies.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h4><strong>References</strong></h4><p>Agarwal C, Tanneru SH and Lakkaraju H (2024) Faithfulness vs. Plausibility: On the (Un)Reliability of Explanations from Large Language Models. arXiv. DOI: 10.48550/arxiv.2402.04614.</p><p>Buijsman S (2026) Accuracy is not all you need! The Reasons to Require AI Explainability. Minds and Machines 36(1): 14.</p><p>Jacovi, A. &amp; Goldberg, Y. Towards Faithfully Interpretable NLP Systems: How Should We Define and Evaluate Faithfulness? Proc. 58th Annu. Meet. Assoc. Comput. Linguistics 4198&#8211;4205 (2020) doi:10.18653/v1/2020.acl-main.386.</p><p>Lyu, Q., Apidianaki, M. &amp; Callison-Burch, C. Towards Faithful Model Explanation in NLP: A Survey. Computational Linguistics 50, 657&#8211;723 (2024).</p><p>Madsen A, Chandar S and Reddy S (2024) Are self-explanations from Large Language Models faithful? In: Findings of the Association for Computational Linguistics: ACL, 2024.</p><p>Mayne H, Kearns RO, Yang Y, et al. 
(2025) LLMs Don&#8217;t Know Their Own Decision Boundaries: The Unreliability of Self-Generated Counterfactual Explanations. In: EMNLP, 2025.</p><p>Matton K, Ness RO, Guttag J, et al. (2025) Walk the Talk? Measuring the Faithfulness of Large Language Model Explanations. In: ICLR, 2025.</p><p>Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1, 206&#8211;215 (2019).</p>]]></content:encoded></item><item><title><![CDATA[Experimenting with AI in a Living Literature Review]]></title><description><![CDATA[From automating conference scrapes to stress-testing synthesis: a look at how AI tools like NotebookLM and OpenAI&#8217;s Agent mode support&#8212;and struggle with&#8212;the workflow of a living literature review.]]></description><link>https://www.ai-accountability-review.com/p/experimenting-with-ai-in-a-living</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/experimenting-with-ai-in-a-living</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Mon, 09 Feb 2026 11:02:45 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!1f0T!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439eef79-f745-471e-97ca-79db536f2c51_1353x1401.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The AI Accountability Review (AIAR) is a living literature review with the goal of tracking literature on the topic of AI accountability for an audience of researchers and policymakers. I write posts that either focus on translating a single piece of literature or that synthesize several pieces of literature towards policy implications. How could AI help with this process?</p><p>I recently came across a paper by Fok et al (2025) that&#8217;s been useful in helping me organize the various AI experiments I&#8217;ve been trying. 
Based on interviews with researchers who have written literature reviews the paper helps to understand their overall process and some of the ways they conceptualize the use of AI in that process.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MJkO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd6839d2-39bf-4bbf-914e-56daaebbd2f3_1600x268.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MJkO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd6839d2-39bf-4bbf-914e-56daaebbd2f3_1600x268.png 424w, https://substackcdn.com/image/fetch/$s_!MJkO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd6839d2-39bf-4bbf-914e-56daaebbd2f3_1600x268.png 848w, https://substackcdn.com/image/fetch/$s_!MJkO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd6839d2-39bf-4bbf-914e-56daaebbd2f3_1600x268.png 1272w, https://substackcdn.com/image/fetch/$s_!MJkO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd6839d2-39bf-4bbf-914e-56daaebbd2f3_1600x268.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MJkO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd6839d2-39bf-4bbf-914e-56daaebbd2f3_1600x268.png" width="1456" height="244" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cd6839d2-39bf-4bbf-914e-56daaebbd2f3_1600x268.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:244,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MJkO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd6839d2-39bf-4bbf-914e-56daaebbd2f3_1600x268.png 424w, https://substackcdn.com/image/fetch/$s_!MJkO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd6839d2-39bf-4bbf-914e-56daaebbd2f3_1600x268.png 848w, https://substackcdn.com/image/fetch/$s_!MJkO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd6839d2-39bf-4bbf-914e-56daaebbd2f3_1600x268.png 1272w, https://substackcdn.com/image/fetch/$s_!MJkO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd6839d2-39bf-4bbf-914e-56daaebbd2f3_1600x268.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>The findings articulate a set of four phases to the literature review process that participants engaged in: <em>search, appraisal, 
synthesis, and interpretation</em>. The paper also identifies some of the ways AI can support updating of reviews, namely through automation and in providing a second opinion. My own use of AI for AIAR has been most useful for automating (with oversight) some of the appraisal aspects of the process, and in providing second opinions on appraisal and synthesis. I have also dabbled in some use cases that more directly do synthesis and interpretation, but these have been less successful. And I haven&#8217;t really tried any AI use cases for search, because I think that setting the search scope for the review is something that needs to be closely managed by me. Let&#8217;s walk through some of the different things I&#8217;ve tried.</p><h4><strong>Scraping, Formatting, and Promotion</strong></h4><p>Probably the most time-saving use case I&#8217;ve found is to use OpenAI&#8217;s Agent mode to help collect conference proceedings papers that I want to review. Some conferences have non-standard presentations of information, but Agent mode is pretty adept at navigating websites to collect papers and format them as RSS feeds. I plug those feeds into my triage workflow on <a href="https://www.inoreader.com/">InoReader</a>, which streamlines the appraisal process of papers. It can help to be explicit in the prompt and identify a structured data (e.g. JSON) version of the proceedings. And while this process is mostly automated I find that I do still need to double-check the outputs to make sure it was a comprehensive scrape.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1f0T!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439eef79-f745-471e-97ca-79db536f2c51_1353x1401.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1f0T!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439eef79-f745-471e-97ca-79db536f2c51_1353x1401.png 424w, https://substackcdn.com/image/fetch/$s_!1f0T!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439eef79-f745-471e-97ca-79db536f2c51_1353x1401.png 848w, https://substackcdn.com/image/fetch/$s_!1f0T!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439eef79-f745-471e-97ca-79db536f2c51_1353x1401.png 1272w, https://substackcdn.com/image/fetch/$s_!1f0T!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439eef79-f745-471e-97ca-79db536f2c51_1353x1401.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1f0T!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439eef79-f745-471e-97ca-79db536f2c51_1353x1401.png" width="1353" height="1401" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/439eef79-f745-471e-97ca-79db536f2c51_1353x1401.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1401,&quot;width&quot;:1353,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:544780,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1f0T!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439eef79-f745-471e-97ca-79db536f2c51_1353x1401.png 424w, https://substackcdn.com/image/fetch/$s_!1f0T!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439eef79-f745-471e-97ca-79db536f2c51_1353x1401.png 848w, https://substackcdn.com/image/fetch/$s_!1f0T!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439eef79-f745-471e-97ca-79db536f2c51_1353x1401.png 1272w, https://substackcdn.com/image/fetch/$s_!1f0T!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439eef79-f745-471e-97ca-79db536f2c51_1353x1401.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>I have also experimented with using Agent mode to gather email addresses for each of the primary authors of papers cited by one of my posts, and to then draft a short personalized note notifying the person about the post. I wasn&#8217;t intrepid enough to automate the actual emailing, but I did manually copy and send some of the emails (after light editing) and even got a response from one. 
Promoting AIAR on social media could be a full time job, but having the AI do some of the grunt work of getting email addresses and drafting emails lowers the barrier a bit.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_27i!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68d7ee78-1126-40ae-9696-3b8986ebea1e_1032x396.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_27i!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68d7ee78-1126-40ae-9696-3b8986ebea1e_1032x396.png 424w, https://substackcdn.com/image/fetch/$s_!_27i!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68d7ee78-1126-40ae-9696-3b8986ebea1e_1032x396.png 848w, https://substackcdn.com/image/fetch/$s_!_27i!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68d7ee78-1126-40ae-9696-3b8986ebea1e_1032x396.png 1272w, https://substackcdn.com/image/fetch/$s_!_27i!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68d7ee78-1126-40ae-9696-3b8986ebea1e_1032x396.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_27i!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68d7ee78-1126-40ae-9696-3b8986ebea1e_1032x396.png" width="1032" height="396" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/68d7ee78-1126-40ae-9696-3b8986ebea1e_1032x396.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:396,&quot;width&quot;:1032,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_27i!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68d7ee78-1126-40ae-9696-3b8986ebea1e_1032x396.png 424w, https://substackcdn.com/image/fetch/$s_!_27i!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68d7ee78-1126-40ae-9696-3b8986ebea1e_1032x396.png 848w, https://substackcdn.com/image/fetch/$s_!_27i!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68d7ee78-1126-40ae-9696-3b8986ebea1e_1032x396.png 1272w, https://substackcdn.com/image/fetch/$s_!_27i!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68d7ee78-1126-40ae-9696-3b8986ebea1e_1032x396.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg 
role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h4><strong>Article Appraisal</strong></h4><p>One of the nice built-in features of InoReader is that for any item in a feed I am tracking, I can trigger a custom prompt to an LLM. Using this feature I can get a quick second opinion from the LLM on whether the item might be relevant to my audience. Admittedly I don&#8217;t use this all that often, but I do occasionally engage it. One of the issues is that not all the RSS feeds I follow have full abstract text and so this limits the applicability. I do think there&#8217;s real potential in having AI help think through what items have implications for your intended audience, and there&#8217;s probably a lot more sophistication that could be applied in how to do this computationally beyond the integrated prompting in InoReader, such as by simulating ideal audience members and what they would want to know about an item.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bv_Q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5589b93b-c9a7-4585-81fa-3369dd1acb14_1370x360.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bv_Q!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5589b93b-c9a7-4585-81fa-3369dd1acb14_1370x360.png 424w, https://substackcdn.com/image/fetch/$s_!bv_Q!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5589b93b-c9a7-4585-81fa-3369dd1acb14_1370x360.png 848w, https://substackcdn.com/image/fetch/$s_!bv_Q!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5589b93b-c9a7-4585-81fa-3369dd1acb14_1370x360.png 1272w, https://substackcdn.com/image/fetch/$s_!bv_Q!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5589b93b-c9a7-4585-81fa-3369dd1acb14_1370x360.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bv_Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5589b93b-c9a7-4585-81fa-3369dd1acb14_1370x360.png" width="1370" height="360" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5589b93b-c9a7-4585-81fa-3369dd1acb14_1370x360.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:360,&quot;width&quot;:1370,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bv_Q!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5589b93b-c9a7-4585-81fa-3369dd1acb14_1370x360.png 424w, https://substackcdn.com/image/fetch/$s_!bv_Q!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5589b93b-c9a7-4585-81fa-3369dd1acb14_1370x360.png 848w, https://substackcdn.com/image/fetch/$s_!bv_Q!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5589b93b-c9a7-4585-81fa-3369dd1acb14_1370x360.png 1272w, https://substackcdn.com/image/fetch/$s_!bv_Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5589b93b-c9a7-4585-81fa-3369dd1acb14_1370x360.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Some articles on AIAR reflect the synthesis of a cluster of literature. As a <em>living</em> literature review the goal is to update these over time with other literature relevant to the cluster. I&#8217;ve been experimenting with LLMs to support this process. 
Using a Google Colab notebook I input the URL of the base article to be updated and scrape the full text. Then I prompt an LLM to evaluate a stream of literature for relevance to that article. The prompt is critical here. What I&#8217;m looking for are new papers that might directly update, change, or provide new context to any of the claims in the original article, to find new papers that might actually make a difference.</p><p>Each paper is rated for relevance, and that rating is paired with a table listing claims from the original article and ideas from the new paper that might bear on those claims. The table facilitates my appraisal of the new paper. The output looks like this:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XdKp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d071dc-7f30-486b-a1f8-c2bb1dc4c276_829x1600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XdKp!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d071dc-7f30-486b-a1f8-c2bb1dc4c276_829x1600.png 424w, https://substackcdn.com/image/fetch/$s_!XdKp!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d071dc-7f30-486b-a1f8-c2bb1dc4c276_829x1600.png 848w, https://substackcdn.com/image/fetch/$s_!XdKp!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d071dc-7f30-486b-a1f8-c2bb1dc4c276_829x1600.png 1272w, https://substackcdn.com/image/fetch/$s_!XdKp!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d071dc-7f30-486b-a1f8-c2bb1dc4c276_829x1600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!XdKp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d071dc-7f30-486b-a1f8-c2bb1dc4c276_829x1600.png" width="829" height="1600" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d2d071dc-7f30-486b-a1f8-c2bb1dc4c276_829x1600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1600,&quot;width&quot;:829,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!XdKp!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d071dc-7f30-486b-a1f8-c2bb1dc4c276_829x1600.png 424w, https://substackcdn.com/image/fetch/$s_!XdKp!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d071dc-7f30-486b-a1f8-c2bb1dc4c276_829x1600.png 848w, 
https://substackcdn.com/image/fetch/$s_!XdKp!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d071dc-7f30-486b-a1f8-c2bb1dc4c276_829x1600.png 1272w, https://substackcdn.com/image/fetch/$s_!XdKp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d071dc-7f30-486b-a1f8-c2bb1dc4c276_829x1600.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>So far this is promising, but there&#8217;s still work to do to evaluate it and set it up as an ongoing monitoring process that fully integrates with my InoReader appraisal workflow. In principle I&#8217;d set this up for each of the base articles in AIAR, and then monitor literature from something like <a href="https://openalex.org/">OpenAlex</a> to create a continuously updated feed of potentially relevant papers.</p><h4><strong>Grounded Synthesis</strong></h4><p>Google&#8217;s NotebookLM has turned into an increasingly powerful tool that can be used to interactively synthesize curations of articles. For my article on <a href="https://www.ai-accountability-review.com/p/ai-ethics-principles-and-accountability">AI Ethics Principles and Accountability</a>, I even published a <a href="https://notebooklm.google.com/notebook/79d4ec54-0009-4f91-b939-26de93387453">notebook</a> with all of the sources I had used to write the article. While the original goal with creating the notebook here was to allow readers to interactively explore the literature, I also realized that I could also use this to provide a second opinion on my own synthesis. Using Gemini, you can refer to a notebook of curated sources in NotebookLM and so I prompted it to create a table listing the supporting evidence for every claim in the post. In the absence of an editor, this can be a useful double-check to make sure you&#8217;re staying honest to the underlying literature in your synthesis. 
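</p>

<p>NotebookLM handles this interactively, but the same claim-by-claim audit can be sketched programmatically. In the sketch below the prompt wording and helper names are illustrative, and ask_llm stands in for whatever model client is available; no NotebookLM API is involved.</p>

<pre><code># Sketch of a claim-by-claim grounding check. NotebookLM itself is interactive;
# ask_llm is a stand-in for whatever model client is available.

GROUNDING_PROMPT = """Claim from my draft:
{claim}

Curated source excerpts:
{sources}

Quote the passage(s) that support the claim, or reply exactly "NO SUPPORT FOUND"."""

def grounding_table(claims, source_text, ask_llm):
    """One row per claim: the claim plus the supporting evidence (or lack of it)."""
    rows = []
    for claim in claims:
        evidence = ask_llm(GROUNDING_PROMPT.format(claim=claim, sources=source_text))
        rows.append({"claim": claim, "evidence": evidence})
    return rows

# rows = grounding_table(claims_from_my_post, curated_sources, ask_llm=call_my_model)
</code></pre>

<p>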
I think this kind of approach could potentially also be useful in an article update process to assess whether claims in new papers support or refute the existing claims you&#8217;ve written.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DL3S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3282612-6661-4060-b050-aff055b9a30c_1258x1538.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DL3S!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3282612-6661-4060-b050-aff055b9a30c_1258x1538.png 424w, https://substackcdn.com/image/fetch/$s_!DL3S!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3282612-6661-4060-b050-aff055b9a30c_1258x1538.png 848w, https://substackcdn.com/image/fetch/$s_!DL3S!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3282612-6661-4060-b050-aff055b9a30c_1258x1538.png 1272w, https://substackcdn.com/image/fetch/$s_!DL3S!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3282612-6661-4060-b050-aff055b9a30c_1258x1538.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DL3S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3282612-6661-4060-b050-aff055b9a30c_1258x1538.png" width="1258" height="1538" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a3282612-6661-4060-b050-aff055b9a30c_1258x1538.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1538,&quot;width&quot;:1258,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:523000,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DL3S!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3282612-6661-4060-b050-aff055b9a30c_1258x1538.png 424w, https://substackcdn.com/image/fetch/$s_!DL3S!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3282612-6661-4060-b050-aff055b9a30c_1258x1538.png 848w, https://substackcdn.com/image/fetch/$s_!DL3S!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3282612-6661-4060-b050-aff055b9a30c_1258x1538.png 1272w, https://substackcdn.com/image/fetch/$s_!DL3S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3282612-6661-4060-b050-aff055b9a30c_1258x1538.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" 
class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Still, I am a bit cautious about relying on LLMs, even closely grounded ones, in helping to synthesize literature for AIAR. In an early experiment, I loaded up NotebookLM with the entirety of the Fairness, Accountability, and Transparency Conference proceedings from 2025. I <a href="https://gemini.google.com/share/7b886d454ea7">asked Gemini</a> (with access to the Notebook) to look for clusters of papers that were thematically related to each other and to the topic of the blog. While some of these clusters seemed relevant and overlapped with my own perception of themes, others seemed more tenuous in the solidity of the theme and its relevance to AIAR. Synthesis is to a large degree about framing and finding a consistent thread, and I don&#8217;t think even the best LLMs are able to do this in a way that is satisfying.</p><h4><strong>As a Writing Aid</strong></h4><p>I have attempted to use LLMs (primarily Gemini, sometimes directly in NotebookLM) to help draft five of the posts for AIAR, three of which were based on translating a single paper, and two of which were based on clusters of papers.</p><p>I found that for the articles based on clusters the LLM was wholly unsuited to the task of synthesis: I ended up using none of the generated text. Even including all of the paper texts and my notes on those papers in the prompt, I was left feeling that the synthesized text didn&#8217;t capture what was interesting or important about the cluster. This again goes back to the idea of framing, structuring, and finding the aspects of relevance that I think are important within the field and to my intended audience. But this also relates to the interpretation phase and the &#8220;identification of key challenges, future trends, and open research opportunities&#8221; (Fok et al, 2025). All of this is consistent with <a href="https://www.science.org/content/blog-post/can-chatgpt-help-science-writers">what some editors at Science found</a> when they tried to use ChatGPT to translate research papers.</p><p>For the three articles that were more direct translations of individual research papers I had slightly more success with incorporating AI generated text. 
In <a href="https://www.ai-accountability-review.com/p/robotstxt-as-a-lever-for-ai-accountability">this post</a>, I used almost 50% of the generated text in the final piece, which warranted a disclosure at the bottom of the post: &#8220;Some text in this post was adapted based on suggestions from AI.&#8221; I think this was somewhat successful because I prompted the model with details on the aspects of the paper I wanted the post to focus on, and because the post itself was more descriptive than synthetic or interpretive. The parts of the post that I wrote were the more interpretive aspects, putting the research into a broader context and considering its relevance to the audience. In another <a href="https://www.ai-accountability-review.com/p/reflexive-prompt-engineering-as-a">post</a> (excerpted below), I was also able to use some chunks of descriptive text that were generated by the LLM.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VEPK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5128b0b6-f879-4117-bc83-df88b87be67b_872x1094.png"><img src="https://substackcdn.com/image/fetch/$s_!VEPK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5128b0b6-f879-4117-bc83-df88b87be67b_872x1094.png" width="872" height="1094" alt=""></a></figure></div>
<h4><strong>In Closing</strong></h4><p>Much like everything else on AIAR, this post will be a work-in-progress and is subject to update. The most compelling use-case I&#8217;ve found so far for AI is in automating the collection and formatting of references into my RSS workflow, as this lets me do something that I might not otherwise make time for. I also find the article appraisal workflow compelling and plan to keep pushing on that to integrate it more into my regular workflow for keeping AIAR posts updated. I may also revisit use cases related to grounded synthesis and writing, though I&#8217;m generally less optimistic about AI providing a real lift there. The work of framing and making connections in the literature, contextualizing findings, and thinking about what matters to an audience seems like it really needs an expert eye, though perhaps LLMs can assist by offering a second opinion.</p>
<h4><strong>References</strong></h4><p>Fok R, Siu A and Weld DS (2025) Toward Living Narrative Reviews: An Empirical Study of the Processes and Challenges in Updating Survey Articles in Computing Research. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems: 1&#8211;10.</p>]]></content:encoded></item><item><title><![CDATA[A Critique of Transparency Provisions in NY’s RAISE Act (1.0)]]></title><description><![CDATA[Following in California&#8217;s footsteps, New York&#8217;s RAISE Act attempts to mandate AI transparency]]></description><link>https://www.ai-accountability-review.com/p/a-critique-of-transparency-provisions</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/a-critique-of-transparency-provisions</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Mon, 26 Jan 2026 11:02:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mocI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1ac346b-0173-43bc-a79a-09cea34ea61a_288x288.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The US has a couple of state laws&#8212;from <a href="https://www.ai-accountability-review.com/p/how-californias-new-ai-law-supports">California</a>, and now <a href="https://www.nysenate.gov/legislation/bills/2025/S6953/amendment/B">New York</a>&#8212;that address the risk of frontier AI models. Both broadly operate by specifying some information about frontier AI models that must be disclosed for the purposes of oversight. In this post I&#8217;ll review New York&#8217;s &#8220;Responsible AI Safety and Education&#8221; (RAISE) act through the lens of <a href="https://www.ai-accountability-review.com/p/closing-information-gaps-via-ai-transparency">the quality of transparency information called for in the law</a>. (Note: I examine the act <a href="https://www.nysenate.gov/legislation/laws/GBS/A44-B">as signed into law</a>, and will likely write another post when the <a href="https://reinventalbany.org/2023/10/everything-you-ever-wanted-to-know-about-chapter-amendments/">chapter amendments</a> proposed by Governor Hochul are passed by the NY legislature, likely in early 2026).</p><p>Probably the biggest issue I see with the law is in the definitions. The RAISE act is geared towards regulating &#8220;frontier models&#8221; which it defines as: &#8220;an artificial intelligence model trained using greater than 10^26 computational operations (e.g., integer or floating-point operations), the compute cost of which exceeds one hundred million dollars&#8221; (or a different definition that applies to models produced through knowledge distillation). This is a bad definition because it merges two criteria that are arbitrary, shifting over time, and, most importantly, which <em><strong>model developers are not required to disclose</strong></em>. They are arbitrary because there&#8217;s no reason to think that 10^26 computing operations is a magical threshold at which danger suddenly materializes. Based on <em>estimates</em> from <a href="https://epoch.ai/blog/training-compute-of-frontier-ai-models-grows-by-4-5x-per-year">Epoch</a>, none of the current breed of frontier models surpasses this threshold, so it&#8217;s not clear that the law applies to anything in the real world. And, again, even if OpenAI, Google, or Anthropic has exceeded this threshold in its training, there&#8217;s no way for us to know because the law doesn&#8217;t make them tell anyone. It&#8217;s a sort of scout&#8217;s-honor, opt-in system. The compute criterion is also conjoined with a cost criterion of greater than $100 million. Because the definition has to match <em>both</em> criteria, a model could exceed the compute threshold but be trained for less than $100 million, and then the law wouldn&#8217;t apply. And compute costs keep falling, some model developers like Google control the market price of their own computing and so can game this, and the value of the dollar itself can change.</p>
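<p>To make the conjunction concrete, here is a tiny illustrative check (my own sketch, not language from the act): a model trained with 2e26 operations at a compute cost of $80 million would clear the compute prong, fail the cost prong, and fall outside the definition entirely.</p><pre><code># Illustrative sketch of the RAISE Act's two-pronged "frontier model" test.
# Both prongs must hold, so a compute-heavy but cheaply trained model escapes coverage.
def raise_act_applies(training_ops, training_cost_usd):
    exceeds_compute = training_ops > 1e26            # computational operations
    exceeds_cost = training_cost_usd > 100_000_000   # compute cost in dollars
    return exceeds_compute and exceeds_cost

print(raise_act_applies(2e26, 80_000_000))  # False: over the compute line, under the cost line</code></pre>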
<p>The main transparency mechanism in the law is a requirement that large developers of models create a &#8220;safety and security protocol&#8221;&#8212;a form of transparency report&#8212;before deployment of the model. There are also provisions requiring the reporting of &#8220;safety incidents&#8221; that might be a case of, or an increased risk of, critical harm. The protocol report is shared with administrative accountability forums such as the attorney general and division of homeland security, as well as being made publicly available in redacted form for media or social forums. The unredacted protocol, plus additional information about the tests and test results that inform the protocol, needs to be maintained for however long the model is deployed plus five years, presumably so that those records are potentially available for discovery by legal forums in the event they are needed. In this sense, the law does pretty well in providing for <em>accessibility</em> of the safety and security protocol to various accountability forums.</p><p>The overall aim of the law towards &#8220;frontier models&#8221; and &#8220;critical harm&#8221; scopes and sets limits on the <em>relevance</em> of the information in the protocol. Critical harm is defined as causing $1 billion or more in damage or the loss of 100 or more human lives. With that scope in mind, the definition of the protocol is reasonable: it specifies what must be included, such as organizational procedures and sociotechnical measures meant to mitigate the potential for critical harm, as well as the testing procedures used to &#8220;evaluate if the frontier model poses an unreasonable risk of critical harm&#8221;. The protocol must also designate a <em>person</em> who is responsible for compliance&#8212;a critical component that ensures accountability for overseeing the protocol. The <em>timeliness</em> of the report is also addressed, with a requirement that the developer update the protocol annually to reflect any changes.</p><p>An area where the protocol falls short is in either specifying or auditing the <em>accuracy</em> of the information in the protocol. An earlier version of the law had provisions requiring third-party auditing, but those were removed from the final version signed into law. That would have strengthened the law considerably by having an independent entity check the validity of procedures and the accuracy of provided information in the protocol. 
What&#8217;s left is the comparatively weaker request that large developers not lie, i.e. &#8220;shall not knowingly make false or materially misleading statements or omissions.&#8221; We can&#8217;t really assess whether the information in protocols would be <em>understandable</em> and fit for the purpose of accountability. A stronger law would have created a standard for the protocol that would be considered adequate.</p><p>The law provides reasonable carve outs to address typical criticisms and stakeholder pushback about transparency, including that disclosures might undermine privacy, confidentiality, trade secrets, or be used to game the system. Redactions to public safety and security protocols can be undertaken to protect these other interests. The law also protects fundamental innovation by not applying to academic research done at accredited colleges and universities. In addressing the tensions between transparency and other interests at stake, the law probably does about as well as it could, especially because administrative forums like the attorney general can gain access to copies of the protocol that are less redacted, i.e. where redactions only need to respect federal law, and fully unredacted reports must be maintained for possible discovery in legal forums.</p><p>Overall, much like its Californian counterpart, New York&#8217;s RAISE act is geared towards <a href="https://www.ai-accountability-review.com/p/prospective-accountability">prospective accountability</a> &#8212; trying to prevent future harm. Its scope is narrow around &#8220;critical harms&#8221;. While it does well to specify the accessibility of the transparency information it calls for, and align that information so it is relevant and timely to its scope, it lacks provisions for ensuring the accuracy of the information, and leaves the understandability of that information up to the large developers who&#8217;ll be creating the reports. But it&#8217;s not a powerful law because it doesn&#8217;t apply to anything in the real world (yet), and it&#8217;s unclear whether model developers will ever raise their hand and say that the law actually applies to them. It does provide an example of AI governance through transparency that can inform future legislation. The next version of the law, proposed by the governor&#8217;s office and under consideration by the state legislature, is already drastically different in many ways.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Closing Information Gaps via AI Transparency]]></title><description><![CDATA[Policymakers need to establish rigorous standards that prioritize information quality and the specific needs of accountability forums]]></description><link>https://www.ai-accountability-review.com/p/closing-information-gaps-via-ai-transparency</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/closing-information-gaps-via-ai-transparency</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Mon, 05 Jan 2026 15:15:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!pME0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Febf292d8-8845-4de3-a3da-4dfd82547ce9_1516x484.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Before anyone can be held accountable for an AI system&#8217;s behavior we&#8217;re going to need some information about that system. What was the system&#8217;s behavior and was its performance unexpected? What are the underlying values and goals of its designers? Did the developers take appropriate steps to test for and prevent harmful outcomes? How are organizational policies designed and implemented for the ongoing operation of the system? Transparency is the umbrella idea of closing these kinds of knowledge gaps, and should be differentiated from explanation which is a more specific approach (Corbett and Denton, 2025; Hayes et al, 2023). More formally, transparency can be defined as &#8220;<em>the availability of information about an actor allowing other actors to monitor the workings or performance of this actor</em>&#8221; (Meijer et al, 2014). 
And while transparency in itself cannot ensure accountability, it often plays a critical supporting role, providing the informational substrate for understanding AI system behavior that can then filter into various forums that might seek to hold actors in an AI system accountable.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pME0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Febf292d8-8845-4de3-a3da-4dfd82547ce9_1516x484.png"><img src="https://substackcdn.com/image/fetch/$s_!pME0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Febf292d8-8845-4de3-a3da-4dfd82547ce9_1516x484.png" alt=""></a></figure></div><p>Transparency sets up a 
relationship between two entities&#8212;here an AI system and a <a href="https://www.ai-accountability-review.com/p/networked-ai-accountability">forum</a>&#8212;where information about the AI system becomes available to the forum. Because AI systems are sociotechnical this includes information about both the data and technical model in the system, as well as the human components such as organizational policies, procedures or practices, and user behaviors (Diakopoulos, 2020). For the sake of accountability, provided information should help the forum determine congruence with relevant values, goals, and normative or legal expectations of behavior (Hayes et al, 2023; Fleischmann and Wallace, 2005). Transparency information can be <em>voluntary</em> (e.g. a blog post), <em>obligatory</em> (e.g. legally mandated disclosure to an administrator), or <em>involuntary</em> (e.g. external audits, or leaks), though recent research has underscored the <a href="https://www.ai-accountability-review.com/p/gaps-in-first-party-and-third-party">inadequacy</a> of volunteered &#8220;first-party&#8221; transparency information compared to external &#8220;third-party&#8221; evaluations of social impacts (Reuel et al, 2025).</p><p>To be useful for accountability, transparency information needs to reflect high <em>information quality</em>. At a minimum it needs to be accessible, understandable, relevant, and accurate (Hayes et al, 2023; Diakopoulos, 2020; Turilli and Floridi, 2009). Beyond just availability, information needs to be <em>accessible</em> so that it can be easily found by audiences such as various accountability forums. It also needs to be <em>understandable</em> or usable by those audiences and aligned to their information processing capabilities and capacities. It needs to be <em>relevant</em> to diagnosing some behavior of interest whether that be in shedding light on some negative outcome for retrospective accountability, or providing critical context to inform <a href="https://www.ai-accountability-review.com/p/prospective-accountability">prospective accountability</a>. Information also needs to be <em>accurate</em> such that it is valid, reliable, and free of error (Turilli and Floridi, 2009), since otherwise it can suffer from strategic activities that shape or distort information, leading to <a href="https://www.ai-accountability-review.com/p/sec-10-k-disclosures-as-a-route-to">uninformative or boilerplate disclosures</a> (Marin et al, 2025). Other aspects of information quality that are pertinent include the <em>currency</em> or timeliness of the information, and its <em>comprehensiveness</em>. AI transparency will typically fall short when the above factors and attributes aren&#8217;t adequately addressed.</p><p>A reoccurring pattern we see in the literature is a failure to clearly articulate the intended audience or forum for transparency information, with implications for how the information would be maximally accessible, understandable, and relevant for that audience. For instance, in the 2025 Foundation Model Transparency Index (Wan et al, 2025), the authors establish a set of 100 indicators that they apply to various models to evaluate how transparent they are in terms of data, training, compute usage, modeling, and downstream impacts and use policies. But the audience for all of this information&#8212;and its utility for accountability&#8212;is anything but clear. 
What transparency initiatives like this one need to do is clearly articulate the public interest and accountability purpose of each indicator, helping to connect over to the audience or forum that would then use that information for accountability. Similarly, a recent <a href="https://www.ai-accountability-review.com/p/transparency-needs-for-ai-agent-accountability">proposal for AI agent transparency</a> (Ezell et al, 2025) appears oriented somewhat towards technical developers &#8220;debugging&#8221; agent incidents. If the information in that framework could be made available to administrative or judicial forums, it&#8217;s likely they would benefit from at least some of the information. But the ideal would be a more parsimonious framework that more closely tracks the needs of those forums for specific issues they may need to assess for accountability.</p><p>While I would argue that transparency is a necessary pre-condition for accountability, critics point out that transparency is not an unalloyed positive force. It shouldn&#8217;t be assumed to always enable accountability (Corbett and Denton, 2023), though policies that shape adherence to the attributes of quality transparency information described above should increase the likelihood of its utility. Transparency can also come into tension with other values, such as privacy, freedom of expression, or intellectual property (Ananny and Crawford, 2018; Diakopoulos, 2020; Turilli and Floridi, 2009) leading to situations where tradeoffs need to be made in highly context-specific ways. One of the most frequent counter arguments to more transparency is that it could enable gaming or manipulation of the system (van Bekkum and Borgesius, 2021), though careful context-specific engineering, threat modeling, and consideration to forum-specific access provisions should alleviate this issue (Diakopoulos, 2020). We might also consider the idea that social forums may use manipulation as a way to sanction a system&#8212;in other words manipulating a system may in some contexts and situations be considered a component of holding a system accountable for unwanted behavior. Ultimately, the choices around what, when, and how AI systems are made transparent are political (Corbett and Denton, 2023).</p><p>The role of policy here is to thread the needle through these criticisms to scope transparency and shape it towards positive outcomes for society. Policy must create obligations for actors within AI systems to produce the information needed by any given forum (e.g. administrative, legal, etc.) to make the relevant assessment of system performance. This information needs to meet accessibility, understandability, relevance, accuracy, currency, and comprehensiveness quality criteria. One way to do this is to be more specific about standards for AI system transparency information production: what standard processes and practices should be evidenced by actors making transparency information available? Public sector policy makers <em>cannot</em> leave this unspecified, otherwise there is too much room for strategic and performative behavior. Another role for policy makers is to engage in the politics of where and how to make tradeoffs with other values such as privacy; looking to <a href="https://www.ai-accountability-review.com/p/informing-ai-accountability-with">public attitudes</a> should probably inform this. Transparency policies need to be user-centered (e.g. 
towards whatever forum the information is intended for) and context-specific, and would benefit from human-centered engineering and evaluation to refine their scope, meet user needs, and maximize their utility for accountability.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h4><strong>References</strong></h4><p>Ananny M and Crawford K (2018) Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media &amp; Society 20(3): 973&#8211;989.</p><p>Corbett E and Denton R (2023) Interrogating the T in FAccT. Conference on Fairness, Accountability, and Transparency: 1624&#8211;1634.</p><p>Diakopoulos N. (2020) Transparency. Oxford Handbook of Ethics and AI. Eds. Markus Dubber, Frank Pasquale, Sunit Das.</p><p>Ezell C, Roberts-Gaal X and Chan A (2025) Incident Analysis for AI Agents. Proc. AI, Ethics, and Society (AIES) DOI: 10.48550/arxiv.2508.14231.</p><p>Fleischmann KR and Wallace WA (2005) A covenant with transparency. Communications of the ACM 48(5): 93&#8211;97.</p><p>Hayes P, Poel I van de and Steen M (2023) Moral transparency of and concerning algorithmic tools. AI and Ethics 3(2): 585&#8211;600</p><p>Marin, L. G. U.-B., Rijsbosch, B., Spanakis, G. &amp; Kollnig, K. Are Companies Taking AI Risks Seriously? A Systematic Analysis of Companies&#8217; AI Risk Disclosures in SEC 10-K forms. arXiv (2025). https://arxiv.org/abs/2508.19313</p><p>Meijer A, Bovens M and Schillemans T (2014) Transparency. The Oxford Handbook of Public Accountability. Oxford University Press.</p><p>Reuel A, Ghosh A, Chim J, et al. (2025) Who Evaluates AI&#8217;s Social Impacts? Mapping Coverage and Gaps in First and Third Party Evaluations. arXiv. https://arxiv.org/abs/2511.05613</p><p>Turilli, M., Floridi, L.: The ethics of information transparency. Ethics Inform. Technol. 11, 105&#8211;112 (2009)</p><p>Bekkum M van and Borgesius FZ (2021) Digital welfare fraud detection and the Dutch SyRI judgment. European Journal of Social Security 23(4): 323&#8211;340.</p><p>Wan A, Klyman K, Kapoor S, et al. (2025) The 2025 Foundation Model Transparency Index. arXiv. 
DOI: 10.48550/arxiv.2512.10169.</p>]]></content:encoded></item><item><title><![CDATA[Gaps in First-Party and Third-Party AI Model Evaluations]]></title><description><![CDATA[AI accountability would be supported by more consistent and comprehensive model transparency]]></description><link>https://www.ai-accountability-review.com/p/gaps-in-first-party-and-third-party</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/gaps-in-first-party-and-third-party</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Tue, 02 Dec 2025 15:47:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mocI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1ac346b-0173-43bc-a79a-09cea34ea61a_288x288.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A group of researchers with the <a href="https://evalevalai.com/">EvalEval Coalition</a> recently published a new paper on arXiv: &#8220;<em><a href="https://arxiv.org/abs/2511.05613">Who Evaluates AI&#8217;s Social Impacts? Mapping Coverage and Gaps in First and Third Party Evaluations</a></em>&#8221; where they present an analysis of evaluations of AI models with respect to social impacts. The analysis exposes gaps between evaluations run by model developers themselves versus third party evaluations, highlighting a need for transparency reporting standards and regulations.</p><p>The crux of the analysis is in comparing 186 first-party reports that were part of model releases by model developers to 183 post-release evaluations that were run by various third parties. These reports were assessed based on the level of detail provided in evaluations of any of seven social impact dimensions as identified by Solaiman et al (2023). The seven dimensions assessed were Bias and Harm, Sensitive Content (e.g. outputting hate speech), Performance Disparity (e.g. unequal results across subpopulations), Environmental Costs and Emissions, Privacy and Data, Financial Costs, and Moderation Labor (e.g. working conditions of data annotators). The rating scale ranged from a 0 (no evaluation present), 1 (vague mention), 2 (concrete results but limited clarity on methods and context), and 3 (sufficient detail to understand and contextualize the evaluation). All the ratings are available <a href="https://huggingface.co/datasets/evaleval/social_impact_eval_annotations">here</a>.</p><p>The main take-away is that <strong>third party evaluations were considerably more detailed, on average, than first-party evaluations </strong>(2.62 vs. 0.72 on the 0-3 scale). The implication is that the tech companies and other organizations training models are not releasing as much detail about their evaluations of social impacts in comparison to third parties who run evaluations. The authors note that the most popular models from the US (and to a lesser extent China) tend to attract the most third party evaluations, exposing <strong>a gap in evaluation of less-popular models</strong>. They also note that certain impact types such as data and content moderation impacts (as well as some others like environmental impacts) are not prevalent at all and are almost entirely absent from third-party evaluations, exposing the reality that <strong>third-parties just do not have access to the information they would need to properly evaluate certain issues</strong>.</p><p>The take-aways for policy here seem pretty clear. 
First-party evaluations of models by model providers are insufficient when it comes to evaluations of social impacts. There is a fair bit of variance in what level of attention different models receive and what dimensions of social impact are evaluated at all. Transparency standards are needed to provide more consistency and expectations for what evaluations need to be run and how, or which data needs to be disclosed so that third parties can cover more terrain with their evaluations. In addition, there need to be standards around which models demand a full evaluation. And there needs to be sufficient capacity in the evaluation landscape of third parties to be comprehensive. Advancing consistent transparency standards for AI models would support AI accountability by providing the information needed by different accountability forums.</p><h4><strong>References</strong></h4><p>Reuel A, Ghosh A, Chim J, et al. (2025) Who Evaluates AI&#8217;s Social Impacts? Mapping Coverage and Gaps in First and Third Party Evaluations. arXiv. <a href="https://arxiv.org/abs/2511.05613">https://arxiv.org/abs/2511.05613</a></p><p>Solaiman I, Talat Z, Agnew W, et al. (2023) Evaluating the Social Impact of Generative AI Systems in Systems and Society. arXiv. <a href="https://arxiv.org/abs/2306.05949v2">https://arxiv.org/abs/2306.05949v2</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.ai-accountability-review.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[AI Ethics Principles and Accountability]]></title><description><![CDATA[To move from high-level values to effective accountability, we still need to bridge the gap between abstractions and quantifiable, data-driven metrics.]]></description><link>https://www.ai-accountability-review.com/p/ai-ethics-principles-and-accountability</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/ai-ethics-principles-and-accountability</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Tue, 25 Nov 2025 14:45:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mocI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1ac346b-0173-43bc-a79a-09cea34ea61a_288x288.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Establishing norms and behavioral standards for AI systems is central to the <a href="https://www.ai-accountability-review.com/p/the-problem-of-ai-accountability">AI Accountability Problem</a>. Over the years, private companies, government agencies, non-profits, and other organizations have put forth a number of AI ethics principles to serve this purpose. A principle acts as a behavioral guideline&#8212;essentially a <em>value</em> defining what is &#8220;good&#8221; or &#8220;desirable&#8221; (van de Poel, 2020). 
In assessing AI behavior, such principles help define what is (in)appropriate and thus what behavior might call for accountability, either retrospectively for some observed AI failure, or <a href="https://www.ai-accountability-review.com/p/prospective-accountability">prospectively</a> towards preventing undesirable outcomes.</p><p>Early analyses of the numerous published AI guidelines have identified a few core principles. These include privacy, fairness/justice, accountability/responsibility/explicability, transparency, beneficence, non-malfeasance/safety, and human autonomy (Jobin et al, 2019; Hagendorff 2020; Floridi et al, 2018). Despite mostly stable underlying ideas the exact terminology can vary and leads to a lack of clarity (Morley et al, 2021). A longer tail of principles includes ideas like trust, sustainability, dignity, and solidarity (Jobin et al, 2019).</p><p>Principles can come from different sources and so be <em>biased</em> in different ways, such as towards ideas in dominant geographies or from power holders such as experts or companies (Hickok, 2021). They can come from researchers and experts in the field (Floridi, 2018), from professional codes of conduct in domains of practice (Diakopoulos et al, 2024), from broad consensus documents like the UN declaration of human rights (Latonero, 2018), and be further informed from public evaluations (Kieslich et al, 2024). What&#8217;s the most legitimate source of principles for AI accountability? While a treaty like the <a href="https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence">Framework Convention on Artificial Intelligence</a> has reached broad consensus, large swaths of the world still haven&#8217;t signed on. Achieving truly global principles will require ongoing political work.</p><p>Besides their potential to reflect biases, AI principles are also hard to actually implement in practice. Big abstractions need to be translated into concrete operationalizations (Hagendorff, 2020) if they are going to be used to measure AI system failures or guide AI system design to support prevention. Moreover, abstractions like fairness can hide contested ideas with conflicting perspectives (Mittelstadt, 2019) underlining the need to consider context-specific tradeoffs.</p><p>Prem (2023) analyzed more than 100 approaches from the literature for bridging the gap between principles and implementation. These include things like AI ethics criteria/checklists, metrics, process models, codes of practice, etc. He distinguishes approaches used during the design of a system (ex-ante), and those that are applied to an AI system after development or perhaps iteratively during development (ex-post). Ex-ante methods are relevant to prospective accountability, whereas ex-post methods are geared towards retrospective accountability (and also prospective if used iteratively during development). He notes that &#8220;Generally, there is a strong focus on those aspects for which technical solutions can be built,&#8221; exposing a further bias in the research on this topic.</p><p>Whereas designers and developers can adopt approaches to help prevent negative outcomes, AI system behavior itself should also be measured to assess adherence to principles. The idea of Ethics Based Auditing (EBA) applies the logic of auditing to the challenge of assessing system behavior &#8220;for consistency with relevant principles or norms.&#8221; (M&#246;kander et al, 2021). 
This starts to get at a core issue of operationalizing principles into <em>metrics</em> that can evaluate (mis)alignment with a value. Principles just set the direction; effective accountability requires quantifiable performance metrics. This in turn requires supporting data access to inform those measurements.</p><p>Rismani and colleagues (2025) reviewed hundreds of these measures in the literature as they relate to different system components, hazards, harms, and principles. 90% of the measures they found were related to just four principles: fairness, transparency, privacy, and trust. To be useful for accountability metrics need to define some <em>threshold</em> of the metric which indicates the principle has been violated, that the system may create a hazard, and therefore warrants a call for accountability. Thresholds may be context-dependent, vary based on domain, and are subject to the risk tolerance of different stakeholders, but are rarely discussed in the literature (Rismani et al, 2025). This returns us to the normative question: How do you define an acceptable vs. unacceptable level of a measure of a principle? At what level might reasonable people agree there should be accountability? <a href="https://www.ai-accountability-review.com/p/informing-ai-accountability-with">Public perceptions of acceptability</a> may play a role here.</p><p>Principles serve as orienting ideas for what is valued. They can be used to determine what constitutes inappropriate behavior, necessitating accountability either retrospectively (blame for failure) or prospectively (prevention of harm). Bringing them into formal accountability forums (e.g. administrative, legal) hinges on mitigating biases in their enumeration and reaching a high degree of consensus. But implementing them in practice remains a challenge. They need to be translated into <em>practices</em> that designers and developers can use to mitigate the hazards created by an AI system, or to <em>metrics</em> with clear <em>thresholds</em> that can measure AI system behavior for signs of deviation. Policy should support the development of context- and domain-specific operationalizations of metrics and thresholds that are indicative of violations of principles by AI systems, as well as the data access provisions that would enable those measurements by the relevant accountability forums.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h4><strong>References</strong></h4><p>&#8203;&#8203;Diakopoulos N, Trattner C, Jannach D, et al. (2024) Leveraging Professional Ethics for Responsible AI. Communications of the ACM.&#8203;&#8203;</p><p>Floridi L, Cowls J, Beltrametti M, et al. (2018) AI4People&#8212;An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. 
Minds and Machines 28(4): 689&#8211;707.</p><p>Hagendorff T (2020) The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines 30(1): 99&#8211;120.</p><p>Hickok M (2021) Lessons learned from AI ethics principles for future actions. AI and Ethics 1(1): 41&#8211;47.</p><p>Jobin A, Ienca M and Vayena E (2019) The global landscape of AI ethics guidelines. Nature Machine Intelligence 1(9): 389&#8211;399</p><p>Kieslich K, Helberger N and Diakopoulos N (2024) My Future with My Chatbot: A Scenario-Driven, User-Centric Approach to Anticipating AI Impacts. Conference on Fairness, Accountability, and Transparency: 2071&#8211;2085.</p><p>Latonero M (2018) Governing Artificial Intelligence: Upholding Human Rights &amp; Dignity. Data &amp; Society. <a href="https://datasociety.net/library/governing-artificial-intelligence/">https://datasociety.net/library/governing-artificial-intelligence/</a></p><p>Mittelstadt B (2019) Principles alone cannot guarantee ethical AI. Nature Machine Intelligence 1(11): 501&#8211;507.</p><p>Morley J, Kinsey L, Elhalal A, et al. (2021) Operationalising AI ethics: barriers, enablers and next steps. AI &amp; SOCIETY 38(1): 411&#8211;423.</p><p>M&#246;kander J, Morley J, Taddeo M, et al. (2021) Ethics-Based Auditing of Automated Decision-Making Systems: Nature, Scope, and Limitations. Science and Engineering Ethics 27(4): 44.</p><p>Poel I van de (2020) Embedding Values in Artificial Intelligence (AI) Systems. Minds and Machines 30(3): 385&#8211;409.</p><p>Prem E (2023) From ethical AI frameworks to tools: a review of approaches. AI and Ethics 3(3): 699&#8211;716.</p><p>Rismani S, Shelby R, Davis L, et al. (2025) Measuring What Matters: Connecting AI Ethics Evaluations to System Attributes, Hazards, and Harms. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 8(3): 2199&#8211;2213.</p>]]></content:encoded></item><item><title><![CDATA[Informing AI Accountability with Public Perceptions]]></title><description><![CDATA[By studying perceptions of risk, benefit, and moral alignment, we can design policies that reflect collective values and assign responsibility in a legitimate way.]]></description><link>https://www.ai-accountability-review.com/p/informing-ai-accountability-with</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/informing-ai-accountability-with</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Wed, 05 Nov 2025 13:00:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mocI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1ac346b-0173-43bc-a79a-09cea34ea61a_288x288.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One important way to understand standards and expectations for AI system use and behavior is to ask the public. This is critical especially for calls for accountability in social or <a href="https://www.ai-accountability-review.com/p/the-media-as-accountability-forum">media</a> <a href="https://www.ai-accountability-review.com/p/networked-ai-accountability">forums</a> since they are most exposed to a plurality of opinions about appropriateness or acceptability of behavior. In a democratic system we should also expect that standards for legal, political, and administrative forums be institutionalized downstream of public perspectives. 
Public perception of AI acceptance is a valuable input for policymakers to help prioritize areas for intervention, and shape the formalization of expectations.</p><p>A growing number of surveys consider the public perception and acceptance of AI across different use cases, such as health care, surveillance, and automation (Eom et al, 2024), personal health and labor replacement (Mun et al, 2025), AI in tax fraud detection (Kieslich et al, 2022), media, health, and justice domains (Araujo et al, 2020) and others. One study showed that overall judgments of the value of AI across a wide range of use cases is strongly shaped by perceived <em>benefits</em>, with perceived <em>risks</em> also playing a significant role (Brauner et al, 2025).</p><p>A recurring result in many of these survey studies is that there is variance in user acceptance of AI across people of different backgrounds. Factors such as the <em>knowledge</em>, <em>literacy</em>, <em>education, </em>or even <em>political</em> orientation of respondents, as well as their <em>age</em> and <em>gender</em> can play a role in the perception of risk, benefit, and acceptance of AI. For instance, younger respondents often view AI as less risky and more beneficial than older respondents (Brauner at al, 2025). A critical factor in individual perception is the level of AI knowledge the person has (and their confidence in that knowledge), where higher knowledge can lead to lower risk assessment, i.e. &#8220;risk blindness&#8221; (Said et al, 2023). Because of these differences, policy should ideally be informed by representative population samples, or perhaps population samples weighted by those who might <em>bear</em> the greater risk.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Kieslich et al (2021) take the perspective that we also need to understand public perception of the <em>principles</em> underlying AI systems. This in effect is a measure of whether the system is &#8220;aligned&#8221; with the perspectives and values of the person evaluating it. They measure perceptions of principles like explainability, fairness, security, accountability, accuracy, privacy, and limited machine autonomy for a scenario related to use of AI in tax fraud detection. For their representative sample of respondents from Germany they find that accountability was perceived as the most important principle. This underscores the idea that accountability is a critical property of AI systems that the public cares about.</p><p>Mun et al (2025) pairs a quantitative survey of various AI use cases together with open-ended follow-up questions where respondents elaborate on why they think a use case should or shouldn&#8217;t be developed, and what would need to change for them to switch their opinion of the use case. 
As with Brauner et al (2025) they find that cost-benefit reasoning dominates, but that in some cases virtue-based reasoning is somewhat more prevalent, such as for the Elementary School Teacher or Digital Medical Advice scenarios. They further analyze these rationale through the lens of <a href="https://en.wikipedia.org/wiki/Moral_foundations_theory">Moral Foundations Theory</a> and find that <em>Care</em> (i.e. dislike of pain of others or feelings of empathy and compassion) was the most prevalent reason mentioned overall, but fairness also dominated some use cases (e.g. Lawyer). This finding about how a moral foundation or value towards something like care aligns with one of the surveys reported on by Eom et al (2024) where 64% of respondents thought it was a bad idea for &#8220;robotic nurses for bedridden patients that can diagnose situations and decide when to administer medicine.&#8221; In other words, use cases where care is an underlying moral proposition seem to make people less accepting of the use of AI. In terms of accountability, then, we need to consider not only perceived risk, but also whether there is some kind of underlying value in society that is being violated.</p><p>One of the gaps identified by Araujo et al (2020) is that public perception of AI acceptance in a use case doesn&#8217;t necessarily tell us if people would <em>personally</em> accept a <em>specific</em> AI decision, or reject it and instead call for accountability. Important work remains to be done to understand this ego-centric retrospective case. On the other hand, for <a href="https://www.ai-accountability-review.com/p/prospective-accountability">prospective accountability</a>, research has begun to explore public perceptions around which stakeholders should be responsible for taking action to prevent negative outcomes (Barnett et al, 2025). This research uses written scenarios depicting harm from AI in the media ecosystem as a basis for a survey to gather public input about which stakeholders are in a position to take action to prevent the harm. Participants assigned responsibility to any of 12 different stakeholders that emerged from the data, including government, tech companies, news publishers, schools, social media platforms, independent third parties, local communities, public health officials, media companies, NGOs, employers, and unions. Specific actions that these stakeholders could take were then rated in terms of whether they <em>should</em> be taken, and also whether the action should be prioritized, resulting in rich data that could inform policy on how to assign responsibility for prevention, though ideally this process would be re-run with a representative sample.</p><p>Public opinion plays a critical role in shaping legitimate norms and standards for AI behavior. Policymakers should recognize that expectations of AI systems &#8212; including what is considered &#8220;acceptable&#8221; &#8212; are rooted in social perceptions. Surveys show that these perceptions vary based on demographic or other individual factors such as knowledge, and that there is variance across use case contexts. Policy should therefore be grounded in representative and inclusive data that is tailored to the specific use case contexts to be governed. Although cost-benefit reasoning dominates rationale for AI acceptance, value-based reasoning also needs to be considered. 
Finally, there is still much open research to do by drilling further into perceptions of who is responsible for what across a variety of situations.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/p/informing-ai-accountability-with?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.ai-accountability-review.com/p/informing-ai-accountability-with?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h4><strong>References</strong></h4><p>Araujo T, Helberger N, Kruikemeier S, et al. (2020) In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI &amp; SOCIETY 35(3): 611&#8211;623.</p><p>Barnett J, Kieslich K, Helberger N, et al. (2025) Envisioning Stakeholder-Action Pairs to Mitigate Negative Impacts of AI: A Participatory Approach to Inform Policy Making. Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency: 1424&#8211;1449.</p><p>Brauner P, Glawe F, Liehner GL, et al. (2025) Mapping public perception of artificial intelligence: Expectations, risk&#8211;benefit tradeoffs, and value as determinants for societal acceptance. Technological Forecasting and Social Change 220: 124304.</p><p>Eom D, Newman T, Brossard D, et al. (2024) Societal guardrails for AI? Perspectives on what we know about public opinion on artificial intelligence. Science and Public Policy 51(5): 1004&#8211;1013.</p><p>Kieslich K, Keller B and Starke C (2022) Artificial intelligence ethics by design. Evaluating public perception on the importance of ethical design principles of artificial intelligence. Big Data &amp; Society 9(1): 20539517221092956.</p><p>Mun J, Yeong WBA, Deng WH, et al. (2025) Why (Not) Use AI? Analyzing People&#8217;s Reasoning and Conditions for AI Acceptability. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 8(2): 1771&#8211;1784.</p><p>Said N, Potinteu AE, Brich I, et al. (2023) An artificial intelligence perspective: How knowledge and confidence shape risk and benefit perception. <em>Computers in Human Behavior</em> 149: 107855.</p>]]></content:encoded></item><item><title><![CDATA[Transparency Needs for AI Agent Accountability ]]></title><description><![CDATA[A new framework proposes a detailed approach to incident analysis, outlining the specific data developers should log to close the accountability gap for AI agents.]]></description><link>https://www.ai-accountability-review.com/p/transparency-needs-for-ai-agent-accountability</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/transparency-needs-for-ai-agent-accountability</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Tue, 28 Oct 2025 10:02:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mocI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1ac346b-0173-43bc-a79a-09cea34ea61a_288x288.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AI incident monitoring databases like the OECD <a href="https://oecd.ai/en/incidents">AI Incident Monitor</a> and the <a href="https://incidentdatabase.ai/">AI Incident Database</a> track cases where AI has created harm. 
But by sourcing incidents from public news articles they&#8217;re limited in the detail they include. So there&#8217;s a lot of information lacking when it comes to trying to understand and hold complex agentic AI systems accountable. A new paper entitled &#8220;<a href="https://ojs.aaai.org/index.php/AIES/article/view/36596">Incident Analysis for AI Agents</a>&#8221; published at AIES this year tries to tackle this problem by outlining a transparency framework for what pieces of information should be collected about AI agent incidents (Ezell et al, 2025).</p><p>The paper outlines three factors that contribute to AI agent incidents: (1) <strong>system factors</strong>, (2) <strong>contextual factors</strong>, and (3) <strong>cognitive errors</strong>. System factors are things like training and feedback data, learning methods, the system prompt, and scaffolding software around the agent. Contextual factors include aspects of the task definition, tools that the agent uses, and information the agent uses or needs to perform tasks. Finally, &#8220;cognitive&#8221; errors are basically flaws in how the AI agent functions leading to failure, which result from faulty observation of the environment, understanding of inputs, decision-making, and action execution to achieve a goal.</p><p>Based on these classes of factors the authors go on to outline a range of information that would be helpful to disclose as part of an AI agent incident. They organize this information into three categories: (1) <strong>activity logs</strong>, (2) <strong>system documentation and access</strong>, and (3) <strong>tool-related information</strong>. Activity logs would include a record of <em>all inputs and outputs to the agent</em> including system and user prompts, external information included in inputs, model reasoning traces, model outputs and actions taken, and necessary metadata like timestamps to contextualize all of this. System documentation and access refers to information about the AI model such as any model or system cards, version information (and change logs), and other parameters (e.g. temperature, random seeds) that might inform an incident reconstruction. Tool information is there to document any tools that agents use including identifying them, their version, the actions the tool enables, and any information about how the tool might adapt to the user.</p><p>This paper goes a long way toward outlining the necessary information that should be included in an incident report. But from a policy perspective there are some open questions about incident reporting. For one, <em>how long</em> should a developer maintain an activity log? This might depend on the risk profile of the use case, as well as whether there are any privacy considerations and how those might be handled. Another key question is <em>who gets access to an incident report</em> including any activity logs as well as system and tool-related information? The severity of the incident may create different tiers of access. <a href="https://www.ai-accountability-review.com/p/networked-ai-accountability">Administrative and judicial forums</a> might need access to the detailed information outlined in this paper for root-cause analysis and for assessing accountability, but it&#8217;s unclear that it should be made fully public due to privacy or trade secrecy issues. 
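</p><p>To make the activity-log category concrete before turning to infrastructure questions, here is a minimal sketch of what a single log record might contain. It is only an illustration: the field names are hypothetical choices of mine rather than a schema proposed by Ezell et al (2025), but each field maps onto one of the information types outlined above.</p><pre><code># Hedged sketch of one agent activity-log record; field names are hypothetical,
# not a schema from Ezell et al (2025).
from dataclasses import dataclass, field
from typing import Any

@dataclass
class AgentLogRecord:
    timestamp: str        # ISO 8601 time of the step, needed to reconstruct incident timelines
    agent_id: str         # which deployed agent instance produced this step
    model_version: str    # model/system version, to tie the step back to system documentation
    system_prompt: str    # system prompt in effect at this step
    user_input: str       # user prompt or upstream instruction
    external_context: list[str] = field(default_factory=list)  # retrieved documents or tool outputs fed into the model
    reasoning_trace: str = ""                                   # model reasoning, if the system exposes it
    action: dict[str, Any] = field(default_factory=dict)        # e.g. the tool called, its version, and its arguments
    output: str = ""                                            # the model output or action result for this step
</code></pre><p>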
Still, secure infrastructure and access control will be needed and policy should consider how to create a shared and standardized infrastructure that AI developers can report into.</p><p>There are a few issues that the authors don&#8217;t address but which I also think will be important to policy. Related to the access control dimension, a common critique of providing transparency information is that it can enable <em>gaming and manipulation</em> (Diakopoulos, 2020). The many information factors that the authors outline need to be stress tested against how an adversary might be able to manipulate the agent if they were made public. This can also inform which pieces of information need to be withheld for specific closed-door forums, like administrative agencies or judicial cleanrooms. Another open question relates to AI agents using tools that use other tools. If tool use is implicated in an incident, then presumably we would want to <em>recursively evaluate all the tools</em> it may have in turn relied on. This then creates additional monitoring and activity logging demands on tools that are made available to agents. Finally, from a sociotechnical standpoint I think there could be aspects of AI agent transparency that disclose more about the <em>human context </em>around an incident, such as the roles and activities of supervisors, users, or other humans in the loop that may have had access or authority over intermediate results for the agent.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h4><strong>References</strong></h4><p>Diakopoulos N (2020) Transparency. <em>Oxford Handbook of Ethics and AI.</em> Eds. Markus Dubber, Frank Pasquale, Sunit Das.</p><p>Ezell C, Roberts-Gaal X and Chan A (2025) Incident Analysis for AI Agents. <em>Proc. AI, Ethics, and Society (AIES)</em> DOI: 10.48550/arxiv.2508.14231.</p>]]></content:encoded></item><item><title><![CDATA[Networked AI Accountability ]]></title><description><![CDATA[How different forums contributed to producing accountability in the Dutch welfare scandal.]]></description><link>https://www.ai-accountability-review.com/p/networked-ai-accountability</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/networked-ai-accountability</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Tue, 14 Oct 2025 04:59:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PjTA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57143e96-84f9-4d01-b04b-9bdb355969b7_1600x1043.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>An accountability relationship between an actor and a<em> </em>forum means that the actor has to answer to that forum for some conduct (Bovens, 2007). 
There are a range of types of forums that might have accountability relationships with AI systems including <em>political</em> (e.g. parliamentary hearings, democratic elections), <em>legal</em> (e.g. courts), <em>administrative</em> (e.g. auditors or inspectors from official agencies), <em>professional</em> (e.g. professional societies, industry working groups), <em>social</em> (e.g. civil society organizations, interest groups), or <em>media</em> (e.g. news media, social media).</p><p>Different forums operate in different ways, have different capacities for obtaining information or explanation, and may have different standards of expected behavior or ways to sanction the actor. There are also differences in how their authority is constituted, with legal or administrative authority <em>formally</em> flowing from the state, while professional, social, and media forums gain their authority through other <em>informal</em> social processes. These distinctions correspond to <em>vertical</em> accountability, where a forum formally holds power over the actor often due to a hierarchical relationship between them, and <em>horizontal</em> accountability which is essentially voluntary and where there is no formal obligation to provide an account. Forums can also be <em>public</em> as is the case for political, legal, professional, social, and media forums, while others like administrative forums may be partially public or <em>non-public</em>.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Because of their different capacities to know, act, and sanction, forums often work in concert to hold an actor accountable. In a <em>networked</em> view of accountability the interplay between forums is a necessary feature of how accountability is ultimately rendered (Wieringa, 2020). For instance a forum with informal power and a horizontal relationship to the actor in question (e.g. media) may contribute knowledge that is publicized and which informs a forum with formal power and a vertical relationship (e.g. a relevant governing agency) that can further pursue accountability, if needed in a non-public space that accommodates issues such as trade secrecy or privacy. Different forums respond and react to one another.</p><p>Wieringa (2023) provides a detailed description of how networked accountability works, illustrating it with the case of the Dutch welfare fraud system, SyRI (System Risk Indication). Briefly, SyRI was a system implemented by the Dutch government and used by municipalities from 2015-2019 to try to detect potential fraud based on welfare beneficiary data. 
In 2020 a Dutch court ruled that the law authorizing the creation of SyRI was unlawful because it conflicted with the right to privacy ensured by the European Convention on Human Rights (van Bekkum and Borgesius, 2021). While there had been some administrative forums early in the development leading up to the law which tried to pump the brakes, those forums were ultimately not successful in shaping what became the law before parliament passed it.</p><p>How was accountability achieved here? The following figure illustrates many of the various relationships described by Wieringa in the case.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!PjTA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57143e96-84f9-4d01-b04b-9bdb355969b7_1600x1043.png" alt="Diagram of the accountability relationships between the actors and forums involved in the SyRI case, as described by Wieringa (2023)"></figure></div><p>It was ultimately the legal forum that provided the formal accountability and authority to overrule the law authorizing the creation of SyRI. In essence the case was about holding accountable the legislators who delegated authority to create the AI system to risk rate people using private personal information. There were clear limits here though as the legal forum was unable to compel disclosure of detailed information about how the SyRI algorithm actually works, with the government arguing that disclosure of that information could enable fraudsters to game or evade the system (van Bekkum and Borgesius, 2021). As Wieringa (2023) writes, the court indicated that the State &#8220;needed to explain how the algorithmic system was designed, tested, applied, and how it operates&#8221; but failed to do so. As the court opinion wrote, &#8220;[w]ithout insight into the risk indicators and the risk model, or at least without further legal safeguards to compensate for this lack of insight, the SyRI legislations provides insufficient points of reference for the conclusion that by using SyRI the interference with the right to respect for private life is always proportional and therefore necessary&#8230;&#8221; (Meuwese, 2020). 
This highlights that even a formal forum such as a courtroom may not be able to bridge knowledge gaps about an AI system, and that insufficient transparency about such systems is a core impediment to accountability.</p><p>While the legal forum was able to provide the formal accountability to stop the use of SyRI, both social and media forums also played critical roles in achieving that outcome, and the political forum was further activated in the process as well. Indeed the impetus for the court case originally came from a collection of civil society actors, &#8220;The Privacy Coalition&#8221;, which in 2016 filed a public records request to find out more about the system (Wieringa, 2023). The critical issue in the public records response was that &#8220;crucial information, such as audit reports and PIAs [Privacy Impact Assessments], needed to evaluate the proportionality of the system was withheld&#8221;. There simply wasn&#8217;t enough information to assess whether the privacy violations at stake might be warranted. In short, the legitimacy of the system couldn&#8217;t be established on the basis of the information provided: the state hadn&#8217;t provided a sufficient account to the social actor. Unsatisfied with the level of detail provided, The Privacy Coalition then sued the state in 2018, moving into the legal forum.</p><p>The lawsuit also stimulated some activity in the political forum, with two members of parliament (MPs) filing a motion to make the SyRI system transparent, which was denied by the state. Around this time The Privacy Coalition activated the media forum through a campaign to educate the public about SyRI and shape public attention, opinion, and awareness of the system and the issues it exposed. This had the apparent effect of also stimulating more social actors in the form of citizen demonstrations, which the media then covered and amplified further. The media forum also participated by scrutinizing SyRI and developing arguments against it through published editorials and commentaries, and by asking members of parliament or of municipal councils to account for the system.</p><p>Accountability is not a clean process. It involves lots of relationships, connections, and back and forth as different forums gain information and trigger or reinforce each other. Forums with informal, horizontal accountability relationships are needed to mobilize information; however, at the end of the day there needs to be formal accountability from a forum with the power to change the situation and sanction actors, in this case by overturning a law. That means we need laws that define what AI behavior is permissible (or, as in this case, what values like privacy need to be preserved in AI system behavior), and that other forums need to have capabilities to gain knowledge of AI behavior such that they can potentially activate formal accountability in a legal (judicial) forum. To the extent that the state would want to defend or reimplement a system akin to SyRI, that system would need to offer more algorithmic transparency to clearly demonstrate how the government interest in efficiency of fraud detection is balanced against relevant fundamental rights.</p><h4><strong>References</strong></h4><p>Bekkum M van and Borgesius FZ (2021) Digital welfare fraud detection and the Dutch SyRI judgment. <em>European Journal of Social Security</em> 23(4): 323&#8211;340.</p><p>Bovens M (2007) Analysing and Assessing Accountability: A Conceptual Framework. 
<em>European Law Journal</em> 13(4): 447&#8211;468.</p><p>Meuwese A (2020) Regulating algorithmic decision-making one case at the time: A note on the Dutch &#8220;SyRI&#8221; judgment. European Review of Digital Administration &amp; Law 1(1).</p><p>Wieringa M (2020) What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability. <em>FAT* &#8217;20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency</em>: 1&#8211;18.</p><p>Wieringa M (2023) &#8220;Hey SyRI, tell me about algorithmic accountability&#8221;: Lessons from a landmark case. <em>Data &amp; Policy</em> 5.</p>]]></content:encoded></item><item><title><![CDATA[How California’s New AI Law Supports Accountability]]></title><description><![CDATA[California's new AI law takes a "better safe than sorry" approach, creating prospective accountability for catastrophic risks from the world's most powerful "frontier" AI models.]]></description><link>https://www.ai-accountability-review.com/p/how-californias-new-ai-law-supports</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/how-californias-new-ai-law-supports</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Wed, 01 Oct 2025 10:02:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mocI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1ac346b-0173-43bc-a79a-09cea34ea61a_288x288.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>California&#8212;home to some of the largest AI developers in the world&#8212;has a <a href="https://www.gov.ca.gov/2025/09/29/governor-newsom-signs-sb-53-advancing-californias-world-leading-artificial-intelligence-industry/">new AI law on the books</a>. Known as the &#8220;Transparency in Frontier Artificial Intelligence Act&#8221; (i.e. Senate Bill 53, or just SB53) the law is an important example of how legislative authority can strengthen the capacity for AI accountability. It provides a series of provisions that call for the release of information that help society know about potentially risky AI behaviors.</p><p>The scope of the <a href="https://legiscan.com/CA/text/SB53/id/3270002/California-2025-SB53-Enrolled.html">law</a> is quite narrow, however, as it only applies to &#8220;catastrophic risk&#8221; and &#8220;critical safety incidents&#8221; related to &#8220;frontier foundation models&#8221;. Unlike the wider scope of something like the EU&#8217;s AI Act, SB53 is really targeted. A &#8220;frontier&#8221; model is defined as one that is trained with more than a threshold number of numerical operations. What makes something &#8220;catastrophic&#8221; according to the law is that more than 50 people are seriously harmed, or damages amount to at least $1B from a single incident. 
The risks in focus here are hypothetical, including AI systems assisting with making or releasing chemical, biological, radiological, or nuclear weapons; unsupervised AI systems that engage in conduct you might recognize as murder, assault, extortion, or theft; or AI evading the control of the developer or user.</p><p>Because arguably none of these risks has ever actually materialized, it&#8217;s appropriate to see this law as an implementation of the <a href="https://en.wikipedia.org/wiki/Precautionary_principle">precautionary principle</a>, the idea that action should be taken to prevent potential harm, even when scientific proof of the risk is incomplete or uncertain. Basically, better safe than sorry. The law is a nice example of creating <a href="https://www.ai-accountability-review.com/p/prospective-accountability">prospective accountability</a> &#8212; assigning responsibilities for preventing outcomes that are in this case still quite uncertain but which we want to minimize the chance of coming about. This puts additional onus on the processes implemented to prevent such risks from materializing, and on monitoring the system for indicators of such risks.</p><p>Transparency can be defined as &#8220;the availability of information about an actor allowing other actors to monitor the workings or performance of this actor&#8221; [1] and is recognized as an enabler for accountability [2]. We can&#8217;t hold accountable what we don&#8217;t know about. The law addresses the <a href="https://www.ai-accountability-review.com/p/the-problem-of-ai-accountability">knowledge dimension of the AI Accountability problem</a> using three mechanisms to increase the supply of information about catastrophic risks from frontier foundation models: (1) transparency reports with a wide range of information on risk prevention processes and assessments, (2) incident reporting of critical safety incidents or risks found from internal uses, and (3) reinforcing whistleblower protections so that insiders with direct experience can raise an alarm.</p><p>Reflecting a prospective accountability perspective, the law requires frontier model developers to publish on their website a &#8220;frontier AI framework&#8221; that includes a lot of details meant to create processes that prevent catastrophic risks. 
The framework needs to include information about standards incorporated, thresholds for assessing catastrophic risk, mitigations taken for those risks, reviews of the adequacy of those mitigations, use of third-party assessments, updating the framework, security of model weights, the identification of and response to critical safety incidents, internal governance practices to ensure these processes are implemented, and assessment of catastrophic risks from internal use. All of this is meant to establish a sound process for preventing these risks, and creates the conditions for accountability if the developer doesn&#8217;t have an adequate process.</p><p>Beyond the many bits of information that need to be disclosed in the frontier AI framework, developers also have to post a transparency report that includes a range of information about the model including, importantly, the <em>intended</em> uses and <em>restrictions</em> or conditions on uses of the model. These bits of information are important for accountability because they help define the appropriate behavior for users of the models. This transparency report also has to include additional information about the implementation of the frontier AI framework, including specific assessments of catastrophic risk, the extent to which third-party evaluators (e.g. red teamers) were involved, and any other steps taken to implement the framework. In other words, the developer has to say how they&#8217;re fulfilling the process they outline in their framework.</p><p>Developers also have to disclose the results of assessments of catastrophic risks resulting from internal use of their models, as well as any critical safety incidents, to an administrative office of the state. So not only is the developer accountable to the public by way of posting a lot of information about its framework and assessments to its website, but it is also accountable to an administrative forum where it reports additional assessments and incidents. Interestingly, the incident reporting mechanism will also be open to the public, which is important insofar as one of the risks of concern is loss of control of the model by users.</p><p>The last bit here is that the law strengthens whistleblower protections. It basically clarifies that employees at frontier AI model developers who are responsible for &#8220;assessing, managing, or addressing risk of critical safety incidents&#8221; are allowed to disclose information to certain actors about whether the frontier developer&#8217;s activities might pose a danger resulting from a catastrophic risk, and cannot be retaliated against for doing so.</p><p>The bill attaches clear consequences for failing to meet the obligations it lays out. The Attorney General of California can impose a civil penalty of up to $1,000,000 per violation for failing to publish required documents, making false statements, failing to report incidents, or not complying with its own frontier AI framework. This provides the sanctions that make the accountability relationship meaningful, though this only applies to large developers with more than $500M in annual gross revenue.</p><p>SB53 is generally a solid example of how you might go about legislating prospective AI accountability to prevent risks that are uncertain. It has provisions for updating over time that will be important for keeping it relevant as technology advances. 
Perhaps one of the biggest weaknesses I see is that the law doesn&#8217;t require developers to disclose the training compute used for a given model, even though crossing the threshold of 10^26 numerical operations is what defines a &#8220;frontier model&#8221; in the first place. At the end of the day it&#8217;s up to the model developer to raise their hand, say that this law applies to them, and identify which models it applies to. And by <a href="https://epoch.ai/gradient-updates/why-gpt5-used-less-training-compute-than-gpt45-but-gpt6-probably-wont">one estimate</a> GPT-5 wouldn&#8217;t even fall within the remit of the law. We shall see how and whether the frontier model developers engage.</p><h4><strong>References</strong></h4><p>[1] Albert Meijer, &#8220;Transparency,&#8221; in The Oxford Handbook of Public Accountability, ed. Mark Bovens, Robert E. Goodin, and Thomas Schillemans (Oxford: Oxford University Press, 2014)</p><p>[2] Nicholas Diakopoulos, &#8220;Transparency,&#8221; in The Oxford Handbook of Ethics and AI, ed. Markus Dubber, Frank Pasquale, and Sunit Das (Oxford: Oxford University Press, 2020)</p>]]></content:encoded></item><item><title><![CDATA[Robots.txt as a Lever for AI Accountability]]></title><description><![CDATA[Could common law around contracts and negligence provide for legal accountability?]]></description><link>https://www.ai-accountability-review.com/p/robotstxt-as-a-lever-for-ai-accountability</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/robotstxt-as-a-lever-for-ai-accountability</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Tue, 23 Sep 2025 10:02:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mocI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1ac346b-0173-43bc-a79a-09cea34ea61a_288x288.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The rise of generative AI has been fueled by a huge appetite for data, with AI developers deploying bots to scrape internet content to train their models. But this data collection often ignores a long-standing internet norm: the robots.txt file. For decades, this <a href="https://www.rfc-editor.org/rfc/rfc9309.html">standard</a> has been the primary way website owners communicate rules for automated access of their content. 
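</p><p>For a sense of what these rules look like in practice, here is a minimal robots.txt file. It is just an illustration (using OpenAI&#8217;s GPTBot as one example of a declared AI crawler); site owners list whichever bots and paths they want to address:</p><pre><code># Disallow a specific AI training crawler from the entire site
User-agent: GPTBot
Disallow: /

# All other bots may access everything except a private directory
User-agent: *
Disallow: /private/
Allow: /
</code></pre><p>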
Can such a standard for bot behavior also serve as a legal basis for accountability?</p><p>A new article in the <em>Computer Law &amp; Security Review</em> [1], <a href="https://arxiv.org/pdf/2503.06035">argues that</a> the robots.txt standard can operate as more than a polite suggestion. The authors propose that common law principles, specifically in contract and tort law, offer a viable path to hold AI developers accountable for how their bots access content on websites.</p><p>In case you&#8217;re not familiar, the robots.txt file is a public file that a website owner places on their server to set bounds on web crawlers and scraper bots. It specifies what parts of a website are off limits to different bots, helping to manage server load and control access to private or sensitive parts of the site. Major search engines generally respect it and an increasing number of sites online use it to control access by different AI bots [2], but its effectiveness relies on good faith.</p><p>The first argument in the article is that robots.txt actually functions as a <em>contract</em>. A webmaster makes an &#8220;offer&#8221; for the contract by having the robots.txt file on their site. In essence it conveys, "You may access my site under these specific conditions." An AI operator accepts this offer not with words, but with action. When it sends its bot to access the website's content, that action signifies acceptance of the terms laid out in the robots.txt file. The bot&#8217;s continued operation on the site demonstrates a deliberate engagement with the website's conditions. While this contract can be implied, it can be further strengthened by referring to robots.txt in the site's Terms of Use. This argument sets up contract law as a path for accountability of AI bot behavior in accessing websites.</p><p>In cases where the website blocks all bot access there can&#8217;t technically be a contract because no &#8220;offer&#8221; was made. In these cases the authors argue that the tort of negligence could be used to create legal accountability of AI bot behavior. The authors propose that AI operators owe a duty of care to website owners. Ignoring a robots.txt file is a breach of that duty because respecting the file is a well-established community norm. And when this breach causes harm&#8212;such as reputational damage from an AI model misrepresenting a site's content or consequential economic loss&#8212;the AI developer could be found liable for negligence.</p><p>For policymakers, this research offers a clear message: robots.txt can be treated as more than an informal guideline for AI behavior. But it still needs to be tested in court. Clarifying its legal standing could be the next step towards accountability in a legal forum. More generally, it&#8217;s worth considering whether contracts or civil claims of negligence should be a preferred route for governing and holding accountable AI system behavior.</p><h2><strong>References</strong></h2><p>[1] Chang, C.-Y. &amp; He, X. The liabilities of robots.txt. Computer Law Security Review. 58, 106176 (2025). <a href="https://arxiv.org/abs/2503.06035">https://arxiv.org/abs/2503.06035</a></p><p>[2] Longpre, S. et al. Consent in Crisis: The Rapid Decline of the AI Data Commons. NeurIPS (2024) doi:10.48550/arxiv.2407.14933.</p><p><em>Disclosure</em>: <em>Some text in this post was adapted based on suggestions from AI. 
</em></p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/p/robotstxt-as-a-lever-for-ai-accountability?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/p/robotstxt-as-a-lever-for-ai-accountability?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.ai-accountability-review.com/p/robotstxt-as-a-lever-for-ai-accountability?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div>]]></content:encoded></item><item><title><![CDATA[SEC 10-K Disclosures as a Route to Corporate AI Accountability?]]></title><description><![CDATA[There's some value to them, but policy needs to call for more specific statements]]></description><link>https://www.ai-accountability-review.com/p/sec-10-k-disclosures-as-a-route-to</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/sec-10-k-disclosures-as-a-route-to</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Tue, 16 Sep 2025 10:01:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mocI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1ac346b-0173-43bc-a79a-09cea34ea61a_288x288.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If society <a href="https://ndiakopoulos.substack.com/p/the-problem-of-ai-accountability">doesn&#8217;t know about</a> how AI was used or contributed to some outcome there can be no accountability. This is where transparency can be a useful enabler. Transparency&#8212;defined as &#8220;the availability of information about an actor allowing other actors to monitor the workings or performance of this actor&#8221; [1]&#8212;comes in many different shapes and sizes. Here I want to talk about it in terms of corporate disclosures made to the U.S. Securities and Exchange Commission (SEC) in 10-K filings.</p><p>A 10-K filing is documentation that public companies need to submit annually to the SEC. It provides a comprehensive overview of the business including operations, financial performance, and any significant risks. In recent years the <a href="https://www.alston.com/en/insights/publications/2024/07/navigating-ai-related-disclosure-challenges">SEC has become concerned with</a> &#8220;AI Washing&#8221; around the risks of AI, essentially that businesses might be making false claims by over-hyping the technology or underindexing the risks. This interest has even <a href="https://www.intelligize.com/secs-ai-stance-holds-steady-under-new-leadership/">continued</a> under the new administration. 
Filings are legally binding, and insufficient disclosures can lead to litigation or other enforcement <a href="https://www.sec.gov/enforcement-litigation/administrative-proceedings/33-11352-s">actions</a>.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>These disclosures can act as a set of expectations around corporate perceptions of AI risks. If the public knows the company knows there is a risk then we might expect them to do something to try to mitigate it. It also provides a little ray of light that might help accountability forums, such as <a href="https://ndiakopoulos.substack.com/p/the-media-as-accountability-forum">the media,</a> ask the company about what it&#8217;s doing about the risk.</p><p>So, what exactly are companies disclosing about AI risks in these filings? A recent paper on <a href="https://arxiv.org/abs/2508.19313">arXiv</a> presented an analysis of more than 30,000 10-K filings from more than 7,000 companies made between 2020 and 2024 [2]. Analysis shows that just about half the companies by 2024 mentioned AI somewhere in their disclosure, which was up from only about 1 in 8 in 2020.</p><p>The researchers qualitatively analyzed a sample of 50 companies, including 10 of the top tech companies. In that sample they found a wide range of societal risks from AI being cited, including discrimination, privacy, misinformation, malicious use, interactional harms, and so on. The risks were also framed in particular ways to dodge responsibility: &#8220;The top-tech firms often seem to externalise societal AI risks, attributing them to third-party misuse (e.g., faulty datasets or misuse of their models) while rarely acknowledging their own role in developing and deploying systems that may contribute to these risks&#8230;&#8221;</p><p>Oftentimes companies rely on vague or broad boilerplate language when they talk about risks, though there are at times more specific statements. In the paper the researchers quote the disclosure from Cognizant Technology Solutions: &#8220;The uncertainty around the safety and security of new and emerging AI applications requires significant investment to test for security, accuracy, bias, and other variables - efforts that can be complex, costly, and potentially impact our profit margins.&#8221; That&#8217;s the kind of statement that might be useful for accountability purposes.</p><p>Perhaps just as interesting are the risks the researchers didn&#8217;t observe in the sub-sample, which included environmental harms of AI, socioeconomic displacements, dangerous AI capabilities, multi-agent risks, and information ecosystem pollution. These are the risks that it seems companies haven&#8217;t yet recognized are anything they need to worry about. 
That may also limit accountability proceedings if companies don&#8217;t think these are issues they need to address.</p><p>There are clear limitations for informing AI accountability from 10-K filings both due to vague language and responsibility shirking. At the same time, this study does show that there can <em>sometimes</em> be bits of useful transparency included in these disclosures. Still, a more effective policy might more clearly indicate the types and specificity of AI risk information that are expected in these kinds of filings.</p><h4><strong>References</strong></h4><p>[1] Albert Meijer, &#8220;Transparency,&#8221; in The Oxford Handbook of Public Accountability, ed. Mark Bovens, Robert E. Goodin, and Thomas Schillemans (Oxford: Oxford University Press, 2014)</p><p>[2] Marin, L. G. U.-B., Rijsbosch, B., Spanakis, G. &amp; Kollnig, K. Are Companies Taking AI Risks Seriously? A Systematic Analysis of Companies&#8217; AI Risk Disclosures in SEC 10-K forms. arXiv (2025). <a href="https://arxiv.org/abs/2508.19313">https://arxiv.org/abs/2508.19313</a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Media as Accountability Forum ]]></title><description><![CDATA[How policymakers could help support the media's role in fostering a more accountable AI ecosystem.]]></description><link>https://www.ai-accountability-review.com/p/the-media-as-accountability-forum</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/the-media-as-accountability-forum</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Tue, 26 Aug 2025 14:02:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mocI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1ac346b-0173-43bc-a79a-09cea34ea61a_288x288.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The news media is one of the forums that can enact accountability on AI systems, though it&#8217;s also important to keep in mind the <a href="https://open.substack.com/pub/ndiakopoulos/p/networked-ai-accountability?r=x4te6&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=true">networked view</a> of how the media forum connects to and interacts with others.</p><p>Jacobs and Schillemans present a typology for how the media contribute to accountability of public institutions, outlining four distinct functions: <em>spark</em>, <em>forum</em>, <em>amplifier</em>, and <em>trigger</em> (Jacobs and Schillemans, 2019). As a spark, the ordinary activity of news reporting (&#8220;just asking questions&#8221;) may cause organizations to reconsider their behavior or role in a process. 
As a forum, the media act as a space where investigations uncover unwanted behaviors leading to critical questions that are posed to the actor for explanation. The media can also amplify the impact of other accountability forums, for example, by bringing more attention to congressional hearings. The last role, trigger, is where the media contributes to enabling other accountability forums by producing relevant information that spurs formal accountability in other forums.</p><p>Unlike legal or administrative forums the media is an informal forum and has no real authority to enforce infractions from the actors they address. Media forums wield power by drawing public attention to issues, with the consequences being largely <em>reputational</em> in nature. An actor who fails to provide a satisfactory account of an outcome may appear negligent in the public eye or draw the disapproval of the public for its conduct, negatively impacting its reputation.</p><p>While its teeth may not be as sharp as some other forums&#8217; the media still has important contributions to make towards closing information and knowledge gaps around AI systems. Using techniques such as interviews with various stakeholders, examination of leaked documents, public information requests (Fink, 2017), external data-driven audits of system behavior (Diakopoulos, 2015), or large-scale investigation of AI systems (Veerbeek, 2025), media can inject valuable observations about the behavior of AI systems that trigger a call for accountability. Media can also surface information that informs and triggers other forums that do have teeth. For instance, <a href="https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/">Reuters&#8217; reporting on an internal Meta document</a> detailing chatbot policies led to <a href="https://www.hawley.senate.gov/kids-deserve-protection-hawley-launches-investigation-into-meta-for-training-its-ai-chatbots-to-target-children-with-sensual-conversation/">Senate committee investigations</a>. Other journalistic investigations, such as ProPublica&#8217;s look at algorithmic rent-setting in Texas, have <a href="https://www.propublica.org/article/greystar-realpage-doj-settlement-landlords-apartments-software">eventually led to legal settlements</a>.</p><p>Media also play a critical role in establishing or maintaining norms around acceptable behaviors for AI systems in society as well as who may be answerable for explaining violations of behavior. This includes propagating both descriptive norms (i.e. what actors do) and injunctive norms (i.e. what actors ought to do) (Lapinski and Rimal, 2005). Journalists apply a range of values around what kinds of outcomes or behaviors of actors may be normatively detrimental and therefore warrant scrutiny. In their daily decisions around what is newsworthy they have to assess what impacts in society are worthy of broader attention. This is the agenda-setting power of the media. By selecting and framing impacts of AI to report on, media can help establish beliefs or reinforce attitudes, which can eventually develop into social norms or expectations for the behavior of AI systems (Shehata et al, 2021). And of course the media is not homogenous. News outlets on the left vs. 
the right of the political spectrum prioritize different risks and impacts of AI in society in their coverage (Allaham et al, 2025).</p><p>In the course of their reporting journalists may seek accounts to help explain some observed behavior &#8212; why did the AI system produce some bad outcome? This activity helps to establish accountability relationships between actors in the system and the media as a forum. To do this journalists parse the complex sociotechnical system and consider which actors might take responsibility. By asking certain actors for explanations (e.g. a tech developer or data annotation provider), journalists audition expectations that the actor may need to answer for some outcome or (in)action. Some actors may not respond to requests for explanations, though by including these gaps in their article (&#8220;i.e. XYZ did not respond to requests for comment&#8221;), journalists subtly signal an injunctive norm &#8212; perhaps the actor <em>should </em>have provided an account. Journalists can also query other stakeholders in the system such as experts who study the system to ask them who they think ought to be responsible for some outcome, thus further contributing to the development of injunctive norms.</p><h4><strong>Policy Implications</strong></h4><p>The media&#8217;s power to shape the public and political agenda around AI, to investigate and expose problems, and to contribute to the development of social norms makes it a critical forum for enabling AI accountability. Policymakers should consider how to support the media&#8217;s role to foster a more accountable AI ecosystem.</p><p>For one, policies that support the media&#8217;s capacity for producing information about AI system behavior can be augmented. This could include everything from strengthening public records requests laws and whistleblower protections to increased data access provisions for auditing. Investing in <em>more</em> journalists working on the AI accountability beat would also serve to increase the stock of information, which is why it&#8217;s encouraging to see programs from the <a href="https://pulitzercenter.org/journalism/initiatives/ai-accountability-network">Pulitzer Center</a> and the <a href="https://www.tarbellfellowship.org/programme">Tarbell Center</a> focused on exactly that.</p><p>But also, policymakers need to be cognizant of how different media and perspectives in society are representing the norms and standards of behavior for AI systems. The agenda setting power of media (including new AI-driven media) influences what the public and, consequently, policymakers consider important. Policy should invest resources in large scale tracking surveys of public attitudes towards a range of AI behaviors. Moreover, a media monitor should be set up to track discourse and assess valuations of AI behavior in news, editorials, and other social media. Survey and tracking results can then inform standards for AI system behavior.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h4><strong>References</strong></h4><p>Allaham, M., Kieslich, K., Diakopoulos, N. Informing AI Risk Assessment with News Media: Analyzing National and Political Variation in the Coverage of AI Risks. Proceedings of the Conference on AI, Ethics, and Society (AIES). 2025. <a href="https://arxiv.org/abs/2507.23718">https://arxiv.org/abs/2507.23718</a></p><p>Diakopoulos, N. Algorithmic Accountability: Journalistic investigation of computational power structures. Digital Journalism 3, 398&#8211;415 (2015). <a href="https://doi.org/10.1080/21670811.2014.976411">https://doi.org/10.1080/21670811.2014.976411</a></p><p>Fink, K. Opening the government&#8217;s black boxes: freedom of information and algorithmic accountability. 17, 1&#8211;19 (2017). <a href="https://doi.org/10.1080/1369118X.2017.1330418">https://doi.org/10.1080/1369118X.2017.1330418</a></p><p>Jacobs, S. &amp; Schillemans, T. Media and public accountability: typology and research agenda. In Media and Governance, Eds. T. Schillmans and J. Pierre. (Polity Press, 2019).</p><p>Lapinski, M. K. &amp; Rimal, R. N. An Explication of Social Norms. <em>Communication Theory</em> <strong>15</strong>, 127&#8211;147 (2005). <a href="https://doi.org/10.1111/j.1468-2885.2005.tb00329.x">https://doi.org/10.1111/j.1468-2885.2005.tb00329.x</a></p><p>Shehata, A. et al. Conceptualizing long-term media effects on societal beliefs. Annals of the International Communication Association 45, 1&#8211;19 (2021). <a href="https://doi.org/10.1080/23808985.2021.1921610">https://doi.org/10.1080/23808985.2021.1921610</a></p><p>Veerbeek, J. Fighting Fire with Fire: Journalistic Investigations of Artificial Intelligence Using Artificial Intelligence Techniques. Journalism Practice, 1&#8211;19 (2025). <a href="https://doi.org/10.1080/17512786.2025.2479499">https://doi.org/10.1080/17512786.2025.2479499</a></p><p>Wieringa, M. What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability. 
FAT* &#8217;20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency 1&#8211;18 (2020) <a href="https://doi.org/10.1145/3351095.3372833">doi:10.1145/3351095.3372833</a>.</p>]]></content:encoded></item><item><title><![CDATA[Translating Copyright Law into Standards for Accountable AI Training]]></title><description><![CDATA[A proposed &#8220;fair learning doctrine&#8221; shifts the copyright debate from substantial similarity detection to model training standards.]]></description><link>https://www.ai-accountability-review.com/p/translating-copyright-law-into-standards</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/translating-copyright-law-into-standards</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Tue, 19 Aug 2025 12:01:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mocI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1ac346b-0173-43bc-a79a-09cea34ea61a_288x288.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Setting expectations for behavior&#8212;and then assessing against those expectations&#8212;is a <a href="https://www.ai-accountability-review.com/p/the-problem-of-ai-accountability">cornerstone of accountability</a>. A recent paper published at FAccT, <em><a href="https://dl.acm.org/doi/pdf/10.1145/3715275.3732193">Interrogating LLM Design under Copyright Law</a></em>, argues that for copyright violation behaviors we might be better off focusing on the training standards of LLMs rather than their output violations. This <a href="https://www.ai-accountability-review.com/p/prospective-accountability">shift in perspective</a>&#8212;from outputs to development process&#8212;offers a path for establishing technical standards that ensure model training practices meet expectations derived from legal codes.</p><p>The underlying problem addressed by the paper is that LLMs can &#8220;memorize&#8221; content that they&#8217;ve been trained on, reproducing portions of their training data verbatim. LLM developers are <a href="https://chatgptiseatingtheworld.com/2024/08/27/master-list-of-lawsuits-v-ai-chatgpt-openai-microsoft-meta-midjourney-other-ai-cos/">currently facing dozens of legal cases</a> alleging copyright violations. The paper argues that one of the challenges facing courts is that assessing output copyright violations hinges on showing <em>substantial similarity</em> between the original and output. But substantial similarity is a subjective legal concept that resists algorithmic implementation, meaning that we can&#8217;t necessarily expect LLMs to be able to reliably monitor and detect whether their output meets any kind of substantial similarity legal standard. Moreover, because users might prompt a model in adversarial ways to nudge a model towards outputting a response that is a copyright violation, this muddies the water around responsibility for the violation. How much <a href="https://www.ai-accountability-review.com/p/reflexive-prompt-engineering-as-a">responsibility should the user have</a>?</p><p>This paper proposes an alternative focus: instead of debating whether an output looks &#8220;too similar,&#8221; legal forums might scrutinize whether training decisions substantially increased (or decreased) the risk of memorization. 
The paper refers to this as a &#8220;fair learning doctrine&#8221; and the authors argue that &#8220;By setting an appropriate standard, the doctrine can incentivize design choices that align with ethical and legal norms.&#8221; In essence, this reframing would allow developers to be held accountable if they didn&#8217;t implement the standard.</p><p>The paper works through a couple of analyses using Pythia, an open-source LLM trained on The Pile [2] to offer a proof-of-concept of such training standards. In one experiment the authors show that upweighting the number of times a document appears in a training dataset doesn&#8217;t substantially affect the memorization of that document. This analysis demonstrates a method that developers might use to analyze whether their model is sensitive to this kind of upweighting. In another analysis, the authors simulate what would happen if an entire dataset (like FreeLaw or PubMed Central) were excluded from training. Here they find that overlaps in data density can affect memorization risks&#8212;suggesting the relevance of dataset curation choices.</p><p>In general, these analyses are indicative, but there needs to be additional research to really flesh out what a development standard for minimizing memorization in LLMs should look like. After sufficient research, a technical standards body such as ISO or IEEE might then formalize it and socialize it. At that stage it could be used as a benchmark for any model developer. The main contribution of the paper is that it starts building a bridge between law and model training, suggesting legally informed development standards that might one day be operationalized and used for the purposes of accountability.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3><strong>References</strong></h3><p>[1] Wei, J. T.-Z., Wang, M., Godbole, A., Choi, J. &amp; Jia, R. Interrogating LLM design under copyright law. Proc. 2025 ACM Conf. Fairness, Accountability, Transparency. 3030&#8211;3045 (2025) <a href="https://dl.acm.org/doi/10.1145/3715275.3732193">doi:10.1145/3715275.3732193</a>.</p><p>[2] Gao, L. et al. The Pile: An 800GB Dataset of Diverse Text for Language Modeling. 
arXiv (2020) <a href="https://arxiv.org/abs/2101.00027">doi:10.48550/arxiv.2101.00027</a>.</p>]]></content:encoded></item><item><title><![CDATA[Prospective Accountability]]></title><description><![CDATA[A focus on prevention rather than blame offers a prudent alternative to classic conceptions of accountability]]></description><link>https://www.ai-accountability-review.com/p/prospective-accountability</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/prospective-accountability</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Thu, 14 Aug 2025 14:02:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mocI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1ac346b-0173-43bc-a79a-09cea34ea61a_288x288.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Discourse on AI accountability is often focused on the idea of accountability as a <em>retrospective</em> activity of blame identification and assignment. It&#8217;s reactive and backward-looking in time: How can we find someone to blame for something that already happened and have them explain it? It traces cause and assigns fault after a failure, whether that&#8217;s a biased hiring algorithm or an autonomous vehicle crash.</p><p>Retrospective accountability is certainly important for achieving justice for impacted individuals. If someone was harmed we ideally want to be able to identify who is to blame and have them explain what happened and face the consequences. But identifying causality in complex networks is no easy task, especially given information gaps around AI system behavior related to access or opacity. It may not always even be possible.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>In contrast, <em>prospective</em> accountability is proactive. Instead of asking <em>&#8220;Who caused this?&#8221;</em> after the fact, we ask <em>&#8220;Who is responsible for preventing this?&#8221;</em> before deployment. It&#8217;s about assigning forward-looking responsibilities to stakeholders in order to take steps to avoid undesirable outcomes (Johnson, 2011). Informed by past events, it looks ahead to devise clear expectations for behavior and performance, so that actors have explicit duties to prevent or mitigate anticipated risks and harms. The <em>quality</em> <em>of the plan for how to act in the future</em> is what an actor might have to answer for and explain in an accountability forum. 
We may also expect actors to explain how their plans are adaptive and responsive so that plans improve as new methods and technologies for preventing harms become available.</p><p>A prospective approach to AI accountability would identify actors in the system and assign them specific responsibilities for avoiding bad outcomes and ensuring good outcomes. A central challenge is then normative: What are the bad outcomes we want to avoid? What are the prescribed plans for preventing those bad outcomes? Are these forward-looking responsibility assignments fair? To address these questions we need reliable methods for anticipatory risk and impact assessment (Kieslich et al, 2024) as well as robust stakeholder maps which detail which actors are in the best position and have the capacity (and resources) to act to reliably see to certain outcomes.</p><p>The distinction between retrospective and prospective accountability resembles that between <em>outcome</em> and <em>process</em> accountability (Patil et al, 2014). We can either hold someone accountable for an outcome (i.e. retrospective blame for something that happened), or we can hold them accountable for a standard of how they enacted an outcome (i.e. prospective accountability for their plan to act to achieve some outcome). If ChatGPT produces<a href="https://arstechnica.com/health/2025/08/after-using-chatgpt-man-swaps-his-salt-for-sodium-bromide-and-suffers-psychosis/"> unsafe outputs that negatively impact users&#8217; health</a>, we can and should hold OpenAI accountable for that bad output, but we can also hold them accountable for the content moderation processes they implement to try to prevent that. Failure to implement an accepted process to protect users might indicate negligence. Some harm event could trigger a call for an explanation of this process which might then feed into prospection on how to improve the plan in the future.</p><p>For AI policymakers and analysts, shifting from retrospective to prospective accountability means embedding forward-looking responsibilities (not necessarily only for AI system developers) into enforceable governance frameworks. But both types of accountability rely on addressing<a href="https://www.ai-accountability-review.com/p/the-problem-of-ai-accountability"> many of the same underlying questions</a>, such as: How do we set the standards of preventative plans?; and How do we monitor and know about the implementation of those plans? If we move away from identifying and assigning blame as the goal of policy, perhaps because it&#8217;s sometimes impossible given sociotechnical complexity, butts up against jurisdictional issues, or triggers fears of user surveillance, prospective accountability can be a useful alternative.</p><h4><strong>References</strong></h4><p>Johnson DG (2011) Software Agents, Anticipatory Ethics, and Accountability. The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight, pp. 61&#8211;76.</p><p>Kieslich K, Diakopoulos N and Helberger N (2024) Anticipating impacts: using large-scale scenario-writing to explore diverse implications of generative AI in the news environment. AI and Ethics: 1&#8211;23.</p><p>Patil SV, Vieider F and Tetlock PE (2014) Process versus Outcome Accountability. In: Bovens M, Goodin Robert E, et al. 
(eds) <em>The Oxford Handbook of Public Accountability</em>.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Could Autonomy Certificates Enable AI Accountability?]]></title><description><![CDATA[A new idea for documenting levels of autonomy in AI agents could also be a boon for accountability]]></description><link>https://www.ai-accountability-review.com/p/could-autonomy-certificates-enable</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/could-autonomy-certificates-enable</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Mon, 04 Aug 2025 15:03:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!nQqL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f2da9b1-8613-4b3f-aba0-b68fd98a4e75_1356x454.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A new essay <a href="https://knightcolumbia.org/content/levels-of-autonomy-for-ai-agents-1">published last week by the Knight First Amendment Institute</a> proposes that AI agents should be rated on their level of autonomy by a third-party governing body [1]. The authors argue that such &#8220;autonomy certificates&#8221; would act as a form of digital documentation that could be useful in risk assessments, the design of safety frameworks, and in engineering. But I also think they could be a beneficial idea for supporting AI accountability.</p><p>The <em>autonomy</em> of an agent is defined in the paper as &#8220;the extent to which an AI agent is designed to operate without user involvement.&#8221; Essentially, it&#8217;s how much the agent can do on its own without interacting with a user. The level of autonomy of an agent is an intentional design decision&#8212;for instance, engineers may define the tools an agent can use and the scope of its perception of its environment.</p><p>Various levels of autonomy are articulated in the paper, ranging from level 1 where the user is an operator that drives much of the decision-making, to level 5 where the user is an observer that has no capacity for involvement in the agent&#8217;s decisions or actions. 
In between are level 2 (user as collaborator), level 3 (user as consultant), and level 4 (user as approver).</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!nQqL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f2da9b1-8613-4b3f-aba0-b68fd98a4e75_1356x454.png" width="1356" height="454" alt="Levels of autonomy for AI agents"><figcaption class="image-caption">Levels of autonomy as outlined in [1].</figcaption></figure></div><p>Autonomy certificates prescribe &#8220;the maximum level of autonomy at which an agent can operate given 1) some set of technical specifications that define the agent&#8217;s capability (e.g. AI model, prompts, tools), and 2) its operational environment.&#8221; As such they essentially define an <em>authorized standard</em> for how much an agent is allowed to do within some context. Providing an expectation for behavior is their main benefit in supporting AI accountability since such standards of behavior <a href="https://www.ai-accountability-review.com/p/the-problem-of-ai-accountability">need to be established</a> in order to trigger accountability proceedings.</p><p>For example, if an agent is rated as level 3, but begins acting at what the certificate standard defines as level 4, this might trigger a call for accountability. This could involve the provider of the AI agent needing to explain to the third-party certification body why or how that may have occurred, and with the sanction being that the autonomy certificate may be revoked or reissued at a different level. The autonomy certificate could thus act as a standard for helping to ensure that AI agents only operate at the level of autonomy that they&#8217;ve been certified for.</p><p>Another dimension of accountability that autonomy certificates would support is in outlining the behaviors of the system that need to be <em>monitored</em>. If an AI agent is scoped as being able to use a certain set of tools autonomously (i.e. without user intervention) then this creates an additional need for logging of that tool use. Likewise, for systems rated at lower levels of autonomy (and higher levels of user involvement), the certificate might indicate the kinds of user behaviors that need to be logged. All this logging could then support explanations as part of accountability proceedings if the AI agent was observed misusing a tool, or a user was found to be approving harmful actions that the AI agent suggested.</p><p>The authors suggest that autonomy certificates would be produced through a third-party evaluation process that systematically tests an AI agent to identify the &#8220;minimum level of user involvement needed for the agent to exceed a certain accuracy or pass rate threshold&#8221; on a given benchmark task.
They would also need to be updated as systems are updated, such as when new models are released). As such they would need a fair bit of expert human attention, and thus resources, in order to produce. But the benefits to accountability could be meaningful.</p><h2><strong>References</strong></h2><p>[1] Feng, K. J. K., McDonald, D. W. &amp; Zhang, A. X. Levels of Autonomy for AI Agents. Knight First Amendment Institute. July, 2025. <a href="https://knightcolumbia.org/content/levels-of-autonomy-for-ai-agents-1">https://knightcolumbia.org/content/levels-of-autonomy-for-ai-agents-1</a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Reflexive Prompt Engineering as a Route to Accountability]]></title><description><![CDATA[Maybe how we prompt AI matters as much as how it's built]]></description><link>https://www.ai-accountability-review.com/p/reflexive-prompt-engineering-as-a</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/reflexive-prompt-engineering-as-a</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Tue, 29 Jul 2025 14:02:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mocI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1ac346b-0173-43bc-a79a-09cea34ea61a_288x288.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>While much of the focus of AI governance, such as in the EU AI Act, has been on the developers or providers of models, <a href="https://dl.acm.org/doi/pdf/10.1145/3715275.3732118">a new research paper</a> published at the Fairness, Accountability, and Transparency Conference argues that at least <em>some</em> responsibility should also be assigned to the deployers/users of general purpose AI systems [1]. A deployer can be defined as an entity that <em>uses</em> an AI system &#8220;under its authority&#8221;, though the AI Act excludes use for &#8220;personal non-professional activity&#8221; from this definition [2]</p><p>The paper develops the idea that accountability shouldn&#8217;t just be tied to the underlying technical development of a system, but that the instructions we give AI via prompting are also an important aspect that shapes how AI systems act in the world. 
Prompting is a &#8220;critical interface between human intent and machine output&#8221; and so triggers a moral responsibility to attend to the ethical, legal, and social consequences of choices in prompting.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>The proposed framework for responsible prompting is termed &#8220;reflexive prompt engineering&#8221;, emphasizing a heightened self-awareness users should have in their role in controlling AI systems via the prompts they use. It consists of five components, synthesized through the author&#8217;s literature review of academic articles and technical documentation:</p><ul><li><p><strong>Prompt Design</strong> This involves systematically creating instructions for the AI. The goal is to move beyond mere functionality and include steps that focus on responsibility, such as using diverse examples to guide the model in few-shot prompts.</p></li><li><p><strong>System Selection</strong> This component emphasizes making strategic choices about which AI model to use based not only on its capabilities but also on its environmental impact, transparency, and data privacy protections.</p></li><li><p><strong>System Configuration</strong> This involves adjusting model parameters, such as "temperature," which controls the balance between predictable and creative outputs. Responsible configuration means choosing settings that align with the use case.</p></li><li><p><strong>Performance Evaluation</strong> This is the systematic assessment of a prompt's effectiveness. The framework calls for evaluation criteria that include fairness, potential biases, and implications for privacy and data protection.</p></li><li><p><strong>Prompt Management</strong> This refers to the documentation and organization of prompts over time, including version control and history. This practice is vital for enabling accountability as prompts can serve as supporting documents in explanations of system performance.</p></li></ul><p>From an accountability perspective the premise of the idea is that if there is a standard &#8220;responsible prompting&#8221; practice for deployers, we can potentially hold them accountable if harm is caused and they did not adhere to that standard. Basically that some entity would be considered <em>negligent</em> if they didn&#8217;t follow the standard of responsible practice. Of course, to have that effect, any such standard would need to be widely accepted and recognized as a reasonably expected practice in industry or amongst informed end-users. </p><p>Implementing reflexive prompt engineering guidelines, together with literacy and training, would be a nice way to advance responsible organizational practices. 
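</p><p>As one illustration of what that kind of documentation might look like in practice, here is a minimal sketch of a versioned prompt record that touches each of the five components. The field names and the append-only JSONL log are my own assumptions, not a format specified in the paper.</p><pre><code>import json
from datetime import datetime, timezone

# Hypothetical documentation record for one production prompt; the keys mirror
# the five components of the framework but are not taken from [1].
prompt_record = {
    "prompt_id": "support-summary",
    "version": 3,
    "prompt_design": {
        "template": "Summarize the customer message neutrally. Message: {message}",
        "rationale": "Few-shot examples drawn from a diverse sample of tickets.",
    },
    "system_selection": {
        "model": "example-llm-v2",  # placeholder name; selection weighed capability, energy use, privacy
        "considerations": ["capability", "environmental impact", "data privacy"],
    },
    "system_configuration": {"temperature": 0.2},  # low creativity for a factual task
    "performance_evaluation": {
        "criteria": ["accuracy", "fairness across customer groups", "privacy"],
        "last_reviewed": "2025-07-01",
    },
    "change_log": ["v1 initial", "v2 added neutrality instruction", "v3 new model"],
    "recorded_at": datetime.now(timezone.utc).isoformat(),
}

# Append-only log so prompts can later serve as supporting documents in explanations.
with open("prompt_log.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(prompt_record) + "\n")
</code></pre><p>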
Such guidelines could get implemented as part of broader organizational <a href="https://www.ethos-ai.org/p/creating-your-ai-use-policy-part-2">AI use policies</a>. But to really advance accountability here public policymakers would need to implement rules so that deployers could be held accountable to an accepted standard of practice around prompting, with documentation required to show decision rationale around prompt design, system selection and configuration, evaluation, and management. Policymakers could support this avenue by calling for official industry standards around prompt engineering, and then instituting documentation and transparency requirements for deployers. A forum would be assigned with the authority to monitor the transparency information and interrogate deployers in the event of a trigger indicating the deployer had created some harm.</p><p>Ultimately, this research provides policymakers with a valuable blueprint that helps shift the conversation on AI accountability toward a more holistic view that recognizes the pivotal role of the user. The idea is clear: how we interact via prompts with AI systems is a fundamental part of their impact in the world, and so probably ought to have some responsibility assigned to it.</p><h4><strong>References</strong></h4><p>[1] Djeffal, C. Reflexive Prompt Engineering: A Framework for Responsible Prompt Engineering and AI Interaction Design. Proc. 2025 ACM Conf. Fairness, Accountability, Transparency. 1757&#8211;1768 (2025) <a href="https://dl.acm.org/doi/10.1145/3715275.3732118">doi:10.1145/3715275.3732118</a>.</p><p>[2] The AI Act Explorer. <a href="https://artificialintelligenceact.eu/article/3/">https://artificialintelligenceact.eu/article/3/</a> </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Problem of AI Accountability]]></title><description><![CDATA[How do we make sociotechnical AI systems answerable for their behavior?]]></description><link>https://www.ai-accountability-review.com/p/the-problem-of-ai-accountability</link><guid isPermaLink="false">https://www.ai-accountability-review.com/p/the-problem-of-ai-accountability</guid><dc:creator><![CDATA[Nick Diakopoulos]]></dc:creator><pubDate>Mon, 21 Jul 2025 14:03:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ZFFb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd28a5e5c-6245-41c1-afcc-e58b96b2df13_1152x1214.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>To help set the scope for the AI Accountability Review I want to start us off with a solid definition of AI Accountability and the problems it entails. These can help drive towards potential policy options to address those problems. 
We&#8217;ve got two big ideas intersected here: &#8220;AI&#8221; and &#8220;Accountability&#8221;. Let&#8217;s dissect what they mean&#8212;individually and then together.</p><p>Refined over several years, the<a href="https://www.oecd.org/en/publications/explanatory-memorandum-on-the-updated-oecd-definition-of-an-ai-system_623da898-en.html"> OECD&#8217;s definition of </a><strong><a href="https://www.oecd.org/en/publications/explanatory-memorandum-on-the-updated-oecd-definition-of-an-ai-system_623da898-en.html">AI System</a></strong> is broad and jargon-heavy, but also precise: &#8220;<em>An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.</em>&#8221; (Explanatory memorandum on the updated OECD definition of an AI system, 2024). Algorithms are formally defined differently than AI, but for the purposes of what accountability means in this context, I view the earlier nomenclature of &#8220;algorithmic accountability&#8221; (Diakopoulos, 2015) as practically synonymous with &#8220;AI accountability&#8221; discussed here.</p><p><em>Objectives</em> are the <em>goals</em> of the AI system. They can be explicitly written as rules by people, or they can be implicit in data that encodes examples that the system should emulate, for instance, through machine learning. <em>Inferences</em> are <em>outputs</em> of the AI system created on the basis of inputs. <em>Autonomy</em><strong> </strong>is how much a system can act without human involvement. And <em>adaptiveness</em> is the idea that an AI system can evolve after its initial development.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ai-accountability-review.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Accountability Review! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>The OECD clarifies that &#8220;an AI system&#8217;s objective setting and development can always be traced back to a human who originates the AI system development process.&#8221; Indeed, it is broadly recognized that <strong>AI systems are complex sociotechnical systems</strong> that interweave a machine-based component and a range of human actors in their design, development, deployment, and use (Chen and Metcalf, 2024). Human influence is always present even if only indirectly linked to actions an AI system takes. 
As Novelli and colleagues describe further: &#8220;<em>The performance of a sociotechnical system relies on the joint optimization of tools, machinery, infrastructure and technology &#8230; on the technical side, and of rules, procedures, metrics, roles, expectations, cultural background, and coordination mechanisms on the social side.</em>&#8221; (Novelli et al, 2024).</p><p><strong>Accountability</strong> has two meanings in the Oxford English dictionary: <em>answerability </em>(i.e. &#8220;...liability to...answer for one&#8217;s conduct&#8230;&#8221;), and <em>responsibility</em>. A definition that has gained some traction in the AI literature is from Mark Bovens, who defines it as: &#8220;<em>a relationship between an actor and a forum, in which the actor has an obligation to explain and to justify his or her conduct, the forum can pose questions and pass judgment, and the actor may face consequences.</em>&#8221; (Bovens, 2007). This emphasizes that <strong>accountability is </strong><em><strong>relational</strong></em> &#8212; it exists between an actor and a forum. Both the relationship and the obligation from actor to forum <em>must be established under some authority</em> in order to facilitate an explanation or justification of behavior. <strong>A goal for AI policymakers should be to understand how to configure authority to create such relationships</strong>.</p><p>Bovens goes on to write that &#8220;Accountability is a form of control, but not all forms of control are accountability mechanisms.&#8221; Indeed accountability arises as a response to the problem of <em>delegation</em> in principal-agent relationships where a principal delegates some tasks to an agent which acts on the principal&#8217;s behalf. Accountability is the mechanism to constrain this delegation relationship by making the agent answerable to a forum for its conduct (sometimes the forum is just the principal itself). In essence the principal delegates a task to an agent and then monitors the execution of that task, or further delegates this monitoring to a forum which can judge the behavior and enact sanctions if needed. 
The following diagram illustrates how an accountability relationship is established with authority flowing originally from a democratic election process and where the principal delegates both the task and the monitoring of that task.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!ZFFb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd28a5e5c-6245-41c1-afcc-e58b96b2df13_1152x1214.png" width="1152" height="1214" alt="Diagram of an accountability relationship between a principal, an agent, and a forum."></figure></div><p><strong>Accountability is a mechanism for constraining behavior when delegating to agents that can act with some degree of autonomy</strong>. It can help mitigate the risks around the loss of agency by the principal, where the agent&#8217;s actions don&#8217;t fully align with the principal&#8217;s interests and goals (Koenig, 2025). No wonder it&#8217;s such a fundamental concept for governing AI. If people are going to delegate any number of tasks to AI systems&#8212;as they&#8217;re now doing en masse with generative AI&#8212;accountability is a way for people to manage that delegation. Traditionally accountability has applied to individuals or organizations, but now the agent we&#8217;re trying to constrain using accountability is a sociotechnical system where the technical component of that system may have varying levels of autonomy or adaptiveness in the world.</p><p><strong>The overarching problem of AI accountability is about how to make sociotechnical AI systems answerable for their behavior.</strong> Applying the idea of accountability to AI requires we think through some of the basic dimensions of accountability as per Bovens&#8217; definition, namely: (1) agents that are complex sociotechnical systems, (2) forums that need access to observe and interrogate these systems, (3) the capacity for explanation and justification, and (4) behavioral standards that both trigger accountability, and guide judgements and consequences.</p><p>The technical (i.e. &#8220;machine-based&#8221;) component of an AI system raises issues for assigning moral responsibility. Typically, people are morally responsible for a harm if they <em>caused</em> it and <em>intended</em> to cause it (Nissenbaum, 1994). While AI systems can certainly cause harm, they can&#8217;t intend to cause it, although the <em>people</em> in the sociotechnical system certainly could. This alludes to a key question:<strong> How should accountability work in a distributed system with complex interactions between human and non-human actors?</strong> Issues here relate to distributed responsibility (e.g.
organizationally internal vs external, across the supply chain, stakeholder mapping, challenges created by open source, assigning and enforcing sanctions, etc.), moral and legal responsibility (e.g. legal personhood of AI, levels of autonomy/agency/intent, legal liability, etc), human issues (e.g. roles such as user or developer, design of technical artifacts, codes of conduct, human-in-the-loop issues, AI influence on human behavior and vice versa, etc). An underlying issue is in how accountability relationships and obligations are even established to begin with, and with what authority.</p><p>There&#8217;s also the question of <strong>How can forums know about AI system behavior?</strong> In order to trigger a request for explanation or justification of conduct a forum first needs to know about that conduct. How do forums observe and monitor complex sociotechnical systems to assess their behavior? This gets into issues of observability and data access, transparency and opacity, measurability, auditing, benchmarking, logging and incident reporting, red teaming, public records laws, and so on. Approaches to knowing about AI system behavior will vary for different kinds of forums, such as political, legal, professional, social, or media.</p><p>We also need to grapple with the question of <strong>How can AI systems explain and justify their behavior?</strong> Once a forum knows about AI system behavior, the system must be able to render an explanation to the forum, which may entail reasoning capability or human-AI interaction to help make sense of how the system took inputs to outputs. This must be done interactively such that the forum can also pose questions, such as to interrogate the system or contest its output. One of the underlying challenges here is how to attribute cause in a complex system, sometimes referred to as the &#8220;many hands&#8221; problem.</p><p>Finally we need a good answer to the issue of <strong>What standards should be used to judge AI system behavior?</strong> Accountability relies on a set of criteria for assessing behavior, and these could come from social norms and expectations which may differ across actors in the complex system, risk and impact assessment approaches, ethical principles, standards bodies, or regulations. And this all needs to be adaptive as technical capabilities and AI behaviors advance. An open problem is how to agree on standards that might apply around the world to AI systems in global use, not only in what might trigger a call for accountability and establish an obligation from an agent to a particular forum, but also around what the consequences or sanctions should be for behavior that falls short of standards.</p><p>In summary, establishing AI accountability is an approach for managing the delegation of tasks to increasingly autonomous and adaptive AI systems. It necessitates addressing fundamental questions about agents as complex sociotechnical systems, enabling forums to monitor and interrogate these systems, ensuring the capacity for explanation and justification, and setting clear behavioral standards for judgment and consequences. Policy approaches will need to address these challenges to create the conditions for AI accountability and effectively govern AI in society.</p><h4><strong>References</strong></h4><p>Bovens M (2007) Analysing and Assessing Accountability: A Conceptual Framework. 
<em>European Law Journal</em> 13(4): 447&#8211;468.</p><p>Chen BJ and Metcalf J (2024) <em>Explainer: A Sociotechnical Approach to AI Policy</em>. Data &amp; Society.</p><p>Diakopoulos N (2015) Algorithmic Accountability: Journalistic investigation of computational power structures. Digital Journalism 3(3): 398&#8211;415.</p><p>Explanatory memorandum on the updated OECD definition of an AI system. (2024). DOI:<a href="https://doi.org/10.1787/623da898-en">https://doi.org/10.1787/623da898-en</a>.</p><p>Koenig PD (2025) Attitudes toward artificial intelligence: combining three theoretical perspectives on technology acceptance. AI &amp; SOCIETY 40(3): 1333&#8211;1345.</p><p>Nissenbaum H (1994) Computing and accountability. <em>Communications of the ACM</em> 37(1): 72&#8211;80.</p><p>Novelli C, Taddeo M and Floridi L (2024) Accountability in artificial intelligence: what it is and how it works. <em>AI &amp; Society</em> 39(4): 1871&#8211;1882.</p>]]></content:encoded></item></channel></rss>