Closing Information Gaps via AI Transparency
Policymakers need to establish rigorous standards that prioritize information quality and the specific needs of accountability forums
Before anyone can be held accountable for an AI system’s behavior, we need information about that system. What was the system’s behavior, and was its performance unexpected? What are the underlying values and goals of its designers? Did the developers take appropriate steps to test for and prevent harmful outcomes? How are organizational policies designed and implemented for the ongoing operation of the system? Transparency is the umbrella idea of closing these kinds of knowledge gaps, and should be differentiated from explanation, which is a narrower, more specific approach (Corbett and Denton, 2023; Hayes et al, 2023). More formally, transparency can be defined as “the availability of information about an actor allowing other actors to monitor the workings or performance of this actor” (Meijer et al, 2014). And while transparency in itself cannot ensure accountability, it often plays a critical supporting role, providing the informational substrate for understanding AI system behavior that can then filter into the various forums that might seek to hold actors in an AI system accountable.
Transparency sets up a relationship between two entities, in this case an AI system and a forum, in which information about the AI system becomes available to the forum. Because AI systems are sociotechnical, this includes information about the data and technical model in the system as well as the human components, such as organizational policies, procedures or practices, and user behaviors (Diakopoulos, 2020). For the sake of accountability, the provided information should help the forum determine congruence with relevant values, goals, and normative or legal expectations of behavior (Hayes et al, 2023; Fleischmann and Wallace, 2005). Transparency information can be voluntary (e.g. a blog post), obligatory (e.g. legally mandated disclosure to an administrator), or involuntary (e.g. external audits or leaks), though recent research has underscored the inadequacy of volunteered “first-party” transparency information compared to external “third-party” evaluations of social impacts (Reuel et al, 2025).
To be useful for accountability, transparency information needs to reflect high information quality. At a minimum it needs to be accessible, understandable, relevant, and accurate (Hayes et al, 2023; Diakopoulos, 2020; Turilli and Floridi, 2009). Beyond mere availability, information needs to be accessible so that it can be easily found by audiences such as the various accountability forums. It also needs to be understandable and usable by those audiences, aligned to their information processing capabilities and capacities. It needs to be relevant to diagnosing some behavior of interest, whether that means shedding light on a negative outcome for retrospective accountability or providing critical context to inform prospective accountability. Information also needs to be accurate, meaning valid, reliable, and free of error (Turilli and Floridi, 2009); otherwise it is vulnerable to strategic activities that shape or distort information, leading to uninformative or boilerplate disclosures (Marin et al, 2025). Other pertinent aspects of information quality include the currency or timeliness of the information and its comprehensiveness. AI transparency will typically fall short when these attributes aren’t adequately addressed.
A recurring pattern in the literature is the failure to clearly articulate the intended audience or forum for transparency information, with implications for how the information could be made maximally accessible, understandable, and relevant for that audience. For instance, in the 2025 Foundation Model Transparency Index (Wan et al, 2025), the authors establish a set of 100 indicators that they apply to various models to evaluate how transparent they are in terms of data, training, compute usage, modeling, and downstream impacts and use policies. But the audience for all of this information, and its utility for accountability, is anything but clear. Transparency initiatives like this one need to clearly articulate the public interest and accountability purpose of each indicator, connecting it to the audience or forum that would then use that information for accountability. Similarly, a recent proposal for AI agent transparency (Ezell et al, 2025) appears oriented largely towards technical developers “debugging” agent incidents. If the information in that framework were made available to administrative or judicial forums, they would likely benefit from at least some of it. But the ideal would be a more parsimonious framework that more closely tracks the needs of those forums and the specific issues they may need to assess for accountability.
While I would argue that transparency is a necessary precondition for accountability, critics point out that transparency is not an unalloyed positive force. It shouldn’t be assumed to always enable accountability (Corbett and Denton, 2023), though policies that shape adherence to the attributes of quality transparency information described above should increase the likelihood of its utility. Transparency can also come into tension with other values, such as privacy, freedom of expression, or intellectual property (Ananny and Crawford, 2018; Diakopoulos, 2020; Turilli and Floridi, 2009), leading to situations where tradeoffs need to be made in highly context-specific ways. One of the most frequent counterarguments to more transparency is that it could enable gaming or manipulation of the system (van Bekkum and Borgesius, 2021), though careful context-specific engineering, threat modeling, and attention to forum-specific access provisions should alleviate this issue (Diakopoulos, 2020). We might also consider that social forums may use manipulation as a way to sanction a system; in other words, manipulating a system may in some contexts be considered a component of holding it accountable for unwanted behavior. Ultimately, the choices around what, when, and how AI systems are made transparent are political (Corbett and Denton, 2023).
The role of policy here is to thread the needle through these criticisms, scoping transparency and shaping it towards positive outcomes for society. Policy must create obligations for actors within AI systems to produce the information needed by any given forum (e.g. administrative or legal) to make the relevant assessment of system performance. This information needs to meet the accessibility, understandability, relevance, accuracy, currency, and comprehensiveness quality criteria. One way to do this is to be more specific about standards for producing AI system transparency information: what standard processes and practices should be evidenced by actors making transparency information available? Public sector policymakers cannot leave this unspecified; otherwise there is too much room for strategic and performative behavior. Another role for policymakers is to engage in the politics of where and how to make tradeoffs with other values such as privacy, and public attitudes should probably inform these choices. Transparency policies need to be user-centered (i.e. oriented towards whatever forum the information is intended for) and context-specific, and would benefit from human-centered engineering and evaluation to refine their scope, meet user needs, and maximize their utility for accountability.
References
Ananny M and Crawford K (2018) Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society 20(3): 973–989.
Corbett E and Denton R (2023) Interrogating the T in FAccT. Conference on Fairness, Accountability, and Transparency: 1624–1634.
Diakopoulos N (2020) Transparency. In: Dubber M, Pasquale F and Das S (eds) The Oxford Handbook of Ethics of AI. Oxford University Press.
Ezell C, Roberts-Gaal X and Chan A (2025) Incident Analysis for AI Agents. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES). DOI: 10.48550/arxiv.2508.14231.
Fleischmann KR and Wallace WA (2005) A covenant with transparency. Communications of the ACM 48(5): 93–97.
Hayes P, Poel I van de and Steen M (2023) Moral transparency of and concerning algorithmic tools. AI and Ethics 3(2): 585–600.
Marin LGU-B, Rijsbosch B, Spanakis G and Kollnig K (2025) Are Companies Taking AI Risks Seriously? A Systematic Analysis of Companies’ AI Risk Disclosures in SEC 10-K forms. arXiv. https://arxiv.org/abs/2508.19313
Meijer A, Bovens M and Schillemans T (2014) Transparency. The Oxford Handbook of Public Accountability. Oxford University Press.
Reuel A, Ghosh A, Chim J, et al. (2025) Who Evaluates AI’s Social Impacts? Mapping Coverage and Gaps in First and Third Party Evaluations. arXiv. https://arxiv.org/abs/2511.05613
Turilli M and Floridi L (2009) The ethics of information transparency. Ethics and Information Technology 11: 105–112.
Bekkum M van and Borgesius FZ (2021) Digital welfare fraud detection and the Dutch SyRI judgment. European Journal of Social Security 23(4): 323–340.
Wan A, Klyman K, Kapoor S, et al. (2025) The 2025 Foundation Model Transparency Index. arXiv. DOI: 10.48550/arxiv.2512.10169.

