The Media as Accountability Forum
How policymakers could help support the media's role in fostering a more accountable AI ecosystem.
An accountability relationship between an actor and a forum means that the actor has to answer to that forum for some conduct. A range of forum types might hold accountability relationships with the actors behind AI systems, including political (e.g. congressional hearings), legal (e.g. courts), administrative (e.g. auditors), professional (e.g. professional societies, industry working groups), social (e.g. civil society organizations), and media (e.g. news media, social media) forums.
Different forums operate in different ways, have different capacities for obtaining information or explanation, and may have different standards of expected behavior or ways to sanction the actor. Because of their different capacities to know and act, forums often work in concert to hold an actor accountable. In a networked view of accountability [1] the interplay between forums is a necessary feature of how accountability is ultimately rendered. Here I focus on the media as a forum for AI accountability, but it’s also important to keep in mind this networked view of how the media forum connects to others.
Jacobs and Schillemans present a typology for how the media contribute to accountability of public institutions, outlining four distinct functions: spark, forum, amplifier, and trigger [5]. As a spark, the ordinary activity of news reporting (“just asking questions”) may cause organizations to reconsider their behavior or role in a process. As a forum, the media act as a space where investigations uncover unwanted behaviors leading to critical questions that are posed to the actor for explanation. The media can also amplify the impact of other accountability forums, for example, by bringing more attention to congressional hearings. The last role, trigger, is where the media contributes to enabling other accountability forums by producing relevant information that spurs formal accountability in other forums.
Unlike legal or administrative forums, the media is an informal forum with no formal authority to sanction the actors it addresses. Media forums wield power by drawing public attention to issues, with consequences that are largely reputational. An actor who fails to provide a satisfactory account of an outcome may appear negligent in the public eye or draw public disapproval for its conduct, damaging its reputation.
While its teeth may not be as sharp as some other forums’, the media still has important contributions to make toward closing information and knowledge gaps around AI systems. Using techniques such as interviews with various stakeholders, examination of leaked documents, public information requests [2], external data-driven audits of system behavior [3], or large-scale investigation of AI systems [4], media can inject valuable observations about the behavior of AI systems that trigger a call for accountability. Media can also surface information that informs and triggers other forums that do have teeth. For instance, Reuters’ reporting on an internal Meta document detailing chatbot policies led to Senate committee investigations. Other journalistic investigations, such as ProPublica’s look at algorithmic rent-setting in Texas, have eventually led to legal settlements.
Media also play a critical role in establishing and maintaining norms around acceptable behavior for AI systems in society, as well as around who may be answerable for explaining violations of those norms. This includes propagating both descriptive norms (i.e. what actors do) and injunctive norms (i.e. what actors ought to do) [6]. Journalists apply a range of values around what kinds of outcomes or behaviors of actors may be normatively detrimental and therefore warrant scrutiny. In their daily decisions about what is newsworthy they have to assess which impacts in society merit broader attention. This is the agenda-setting power of the media. By selecting and framing the impacts of AI they report on, media can help establish beliefs or reinforce attitudes, which can eventually develop into social norms or expectations for the behavior of AI systems [7]. And of course the media is not homogenous. News outlets on the left vs. the right of the political spectrum prioritize different risks and impacts of AI in society in their coverage [8].
In the course of their reporting journalists may seek accounts to help explain some observed behavior — why did the AI system produce some bad outcome? This activity helps to establish accountability relationships between actors in the system and the media as a forum. To do this journalists parse the complex sociotechnical system and consider which actors might take responsibility. By asking certain actors for explanations (e.g. a tech developer or data annotation provider), journalists audition expectations that the actor may need to answer for some outcome or (in)action. Some actors may not respond to requests for explanations, though by noting these gaps in their articles (e.g. “XYZ did not respond to requests for comment”), journalists subtly signal an injunctive norm — perhaps the actor should have provided an account. Journalists can also query other stakeholders in the system, such as experts who study it, to ask who they think ought to be responsible for some outcome, thus further contributing to the development of injunctive norms.
Policy Implications
The media's power to shape the public and political agenda around AI, to investigate and expose problems, and to contribute to the development of social norms makes it a critical forum for enabling AI accountability. Policymakers should consider how to support the media's role to foster a more accountable AI ecosystem.
For one, policies that support the media’s capacity to produce information about AI system behavior can be strengthened. This could include everything from bolstering public records laws and whistleblower protections to expanding data access provisions for auditing. Investing in more journalists working the AI accountability beat would also increase the stock of information, which is why it’s encouraging to see programs from the Pulitzer Center and the Tarbell Center focused on exactly that.
Policymakers also need to be cognizant of how different media outlets and perspectives in society represent norms and standards of behavior for AI systems. The agenda-setting power of media (including new AI-driven media) influences what the public and, consequently, policymakers consider important. Policymakers should invest in large-scale tracking surveys of public attitudes toward a range of AI behaviors. Moreover, a media monitor could be established to track discourse and assess valuations of AI behavior in news, editorials, and other social media. Survey and tracking results can then inform standards for AI system behavior.
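To make the media-monitor idea concrete, here is a minimal illustrative sketch of one thing such a monitor might compute: tallying whether coverage frames AI behavior descriptively (what actors do) or injunctively (what actors ought to do), per the norms distinction above. The headlines and keyword cues are hypothetical; a real monitor would use trained classifiers over large corpora rather than keyword matching.

```python
from collections import Counter

# Hypothetical cue phrases suggesting injunctive framing ("ought"-statements).
# Anything without such a cue is treated as descriptive ("is"-statements).
INJUNCTIVE_CUES = {"should", "must", "ought", "needs to", "failed to"}

def classify_framing(headline: str) -> str:
    """Label a headline as voicing an injunctive or a descriptive norm."""
    text = headline.lower()
    if any(cue in text for cue in INJUNCTIVE_CUES):
        return "injunctive"
    return "descriptive"

def tally(headlines) -> Counter:
    """Aggregate framing labels across a corpus of headlines."""
    return Counter(classify_framing(h) for h in headlines)

# Hypothetical sample headlines, not drawn from any real outlet.
sample = [
    "Chatbot gives medical advice without a disclaimer",
    "AI developers must disclose training data, critics say",
    "Hiring algorithm screens out older applicants",
    "Regulators say the vendor should have audited its model",
]

print(tally(sample))
```

Tracked over time and across outlets, even a simple tally like this could reveal when coverage shifts from describing AI behavior to demanding accounts for it.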
References
[1] Wieringa, M. What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability. FAT* ’20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency 1–18 (2020) doi:10.1145/3351095.3372833.
[2] Fink, K. Opening the government’s black boxes: freedom of information and algorithmic accountability. Information, Communication & Society 17, 1–19 (2017). https://doi.org/10.1080/1369118X.2017.1330418
[3] Diakopoulos, N. Algorithmic Accountability: Journalistic investigation of computational power structures. Digital Journalism 3, 398–415 (2015). https://doi.org/10.1080/21670811.2014.976411
[4] Veerbeek, J. Fighting Fire with Fire: Journalistic Investigations of Artificial Intelligence Using Artificial Intelligence Techniques. Journalism Practice, 1–19 (2025). https://doi.org/10.1080/17512786.2025.2479499
[5] Jacobs, S. & Schillemans, T. Media and public accountability: typology and research agenda. In Media and Governance, Eds. T. Schillemans and J. Pierre. (Polity Press, 2019).
[6] Lapinski, M. K. & Rimal, R. N. An Explication of Social Norms. Communication Theory 15, 127–147 (2005). https://doi.org/10.1111/j.1468-2885.2005.tb00329.x
[7] Shehata, A. et al. Conceptualizing long-term media effects on societal beliefs. Annals of the International Communication Association 45, 1–19 (2021). https://doi.org/10.1080/23808985.2021.1921610
[8] Allaham, M., Kieslich, K. & Diakopoulos, N. Informing AI Risk Assessment with News Media: Analyzing National and Political Variation in the Coverage of AI Risks. Proceedings of the Conference on AI, Ethics, and Society (AIES) (2025). https://arxiv.org/abs/2507.23718