SEC 10-K Disclosures as a Route to Corporate AI Accountability?
There’s some value in them, but policy needs to call for more specific statements
If society doesn’t know how AI was used or how it contributed to some outcome, there can be no accountability. This is where transparency can be a useful enabler. Transparency—defined as “the availability of information about an actor allowing other actors to monitor the workings or performance of this actor” [1]—comes in many shapes and sizes. Here I want to talk about it in terms of corporate disclosures made to the U.S. Securities and Exchange Commission (SEC) in 10-K filings.
A 10-K filing is an annual report that public companies must submit to the SEC. It provides a comprehensive overview of the business, including operations, financial performance, and any significant risks. In recent years the SEC has become concerned with “AI washing,” essentially the possibility that businesses are making false claims by over-hyping the technology or downplaying its risks. This interest has even continued under the new administration. Filings are legally binding, and insufficient disclosures can lead to litigation or other enforcement actions.
These disclosures can set expectations around corporate perceptions of AI risks. If the public knows that a company knows a risk exists, we might expect the company to do something to mitigate it. The disclosures also provide a little ray of light that can help accountability forums, such as the media, ask the company what it’s doing about the risk.
So, what exactly are companies disclosing about AI risks in these filings? A recent paper on arXiv presented an analysis of more than 30,000 10-K filings from more than 7,000 companies, filed between 2020 and 2024 [2]. The analysis shows that by 2024 about half of the companies mentioned AI somewhere in their disclosures, up from only about one in eight in 2020.
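To make that headline statistic concrete, here is a minimal sketch of the kind of keyword-frequency count that could produce it. This is not the paper’s actual pipeline: it assumes the filings have already been downloaded and cleaned into plain text in a hypothetical filings/ directory with <CIK>_<year>.txt names, and the keyword list is illustrative rather than the one the researchers used.

```python
import re
from collections import defaultdict
from pathlib import Path

# Hypothetical layout: one plain-text 10-K per file, named "<CIK>_<year>.txt",
# e.g. "0000320193_2024.txt". Real filings come from SEC EDGAR and need
# cleaning (HTML stripping, etc.) before a count like this is meaningful.
FILINGS_DIR = Path("filings")

# Illustrative keywords: "AI" is matched case-sensitively so stray lowercase
# "ai" tokens don't count; the two phrases match case-insensitively.
AI_PATTERN = re.compile(
    r"\bAI\b|(?i:\bartificial intelligence\b|\bmachine learning\b)"
)

def share_mentioning_ai(filings_dir: Path) -> dict[int, float]:
    """Fraction of companies, per year, whose 10-K mentions an AI keyword."""
    filed = defaultdict(set)      # year -> CIKs with a filing that year
    mentioned = defaultdict(set)  # year -> CIKs whose filing mentions AI
    for path in filings_dir.glob("*.txt"):
        cik, year = path.stem.rsplit("_", 1)
        filed[int(year)].add(cik)
        if AI_PATTERN.search(path.read_text(errors="ignore")):
            mentioned[int(year)].add(cik)
    return {y: len(mentioned[y]) / len(filed[y]) for y in sorted(filed)}

if __name__ == "__main__":
    for year, share in share_mentioning_ai(FILINGS_DIR).items():
        print(f"{year}: {share:.1%} of filers mention AI")
```

A simple presence count like this can track how mentions spread over time, but it says nothing about the substance of a disclosure, which is why the researchers paired it with qualitative analysis.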
The researchers qualitatively analyzed a sample of 50 companies, including 10 of the top tech companies. In that sample they found a wide range of societal AI risks being cited, including discrimination, privacy, misinformation, malicious use, and interactional harms. The risks were also framed in ways that deflect responsibility: “The top-tech firms often seem to externalise societal AI risks, attributing them to third-party misuse (e.g., faulty datasets or misuse of their models) while rarely acknowledging their own role in developing and deploying systems that may contribute to these risks…”
Companies often rely on vague or broad boilerplate language when talking about risks, though more specific statements do appear at times. In the paper the researchers quote the disclosure from Cognizant Technology Solutions: “The uncertainty around the safety and security of new and emerging AI applications requires significant investment to test for security, accuracy, bias, and other variables - efforts that can be complex, costly, and potentially impact our profit margins.” That’s the kind of statement that might be useful for accountability purposes.
Perhaps just as interesting are the risks the researchers didn’t observe in the sub-sample: environmental harms of AI, socioeconomic displacement, dangerous AI capabilities, multi-agent risks, and information ecosystem pollution. These appear to be risks that companies haven’t yet recognized as something they need to worry about, which may in turn limit accountability proceedings if companies don’t think these are issues they need to address.
There are clear limits to how much 10-K filings can inform AI accountability, due both to vague language and to responsibility shirking. At the same time, this study shows that these disclosures can sometimes contain useful bits of transparency. Still, a more effective policy would more clearly indicate the types and the specificity of AI risk information expected in these kinds of filings.
References
[1] Albert Meijer, “Transparency,” in The Oxford Handbook of Public Accountability, ed. Mark Bovens, Robert E. Goodin, and Thomas Schillemans (Oxford: Oxford University Press, 2014).
[2] L. G. U.-B. Marin, B. Rijsbosch, G. Spanakis, and K. Kollnig, “Are Companies Taking AI Risks Seriously? A Systematic Analysis of Companies’ AI Risk Disclosures in SEC 10-K Forms,” arXiv preprint (2025), https://arxiv.org/abs/2508.19313.