72% of S&P 500 companies already consider AI a material risk
The accelerated adoption of artificial intelligence (AI) by major U.S. corporations has set off alarms within the companies' own control and governance structures. According to the latest report published by The Conference Board and the analytics firm ESGAUGE, 72% of companies in the S&P 500 now identify AI as a material risk in their 2025 annual reports, a figure that contrasts sharply with the 12% reported in 2023.
This 500% relative increase in just two years, equivalent to a jump of 60 percentage points, reflects not only the expansion of AI in business operations, but also the growing concern among executives and corporate boards about the unintended effects that may arise from its use, such as application errors, more sophisticated cyberattacks, regulatory impacts, and reputational damage.
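For readers tracking the math, the 500% figure is the relative change between the two shares reported for 2023 and 2025. A quick back-of-the-envelope check (an illustrative Python snippet, not part of the report) makes the distinction between relative increase and percentage points explicit:

```python
# Share of S&P 500 companies flagging AI as a material risk, per the report
share_2023 = 0.12
share_2025 = 0.72

# Relative increase: (new - old) / old
relative_increase = (share_2025 - share_2023) / share_2023
print(f"{relative_increase:.0%}")  # -> 500%

# Absolute change, in percentage points (12% -> 72%)
print(f"{(share_2025 - share_2023) * 100:.0f} percentage points")  # -> 60
```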
What types of AI-related risks are companies reporting?
The report analyzes the risk disclosures included in 10-K filings, the annual reports that companies are required to file with the U.S. Securities and Exchange Commission (SEC). In these documents, companies must identify the factors that could significantly affect their financial performance or operational stability.
The AI-related risks most frequently appearing in these documents fall into three main categories: reputational, cybersecurity, and regulatory or legal. Emerging risks are also identified, including those related to intellectual property, data privacy, and failed technology adoption.
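The Conference Board and ESGAUGE do not publish their classification code, but the general shape of such a tally can be conveyed with a simple keyword scan over the risk-factor text of each filing. The sketch below is a rough illustration under stated assumptions: the `filings/` directory, the keyword lists, and the category names are all hypothetical placeholders, not the report's actual taxonomy or methodology.

```python
from pathlib import Path

# Illustrative keyword buckets; a real study would use a richer taxonomy
# and more careful matching than plain substring search.
AI_TERMS = ("artificial intelligence", "machine learning", "generative ai")
CATEGORIES = {
    "reputational": ("reputation", "brand", "customer trust"),
    "cybersecurity": ("cyberattack", "breach", "ransomware"),
    "regulatory": ("ai act", "regulat", "litigation"),  # "regulat" catches regulation/regulatory
}

# Assumes each company's Item 1A (Risk Factors) text was saved beforehand
# as plain text under ./filings/*.txt -- a hypothetical local layout.
counts = {cat: 0 for cat in CATEGORIES}
ai_filings = 0
for path in Path("filings").glob("*.txt"):
    text = path.read_text(errors="ignore").lower()
    if not any(term in text for term in AI_TERMS):
        continue  # skip filings that never mention AI
    ai_filings += 1
    for cat, words in CATEGORIES.items():
        if any(w in text for w in words):
            counts[cat] += 1

print(f"Filings mentioning AI: {ai_filings}")
print(counts)
```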
How does AI affect corporate reputation?
Reputational risk is the most cited in 2025 reports, mentioned by 38% of S&P 500 companies (191 companies). It can arise in multiple ways, from AI projects that fail to deliver as promised to public errors by automated customer-facing systems such as chatbots or recommendation tools.
The most common concerns include:
- Implementation and adoption failures: mentioned by 45 companies, referring to poorly integrated or ineffective projects that undermine confidence in corporate leadership.
- Consumer-facing AI: 42 companies warn that visible errors in customer interactions can cause immediate negative reactions.
- Privacy violations and sensitive data breaches: 24 firms acknowledge that mishandling personal information can lead to scandals and sanctions.
Additionally, 11 companies specifically mention the problem of model hallucinations or errors generated by language models, and 7 highlight concerns about bias and unfair automated decision-making.
The speed at which these errors spread across social networks and digital media makes their impact much greater than that of traditional operational failures.
Why does AI represent a growing threat to cybersecurity?
Artificial intelligence is not only used to protect systems but also to attack them. This is recognized by 99 companies in the S&P 500, which mention AI as a relevant cybersecurity risk in their 2025 reports, the same share of the index (20%) as the previous year.
One of the most recurring risks is that AI expands the attack surface by creating new data flows, integrations with external providers, and tools with access to critical systems. This not only multiplies vulnerable entry points but also empowers adversaries: attackers can use AI to design more sophisticated, automated attacks that are harder to detect.
Among the most notable subcategories are:
- AI-amplified cyber risk: 40 companies warn about how AI accelerates intrusion attempts and complicates their detection.
- Third-party dependence: 18 companies point to the risks of relying on cloud providers or SaaS platforms.
- Leaks and unauthorized access: 17 companies highlight the danger of exposing sensitive data.
- Ransomware and advanced malware: 10 firms report that attackers use AI to generate more complex malicious code.
- Critical infrastructure and operational continuity: in sectors such as energy and healthcare, some companies note that AI-driven attacks could even affect the physical functioning of their operations.
What legal and regulatory implications are emerging from the use of artificial intelligence?
As governments and regulatory bodies move forward with the creation of specific AI regulations, companies face an increasingly demanding and fragmented regulatory environment, which can translate into compliance costs, sanctions, and litigation.
In total, 63 companies in the S&P 500 now include legal or regulatory risk as one of their main areas of concern. The report notes that 41 of these companies refer to regulatory uncertainty, especially around the European Union’s AI Act, which was approved in 2024 and imposes strict rules on high-risk systems.
Another 12 companies warn about the risk of sanctions and regulatory actions in the United States and other countries, while 6 more mention complex cases such as disputes over intellectual property in training data, or issues of legal liability when AI makes autonomous decisions.
What other emerging risks are beginning to appear in reports?
Beyond reputational, cybersecurity, and regulatory risks, the report identifies 55 companies that mention additional threats associated with AI adoption. These include:
- Intellectual property: 24 firms fear disputes over copyrights, use of third-party data, or theft of trade secrets.
- Privacy: 13 companies warn of potential violations of laws such as GDPR, HIPAA, and California privacy regulations.
- Technology adoption: 8 companies cite execution risks such as high costs, low scalability, and projects that fail to meet objectives.
- Market impact: some companies recognize that the misuse of AI can lead to a loss of competitiveness or affect their value to investors.
Additionally, the report anticipates that in future disclosure cycles new risk categories will emerge, such as the environmental footprint of AI (due to the intensive use of energy and water in data centers), labor impacts from automation, and agentic AI risks, meaning systems with the capacity to make decisions without human supervision.