AI Disclosures and Securities Litigation: An Authoritative Guide on AI Risks and Disclosures [2025]

Introduction to AI Disclosures and Securities Litigation

As we move towards an increasingly digitized world, the integration of artificial intelligence (AI) into various sectors has become a focal point of innovation and excitement. However, with this surge in AI applications comes a host of new risks and legal challenges that companies must navigate meticulously.

  • With the advent of AI, the stakes have been raised; companies must now ensure that they provide comprehensive and accurate information about their AI initiatives by having strong corporate governance and internal controls in place. Failing to do so could result in significant legal liabilities and reputational damage.
  • Robust corporate governance: In this guide, we address AI risks and disclosures and aim to equip organizations with the knowledge and strategies necessary to put robust corporate governance and investor protections in place.
  • By addressing these risks proactively through detailed disclosures, companies can better safeguard themselves against litigation while fostering a culture of transparency and accountability.
  • Additionally, this guide underscores the collaboration between legal teams, data scientists, and corporate executives needed to ensure that all AI-related disclosures are accurate, meaningful, and compliant with regulatory standards.

As the regulatory environment continues to evolve, staying ahead of AI Disclosures requirements will be pivotal for companies looking to leverage AI technologies responsibly. The insights provided in this authoritative guide are designed to help organizations not only comply with current regulations but also anticipate future changes in the landscape of securities litigation related to AI.

By prioritizing transparency and comprehensive risk management, companies can harness the full potential of AI innovations while minimizing their exposure to legal risks, which will only continue to rise as demonstrated by the chart below on AI filings.

Understanding AI Disclosures in the Context of Securities Litigation

  • Building confidence: Comprehensive and transparent AI disclosures foster investor confidence by addressing the hidden risks associated with rapidly advancing AI technology.

The Importance of Transparency in AI Technologies

  • Clear communication: This includes details on algorithms, data inputs, decision-making processes, and ethical measures, ensuring investors understand the benefits and risks.
  • Strategic advantage: Transparency can serve as a strategic advantage, differentiating a company in a competitive market by fostering trust.
  • Accountability and improvement: Open communication about AI systems fosters accountability and enables continuous improvement through dialogues with stakeholders. 

Additional considerations and best practices

  • Avoid “AI-washing”: Make truthful, specific, and evidence-backed claims about AI capabilities and AI incidents, and avoid overstating capabilities or relying on vague, “boilerplate” statements.
  • Establish governance: Implement corporate governance structures to identify and manage AI risks at both the board and management levels.
  • Standardize terminology: Use consistent AI terminology across all public communications, including marketing, investor presentations, and SEC filings, to avoid misrepresentation claims.
  • Focus on explainability and accountability: Strive for explainable AI systems that can justify their decisions and establish clear lines of responsibility for potential AI Incidents.
  • Stay current: Keep abreast of regulatory developments and industry standards, which are evolving rapidly.
Transparency is crucial to preempting and mitigating potential legal complications that may arise from the misuse or unanticipated consequences of AI technologies.

Key legal frameworks governing AI disclosures

  • United States: While the U.S. does not have a single federal AI law, it has a “patchwork of federal and state-level initiatives”.
  • Europe: The European Union has taken a leading regulatory role with two key frameworks:
    • The EU AI Act: Enacted in August 2024, this is the world’s first comprehensive legal framework on AI, with most provisions becoming applicable by August 2026. It uses a risk-based approach, with stricter rules for “high-risk” systems like those used in credit scoring.
    • General Data Protection Regulation (GDPR): This data privacy law already has significant implications for AI, particularly concerning data used for training and automated decision-making. Companies must ensure a lawful basis for processing personal data, uphold data minimization, and offer a “right of explanation” for decisions based solely on automated processes.
  • International: Organizations are working towards harmonized global standards.

Common risks associated with AI in financial markets

  • Algorithmic bias: AI systems can reflect and even amplify biases in their training data, leading to unfair or discriminatory outcomes in credit decisions, hiring, and insurance. For financial institutions, this creates compliance risks under regulations such as fair lending laws. Mitigation strategies include using diverse datasets, implementing fairness metrics, conducting regular audits, and maintaining robust corporate governance, internal controls, and investor protections.
  • Cybersecurity threats:
    • AI-powered attacks: AI enables sophisticated phishing, deepfake-based social engineering, and the development of more effective malware, targeting a sector that manages trillions in assets.
    • New attack surfaces: AI systems themselves can be targeted, as they require large, sensitive datasets to train and function.
  • Opaqueness and “black box” models: The complexity of many advanced AI models makes their decision-making processes difficult to understand. In financial markets, where transparency is paramount, this opacity can hinder regulatory compliance and erode trust.
  • Third-party risk and concentration: The financial sector’s reliance on a concentrated market of specialized cloud services and AI vendors could create systemic vulnerabilities if a key provider is compromised.

Implications and outlook

The legal and regulatory landscape is rapidly evolving to address the complex challenges posed by AI. Compliance with AI disclosure requirements is not just about avoiding penalties but also about building and maintaining trust with investors, regulators, and the public. Companies must implement comprehensive AI governance frameworks that account for these legal and ethical risks to ensure that AI is a tool for responsible innovation rather than a source of liability in securities class actions and reputational damage.

Recent case studies of AI-related securities litigation

The past few years have seen a rise in securities litigation related to AI, often centered on claims of “AI washing,” where companies allegedly exaggerate or misrepresent their AI capabilities to investors, or on failures to make disclosures about AI risks and AI incidents. These cases highlight the risks companies face if their AI disclosures lack accuracy and transparency. Here are some key examples:
  • C3.ai: This securities litigation, filed in September 2025, alleges that C3.ai made misleading statements regarding the robustness of its sales pipeline and financial projections, which were likely tied to its AI offerings. This exemplifies the risk of overstating the business impact of AI technologies.
  • DocGo Inc.: An executive’s claims about the potential for DocGo’s AI systems to reduce the need for human personnel, presented with inflated credentials, formed the basis of securities class action lawsuits following a missed revenue target. This highlights the risk of misleading statements by company representatives and the need for a reasonable basis for forward-looking statements regarding AI capabilities.
  • Apple Siri: Securities class actions filed in June 2025 allege that Apple misled investors at its Worldwide Developers Conference about the readiness of enhanced Siri features for the iPhone 16 lineup. This demonstrates the risks associated with setting unrealistic AI rollout timelines, particularly when linked to revenue projections or product launches.
  • Upstart Holdings Inc.: Investors filed securities class action lawsuits against Upstart, alleging that the company made false claims about its AI loan system’s ability to deliver higher approval rates and lower interest rates with the same loss rates compared to traditional methods. The court allowed the case to proceed, finding that the plaintiffs adequately pleaded that the AI model lacked the claimed advantages.
  • Delphia (USA) Inc. and Global Predictions, Inc.: These investment advisers faced SEC enforcement actions for making false and misleading statements about their use of AI in investment strategies and operations. These cases emphasize the SEC’s focus on ensuring accuracy and avoiding overstatement of AI capabilities in investor communications.

Key takeaways from these securities class actions

  • Scrutiny of “AI Washing”: Regulatory bodies like the SEC are actively targeting instances where companies exaggerate their AI capabilities or imply a greater level of AI use than actually exists. The FTC has also launched “Operation AI Comply,” indicating a similar focus on preventing deceptive practices related to AI.
  • Importance of Substantiation: Companies must be able to substantiate their claims about AI, particularly regarding performance, capabilities, and the timeline for deployment.
  • Accuracy in Investor Communications: The cases underscore the need for precision and accuracy in all communications related to AI, including SEC filings, press releases, investor presentations, and earnings calls. Generalized or vague claims can lead to accusations of misleading investors.
  • Evolving Legal Landscape: Courts are grappling with how existing securities laws and regulations apply to AI, and new interpretations are emerging.

Best practices for companies in AI disclosure compliance

To avoid securities litigation and maintain investor trust, companies should adopt a proactive and comprehensive approach to cybersecurity threats and AI disclosure compliance, in addition to strengthening their corporate governance, investor protections, and internal controls:
  • Establish a Robust AI Governance Framework:
    • Define clear policies and procedures: Outline acceptable AI use cases, data protocols (including ethical data usage and privacy safeguards), and human oversight requirements.
    • Ensure data integrity: Establish standards for data lineage tracking, metadata, and ensure that AI models are trained on diverse and representative datasets to prevent bias.
    • Prioritize transparency: Implement explainable AI (XAI) techniques to understand how AI systems make decisions. Transparency facilitates audits and helps build trust with stakeholders.
  • Develop Clear and Detailed AI Disclosures:
    • Ensure accuracy and avoid “AI washing”: Verify that all statements about AI usage are truthful, supported by evidence, and avoid overstating capabilities.
    • Consider the materiality of AI use: Assess if discussions about AI in board meetings, earnings calls, and investor presentations suggest materiality, warranting disclosure in SEC filings.
  • Invest in Ongoing Education and Training:
    • Educate employees on AI ethics and responsible use: Training should cover potential risks, such as discrimination, bias, privacy violations, and legal and ethical responsibilities related to AI.
By implementing these best practices, companies can foster a culture of responsible AI use, ensure compliance with legal and ethical standards, and build investor confidence in the rapidly evolving world of AI-driven business.
Types of AI systems and the resulting AI disclosure requirements:
  • Generative AI models that create content for customers: disclose that the content was generated by AI.
  • Chatbots or virtual assistants interacting with customers: disclose that customers are talking to an AI, not a human.
  • AI systems making important decisions affecting people: disclose the use of AI in the decision-making process.
  • AI tools used in regulated services like healthcare or finance: disclose when AI is used in these services.

The Role of Regulatory Bodies in AI Oversight

Regulatory bodies play a pivotal role in overseeing AI technologies and ensuring that companies make AI disclosures that protect investor interests, covering AI risks, AI incidents, and cybersecurity threats. These organizations are tasked with developing and enforcing regulations that address the unique challenges posed by AI, including issues related to transparency, accountability, and data privacy. As AI continues to evolve, regulatory bodies must adapt their frameworks to keep pace with technological advancements and emerging risks.

  • International Organization of Securities Commissions (IOSCO) and the Financial Stability Board (FSB): At the international level, these organizations work to create harmonized standards that facilitate global oversight of AI-related risks. They collaborate with national regulators to develop best practices and guidelines that address the complexities of AI in financial markets. As regulatory bodies continue to refine their frameworks, they play an essential role in ensuring that AI technologies are deployed in a manner that is both ethical and beneficial to investors.
To avoid the scrutiny of these regulatory bodies, it is important that, in addition to the above, companies implement robust corporate governance, with enhanced investor protections and internal controls, to avoid reputational damage and securities litigation.

Future trends in AI disclosures and legal implications

  • AI-focused disclosures move beyond boilerplate. Regulatory bodies like the SEC have explicitly warned against “AI washing” and boilerplate AI risk disclosures. Future trends will require companies to provide more specific and meaningful information.
  • Increased focus on ethical AI and bias. The EU AI Act and various state laws already focus on high-risk AI systems to prevent algorithmic discrimination. This trend is driven by growing public and investor scrutiny of how AI may perpetuate biases related to gender, race, and other protected characteristics. Disclosures will need to provide details on risk mitigation.
  • Expansion of AI into ESG reporting. Investors and regulators are already linking AI and ESG factors. AI tools are increasingly used to help companies and investors track and report on ESG metrics. However, this also raises the need to disclose the environmental footprint of large AI models, as well as the social and governance risks of AI deployment.
  • AI integration into regulatory oversight. Regulatory bodies like the SEC are making AI a priority and may use AI technologies to enhance their own oversight capabilities. This could enable regulators to more efficiently monitor company disclosures, corporate governance, and internal controls, and to detect non-compliance, such as “AI washing,” at scale.
    • Legal implication: Companies will need to be increasingly transparent and accurate in their disclosures, as regulators may be able to detect fraudulent or misleading statements more quickly and efficiently.

How investors can assess AI-related risks

  • Scrutinize AI claims for specificity and substantiation:
    • Use-case specificity: The SEC has pushed for disclosures that clearly explain how AI is used, providing specific examples rather than relying on generic statements.
    • Reasonable basis: Look for evidence that AI claims have a “reasonable basis” and are not just aspirational.
  • Assess data governance and integrity: Analyze whether a company discloses its approach to data management, as AI performance depends heavily on the quality and integrity of its data inputs.
    • Data sourcing: Check if the company explains how it obtains and uses its datasets.
  • Evaluate ethical AI practices: Seek evidence of how companies address the ethical implications of AI.
    • Human oversight: Assess how the company balances AI autonomy with human oversight, especially for high-stakes decisions.
    • Transparency and explainability: Look for a commitment to Explainable AI (XAI) to ensure that the logic behind AI-driven decisions is clear.
  • Monitor third-party and vendor risk: Companies relying on third-party AI models or platforms introduce new risks. Investors should look for disclosures on third-party relationships and how those vendors’ practices align with the company’s own AI governance framework.
  • Analyze risk factor disclosures: Pay close attention to the Risk Factors section of SEC filings, ensuring the disclosed risks are specific to the company’s AI usage and not boilerplate. Look for risks related to cybersecurity, competition, and regulatory changes.

Conclusion

As we navigate through the rapidly evolving landscape of artificial intelligence (AI), the importance of understanding AI disclosures and the associated risks cannot be overstated. By 2025, the integration of AI into various sectors has significantly broadened, necessitating a comprehensive approach to managing AI risks.

This authoritative guide serves as a critical resource for stakeholders seeking to align their practices with emerging standards in AI disclosures and securities litigation. The business community must recognize that transparent AI disclosures are not merely a regulatory requirement but a strategic imperative to mitigate potential securities litigation.

  • AI Disclosures: The complexity of AI systems demands a nuanced understanding of the risks they entail. From data privacy concerns to algorithmic biases, these risks can materially impact a company’s financial health and operational stability. Consequently, accurate and thorough AI disclosures are vital.
  • AI Risks: Disclosures provide investors and regulators with essential insights into how AI technologies are being deployed, the potential risks involved, and the measures in place to address those risks. This guide underscores the need for companies to adopt robust disclosure practices that reflect both the technical intricacies and ethical considerations of AI.
  • Structuring: This guide offers practical advice on how to structure AI disclosures to withstand legal scrutiny, emphasizing the importance of transparency, accountability, and proactive risk management. By adhering to these principles, organizations can better safeguard themselves against litigation while fostering trust among investors.

In conclusion, as AI continues to transform industries, the interplay between AI disclosures and securities litigation will become increasingly significant. This 2025 authoritative guide on AI risks and disclosures provides invaluable insights for navigating this complex terrain.

By prioritizing comprehensive and transparent AI disclosures, companies can effectively manage risks, bolster investor confidence, and mitigate the likelihood of securities litigation. As such, this guide is an essential tool for any organization looking to stay ahead in the age of artificial intelligence.

Contact Timothy L. Miles Today for a Free Case Evaluation about Securities Class Action Lawsuits

If you suffered substantial losses and wish to serve as lead plaintiff in a securities class action, have questions about AI disclosures, or have general questions about your rights as a shareholder, please contact attorney Timothy L. Miles of the Law Offices of Timothy L. Miles, at no cost, by calling 855/846-6529 or via e-mail at [email protected] (24/7/365).

Timothy L. Miles, Esq.
Law Offices of Timothy L. Miles
Tapestry at Brentwood Town Center
300 Centerview Dr. #247
Mailbox #1091
Brentwood, TN 37027
Phone: (855) Tim-MLaw (855-846-6529)
Email: [email protected]
Website: www.classactionlawyertn.com


 

Visit Our Extensive Investor Hub: Learning for Informed Investors 

  • Pros and Cons of Opting Out
  • Emerging Trends in Securities Litigation
  • The Role of Institutional Investors
  • Investor Protection
  • Securities Filing Statistics 2024
  • Role of Regulatory Bodies
  • Shareholder Rights
  • Report a Fraud
  • Frequently Asked Questions
  • Corporate Governance
  • Lead Plaintiff Deadlines
  • Class Certification
  • Lead Plaintiff Selection
  • Timeline of Events
  • Settlement Process

 

Timothy L. Miles

Timothy L. Miles is a nationally recognized shareholder rights attorney raised in Brentwood, Tennessee. Mr. Miles has maintained an AV Preeminent Rating by Martindale-Hubbell® since 2014 and has been recognized as an AV Preeminent Attorney – Judicial Edition (2017-present) and an AV Preeminent Attorney on Lawyers.com (2018-present). Mr. Miles is also a member of the prestigious Top 100 Civil Plaintiff Trial Lawyers: The National Trial Lawyers Association, as well as its Mass Tort Trial Lawyers Association: Top 25 (2024-present) and Class Action Trial Lawyers Association: Top 25 (2023-present). Mr. Miles is a Superb Rated Attorney by Avvo and was the recipient of the Avvo Client’s Choice Award in 2021. He has also been recognized by Martindale-Hubbell® and ALM as an Elite Lawyer of the South (2019-present), Top Rated Litigator (2019-present), and Top-Rated Lawyer (2019-present).

SUBMIT YOUR INFORMATION

LAW OFFICES OF TIMOTHY L. MILES
TIMOTHY L. MILES
(855) TIM-MLAW (855-846-6529)
[email protected]

(24/7/365)