Introduction to AI Disclosures and Securities Litigation
As the world becomes increasingly digitized, the integration of artificial intelligence (AI) into various sectors has become a focal point of innovation and excitement. With this surge in AI applications, however, comes a host of new risks and legal challenges that companies must navigate carefully.
- AI disclosures: The transparent reporting of AI systems’ functionalities, limitations, and potential risks is gaining prominence. AI disclosures are essential for maintaining investor trust and ensuring regulatory compliance.
- They provide a clear understanding of how AI is being utilized within an organization and what inherent risks are associated with its deployment.
- This transparency is crucial to preempting and mitigating potential legal complications that may arise from the misuse or unanticipated consequences of AI technologies.
- Securities litigation: In the litigation context, the importance of AI disclosures cannot be overstated.
- Securities litigation often arises when there are allegations that a company has misled investors or failed to disclose material information that could impact investment decisions.
- With the advent of AI, the stakes have been raised; companies must now ensure that they provide comprehensive and accurate information about their AI initiatives, backed by strong corporate governance and internal controls. Failing to do so could result in significant legal liability and reputational damage.
- Robust corporate governance: This guide addresses AI risks and disclosures and aims to equip organizations with the knowledge and strategies necessary to put robust corporate governance and investor protections in place.
- It emphasizes the need for robust risk assessment frameworks and detailed disclosures to protect against securities litigation.
- AI risks: The guide also addresses the various types of AI risks that companies need to disclose, including algorithmic biases, data privacy issues, and the potential for AI systems to malfunction or generate unintended outcomes.
- By addressing these risks proactively through detailed disclosures, companies can better safeguard themselves against litigation while fostering a culture of transparency and accountability.
- Additionally, this guide underscores the collaboration between legal teams, data scientists, and corporate executives to ensure that all AI-related disclosures are accurate, meaningful, and compliant with regulatory standards.
As the regulatory environment continues to evolve, staying ahead of AI disclosure requirements will be pivotal for companies looking to leverage AI technologies responsibly. The insights provided in this authoritative guide are designed to help organizations not only comply with current regulations but also anticipate future changes in the landscape of securities litigation related to AI.
By prioritizing transparency and comprehensive risk management, companies can harness the full potential of AI innovations while minimizing their exposure to legal risks, which will only continue to rise, as the growing number of AI-related securities filings demonstrates.

Understanding AI Disclosures in the Context of Securities Litigation
- Growing importance: As companies increasingly integrate AI into their business, informing investors about its nature, scope, and impact is crucial for transparency and trust.
- A new form of risk: AI technologies introduce new variables that can affect financial outcomes, making AI disclosures a critical aspect of securities litigation.
- Legal vs. hype: The clarity and precision of AI disclosures can distinguish legal compliance from costly lawsuits, especially with the SEC and private plaintiffs targeting “AI-washing”.
- Misrepresentation risks: Omissions or misrepresentations regarding AI capabilities can lead to allegations of securities fraud, particularly as investors rely on accurate information to make informed decisions.
- Building confidence: Comprehensive and transparent AI disclosures foster investor confidence by addressing the hidden risks associated with rapidly advancing AI technology.
- Central to corporate governance: The dynamics of AI disclosures are now a vital element of corporate governance and investor protections in the digital age.
The Importance of Transparency in AI Technologies
- Essential for trust and compliance: Transparent AI practices are vital for building investor trust and complying with evolving legal requirements.
- Clear communication: This includes details on algorithms, data inputs, decision-making processes, and ethical measures, ensuring investors understand the benefits and risks.
- Avoids consequences: A lack of transparency can lead to reputational damage and legal challenges from misled stakeholders.
- Strategic advantage: Transparency can serve as a strategic advantage, differentiating a company in a competitive market by fostering trust.
- Accountability and improvement: Open communication about AI systems fosters accountability and enables continuous improvement through dialogues with stakeholders.
Additional considerations and best practices
- Risk disclosure: Disclose AI risks, including cybersecurity, regulatory uncertainty, ethical/reputational damages, competition, and reliance on third-party vendors.
- Avoid “AI-washing”: Make truthful, specific, and evidence-backed claims about AI capabilities and AI incidents, and avoid overstating capabilities or making vague, “boilerplate” statements.
- Validate AI statements: Ensure public AI disclosures are supported by effective disclosure controls and procedures, underpinned by robust corporate governance and investor protections.
- Establish governance: Implement corporate governance structures to identify and manage AI risks at both the board and management levels.
- Standardize terminology: Use consistent AI terminology across all public communications, including marketing, investor presentations, and SEC filings, to avoid misrepresentation claims.
- Mitigate bias: Implement and communicate measures to prevent and address inherent biases in AI models (a minimal fairness-check sketch follows this list).
- Focus on explainability and accountability: Strive for explainable AI systems that can justify their decisions and establish clear lines of responsibility for potential AI Incidents.
- Balance privacy and transparency: Carefully balance the need for transparency with the protection of customer data privacy, obtaining explicit consent where necessary.
- Stay current: Keep abreast of regulatory developments and industry standards, which are evolving rapidly.
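To make the bias-mitigation item above concrete, below is a minimal, illustrative sketch of one common fairness check: the demographic parity gap, which compares favorable-outcome rates across a protected attribute. The data, names, and 0.05 threshold are hypothetical stand-ins; a real audit program would combine several metrics with legal and domain review.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in favorable-outcome rates between two groups.

    y_pred: binary model decisions (1 = favorable outcome, e.g., loan approved)
    group:  binary protected-attribute indicator for each decision
    """
    rate_a = y_pred[group == 0].mean()  # favorable-outcome rate, group 0
    rate_b = y_pred[group == 1].mean()  # favorable-outcome rate, group 1
    return abs(rate_a - rate_b)

# Hypothetical audit run on synthetic data standing in for real model output.
rng = np.random.default_rng(seed=0)
decisions = rng.integers(0, 2, size=1_000)  # stand-in for model decisions
groups = rng.integers(0, 2, size=1_000)     # stand-in for a protected attribute

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.3f}")
if gap > 0.05:  # illustrative threshold a governance committee might set
    print("Gap exceeds policy threshold -- escalate for review and document findings.")
```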

Key legal frameworks governing AI disclosures
- United States: While the U.S. does not have a single federal AI law, it has a “patchwork of federal and state-level initiatives”.
- The Securities and Exchange Commission (SEC) expects public companies to define AI clearly in disclosures, ensure claims about AI capabilities have a “reasonable basis,” and report material AI risks. Examples of SEC scrutiny include “AI-washing,” where companies overstate their AI use. The SEC has also designated AI as an examination priority for 2025.
- Individual states are also establishing precedents. For example, California passed the Generative AI Training Data Transparency Act (AB 2013), mandating that generative AI developers disclose detailed information about the datasets used to train their systems.
- Voluntary frameworks, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework, provide guidance on risk identification and mitigation, which can help companies ensure trustworthy AI development while enhancing corporate governance, internal controls and investor protections.
- Europe: The European Union has taken a leading regulatory role with two key frameworks:
- The EU AI Act: Enacted in August 2024, this is the world’s first comprehensive legal framework on AI, with most provisions becoming applicable by August 2026. It uses a risk-based approach, with stricter rules for “high-risk” systems like those used in credit scoring.
- General Data Protection Regulation (GDPR): This data privacy law already has significant implications for AI, particularly concerning data used for training and automated decision-making. Companies must ensure a lawful basis for processing personal data, uphold data minimization, and offer a “right of explanation” for decisions based solely on automated processes.
- International: Organizations are working towards harmonized global standards.
- The Financial Stability Board (FSB) and the International Organization of Securities Commissions (IOSCO) have released reports assessing the financial stability risks and use cases of AI. The FSB recommends that authorities enhance their regulatory capabilities and address data gaps related to AI adoption.
Common risks associated with AI in financial markets
- Algorithmic bias: AI systems can reflect and even amplify biases in their training data, leading to unfair or discriminatory outcomes in credit decisions, hiring, and insurance. For financial institutions, this creates compliance risks under regulations such as fair lending laws. Mitigation strategies include using diverse datasets, implementing fairness metrics, conducting regular audits, and maintaining robust corporate governance, internal controls, and investor protections.
- Cybersecurity threats: AI introduces new vulnerabilities and can also be weaponized by malicious actors.
- AI-powered attacks: AI enables sophisticated phishing, deepfake-based social engineering, and the development of more effective malware, targeting a sector that manages trillions in assets.
- New attack surfaces: AI systems themselves can be targeted, as they require large, sensitive datasets to train and function.
- Opaqueness and “black box” models: The complexity of many advanced AI models makes their decision-making processes difficult to understand. In financial markets, where transparency is paramount, this opacity can hinder regulatory compliance and erode trust.
- Explainable AI (XAI): This emerging field provides a way to demystify complex models by offering insights into how an AI arrives at its conclusions. XAI is critical for regulatory requirements that compel financial firms to explain automated decisions (a minimal XAI sketch follows this list).
- Third-party risk and concentration: The financial sector’s reliance on a concentrated market of specialized cloud services and AI vendors could create systemic vulnerabilities if a key provider is compromised.
- Market volatility and systemic risk: The widespread use of similar AI trading models and data sources could increase market correlations, exacerbating volatility and amplifying market stress.
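As noted in the XAI item above, one widely used, model-agnostic explainability technique is permutation importance: shuffle one input feature at a time and measure how much model performance degrades. The sketch below runs this on synthetic data with invented feature names using scikit-learn; it is a first-pass illustration, not a complete explainability program.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a credit-decision dataset; feature names are invented.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "tenure", "utilization"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature n_repeats times; larger accuracy drops indicate
# features the model leans on more heavily.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>12}: {score:.3f}")
```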
Recent case studies of AI-related securities litigation
- C3.ai: This securities litigation, filed in September 2025, alleges that C3.ai made misleading statements regarding the robustness of its sales pipeline and financial projections, which were likely tied to its AI offerings. This exemplifies the risk of overstating the business impact of AI technologies.
- DocGo Inc.: An executive’s claims about the potential for DocGo’s AI systems to reduce the need for human personnel, presented with inflated credentials, formed the basis of securities class action lawsuits following a missed revenue target. This highlights the risk of misleading statements by company representatives and the need for a reasonable basis for forward-looking statements regarding AI capabilities.
- Apple Siri: Securities class actions filed in June 2025 allege that Apple misled investors at its Worldwide Developers Conference about the readiness of enhanced Siri features for the iPhone 16 lineup. This demonstrates the risks of setting unrealistic AI rollout timelines, particularly when linked to revenue projections or product launches.
- Zillow Group Inc.: In this securities litigation, plaintiffs claimed that Zillow misrepresented the accuracy of its AI-powered “Zestimate” offer algorithms, leading the company to overpay for homes because of overreliance on the AI’s capabilities.
- Upstart Holdings Inc.: Investors filed securities class action lawsuits against Upstart, alleging that the company made false claims about its AI loan system’s ability to deliver higher approval rates and lower interest rates with the same loss rates compared to traditional methods. The court allowed the case to proceed, finding that the plaintiffs adequately pleaded that the AI model lacked the claimed advantages.
- Delphia (USA) Inc. and Global Predictions, Inc.: These investment advisers faced SEC enforcement actions for making false and misleading statements about their use of AI in investment strategies and operations. These cases emphasize the SEC’s focus on ensuring accuracy and avoiding overstatement of AI capabilities in investor communications.
- Nate Inc.: The founder of this mobile shopping app faced parallel actions from the SEC and DOJ for allegedly making false claims about the AI technology powering the platform. The SEC and DOJ claim that transactions were being processed manually by foreign contract workers rather than by the touted AI. This case underscores the risks of fabricating or exaggerating the degree of automation provided by AI.
Key takeaways from these securities class actions
- Scrutiny of “AI Washing”: Regulatory bodies like the SEC are actively targeting instances where companies exaggerate their AI capabilities or imply a greater level of AI use than actually exists. The FTC has also launched “Operation AI Comply,” indicating a similar focus on preventing deceptive practices related to AI.
- Importance of Substantiation: Companies must be able to substantiate their claims about AI, particularly regarding performance, capabilities, and the timeline for deployment.
- Accuracy in Investor Communications: The cases underscore the need for precision and accuracy in all communications related to AI, including SEC filings, press releases, investor presentations, and earnings calls. Generalized or vague claims can lead to accusations of misleading investors.
- Evolving Legal Landscape: Courts are grappling with how existing securities laws and regulations apply to AI, and new interpretations are emerging.

Best practices for companies in AI disclosure compliance
- Establish a Robust AI Governance Framework:
- Define clear policies and procedures: Outline acceptable AI use cases, data protocols (including ethical data usage and privacy safeguards), and human oversight requirements.
- Assign clear roles and responsibilities: Designate a dedicated team or committee to oversee AI governance, including regular audits, risk assessments, and policy enforcement.
- Ensure data integrity: Establish standards for data lineage tracking, metadata, and ensure that AI models are trained on diverse and representative datasets to prevent bias.
- Prioritize transparency: Implement explainable AI (XAI) techniques to understand how AI systems make decisions. Transparency facilitates audits and helps build trust with stakeholders.
- Manage third-party risks: When engaging with vendors or integrating third-party AI solutions, understand the technology thoroughly and require clear details about the vendor’s tools, data practices, and responsible AI policies.
- Develop Clear and Detailed AI Disclosures:
- Define AI clearly within your business context: Clarify what constitutes AI within your organization and how it’s being used.
- Provide contextualized details: Avoid generic statements and explain specific applications of AI and how it contributes to strategic objectives or operations.
- Distinguish between current capabilities and future projections: Clearly differentiate what the AI currently does versus aspirational or future capabilities.
- Disclose material risks: Identify and explain risks associated with AI, such as potential algorithmic biases, reliance on third-party providers, cybersecurity threats, or compliance with emerging regulations.
- Ensure accuracy and avoid “AI washing”: Verify that all statements about AI usage are truthful, supported by evidence, and avoid overstating capabilities.
- Consider the materiality of AI use: Assess if discussions about AI in board meetings, earnings calls, and investor presentations suggest materiality, warranting disclosure in SEC filings.
- Invest in Ongoing Education and Training:
- Educate employees on AI ethics and responsible use: Training should cover potential risks, such as discrimination, bias, privacy violations, and legal and ethical responsibilities related to AI.
- Provide data privacy and security training: Employees should understand the importance of protecting sensitive data used by AI, comply with privacy laws, and handle data securely. Training should also address cybersecurity threats related to AI and how to respond to breaches.
- Keep abreast of regulatory changes and best practices: Continuous learning and participation in industry discussions are crucial for staying informed about the evolving landscape of AI technologies, regulations, and ethical guidelines.
- Engage in continuous monitoring and periodic re-audits: Monitor AI tool usage for new risks and update policies regularly. Periodically re-audit based on risk profiles, update models and data sources, and continue training programs to prevent new compliance gaps (a minimal drift-check sketch follows).
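As referenced in the monitoring item above, one simple trigger for a re-audit is a statistical drift check on a model input. The sketch below, a minimal illustration on synthetic data, uses a two-sample Kolmogorov-Smirnov test; the feature, threshold, and escalation step are hypothetical.

```python
import numpy as np
from scipy.stats import ks_2samp

def input_drift_alert(reference: np.ndarray, live: np.ndarray,
                      p_threshold: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test on one model input feature.

    A small p-value suggests live inputs have drifted from the distribution
    the model was validated on, warranting a re-audit.
    """
    statistic, p_value = ks_2samp(reference, live)
    return p_value < p_threshold

# Hypothetical monitoring run: compare validation-time data with recent inputs.
rng = np.random.default_rng(seed=1)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # validation baseline
live = rng.normal(loc=0.4, scale=1.0, size=5_000)       # shifted live traffic

if input_drift_alert(reference, live):
    print("Drift detected -- schedule a model re-audit and refresh documentation.")
```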
| Type of AI System | Resulting AI Disclosure Requirement |
|---|---|
| Generative AI models that create content for customers | That the content was generated by AI |
| Chatbots or virtual assistants interacting with customers | That customers are talking to an AI, not a human |
| AI systems making important decisions affecting people | The use of AI in the decision-making process |
| AI tools used in regulated services like healthcare or finance | When AI is used in these services |
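One way to operationalize a mapping like the table above is as a disclosure-control lookup that a release pipeline consults before an AI feature ships. The sketch below is illustrative only: the type names and notice wording are hypothetical and would be set by counsel for the applicable jurisdiction.

```python
from enum import Enum

class AISystemType(Enum):
    GENERATIVE_CONTENT = "generative AI content for customers"
    CHATBOT = "chatbot or virtual assistant"
    CONSEQUENTIAL_DECISION = "decisions materially affecting people"
    REGULATED_SERVICE = "AI in regulated services (healthcare, finance)"

# Mirrors the table above; notice text is illustrative, not legal language.
REQUIRED_NOTICE = {
    AISystemType.GENERATIVE_CONTENT: "This content was generated by AI.",
    AISystemType.CHATBOT: "You are interacting with an AI, not a human.",
    AISystemType.CONSEQUENTIAL_DECISION: "AI was used in making this decision.",
    AISystemType.REGULATED_SERVICE: "AI is used in providing this service.",
}

def notice_for(system: AISystemType) -> str:
    """Return the disclosure notice a deployment of this system type must carry."""
    return REQUIRED_NOTICE[system]

print(notice_for(AISystemType.CHATBOT))
```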
The Role of Regulatory Bodies in AI Oversight
Regulatory bodies play a pivotal role in overseeing AI technologies and ensuring that companies make AI disclosures that protect investor interests by addressing AI risks, AI incidents, and cybersecurity threats. These organizations are tasked with developing and enforcing regulations that address the unique challenges posed by AI, including issues related to transparency, accountability, and data privacy. As AI continues to evolve, regulatory bodies must adapt their frameworks to keep pace with technological advancements and emerging risks.
- United States: The SEC is a key regulatory body responsible for overseeing AI disclosures by publicly traded companies. The SEC provides guidance on the types of information companies should disclose to investors, including AI risks, AI incidents, and all material risks and uncertainties associated with AI systems.
- European Securities and Markets Authority (ESMA): Plays a crucial role in regulating AI in Europe, ensuring that companies comply with relevant disclosure requirements and promoting investor protection.
- International Organization of Securities Commissions (IOSCO) and the Financial Stability Board (FSB): At the international level, these organizations work to create harmonized standards that facilitate global oversight of AI-related risks. These bodies collaborate with national regulators to develop best practices and guidelines that address the complexities of AI in financial markets. As regulatory bodies continue to refine their frameworks, they play an essential role in ensuring that AI technologies are deployed in a manner that is both ethical and beneficial to investors.

Future trends in AI disclosures and legal implications
- AI-focused disclosures move beyond boilerplate. Regulatory bodies like the SEC have explicitly warned against “AI washing” and boilerplate AI risk disclosures. Future trends will require companies to provide more specific and meaningful information.
- Legal implication: Failure to provide specific, substantiated, and non-misleading AI disclosures will continue to increase the risk of regulatory enforcement actions and securities litigation.
- Increased focus on ethical AI and bias. The EU AI Act and various state laws already focus on high-risk AI systems to prevent algorithmic discrimination. This trend is driven by growing public and investor scrutiny of how AI may perpetuate biases related to gender, race, and other protected characteristics. Disclosures will need to provide details on risk mitigation.
- Legal implication: Regulations could lead to stricter requirements for developers and deployers of high-risk AI, potentially making companies liable for discriminatory outcomes or subject to securities class action lawsuits.
- Expansion of AI into ESG reporting. Investors and regulators are already linking AI and ESG factors. AI tools are increasingly used to help companies and investors track and report on ESG metrics. However, this also raises the need to disclose the environmental footprint of large AI models, as well as the social and governance risks of AI deployment.
- Legal implication: With mandatory climate-related disclosures in some jurisdictions, companies must be prepared to disclose the energy consumption and carbon footprint of their AI systems if material. Failure to do so could lead to liability for misleading ESG claims.
- AI integration into regulatory oversight. Regulatory bodies like the SEC are making AI a priority and may use AI technologies to enhance their own oversight capabilities. This could enable regulators to more efficiently monitor company disclosures, corporate governance and internal controls and detect non-compliance, such as “AI washing,” at scale.
- Legal implication: Companies will need to be increasingly transparent and accurate in their disclosures, as regulators may be able to detect fraudulent or misleading statements more quickly and efficiently.
How investors can assess AI-related risks
- Evaluate board oversight and expertise: Look for disclosures that specify how the board or a dedicated committee oversees AI strategy and risks. Assess if the company has directors with relevant AI or technology expertise, which indicates a higher level of preparedness.
- Scrutinize technical claims: Go beyond marketing materials and examine the technical details disclosed.
- Use-case specificity: The SEC has pushed for disclosures that clearly explain how AI is used, providing specific examples rather than relying on generic statements.
- Reasonable basis: Look for evidence that AI claims have a “reasonable basis” and are not just aspirational.
- Model validation: Some companies will disclose how they developed and validated their AI algorithms, providing a sign of robustness.
- Assess data governance and integrity: Analyze whether a company discloses its approach to data management, as AI performance depends heavily on the quality and integrity of its data inputs.
- Data sourcing: Check if the company explains how it obtains and uses its datasets.
- Data privacy and security: Ensure the company has robust controls in place to protect data used by AI systems, aligning with regulations like GDPR.
- Evaluate ethical AI practices: Seek evidence of how companies address the ethical implications of AI.
- Bias mitigation: Companies should detail measures taken to prevent, detect, and mitigate algorithmic bias.
- Human oversight: Assess how the company balances AI autonomy with human oversight, especially for high-stakes decisions.
- Transparency and explainability: Look for a commitment to Explainable AI (XAI) to ensure that the logic behind AI-driven decisions is clear.
- Monitor third-party and vendor risk: Companies relying on third-party AI models or platforms introduce new risks. Investors should look for disclosures on third-party relationships and how those vendors’ practices align with the company’s own AI governance framework.
- Analyze risk factor disclosures: Pay close attention to the Risk Factors section of SEC filings, ensuring the disclosed risks are specific to the company’s AI usage and not boilerplate. Look for risks related to cybersecurity, competition, and regulatory changes.
Conclusion
As we navigate the rapidly evolving landscape of artificial intelligence (AI), the importance of understanding AI disclosures and the associated risks cannot be overstated. As of 2025, the integration of AI across sectors has broadened significantly, necessitating a comprehensive approach to managing AI risks.
This authoritative guide serves as a critical resource for stakeholders seeking to align their practices with emerging standards in AI disclosures and securities litigation. The business community must recognize that transparent AI disclosures are not merely a regulatory requirement but a strategic imperative to mitigate potential securities litigation.
- AI risks: The complexity of AI systems demands a nuanced understanding of the risks they entail. From data privacy concerns to algorithmic biases, these risks can materially impact a company’s financial health and operational stability. Consequently, accurate and thorough AI disclosures are vital.
- AI disclosures: They provide investors and regulators with essential insights into how AI technologies are being deployed, the potential risks involved, and the measures in place to address those risks. This guide underscores the need for companies to adopt robust disclosure practices that reflect both the technical intricacies and ethical considerations of AI.
- Securities litigation: Securities class action lawsuits related to AI are poised to become more prevalent as stakeholders increasingly scrutinize how AI affects financial performance and corporate governance. Companies must be prepared to defend their AI practices and disclosures in legal contexts.
- Structuring disclosures: This guide offers practical advice on how to structure AI disclosures to withstand legal scrutiny, emphasizing the importance of transparency, accountability, and proactive risk management. By adhering to these principles, organizations can better safeguard themselves against litigation while fostering trust among investors.
In conclusion, as AI continues to transform industries, the interplay between AI disclosures and securities litigation will become increasingly significant. This 2025 authoritative guide on AI risks and disclosures provides invaluable insights for navigating this complex terrain.
By prioritizing comprehensive and transparent AI disclosures, companies can effectively manage risks, bolster investor confidence, and mitigate the likelihood of securities litigation. As such, this guide is an essential tool for any organization looking to stay ahead in the age of artificial intelligence.
Contact Timothy L. Miles Today for a Free Case Evaluation about Securities Class Action Lawsuits
If you suffered substantial losses and wish to serve as lead plaintiff in a securities class action, have questions about AI disclosures, or have general questions about your rights as a shareholder, please contact attorney Timothy L. Miles of the Law Offices of Timothy L. Miles, at no cost, by calling 855-846-6529 (24/7/365) or via e-mail at [email protected].
Timothy L. Miles, Esq.
Law Offices of Timothy L. Miles
Tapestry at Brentwood Town Center
300 Centerview Dr. #247
Mailbox #1091
Brentwood, TN 37027
Phone: (855) Tim-MLaw (855-846-6529)
Email: [email protected]
Website: www.classactionlawyertn.com
Visit Our Extensive Investor Hub: Learning for Informed Investors

