2025 brought seismic shifts in the cyber, artificial intelligence (“AI”), and privacy landscape, marked by an explosion of AI-powered cybercrime and intensifying regulatory enforcement targeting compliance with data security, privacy, and consumer protection laws and regulations. A wave of lawsuits took aim at AI systems in novel ways, from product liability claims to alleged privacy violations. Organizations of all sizes and types grappled with record financial losses from cybercrime and a fractured legal landscape. In short, the past year fundamentally reshaped the risk equation.
Overview of 2025 Major Cyber, AI, and Privacy Developments
In 2025, the cyberthreat landscape shifted significantly, with the global cost of cybercrime hitting a new record of $10.5 trillion in losses. The rapid proliferation of AI technologies caused an explosive escalation in cyberthreats by increasing the speed, scope, and accessibility of the cybercrime ecosystem. Organizations faced a surging number of devastating ransomware attacks and successful social engineering breaches that resulted in the compromise of millions of sensitive records. Threat actors also weaponized AI to commit fraud on a massive scale, from hard-to-detect phishing scams and deepfake schemes such as voice cloning, to novel malware variants and prompt injection attacks designed to sabotage and poison AI systems. In 2025 alone there was a 400% increase in successful phishing scams, due in large part to AI tools. Voice cloning has become a primary attack vector. Fraudsters need only three seconds of audio to create a clone of a human voice with 85% accuracy, and the technology is cheap and widely accessible. Threat actors can easily scrape audio from social media posts, podcasts, corporate webinars, and publicly accessible videos to make extremely effective deepfakes. For instance, in February 2024, an employee at the global engineering firm Arup was tricked by a digital clone of the company’s CFO into wiring $25 million to fraudulent bank accounts during a video conference.
Federal cybersecurity requirements and cyberfraud enforcement actions intensified under the Biden Administration, but it was unclear whether the Trump Administration would share those priorities. The U.S. Department of Justice (“DOJ”) quickly dispelled any notion that organizations would get a free pass on cybersecurity noncompliance. In 2025, DOJ obtained eight cyberfraud-related False Claims Act (“FCA”) settlements totaling $51.75 million in damages, surpassing the damages collected by Biden’s DOJ under its three-year Civil Cyber-Fraud Initiative. In December 2025, DOJ raised the stakes and escalated its enforcement posture with a strong deterrent message: it announced a five-count criminal indictment against Danielle Hillmer, a former senior manager at a government contractor, for obstruction and for orchestrating a cybersecurity fraud scheme allegedly designed to mislead federal agencies about the security of a cloud-based platform used by the U.S. Army and other government customers. These actions signal DOJ’s commitment to holding both organizations and individuals accountable for cyber-related fraud and data security violations using both criminal and civil tools. Federal and state enforcement actions targeting the online safety of minors, data security, and deceptive or unfair uses of AI tools, such as algorithmic pricing practices, also intensified.
In April 2025, DOJ’s sweeping new bulk sensitive data security program (“DSP”) went into effect, significantly restricting or prohibiting the transfer of bulk U.S. sensitive personal data (i.e., human genomic, human ‘omic, biometric, financial, health, and precise geolocation data) and government-related data to six countries of concern, including China. The rule also prohibits the transfer of data to covered persons, defined to include foreign individuals who are primarily resident in a country of concern, as well as foreign entities, and the employees of such entities, that are organized under the laws of, based in, or 50 percent or more owned by a country of concern. Violations of this rule will be prosecuted under the International Emergency Economic Powers Act and carry a potential sentence of up to 20 years’ imprisonment and a $1 million fine. DOJ has not yet announced any investigations or enforcement actions under this rule, but we expect to see enforcement activity in this area in 2026. The Federal Trade Commission (“FTC”) has also signaled that it intends to prioritize enforcement of the Protecting Americans’ Data from Foreign Adversaries Act of 2024 this year. Companies should therefore carefully scrutinize their access controls, vendors, and data mapping and inventory practices to ensure compliance with these laws.
The rapid development of GenAI tools also fueled an expansion of civil lawsuits that will soon decide who is responsible for harm caused by AI tools. Claims ranged from product liability and wrongful death suits alleging that AI chatbots drove users to self‑harm, to class actions against insurers for allegedly erroneous coverage denials based on flawed AI predictive models, to suits against AI note‑taking services such as Otter.ai and Fireflies.ai for alleged wiretap and privacy violations. Indeed, in November 2025 alone, seven lawsuits were filed against OpenAI in California state courts alleging that ChatGPT interactions drove users into delusional states resulting in severe psychological harm; in four cases, users died by suicide. On January 7, 2026, several settlements were announced in lawsuits filed in Florida, Colorado, Texas, and New York against Google and Character.AI. This is the tip of the iceberg for litigation threats tied to the deployment of AI tools. Meanwhile, insurance companies are deciding which AI uses will be covered and under what types of policies.
For the first time since 2020, no new comprehensive state privacy laws were enacted in 2025. Massachusetts appeared to be on the verge of passing a law with strict data minimization requirements, similar to Maryland’s, but these efforts failed despite the Massachusetts Senate passing the Massachusetts Data Privacy Act by a vote of 40 to 0 in September. The number of states with comprehensive data privacy laws remains at 20. However, 2025 saw numerous new state privacy regulations, most notably 127 pages of new regulations in California that expand current privacy requirements and cover automated decision-making technology, risk assessments, cybersecurity audits, and insurance requirements. Eight states also expanded their comprehensive data privacy frameworks, including Connecticut, which lowered applicability thresholds, broadened the definition of sensitive data, expanded consumers’ rights, and added an AI disclosure requirement when personal data is used to train large language models (“LLMs”).
Though no new comprehensive data privacy laws were enacted, approximately 100 new state AI laws were passed across the country in 2025, along with one federal AI law, the Take It Down Act. Concerned that a patchwork of AI laws would stifle innovation, on December 11, 2025, President Trump issued an Executive Order (“EO”) targeting “onerous” and “cumbersome” state AI laws in favor of a national framework. The EO was issued after legislative efforts to impose a sweeping 10-year moratorium on new state AI laws failed. A battle in 2026 over preemption and restrictions on future state AI legislation and enforcement appears likely.
Top Five Cyber, AI, and Privacy Developments in 2025
1. Cyberfraud Enforcement Actions Under the FCA Intensify, Drastically Increasing Liability Risks.
In the first year of President Trump’s second administration, DOJ made clear that cybersecurity is one of its top priorities and that organizations and individuals will be held accountable for failing to adhere to their cybersecurity contractual requirements and for making misrepresentations to the government in connection with their cybersecurity practices, products, or services. What has become clear over the last four years is that cybersecurity compliance applies to any government contract – not just Department of Defense (“DoD”) contracts designed to protect sensitive defense information from increasing cyber threats and foreign state actors. Failing to safeguard sensitive personal and health information, as well as genetic data, is equally important. Moreover, no data breach is required to trigger FCA liability. While a breach certainly will affect damages, the fact that no breach has yet occurred is not a defense to FCA claims. DOJ considers noncompliance with cybersecurity contractual requirements and standards alone sufficient to warrant FCA liability. While the initial focus of the Civil Cyber-Fraud Initiative was government contractors who put U.S. information and systems at risk by failing to report cybersecurity incidents, that is no longer the case. Any company that holds government contracts or sells products to the government is fair game under the FCA if it has weak cybersecurity practices.
Cyber-Related FCA Settlements in 2025 Totaled $51.75 Million.
Most significantly, in December 2025, DOJ sent shockwaves through the defense contracting industry when it announced criminal charges against Danielle Hillmer, a senior manager at a government contractor, for allegedly concealing serious cybersecurity violations and obstructing a federal audit. The Hillmer indictment clearly demonstrates the importance of cybersecurity compliance: DOJ will not hesitate to use criminal tools to punish and hold individuals accountable if their conduct jeopardizes national security. According to the five-count indictment, from March 2020 through at least November 2021, Hillmer allegedly carried out a scheme to defraud the United States by obstructing federal auditors and falsely representing that the contractor’s cloud platform had implemented required security controls. Additionally, Hillmer allegedly sought to influence and obstruct third-party assessors during required audits in 2020 and 2021 by concealing deficiencies and instructing others to hide the true state of the system during testing and demonstrations.
2. AI Tools Amplified the Speed, Volume, and Accessibility of Cybercrime.
While the use of AI tools for cyberattacks has been on the rise since the public release of Generative AI (“GenAI”) models in 2022, in 2025 AI tools significantly increased the volume, speed, and sophistication of cyberattacks. AI tools have also lowered the barrier to entry, enabling even individuals with no technical skills to launch successful attacks. Most alarming, AI agents are now capable of conducting cyberattacks with little human intervention. On November 13, 2025, Anthropic reported that it had identified a highly sophisticated espionage campaign conducted using AI agents, which it believed was orchestrated by a Chinese state-sponsored group that manipulated its Claude Code tool. Anthropic further disclosed that the “operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies” and was believed to be “the first documented case of a large-scale cyberattack executed without substantial human intervention.”
3. Flood of New Lawsuits Targeting Websites and AI Systems.
Website tracking litigation and enforcement actions targeting the use of cookies, chatbots, and third-party online tracking tools (e.g., pixels, embedded scripts, session replay, browser fingerprinting) skyrocketed in 2025, including actions brought under the California Invasion of Privacy Act (“CIPA”) as well as other state and federal wiretap laws and data breach statutes. Class action lawsuits have also been filed against AI-powered transcription and note-taking services, including Otter.ai and Fireflies.ai, for allegedly violating state and federal wiretap laws. Most recently, organizations deploying ambient AI tools, including Sharp HealthCare, have been sued for alleged privacy violations. Additionally, as discussed above, numerous lawsuits have been filed against OpenAI, Google, and Character.AI alleging product defects in AI chatbots that caused individuals to become delusional and harm themselves and others.
State privacy regulators have also begun flexing their muscles in this space. Texas secured a record $1.375 billion settlement with Google for allegedly violating Texas law by unlawfully tracking and collecting users’ private geolocation data, incognito browsing activity, and biometric identifiers. The California Attorney General announced its largest settlement to date, $1.55 million with Healthline Media LLC, for failing to allow consumers to opt out of targeted advertising and for sharing sensitive data with third parties. The Connecticut Attorney General announced its first settlement under the Connecticut Data Privacy Act, with TicketNetwork, for numerous deficiencies in its privacy notice and for misconfigured, inoperable opt-out mechanisms.
4. New AI Laws and Enforcement Actions Begin to Alter the Risk Equation.
More than 100 new state AI laws were passed in 2025, largely targeting certain high-risk uses and industries. For instance, California and New York enacted laws designed to regulate frontier AI developers, which aim to prevent catastrophic AI harm and require the development of safety frameworks, implementation of security protocols, and mandatory incident reporting. California also enacted new regulations governing automated decision-making technologies used to make consequential decisions affecting consumers, as well as a law that requires developers of public GenAI systems to provide a summary of the datasets used to train their models. New York enacted a new pricing transparency law that requires disclosure to consumers when the price of goods or services is determined algorithmically using their personal data. This law also prohibits discriminatory pricing practices.
New laws, however, are not needed to investigate AI-driven pricing disparities or other AI-related trade practices. Rather, state and federal regulators have increased their scrutiny and investigations of AI technologies using existing consumer protection laws. For example, the FTC has brought enforcement actions against several companies for engaging in misleading and deceptive trade practices by misrepresenting the capabilities of their AI tools, including an AI content detector that “did no better than a coin toss” despite being marketed as 98% effective and the “world’s first robot lawyer,” which the FTC found was unable to provide legal advice. Most recently, in December 2025, the FTC sent a civil investigative demand to Instacart about its AI-powered pricing tool, Eversight, after Consumer Reports published a study finding that grocery prices for identical items purchased from the same store differed among customers by as much as 23%. Instacart has since discontinued this pricing program.
Similarly, on July 10, 2025, Massachusetts Attorney General Andrea Joy Campbell announced she had secured a $2.5 million settlement with Earnest Operations LLC (“Earnest”), a private student loan company, to resolve allegations that Earnest engaged in discriminatory practices through its use of AI models to make lending decisions, which constituted unfair and deceptive practices in violation of Chapter 93A of the Massachusetts Consumer Protection Act. This was AG Campbell’s first AI enforcement action since she issued her AI Advisory in April 2024, which made clear that all AI systems and tools must comply with existing Massachusetts consumer protection, anti-discrimination, and data privacy laws.
In 2025, federal and state regulators also began scrutinizing AI-powered “companion chatbots.” In September, the FTC launched an inquiry into seven companies that operate consumer-facing AI chatbots to evaluate how these products are being developed and what steps, if any, the companies are taking to protect children and teens from harm. Several states have enacted new laws regulating the use of such chatbots, including California, Maine, and New York. These laws require operators to notify users that they are interacting with an AI chatbot, not a human. New York’s law also provides the first private right of action for damages caused by an AI chatbot.
On December 11, 2025, President Trump issued an Executive Order designed to stop the “patchwork” of state AI laws being adopted across the country that “attempt to paralyze” the AI industry and threaten innovation. DOJ has created a task force to challenge state regulations seen as an impediment to U.S. leadership and dominance in AI. The AI EO, however, makes clear that state AI laws related to child safety protections will not be impacted. See EO at § 8(b)(i). To date, no state laws have been challenged, but Colorado’s comprehensive AI law, which goes into effect in June 2026, was specifically criticized in the EO.
5. Insider Threats Intensified and Converged with National Security Risks.
Insider threats increasingly endangered U.S. economic and national security in 2025, compromising corporate networks, personal data, and intellectual property. For example, Coinbase reported that criminals bribed a small group of overseas customer‑support contractors to obtain access to customer data, exposing the personal information of nearly 70,000 customers and prompting class‑action litigation. North Korean cyber operatives posing as remote IT workers infiltrated more than 320 companies, often leveraging GenAI tools. Once hired, these operatives embedded themselves within the organization’s network, gaining access to critical functions and sensitive information. This scheme generated hundreds of millions of dollars for North Korea, which DOJ and the FBI believe was used to support North Korea’s WMD program. These efforts have not ceased: Amazon recently disclosed that it had blocked 1,800 job applications from suspected North Korean agents.
These cases demonstrate the importance of continuously monitoring user activities, including geolocation information, and implementing strict, least-privilege access controls. Such controls and AI governance measures will become critical as AI-powered insiders and autonomous agents become users across corporate networks.
2026 Predictions and Takeaways
- Organizations should expect more cyberfraud FCA enforcement actions in 2026. Attorney General Bondi’s DOJ intends to hold all companies accountable for failing to comply with their cybersecurity contractual obligations and for making misrepresentations to the government in connection with their cybersecurity practices, products, or services. Accordingly, organizations should ensure they are in compliance with their cybersecurity obligations. Further, entities should carefully review and scrutinize all statements that must be submitted to the government for accuracy, especially those under new DoD rules in connection with the Cybersecurity Maturity Model Certification Program. If any potential misconduct in this area is identified, organizations should consider conducting an internal investigation and speaking with experienced counsel about whether a voluntary self-disclosure to DOJ would be in the organization’s best interest.
- New federal cybersecurity requirements, along with state requirements in California and New York, will go into effect in 2026. It is essential to continually monitor the federal and state regulatory landscape to ensure organizations remain in compliance. The Cyber Incident Reporting for Critical Infrastructure Act final rule was statutorily mandated by October 2025, but it has been delayed and is now expected in May 2026. This new rule will impact hundreds of thousands of companies and require reporting of any “substantial” cyber incident within 72 hours.
- Prepare for increased oversight and scrutiny of AI systems by state attorneys general in 2026. Organizations should exercise caution and conduct risk assessments before deploying AI models, especially those that will have access to sensitive information, and ensure there is proper human oversight of AI systems to mitigate liability risks when AI goes wrong. As AI systems are increasingly used to streamline business processes, AI-powered claim submissions will likely be scrutinized by regulators, and errors could form the basis for FCA liability.
- Website tracking and privacy lawsuits are likely to continue to surge in 2026, as will lawsuits directed at harms allegedly caused by AI tools and systems.
- Online deepfakes and impersonation attacks will likely continue to explode in 2026, making strong identity verification protections essential. In 2025, nearly eight million online deepfake files were shared on social media platforms, and losses from GenAI fraud are expected to hit $40 billion by 2027, according to the Deloitte Center for Financial Services. AI model poisoning and prompt injection attacks will also accelerate, increasing the importance of closely monitoring AI outputs and maintaining cybersecurity controls on AI systems.
- It is imperative that companies implement internal financial controls to mitigate potential liability risks from online fraud and to qualify for insurance. Such controls should, at a minimum, include using only trusted phone numbers and contacts to confirm financial transactions; requiring a two-person rule and establishing a code word or phrase for large financial transactions; and closely scrutinizing bank account information and confirming any changes or modifications with a known contact at a trusted phone number, not one provided in the same email containing the requested account modifications.
- Organizations must comply with new data minimization requirements. For example, Maryland adopted stringent data minimization requirements, which went into effect in October 2025 and will be enforced by the Maryland Attorney General beginning on April 1, 2026. Under Maryland’s Online Data Privacy Act of 2024, businesses must “limit the collection of personal data to what is reasonably necessary and proportionate” to provide or maintain a specific product or service that was requested by the consumer. This strict data minimization standard applies even if the consumer provides consent.
- As the risks associated with AI-related incidents increase, insurers will likely seek to introduce broad AI liability exclusions in their general liability, cyber, and errors and omissions policies, and will apply more rigorous scrutiny to corporate AI governance policies to avoid paying large damages claims.
- Quantum computing is coming, and malicious threat actors and state actors could use it to break the encryption protecting sensitive data.