
The False Claims Act continues to be a powerful enforcement tool, with nearly $3 billion recovered in 2024 alone. Maria Durant and Gejaa Gobena of Hogan Lovells analyze emerging trends, including the new administration’s approach to whistleblower cases, the role of DOGE in identifying fraud and the double-edged sword of AI in both creating and detecting potential violations.
2024 was another massive year for False Claims Act (FCA) enforcement, with the DOJ reporting nearly $3 billion in recoveries for the fiscal year. The 558 settlements and judgments achieved in 2024 — the second-highest annual total, behind only 2023 — reflect the government’s enforcement priorities, including combating healthcare fraud, fraud in pandemic relief programs and knowing violations of cybersecurity requirements in government contracts and grants.
Of the nearly $3 billion recovered, over $2.4 billion arose from qui tam suits pursued by either the government or relators. And, ensuring continued activity in the months to come, whistleblowers filed 979 qui tam lawsuits in FY2024, the highest number ever filed in a single year.
Looking ahead to 2025, Attorney General Pam Bondi joins the DOJ after serving as Florida’s attorney general, where her team participated actively and frequently in multistate qui tam investigations, litigation and settlements.
Given that prosecuting fraud, waste and abuse enjoys strong bipartisan support from both Congress and American taxpayers, one should expect that the FCA will remain a critical tool for the DOJ in the coming year. Indeed, in response to questioning by longtime FCA champion Sen. Chuck Grassley, R-Iowa, during her confirmation hearing, Bondi pledged to defend the constitutionality of the FCA against current challenges winding their way through the courts and to ensure proper staffing and funding levels to prosecute FCA cases.
In doing so, she noted the importance of the FCA to whistleblowers, whom Grassley characterized as “patriotic,” as well as the protection of public funds.
We will also be watching how the DOJ might integrate the work of the Department of Government Efficiency (DOGE) into its efforts to combat fraud, waste and abuse. Already in the early days of the new administration, DOGE has been scouring government payments generally, with recent reports noting that it is examining the efforts of the Medicare and Medicaid programs to use technology to identify fraud, waste and abuse. How that factors into FCA enforcement (e.g., more DOJ-initiated FCA investigations based on the results of data analytics and AI) will be a topic to watch closely.
The future of whistleblower suits
Although Bondi’s public support for the FCA’s qui tam provisions may settle the question of where the Trump DOJ stands on the issue, all eyes will be on the 11th Circuit when it hears oral argument later this year on the constitutionality of the whistleblower provisions under Article II. We also anticipate further litigation on this issue in other circuits from those seeking to widen the circuit split in the hope of enticing the Supreme Court to get involved.
Notwithstanding its position on the constitutionality of whistleblower suits, the DOJ could take steps at the policy level to exert more influence over qui tams in which it has declined to intervene. During the first Trump Administration, the DOJ’s Commercial Litigation Branch Fraud Section issued a policy memorandum outlining the factors its trial attorneys and assistant US Attorneys should consider when deciding whether to move to dismiss a qui tam action under Section 3730(c)(2)(A), which allows the government to dismiss an action over the relator’s objection. The memorandum describes Section 3730(c)(2)(A) as “an important tool to advance the government’s interests, preserve limited resources, and avoid adverse precedent” and advises prosecutors to consider filing a motion for dismissal if it would: (i) curb meritless qui tam actions; (ii) prevent parasitic or opportunistic qui tam actions; (iii) prevent interference with agency policies and programs; (iv) control litigation brought on behalf of the United States; (v) safeguard classified information and national security interests; (vi) preserve government resources; or (vii) address egregious procedural errors.
Despite formalization of the policy, the first Trump Administration continued the DOJ’s tradition of infrequently seeking dismissal of non-intervened qui tam cases under 31 U.S.C. § 3730(c)(2)(A), but this could change under Trump 47. The second Trump Administration’s strongly articulated interest in cutting federal spending and the creation of DOGE could suggest a more critical review of qui tam actions with an eye toward dismissal of those qui tam actions it views as a drain on resources or an interference with its priorities. And, with the Supreme Court’s 2023 decision in Polansky v. Executive Health Resources, Inc. permitting the government to intervene and move to dismiss at any stage of a whistleblower suit, the path is clearer for the DOJ to do so.
Finally, it remains to be seen whether we will start to see a new variety of FCA qui tam actions stemming from Trump’s executive order on DEI, which takes the position that diversity, equity and inclusion programs can violate federal civil rights law and seeks to end such practices not only in the federal government but also in the private sector.
A key feature of the EO is its clear intention to empower whistleblowers to pursue enforcement under the FCA by alleging that federal contractors and grant recipients failed to comply with regulations and contract and grant clauses contemplated under the EO. The EO directs each agency head to include terms in every contract or grant award making clear that the contractor/grantee: (A) agrees that compliance with “all applicable Federal anti-discrimination laws is material to the government’s payment decisions” for FCA purposes; and (B) certifies that it does “not operate any programs promoting DEI that violate any applicable Federal anti-discrimination laws.”
Because claims and certifications submitted under contracts and grants potentially implicate the FCA, whistleblower allegations of knowing failures to comply with those terms may be a precursor to investigations and ultimately litigation under the FCA if the EO’s terms are incorporated. Litigation is ongoing, but even if the order is allowed to stand, FCA liability under its terms is far from a foregone conclusion. In the meantime, affected entities and individuals would do well to monitor the terms of new or amended contracts or grants and take steps to implement best practices, mitigate DEI risks and prepare for potential investigations and disputes.
The influence of subregulatory guidance on the prosecution of FCA cases
Another policy from the first Trump Administration that will be revived in some form under Trump 47 involves restrictions on the DOJ’s reliance on subregulatory guidance when bringing FCA and other enforcement actions. During the first Trump presidency, the DOJ released a memorandum by then-Associate Attorney General Rachel Brand significantly restricting the DOJ’s use of executive agency guidance documents in affirmative civil enforcement actions. The Brand memo precluded the DOJ from “effectively convert[ing] agency guidance into binding rules,” and prevented DOJ lawyers from using noncompliance with agency guidance to establish violations of law. In 2021, former Attorney General Merrick Garland rescinded the Brand memo, criticizing it as “overly restrictive” and a “substantial” departure from the DOJ’s “traditional approach” to guidance documents.
In February, Bondi issued a number of “first-day” directives, including a memo rescinding the Biden-era policy that allowed the use of subregulatory guidance in FCA enforcement decisions. In doing so, Bondi directed the associate attorney general to provide a report “concerning strategies and measures that can be utilized to eliminate the illegal or improper use of guidance documents,” which is expected soon.
The revival of a policy similar to the Brand memorandum, combined with the lingering impact of the overturning of the Chevron doctrine in Loper Bright, signals that the opinions of federal agencies on key regulatory issues in FCA cases may carry less weight in FCA litigation going forward. Moreover, if, as discussed above, the new Trump DOJ intends to assert more control over the litigation of whistleblower suits, the DOJ may use a reinstated Brand memo or similar policies as justification for seeking the dismissal of qui tam suits where the theory of liability relies on agency guidance documents. Even in instances where the government declines to intervene to dismiss a qui tam case, however, the revival of the Brand memo or similar policies will give FCA defendants a renewed basis to argue that relators who stand in the shoes of the government should not be permitted to rely on agency guidance either.
The risks and rewards of AI
The rapid development of generative artificial intelligence (AI) presents both risks and rewards vis-à-vis FCA enforcement. On the one hand, as companies find more opportunities to deploy AI technology to accomplish their business objectives, doing so without appropriate guardrails and oversight could increase the risk of government enforcement, including liability under the FCA. On the other hand, AI technology can also be employed to help companies mitigate these risks and bolster their FCA compliance efforts.
Today, AI is capable of learning from data patterns and generating new responses to inquiries. Using the healthcare space as an example, technology that once was used only to streamline claim filings can now make coverage or diagnosis recommendations. However, with AI performing more tasks autonomously, greater government scrutiny is likely.
Using AI in healthcare and other fields where government programs require precise recordkeeping creates unique FCA risks. Algorithms can produce false records by design or by mistake, and the FCA’s knowledge element heightens the risk of enforcement against providers who “recklessly disregard” inaccurate results produced by their AI systems. Indifference toward the accuracy of submitted claims due to overreliance on new technology could create FCA liability.
While more advanced generative AI has yet to draw enforcement from regulators, prosecutors began focusing on computer-assisted fraud several years ago, and we do not expect that focus to change given DOGE’s mandate to cut government spending. The DOJ has increased scrutiny of health plans and providers under Medicare Advantage and Medicaid Managed Care systems, including pursuing enforcement actions against plans for generating false diagnosis codes and against providers who use algorithms to generate high-reimbursement returns. Relators have also pursued FCA actions against healthcare companies that provide or use data analytics to submit Medicare claims. Recently, University of Colorado Health settled an FCA lawsuit alleging that its AI-driven billing system improperly upcoded claims. The settlement is just one example of how the misuse of AI, whether knowing or mistaken, can lead to liability under the FCA.
AI in the busy hands of professional relators also creates the risk of more FCA litigation. AI can fine-tune software used to assess companies’ systems and identify potential problem areas for investigation. This strategy is not new, but it will likely continue to grow. Several years ago, for example, a data analysis firm used data mining of publicly available CMS data in an attempt to uncover fraudulent claims submitted by a hospital system. The relator based its complaint on this statistical analysis of Medicare claims data, which it alleged showed the hospital system “submitted proportionally more claims with higher-paying diagnosis codes than comparable institutions.” Although the case was dismissed on appeal, the risks of AI use by relators should not be discounted, especially given the lucrative incentives created by FCA qui tam cases.
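To make the kind of screening described above concrete, the following is a minimal, purely illustrative sketch of how publicly available claims data might be screened for providers billing a disproportionate share of higher-paying diagnosis codes. The data, column names, set of “high-paying” codes and the one-standard-deviation screen are all hypothetical assumptions for illustration; they are not drawn from the CMS dataset or the litigation discussed above.

```python
# Illustrative sketch only: synthetic data and hypothetical column names.
import pandas as pd

# One row per provider/diagnosis code with a count of claims billed.
claims = pd.DataFrame({
    "provider": ["A", "A", "B", "B", "C", "C"],
    "drg_code": ["870", "871", "870", "871", "870", "871"],
    "claim_count": [120, 880, 95, 905, 310, 690],
})

HIGH_PAYING = {"870"}  # hypothetical set of higher-reimbursement codes

# Share of each provider's claims billed under higher-paying codes.
totals = claims.groupby("provider")["claim_count"].sum()
high = (claims[claims["drg_code"].isin(HIGH_PAYING)]
        .groupby("provider")["claim_count"].sum())
share = (high / totals).fillna(0).sort_values(ascending=False)

# Flag providers whose share sits well above peers (arbitrary z-score screen).
z = (share - share.mean()) / share.std()
print(share.round(3))
print("Providers for further review (z > 1):", list(share[z > 1].index))
```

A screen like this only surfaces statistical outliers; it says nothing by itself about intent or falsity, which is one reason complaints resting solely on such comparisons have faced dismissal.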
The government is not far behind relators in capitalizing on AI technology to identify false claims. The 2022 AI Training Act, implemented under the Biden Administration, requires the Office of Management and Budget to establish AI training programs for government workers in executive agencies, which will likely lead to more sophisticated government enforcement of FCA claims.
While the second Trump Administration rescinded President Joe Biden’s AI executive order, the replacement order provides general guidance to support and remove impediments to continued growth of AI technologies.
The DOJ’s use of AI trails behind other government agencies, but it has already implemented topic modeling to consolidate records review, algorithms to manage case documents and machine learning to detect anomalies, and will likely continue to work toward further expansion under the new administration. And, as noted above, the second Trump Administration’s DOGE officials are purportedly using AI to ferret out waste, fraud and abuse in many areas across the government. These initiatives, coupled with the administration’s stated agenda of trimming government departments and personnel, could indicate that it will increasingly look to AI technology for help in detecting fraudulent behavior.
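For readers curious what machine-learning anomaly detection of this sort can look like in practice, below is a minimal sketch using a standard isolation-forest model on synthetic per-provider billing features. The feature names, data and contamination threshold are assumptions for illustration only and do not reflect the DOJ’s or DOGE’s actual tooling.

```python
# Minimal anomaly-detection sketch; synthetic data and hypothetical features only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-provider features: [claims per patient, average billed amount]
typical = rng.normal(loc=[10, 500], scale=[2, 50], size=(200, 2))
unusual = rng.normal(loc=[25, 900], scale=[2, 50], size=(5, 2))
X = np.vstack([typical, unusual])

# Isolation forests score how easily a point separates from the rest of the data;
# `contamination` sets the expected share of anomalies (an arbitrary choice here).
model = IsolationForest(contamination=0.03, random_state=0)
labels = model.fit_predict(X)  # -1 = flagged as anomalous, 1 = typical

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} providers flagged for manual review:", flagged.tolist())
```

As with the relator-side screen above, output like this is a triage tool: flagged providers would still require document review and human judgment before any enforcement theory could be built on it.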
While the legal risks of unfettered AI deployment are not insignificant, rapidly evolving AI technologies also present unique opportunities. Companies harnessing this advanced technology to accelerate their business interests can — and should — use the technology to avoid FCA liability. The DOJ has urged companies to identify and mitigate AI risks through their compliance programs. In its 2024 “Evaluation of Corporate Compliance Programs” update, the DOJ indicated it will now look at how companies manage AI-related risks in both their business and compliance programs.
Adapted with permission from an article on HoganLovells.com.
