21 April 2026
Artificial Intelligence and Regulated Professions: Recent Regulatory and Case Law Developments in the United States
Recent litigation brought by Nippon Life Insurance Co. of America (“Nippon”) against OpenAI has reignited the debate on the effects that the use of generative artificial intelligence (“AI”) may produce when deployed within intellectual professions subject to licensing and professional regulation.
Starting from the analysis of this case, this article examines the main issues associated with the use of AI systems in regulated professions, reviewing recent developments in U.S. case law and legislation and offering general considerations on the limits and inherent risks of large language models (LLMs).
Nippon Life Insurance Co. of America v. OpenAI (2026)
The case originates from litigation initiated in 2021 by Ms. Graciela Dela Torre against Nippon, seeking damages for illnesses she claimed to have developed during her employment with the insurance company. Approximately three years after the proceedings began, the parties reached a settlement agreement under which, among other things, the claimant waived any future claims for damages against the insurer.
One year after the settlement, Ms. Dela Torre contacted her attorney to assess whether the agreement could be challenged. After receiving a negative response, she submitted the same question to ChatGPT and received a different answer. According to the allegations, ChatGPT suggested that her attorney was mistaken and encouraged her to challenge the settlement.
Relying on these responses, Ms. Dela Torre used ChatGPT to conduct legal research, develop arguments under Rule 60(b) of the Federal Rules of Civil Procedure, and draft motions, complaints, and other court filings aimed at reopening the settled dispute. At the time the lawsuit against OpenAI was filed, she had submitted numerous pro se filings generated with the assistance of ChatGPT.
Nippon alleges that OpenAI provided a tool that enabled, facilitated, and amplified various wrongful acts, including unauthorized practice of law (UPL), tortious interference with contract, and abuse of process. The most sophisticated argument advanced in the case is that OpenAI failed to implement any refusal mechanism for requests involving personalized legal advice.
Similar lawsuits have previously been brought in the United States against legal service providers accused of unauthorized practice of law (see, among others, Janson v. LegalZoom.com, Inc.; LegalForce RAPC Worldwide, P.C. et al. v. LegalZoom.com, Inc. et al.; MillerKing LLC v. DoNotPay, Inc.; Faridian v. DoNotPay, Inc.; FTC v. DoNotPay). However, Nippon Life Insurance Co. of America v. OpenAI represents the first attempt to extend UPL liability to an AI developer. Although the terms of service of major LLM providers expressly prohibit reliance on such systems for legal advice, this line of litigation is likely to expand as AI-powered legal self-help services continue to grow.
United States v. Heppner (2025)
In a case of significant relevance to the business world, United States v. Heppner, Bradley Heppner, former president of GWG Holdings—a financial company that later became insolvent—was indicted by federal prosecutors on charges including securities fraud, wire fraud, false statements to auditors, and falsification of corporate records.
During the criminal proceedings, Heppner used Anthropic’s chatbot Claude to generate analyses and reports relating to his case, intending to share them with his defense counsel in preparing his legal strategy. He argued that such communications should be protected by attorney-client privilege, as well as by the work product doctrine.
Judge Jed Rakoff of the Southern District of New York rejected this argument and ordered Heppner to produce 31 documents generated with the assistance of Claude. The court held that no attorney-client relationship “exists or could exist” between a user and an AI platform, as chatbots are not lawyers, do not assume professional confidentiality obligations, and are not subject to ethical rules.
This decision, issued in February 2026, has immediate implications for business and professional practice: any executive, manager, or consultant who uses commercial chatbots to develop legal strategies, assess regulatory risks, or prepare materials to be shared with counsel risks creating documents that may be discoverable by opposing parties or prosecuting authorities in future proceedings.
New York Senate Bill S7263
From a regulatory standpoint, New York Senate Bill S7263, introduced in 2025 by Senator Kristen Gonzalez, proposes to amend the state’s General Business Law by adding a new section 390-f. The bill would prohibit chatbot owners from allowing their systems to provide responses, information, or advice, or to take actions that, if performed by a natural person, would constitute the unauthorized practice of certain regulated professions under state law: “[t]his bill would prohibit a chatbot to give substantive responses; information, or advice or take any action which, if taken by a natural person, would constitute unauthorized practice or unauthorized use of a professional title as a crime in relation to professions who licensure is governed by the education law or the judiciary law.”
This development—potentially to be followed by other states—is significant and raises concerns for AI developers, not only due to the possible civil and criminal consequences, but also because liability disclaimers would not be effective. The bill expressly provides that “[p]roprietors may not waive or disclaim this liability by notifying consumers that they are interacting with a non-human chatbot system. A person may bring a civil action to recover damages, and if the proprietor has willfully violated this section, costs, attorney’s fees and other costs of litigation. Proprietors utilizing chatbots shall provide clear, conspicuous and explicit notice to users that they are interacting with an artificial intelligence chatbot program.”
The case law and regulatory developments analyzed above show the progressive emergence of a clear line of demarcation between automated advice provided in unregulated matters and automated advice provided in reserved domains. With regard to the latter, the limitations discussed appear to be driven not only by the need to safeguard the integrity of legal processes and the centrality of the judicial function, but also by the need to protect individual rights in relationships between private parties and businesses, particularly in relation to automated decisions affecting significant legal positions. A system that provides advice without the safeguards inherent in professional services may produce concrete, and sometimes irreversible, legal effects in the absence of human oversight grounded in judgment, discretion, and individual responsibility.
Beyond the strictly legal implications examined thus far, various practical and operational factors also distinguish professional services from AI-generated outputs. The analysis of real-world cases shows that the challenges posed by AI in regulated professions cannot be attributed solely to regulatory gaps or to developers’ design choices: they are rooted in structural and inherent limitations of generative systems. In light of ongoing regulatory developments, and in particular the approach adopted by the New York legislature through Senate Bill S7263, the opportunity to address those limitations through further technological progress may never fully materialize. Should similar initiatives consolidate and spread across other state and federal jurisdictions, the development of AI in regulated professions risks being not only constrained, but structurally precluded even before the technology reaches the necessary level of maturity.
The information contained in this article is provided for general informational purposes only and does not constitute, and is not intended to constitute, legal advice or any other form of professional advice. The content does not take into account the specific circumstances of any individual case and should not be relied upon as a basis for making decisions without obtaining appropriate professional advice.