27 April 2026
After examining the most recent legislative and case law developments in the United States concerning the relationship between artificial intelligence (“AI”) and regulated professions, this article turns to the main categories of risk and liability that companies may face, outside the regulated professions, when they use AI systems in their operations.
As discussed below, such use may give rise to liability grounded in general principles of civil law, consumer protection regulations, and data protection rules, fueling an evolving body of litigation in North America.
One of the central issues in AI-related litigation outside regulated professions concerns liability for incorrect information conveyed through automated systems. The decision issued by the Civil Resolution Tribunal of British Columbia in Moffatt v. Air Canada addressed this issue with particular clarity, establishing that a company is responsible for incorrect information provided to its customers through an automated chatbot.
The tribunal rejected the defense argument that the chatbot constituted a separate and autonomous legal entity from the company that implemented and controls it. According to the ruling, companies cannot invoke a “technological veil” to avoid liability for the errors of their automated tools: a chatbot is not an autonomous entity but an extension of the company that chose, implemented, and controls it. Therefore, any information conveyed through that channel is attributable to the company itself, with the same legal consequences as any other corporate statement.
This decision is frequently cited in legal scholarship as an example of corporate risk: anyone deploying an AI system may be held liable for the outputs that system generates, even where the incorrect information was produced by an algorithm rather than by a human employee.
A risk profile that is gaining practical relevance, and that deserves particular attention in corporate planning, concerns the gradual modification of insurance policy terms. Some insurers have begun introducing clauses in errors & omissions (E&O) and directors & officers (D&O) policies that limit or exclude coverage for losses generated by AI systems.
The most critical aspect of this trend lies in the breadth of some exclusions: in certain cases, carve-outs are drafted to reach not only errors generated directly by the insured’s own AI, but also errors attributable to partners or third-party providers that use such technologies. In that scenario, the insured may be left without coverage for liabilities arising from AI errors it neither caused nor controls.
The implications for corporate governance are significant: boards of directors and audit committees are called upon to carefully assess the adequacy of existing insurance coverage and to evaluate whether any exclusions are compatible with the company’s AI risk profile, taking into account both internal tools and those used by third parties.
In January 2026, the Kentucky Attorney General brought an action against Character Technologies, Inc., the developer of the Character.AI chatbot, alleging deceptive practices, including the failure to disclose risks to consumers, particularly with regard to the protection of minors. The case, Kentucky Attorney General v. Character Technologies, Inc., was brought under state consumer protection law and highlights how existing legal frameworks can be applied to unlawful conduct involving chatbots even in the absence of AI-specific regulation.
This case confirms an ongoing trend: pre-existing enforcement tools—antitrust laws, consumer protection rules, and unfair competition frameworks—are increasingly being adapted to address the new protection needs arising from AI.
The use of generative AI systems in customer interactions exposes companies to additional risks related to the regulation of communication privacy.
Following the February 2025 decision in Ambriz v. Google, LLC—in which the court denied Google’s motion to dismiss—several class actions have been filed against companies using generative AI systems in customer interactions. Plaintiffs argue that such tools may transcribe, analyze, and store user communications without the consent required under state wiretapping laws.
On this theory, a user who contacts a company consents to an interaction with a human operator, and that consent does not extend to the automated processing of the conversation through AI systems. State statutes such as the California Invasion of Privacy Act provide for penalties of up to $5,000 per violation; applied at class-action scale, the exposure compounds quickly (purely by way of illustration, a putative class covering 10,000 recorded calls would imply theoretical statutory exposure of $50 million).
Beyond the strictly compensatory aspect, litigation in this area raises broader compliance issues. Relying on AI to manage customer interactions exposes companies to concrete risks of non-compliance, both with data protection laws and with sector-specific regulations that require contextual assessments and tailored obligations. AI systems produce outputs without themselves being subject to those obligations and, above all, without exercising the judgment such assessments require.
A highly significant line of case law arises from a series of disputes—including Williams v. City of Detroit, Woodruff v. City of Detroit, and Mobley v. Workday, Inc.—concerning the use of AI systems in sensitive areas such as public safety, the labor market, and public administration.
Taken together, these cases show that the use of AI systems to automate decision-making can lead to unlawful and discriminatory practices, especially where such systems replace or override human judgment. This creates a tangible risk of violations of fundamental rights—personal liberty, equal treatment, and dignity—as well as contractual and tort liability where automated tools produce systemic errors, bias, or harmful effects disproportionately affecting legally protected groups.
Also noteworthy is FTC v. Rite Aid Corporation, in which the federal consumer protection authority challenged the use of a facial recognition system that generated false positives at a disproportionately high rate among members of ethnic minorities. The overall picture emerging from these cases is that reliance on AI does not exempt operators from legal responsibility for the consequences of automated decisions; on the contrary, it imposes higher standards of transparency, oversight, and accountability.
The landscape of AI-related litigation outside regulated professions is evolving along three main axes, which foreshadow how the legal debate is likely to develop in the coming years.
First, existing cases—relating to insurance companies, recruitment processes, and automated customer management through chatbots—are progressing through the courts, with the first merits decisions expected between the summer and the end of 2026. These rulings may provide the first meaningful guidance on still largely open issues, such as criteria for attributing liability for AI errors and the scope of organizational oversight obligations.
Second, an entirely new front is emerging around so-called “agentic AI”: systems capable of making autonomous operational decisions, such as approving orders, managing contracts, and responding to customers, without real-time human supervision. How liability is to be attributed for decisions that no human reviews before they take effect remains largely untested in the courts.
Third, the phenomenon of “AI washing”—the practice of companies marketing technology as “AI-based” when it is not, or overstating the capabilities of their systems—is becoming an independent source of liability for misleading commercial statements. The FTC has already identified this area as an enforcement priority, making a significant increase in related proceedings likely.
The case law framework analyzed in this article—considered together with that examined in the previous contribution on regulated professions—delivers a clear message: the adoption of artificial intelligence is not legally neutral. The use of these systems in contexts that produce legally relevant effects—for companies, their executives, consumers, and third parties—requires careful risk assessment, appropriate governance, effective (and not merely formal) human oversight, and awareness that general legal principles, even in the absence of specific regulation, allow liability to be allocated to those who use such systems in ways that are inconsistent with applicable legal standards.
The information contained in this article is provided for general informational purposes only and does not constitute, and is not intended to constitute, legal advice or any other form of professional advice. The content does not take into account the specific circumstances of any individual case and should not be relied upon as a basis for making decisions without obtaining appropriate professional advice.