AI's Legal Bill Is Coming Due. Here's How We Think About It.

By Hutch · 8 min read

The short version

AI is not a separate body of law. It's IP, privacy, contracts, employment, and product liability — all the categories that already governed your business — moving at a much faster pace. The legal consequences are not theoretical; they're showing up in court right now.

A recent Search Engine Land piece by Greg Sterling walks through what's actually been litigated, fined, or settled in the AI space lately, and it's a pretty good summary of what we've been quietly tracking on the legal side. Worth reading in full, but the takeaway we want to draw out here is a little different: the headlines are not really about AI failing. They're about what happens when companies hand the wheel to a tool that can't actually drive.

The Air Canada problem (a.k.a. the whole argument in one case)

Air Canada's customer service chatbot told a grieving passenger that the airline offered a bereavement-fare refund. It didn't. The Canadian tribunal's response was, essentially: it doesn't matter that the chatbot was wrong; the chatbot is on your website, so you own what it says. Air Canada paid out.

This is the part that should make every business owner sit up. If you put an AI tool in front of your customers, you are on the hook for every word it generates. "The AI made it up" is not a defense — it's just an admission that your QA process has a hole in it.

The nine buckets of risk

Sterling's piece breaks the legal exposure into nine categories, and they're worth listing because each one is a real case, not a hypothetical:

  • Intellectual property — the U.S. Copyright Office now requires "meaningful human authorship" for copyright protection. Pure generative output, by itself, is not protectable. (Separately, see The New York Times v. OpenAI for the other side of the same coin — training-data lawsuits are very much live.)
  • Misinformation — Google Bard's false claim about the James Webb Space Telescope reportedly cost Alphabet roughly $100B in market value. Hallucinations are not a quirky bug, they're a liability surface.
  • Privacy — GDPR, PIPEDA, and CCPA all apply when AI tools touch personal data. Italy blocked ChatGPT outright until OpenAI added privacy controls.
  • Trade-secret leakage — Samsung engineers pasted proprietary source code into ChatGPT. That code went to an external system. Once a secret is in someone else's model, it's not a secret.
  • Employment bias — Amazon scrapped its AI hiring tool when it turned out to be downranking women. iTutorGroup settled with the EEOC over age discrimination in AI screening. Biased decisions are yours, even when an algorithm made them.
  • Contract / customer expectations — see Air Canada above.
  • Vendor and supply-chain risk — a 2023 ChatGPT bug exposed users' chat histories and partial payment info, traced to a third-party library. The risk doesn't stop at the vendor you signed a contract with; it extends to whatever they signed contracts with.
  • Product liability — Zillow lost hundreds of millions when its valuation model misread the market. The algorithm doesn't get sued; the company does.
  • Regulatory / "AI washing" — the SEC has gone after firms for misleading claims about AI capabilities. The FTC took action against Rite Aid for facial recognition that produced false positives at disproportionate rates against people of color.

Every one of those was a judgment failure, not a technology failure

Read that list again. The interesting thing isn't that the AI was wrong — AI is wrong all the time, that's table stakes. The interesting thing is what was missing in each case: a person with enough context to catch it before it shipped.

Air Canada's chatbot didn't know what bereavement policy the airline actually offered; a human in customer service does. Amazon's hiring model couldn't tell that "downranking women" was the obvious read of its own outputs; any HR professional could. The Samsung engineers didn't realize that pasting proprietary code into a chat window meant handing it to someone else's server; anyone with a security background would have stopped them. Zillow's model couldn't tell that the housing market had shifted out from under its training data, because the model has no concept of "the housing market" — it just has numbers.

This is the part the hype cycle gets wrong. AI is good at producing, and it is genuinely bad at judging. It will happily generate a confident, polished, plausible-sounding answer to a question it does not understand, in a domain it cannot reason about, against legal and ethical constraints it has no idea exist. The cost of catching that gets externalized to whoever is downstream — your customers, your legal team, the FTC, a Canadian tribunal.

So the real question isn't "should we use AI?" It's "who's the adult in the room when the AI is wrong?" And the answer has to be a human with enough domain expertise to recognize when something looks off. A developer who can read what the AI generated and know it won't survive contact with production. An account manager who can read an AI-drafted email and recognize that it's promising something the contract doesn't allow. A compliance officer who can look at an AI hiring recommendation and see a disparate-impact problem before it becomes a lawsuit.

Without that wisdom at the helm, AI doesn't replace expertise — it scales the lack of it. Every headline we just listed is what happens when a company tries to take the human out of a loop the human was doing important work in.

How we actually use AI here

This is where we land, and it's pretty close to what we'd tell a client on a call:

  1. AI is a tool, not a replacement. We use it as an assistant for secondary code review, syntax checking and linting, repetitive task shortcuts, and testing and bug-checking. There are certainly savings to be had here, but quantifying them is exceedingly hard, and we don't pretend otherwise. In practice we aim for a 10-15% time-and-cost saving on work where AI can responsibly help, and we cap quoted hours so the client doesn't carry the risk of us being wrong about that: on a 100-hour job, we quote as if the saving is real, and if it isn't, the overrun is ours to absorb, not yours.
  2. An experienced human always has the wheel. Every line of generated code is reviewed by a developer who can tell whether it actually works. Every piece of generated copy is read by someone who knows what the business can and can't promise. The AI's job is to draft, suggest, and accelerate — not to ship.
  3. Nothing PII-heavy goes near a consumer AI tool. If we're touching personal data, regulated data, or client trade secrets, the work happens in a controlled environment with vendor-vetted tooling — not a free-tier chat window (there's a minimal sketch of the "fail closed" idea after this list). The safety and security cost of doing it the other way is way higher than the time saved.
  4. Human review is non-negotiable for anything customer-facing. No customer-facing copy, no chatbot response, no contract or legal-adjacent language goes out without a human signing off. The Air Canada case is exactly why.
  5. We assume hallucinations and brittleness. The 12x maintenance cost problem in AI-generated code is real, and the daily news cycle of breaches, security flaws, and SaaS failures stemming from AI-generated code is real too. Plan accordingly: treat model output like untrusted input (the validation sketch after this list shows what we mean).
  6. We don't promise tools will exist in their current form next quarter. RIP Sora. If a piece of your stack depends on a model API that the vendor can pull or change overnight, that's a risk that belongs on the project plan, not buried in the small print (the last sketch below shows one way to keep that swap cheap).
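To make point 3 concrete, here's roughly what "fail closed" looks like before anything leaves the building. This is a deliberately minimal sketch, not our production tooling: the patterns, the function name, and the commented call site are all illustrative, and regexes like these catch far less PII than a vetted scanning tool would. The shape is what matters: if the scan finds anything suspicious, the prompt doesn't go out.

    import re

    # Illustrative patterns only; regexes like these miss far more PII than
    # they catch. Real detection needs vetted tooling, not three patterns.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone": re.compile(r"\b\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def assert_no_pii(text: str) -> str:
        """Refuse to forward text that looks like it contains personal data."""
        hits = [name for name, rx in PII_PATTERNS.items() if rx.search(text)]
        if hits:
            # Fail closed: blocking a prompt is cheap, leaking a record is not.
            raise ValueError(f"possible PII ({', '.join(hits)}); prompt not sent")
        return text

    # Hypothetical call site:
    # response = llm_client.complete(assert_no_pii(draft_prompt))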
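Points 4 and 5 boil down to one habit: treat model output like untrusted user input. Here's a minimal sketch of what that gating can look like for a customer-facing reply. Every field name and business rule below is invented for illustration; a real version checks against the policy documents the business actually stands behind.

    import json

    # Hypothetical business rules, invented for this sketch.
    ALLOWED_DISCOUNT_CODES = {"SPRING10", "LOYALTY5"}

    def validate_ai_reply(raw: str) -> dict:
        """Parse a model-drafted reply and reject anything that promises
        more than the business has agreed to promise."""
        reply = json.loads(raw)  # raises ValueError if the model broke format
        if reply.get("promises_refund") and not reply.get("policy_reference"):
            raise ValueError("reply promises a refund with no policy behind it")
        code = reply.get("discount_code")
        if code is not None and code not in ALLOWED_DISCOUNT_CODES:
            raise ValueError(f"unknown discount code {code!r}; route to a human")
        return reply

A rejected reply doesn't get silently retried until the model produces something that passes; it gets routed to a person. That's the Air Canada lesson in code form.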
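And for point 6, the cheapest hedge against vendor churn is a seam: one interface the rest of the codebase depends on, with each vendor behind its own adapter. The names here are hypothetical and the vendor call is deliberately left unwired (we're not going to invent an SDK signature), but the structure is the whole trick.

    from typing import Protocol

    class TextModel(Protocol):
        """The only model surface the rest of the codebase may depend on."""
        def complete(self, prompt: str) -> str: ...

    class VendorAModel:
        """Adapter for one vendor's SDK. Swapping vendors means writing a
        new adapter, not hunting down every call site."""
        def complete(self, prompt: str) -> str:
            raise NotImplementedError("wire the vendor's SDK call in here")

    def draft_summary(model: TextModel, document: str) -> str:
        # Call sites see the Protocol, never a vendor import.
        return model.complete("Summarize for a client update:\n\n" + document)

The seam costs a few lines now; the rewrite it prevents, when a vendor changes pricing or pulls an API overnight, costs a lot more.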

What this means for your project

None of this is about being anti-AI. We use it daily and it genuinely does help. But the way we use it is deliberate, and the things we won't use it for are also deliberate. The savings are real but smaller than the hype cycle suggests, and the cost of getting it wrong — legally, reputationally, or just in terms of brittle code you have to throw away in six months — can be a lot bigger than whatever you saved up front.

If you're trying to figure out where AI fits, or doesn't, in something you're building, we're always happy to talk it through. There's no playbook yet for any of this, but there is a defensible way to think about it, and we'd rather walk through the tradeoffs honestly than sell you a wave of AI savings that doesn't survive contact with reality.

By all means, let us know if you have questions, comments, or concerns. This is a deep, interesting, ever-changing topic, and we enjoy digging into it.

Source: Greg Sterling, "The legal consequences of using AI and how to use it safely", Search Engine Land.
Photo by Sasun Bughdaryan on Unsplash