I think this New York proposal is dumb.
Not because bad AI outputs do not exist. They do. Anyone who has used AI long enough knows that it can be wrong, overconfident, shallow, or flat-out ridiculous. But the answer to that problem is not to treat adults like helpless livestock who must be shielded from information unless it comes through a licensed gatekeeper.
That is the mindset behind a New York bill backed by State Senator Kristen Gonzalez. The bill, S7263, would impose liability when a chatbot gives "substantive" responses or advice in certain licensed fields that would amount to unauthorized practice if a human did the same. It also says a company cannot escape that liability just by warning users they are talking to AI. A user could sue for damages, and in some cases recover attorney's fees. The bill remains in committee as of March 2026.
What this bill would actually do
- Chill useful answers — AI companies will sanitize responses into mush to avoid liability
- Protect licensed guilds from competition, not the public from harm
- Reduce access to information for people who cannot afford a lawyer, doctor, or engineer every time they have a question
- Attempt something government has never done and will never do: protect people from the consequences of their own bad judgment
The Bill Is Sold as Anti-Impersonation — the Language Is Much Broader
To be fair, the bill's defenders frame it as a measure against chatbots "impersonating" licensed professionals, not a total ban on information. Gonzalez says it would still allow general information, so long as the chatbot is not presenting itself as a licensed professional. Fine. That sounds narrower than the social-media version.
But the actual bill language is where the trouble starts. The sponsor memo says it would prohibit chatbot proprietors from providing "any substantive response, information, or advice" that, if done by a human, would constitute unauthorized practice under various sections of New York law tied to licensed professions.
What counts as "substantive"? What counts as "advice"? If I ask AI whether a roof truss span looks undersized, is that education or engineering advice? If I ask whether chest pain plus shortness of breath could be dangerous, is that health information or medical advice? If I ask how to respond to a demand letter, is that public legal information or legal strategy?
That ambiguity is not a bug. It is the whole game.
Once liability attaches to vague standards, companies do what companies always do: they over-censor. They clamp down beyond what the law strictly requires because the safest answer is no answer. The result is not precision. The result is neutered software — fewer useful tools for competent adults because legislators are obsessed with the least competent users.
This Is Really About Gatekeeping Information
Let's stop pretending this is only about safety. A huge chunk of modern licensing culture is about gatekeeping. Sometimes licensing makes sense — trained surgeons, competent structural engineers, qualified attorneys in actual court proceedings. Nobody serious argues otherwise.
But access to information is not the same thing as professional representation.
What requires a license: professional representation
Performing surgery. Representing someone in court. Signing off on a structural design for a public building. Acting as someone's licensed agent in a regulated transaction.
What does not require a license: information exchange
Explaining what a contract clause means in plain English. Describing symptoms associated with dehydration. Discussing general engineering principles. Human beings have always done this freely.
AI is just a faster, broader, more searchable version of information exchange that people have always done. And that is exactly why regulators hate it. It threatens the old tollbooth model. It lets ordinary people get preliminary guidance before paying a professional. It lowers the cost of basic understanding. It shifts power away from institutions that have enjoyed information asymmetry for decades.
The emotional pitch is safety. The practical effect is control.
Adults Are Supposed to Use Judgment
Here is the part too many lawmakers refuse to say out loud: a lot of harm comes from people making bad decisions with or without AI. Before chatbots existed, people took medical advice from cousins, vitamin grifters, chain emails, daytime television, and drunk guys at the end of the bar. They still do. The internet did not invent gullibility. It merely sped it up.
So what is the real argument here? That because some people are too foolish to verify important information, the rest of society should have weaker tools? That is backwards.
The correct response to AI is the same response intelligent adults have always used when stakes are high: check, compare, verify, and escalate to a real professional when necessary. You do not ask a chatbot whether to ignore crushing chest pain and then blame civilization when that goes poorly. That is not a technology problem. That is an adult competence problem.
The Real Victims Will Be Ordinary People, Not Big Tech
The funny part is that bills like this are sold as strikes against powerful tech companies. In reality, the burden lands heavily on the public. A wealthy person can still call a lawyer, hire a concierge doctor, or retain an engineer on demand. The ordinary person cannot.
How ordinary people actually use AI — as a starting point, not a final answer
- A small-business owner understanding a lease before paying counsel to review it
- A homeowner learning drainage and framing basics before talking to a contractor
- A patient organizing symptoms and questions before seeing a doctor
- A stressed-out person thinking through options before deciding if they need professional help
Once liability standards become vague and aggressive, the safest corporate response is to refuse more of those conversations. The result is less access for the average person and more dependence on licensed intermediaries. That is not progress. That is re-feudalization of information.
Bad Answers Should Be Handled Honestly, Not Politically
There is a reasonable argument for clear disclosures and transparent limitations. New York already has a related disclosure bill in this same legislative orbit requiring AI operators to warn users that outputs may be inaccurate. That is far more sensible than trying to police "substantive" responses. Tell users what the system is. Tell them it can be wrong. Tell them not to rely on it as a substitute for a licensed professional in high-stakes matters. That is honest.
But the bill goes further — and the incentive structure it creates is perverse. Private lawsuits for damages, plus attorney's fees for willful violations, mean legal ambiguity becomes a business model. The more uncertain the boundary, the more leverage for litigation. The more leverage for litigation, the more companies dumb everything down preemptively. This is how lawmakers turn edge cases into broad censorship.
Expertise Matters — But Information Should Stay Free
I am not arguing that AI is equal to a seasoned surgeon, trial lawyer, or structural engineer. It is not. Expertise matters. Credentials matter. Real-world experience matters. That is why people still hire professionals. But there is a massive difference between two claims:
Reality: "AI is not a replacement for licensed experts in high-stakes situations."
Paternalism: "AI should be legally constrained from providing robust answers because the public might take it too seriously."
The correct answer to imperfect tools is competence. The answer to bad speech is better judgment, not silence.
A free society should prefer informed citizens, not dependent ones. Broad access to knowledge, not artificially scarce knowledge rationed through institutional choke points. Better judgment, not enforced helplessness.
The Bottom Line
This New York proposal is built on a false premise: that government can protect people from the consequences of their own poor judgment by restricting what AI is allowed to say. It cannot.
What it can do is make AI less useful, more sanitized, more lawyered-up, and more deferential to professional gatekeepers. It can raise costs, reduce access, and turn a powerful public tool into a neutered compliance machine.
Yes, AI can be wrong. Yes, bad advice in high-stakes fields can hurt people. Yes, companies should not market bots as licensed human professionals. But beyond that, adults need to act like adults. Government cannot stop Darwin from sorting things out. It can only inconvenience everyone else in the process.
This fight is bigger than one New York bill. It is about whether AI becomes a broadly useful tool for ordinary people or a heavily filtered product that only recites canned disclaimers until you give up and pay a professional. It is about whether access to knowledge expands or whether regulators and guilds push it back behind a paywall. And it is about whether society still expects adults to think, verify, and take responsibility for their choices.
If lawmakers keep regulating around the dumbest possible user, the smartest and most responsible users will be punished too. That is a bad trade.
References
- New York State Senate. (2025). S7263: Imposes liability for damages caused by a chatbot impersonating certain licensed professionals.
- New York State Senate. (2026, March 6). NY State Senator Kristen Gonzalez on her bill to address AI chatbots impersonating licensed professionals.
- New York State Senate. (2026, January 29). State Senator Kristen Gonzalez introduces bill to protect minors from AI chatbots, in partnership with Attorney General Letitia James.