The Law @ Work

Spotting the Red Flags: The Legal Risks Posed by Employer Use of AI

By Kayla Snider, Esq.

Artificial intelligence, or AI as we all refer to it, is rapidly transforming the workplace and becoming increasingly integrated into employers’ decision-making.  Employers are leveraging AI tools to streamline hiring, manage employee performance, draft contracts, policies, and other legal documents, and even answer complex legal questions.  But as the adoption of AI tools accelerates, so do the risks.  For employers in particular, who must grapple with human emotion and employees’ responses to workplace decisions, the use of AI can raise serious concerns.

We have seen a growing number of employers adopt and rely on AI tools, and we want to ensure that employers are aware of the potential legal risks associated with AI use and AI decision-making.  This post focuses on three “red flags” of AI usage: (1) the danger of inaccurate or misleading legal outputs; (2) how AI may threaten attorney-client privilege or the work product doctrine; and (3) the growing wave of litigation targeting AI platforms.

Red Flag #1 – Inaccurate and Hallucinated Legal Advice

Employers are increasingly turning to AI tools for real-time guidance on employment issues (terminations, accommodations, workplace policies, leave requests, etc.).  However, AI-generated responses are not a substitute for legal advice—AI tools are not lawyers.  While AI may be able to generate a persuasive, well-structured response, it is also prone to producing incorrect or entirely fabricated information (commonly referred to as “hallucinations”). 

AI tools can (and will) generate responses that misstate federal, state, or local employment laws; fail to account for the specific requirements of the jurisdictions where your organization is located; or rely on oversimplified or outdated legal standards.  To show you what I mean, look at this exchange between me and an AI tool…

Q: What do I need to do to terminate an employee tomorrow?

A: Follow these steps: (1) Review Documentation – ensure that the termination reason is documented and consistent with company policy; (2) Conduct the Termination Meeting – avoid debating past performance issues and keep explanations brief; (3) Final Paycheck – provide the final paycheck on the same day; (4) Legal Compliance – follow federal and state laws; (5) Post-Termination – maintain clear communication with the team and retrieve company property as needed.

While this response is not inherently incorrect, it oversimplifies the considerations involved in terminating an employee.  Even more worrisome, it does not detail what must be included in the employee’s final paycheck under the Massachusetts Wage Act (which imposes significant penalties on employers who fail to comply).

Relying on responses from AI can lead to costly mistakes—employers could face wrongful termination, discrimination, and/or retaliation claims.  We have already assisted clients with situations that carried increased and unnecessary risk because the employer had previously relied on AI alone.  It is critical that employers treat AI-generated guidance as a starting point, not the final answer.  If you truly do not know what you can and cannot do and are relying exclusively on AI, consult with competent legal counsel before following AI’s recommendation.

Red Flag #2 – AI May Ruin Attorney-Client Privilege and Work Product

Attorney-Client Privilege (noun)

  • A legal principle that protects confidential communications between a client and their attorney made for the purpose of seeking legal advice
  • “During the lawsuit, the employer invoked attorney-client privilege to withhold emails from the employee.”

Work Product (noun)

  • Materials, notes, mental impressions, or legal strategies prepared by an attorney or for an attorney in anticipation of or preparation for litigation
  • “During the lawsuit, the employer sent their attorney notes relating to the allegations the employee made, which were work product and did not have to be given to the employee.”

In February 2026, two different courts in two different states, within a week of each other, issued judicial opinions regarding the use of AI in litigation.  The case that swept the headlines first was a decision in a criminal case issued by the United States District Court for the Southern District of New York.  However, just seven days earlier, the United States District Court for the Eastern District of Michigan had issued a decision in a civil case that seemed to say the total opposite of what the New York court said.  Do these two cases really say opposite things?  Let’s break it down…

United States v. Heppner (New York Court, Criminal Case)

In Heppner, the court ruled that documents a criminal defendant (Heppner) created through his own exchanges with an AI platform and later sent to his attorney were not protected by either the attorney-client privilege or the work product doctrine.  Heppner had used a public AI tool that specifically indicated it could not provide legal advice and whose privacy policy authorized data collection, model training, and disclosure to third parties.  He did this on his own, without direction from his attorney.  The court found that the AI tool was not a lawyer, that the platform’s terms created no expectation of privacy or confidentiality in what Heppner input into the platform, and that Heppner was not seeking legal advice, so no attorney-client privilege attached.  The court also found that the documents prepared with the AI tool were not prepared at the direction of Heppner’s attorney and did not reflect his attorney’s strategy, so no work product protection applied.  As a result, the government was able to obtain the documents related to Heppner’s use of the AI tool.

Warner v. Gilbarco, Inc. (Michigan Court, Civil Case)

In Warner, a pro se party (meaning one not represented by an attorney) had used ChatGPT to prepare legal briefs in anticipation of litigation.  During the discovery phase of the case, opposing counsel asked the court to compel the party to produce those materials; the court denied the motion.  The court held that the pro se party’s materials were protected work product under the civil rules of procedure because they were prepared in anticipation of litigation.  Specifically, the court found that the pro se party’s use of AI did not waive work product protection because AI platforms are “tools, not persons,” and waiver of work product protection requires disclosure to an opposing party (which AI is not).

While these cases reached opposite conclusions, they did so because they involved opposite facts.  The critical point the cases make together is that it is not AI itself that waives attorney-client privilege or work product protection; it is how individuals use AI.  Employers should therefore scrutinize the confidentiality and privacy policies of the AI tools they use, and even if you are represented by an attorney, it may not be a good idea to use AI for your own independent legal research or analysis.

Red Flag #3 – AI Platforms Are Being Sued

Further illustrating the potential legal risks of using AI tools, there has been a rapid increase in the number of lawsuits filed against AI platforms.  These lawsuits, while still evolving, include allegations of defective or misleading outputs, failure to warn about the platforms’ limitations, bias and discrimination, and data use and privacy violations.  While these lawsuits may ultimately clarify the responsibilities of AI vendors, they do not eliminate the red flags that AI use can raise in the meantime.  Courts have not yet established clear, consistent standards governing AI liability, and legislation regarding AI use is still pending.  The rising number of cases against AI platforms highlights weaknesses in AI tools that employers should be wary of before relying solely on AI.

The Bottom Line…

AI is not going away, and it offers powerful opportunities for employers to increase efficiency and streamline processes.  However, the red flags with AI use are no longer theoretical; they are actively shaping workplace practices.  Employers that take a proactive, informed approach will be in the best position to benefit from AI while minimizing legal risk.  Without proper safeguards, including consultation with competent legal counsel, relying solely on AI guidance can expose employers to significant legal liability.  Ultimately, AI should augment—not replace—legal advice from a competent attorney.

If you or your organization have any questions about hiring, managing employee performance, terminations, leave requests, wage and hour compliance, or any other legal issues regarding employment, consider contacting experienced employment counsel.