Workslop in the Office Is Annoying. Workslop in Your Law Firm Is Dangerous
By Jason Bland
- 40% of workers have received workslop in the past month.
- There is a massive perception gap between leadership and staff.
- Legal workslop carries consequences no other industry faces.
- Workslop destroys trust inside teams.
- The fix is not avoiding AI. It is building a culture of verification.
Your associate used AI to draft a motion for summary judgment. The formatting looks sharp. The argument structure seems logical. The citations are in perfect Bluebook format. There is just one problem. Three of those citations do not exist.
This is workslop. And it is already costing law firms real money.
What AI Workslop Actually Means
The term "workslop" was coined by researchers at Stanford University and BetterUp Labs to describe AI-generated output that appears polished on the surface but lacks the substance to advance a given task. It looks right. It reads well enough. But when the person on the receiving end tries to act on it, they find themselves spending hours correcting, rewriting, or verifying what should have been usable work product.
The original research, published in Harvard Business Review, surveyed 1,150 U.S. desk workers. Forty percent said they had received workslop within the prior month. Each instance cost the recipient an average of nearly two hours in rework. The researchers estimated this adds up to roughly $9 million a year in lost productivity for a 10,000-person organization.
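The headline figure can be reproduced with a back-of-envelope estimate. The specific inputs below (one workslop incident per affected worker per month and a fully loaded cost of roughly $93 per hour) are illustrative assumptions chosen so the result lands near the study's published estimate; they are not figures stated in this article:

```python
def workslop_cost_per_year(headcount, share_affected, incidents_per_month,
                           hours_per_incident, loaded_hourly_cost):
    """Back-of-envelope annual productivity loss from workslop rework."""
    affected = headcount * share_affected
    monthly_rework_hours = affected * incidents_per_month * hours_per_incident
    return monthly_rework_hours * loaded_hourly_cost * 12

# Assumed inputs: 40% of a 10,000-person org affected, one incident a month,
# ~2 hours of rework each, ~$93/hour fully loaded cost (hypothetical).
annual_loss = workslop_cost_per_year(10_000, 0.40, 1, 2, 93)
print(f"${annual_loss:,.0f}")  # → $8,928,000, in line with the roughly $9M estimate
```

Swapping in a firm's actual headcount and rates gives a quick internal estimate, and at law-firm billing rates the cost per incident is obviously far higher than the desk-worker average used here.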
Those numbers are bad enough in a corporate setting. In a law firm, the stakes are much higher.
The Executive Perception Gap on AI
One of the more revealing findings from the broader research is the disconnect between leadership and the people doing the work. A survey of 5,000 white-collar workers found that 92% of executives believe AI makes them more productive. Meanwhile, 40% of non-managers say it saves them no time at all.
That gap should concern every managing partner who just rolled out a firm-wide AI mandate without a corresponding training program.
As Stanford researcher Jeff Hancock put it, workers are often being told to use AI without direction or support. The pressure to adopt is coming from the top. The consequences of bad output are landing on the people below. The Guardian reported that companies including Block, Amazon, and Target have laid off workers while simultaneously attributing productivity gains to AI. The remaining employees feel pressure to demonstrate that the technology is working, even when it is not.
Aytekin Tank, writing in Forbes, described the pattern this way: when AI first became widely available, leaders told teams to experiment and figure out how the technology could help. That openness, without guardrails, created what Tank called "quality-control anarchy." Teams defaulted to speed over substance, and the result was a workslop epidemic.
Law firms are not immune to this dynamic. Many managing partners have encouraged attorneys and staff to start using AI tools without defining where those tools belong in the workflow and where they do not.
Why Law Firms Should Pay Closer Attention Than Anyone Else
In most industries, workslop wastes time. In law, it creates liability.
The most visible version of legal workslop is the hallucinated citation. A lawyer prompts an AI chatbot to find supporting case law, and the model invents cases that do not exist. The citations look real. The court names are plausible. The holding language sounds authoritative. But none of it is true.
This is not a hypothetical concern anymore. Damien Charlotin, a researcher at HEC Paris, maintains a global database tracking court decisions involving AI-generated hallucinations. As of early 2026, that database contains more than 1,200 documented incidents worldwide, with roughly 800 of those in U.S. courts.
Courts are responding with escalating force. In the first quarter of 2026 alone, U.S. courts imposed more than $145,000 in sanctions tied to AI-generated errors. The Sixth Circuit sanctioned two attorneys $15,000 each for briefs containing over two dozen fabricated citations. In Oregon, an attorney was ordered to pay more than $109,000 after filing three separate motions with fabricated case law and fabricated quotations. Gordon Rees, one of the largest law firms in the country, experienced three hallucination-related incidents in six months between late 2025 and early 2026.
These are not edge cases. They are symptoms of a systemic failure to verify AI output before it leaves the firm.
The Downstream Trust Problem
The financial cost of workslop is significant. But the research suggests the interpersonal damage may be worse.
The BetterUp and Stanford study found that more than half of workers who received workslop viewed the sender as less creative. Roughly half considered them less capable. Nearly half saw them as less reliable. About a third said they were less likely to want to work with that person again.
In a law firm, reputation is everything. A partner who submits a sloppy AI-drafted brief does not just risk sanctions. They risk losing the confidence of the judge who reads it, the opposing counsel who spots the errors, and the client who expected competent representation. One bad filing can follow an attorney for years.
Tank described workslop as the "corporate equivalent of a Potemkin village." It looks impressive from a distance but is hollow up close. That metaphor applies directly to the law firm that adopts AI tools to appear innovative while producing work product that creates more problems than it solves.
The Workslop Waterfall in Legal Teams
In most organizations, workslop flows downstream. A manager sends an AI-generated memo to a direct report. A peer forwards a chatbot-drafted email to a colleague. The recipient has to spend their own time sorting out what is accurate, what is incomplete, and what is simply wrong.
In law firms, this downstream effect is amplified because legal work is inherently collaborative and hierarchical. A partner asks an associate to draft a brief. The associate uses AI. The partner reviews it but only catches the surface-level issues. The brief goes to the client. The client, who is paying $400 to $600 an hour, expects the work to be right the first time.
Now consider what happens when that brief reaches the courtroom. The judge reads the filing and finds citations that do not check out. The opposing counsel files a motion for sanctions. The client has to explain to their board why their legal team submitted fabricated case law. Everyone loses.
This is the workslop waterfall at its most destructive. The person at the bottom of the chain, often the client, absorbs the accumulated cost of everyone above them cutting corners.
Where AI Actually Works in Law Firms (and Where It Does Not)
None of this means AI has no place in legal practice. It does. The distinction is between using AI as a starting point and treating AI output as finished work.
AI can be useful for generating initial outlines, identifying research directions, drafting boilerplate language, and summarizing large volumes of documents. These are tasks where the output is a draft, not a deliverable. The attorney reviews, edits, and verifies before anything leaves the firm.
Where AI breaks down is when it replaces judgment. Legal reasoning requires understanding context, weighing competing authorities, and making arguments that are tailored to the specific facts of a case. AI models are not doing that. They are predicting what words are likely to come next based on patterns in their training data. That is useful for some tasks and dangerous for others.
The HBR research drew a distinction between "pilots" and "passengers" in AI adoption. Pilots use AI as a tool while applying their own judgment, framing, and editing. Passengers let AI do the thinking and send the result without meaningful review. The firms that will survive the workslop era are the ones that produce pilots.
What Law Firm Leadership Should Do Now
The research across all three major workslop studies points to a consistent conclusion: the problem is not the technology itself. The problem is how organizations are deploying it.
Define where AI belongs and where it does not. Not every task benefits from AI assistance. Identify the workflows where AI adds genuine value and the ones where it introduces risk. Answer engine optimization for your law firm, for example, is a sensible place for AI tools, while citation research without verification is an obvious no-go zone. Initial contract review with human oversight sits in between and is a defensible use case.
Train your people. The Guardian reported that many companies are mandating AI use without providing training or guidelines. That is a recipe for workslop. Lawyers and staff need to understand what these tools can and cannot do. They need practical instruction on how to prompt effectively, how to verify output, and how to recognize hallucinated content.
Build verification into the process. Every piece of AI-assisted work product should go through a verification step before it leaves the firm. For legal research, that means checking every citation against a reliable database. For drafted language, that means a substantive review by someone with the expertise to evaluate accuracy and reasoning.
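As a minimal illustration of building that verification step into the workflow, the sketch below pulls citation-like strings out of a draft so a reviewer can check each one against Westlaw, Lexis, or another reliable database before the document leaves the firm. The regex covers only a few common reporter formats and is an assumption for illustration, not a production citation parser:

```python
import re

# Rough pattern for U.S. reporter citations, e.g. "123 F.3d 456 (9th Cir. 1997)".
# Illustrative only: real citation formats are far more varied than this.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                  # volume number
    r"(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|F\. Supp\.(?: 2d| 3d)?|P\.(?:2d|3d)?)\s*"
    r"\d{1,4}"                       # first page
    r"(?:\s+\([^)]{1,40}\))?"        # optional court/year parenthetical
)

def citation_checklist(draft_text):
    """Return a de-duplicated, in-order list of citation-like strings
    for a human to verify against a reliable legal database."""
    seen, checklist = set(), []
    for match in CITATION_RE.finditer(draft_text):
        cite = match.group(0)
        if cite not in seen:
            seen.add(cite)
            checklist.append(cite)
    return checklist

draft = (
    "As held in 123 F.3d 456 (9th Cir. 1997), and reaffirmed at "
    "540 U.S. 93, the standard applies. See also 123 F.3d 456 (9th Cir. 1997)."
)
for cite in citation_checklist(draft):
    print("VERIFY:", cite)
```

The point of the sketch is the workflow, not the pattern matching: extraction is automated, but the verification itself stays with a human who looks each citation up and confirms the case exists and says what the brief claims it says.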
Set quality standards, not speed targets. If the firm's AI policy is focused on producing more output faster, it is setting up a workslop factory. The goal should be better work, not just more work. Measure outcomes, not volume.
Model the behavior from the top. Partners who forward unreviewed AI-generated memos to associates are teaching the firm that workslop is acceptable. Leadership sets the standard.
The Real Cost of Getting This Wrong
A law firm that gets AI right gains a competitive advantage. Faster first drafts. More efficient document review. Better-informed strategy. These are genuine benefits when AI is used with discipline.
A law firm that gets it wrong faces sanctions, malpractice exposure, damaged client relationships, and reputational harm that compounds over time. With more than 35 state bar associations now issuing formal guidance on AI use and the ABA emphasizing competence and candor obligations through Formal Opinion 512, the regulatory pressure is only increasing.
The workslop problem is not going away on its own. AI tools will continue to improve, but they will also continue to produce output that looks better than it actually is. That gap between appearance and substance is where the risk lives.
Every law firm that uses AI, which is quickly becoming every law firm, needs to decide right now whether it is building a culture of verification or a culture of workslop. The firms that choose verification will win the cases and keep the clients. The ones that choose speed over substance will eventually find themselves on the wrong side of a sanctions order, wondering how a brief that looked so polished turned out to be so empty.