Turns Out, Sounding Like a Lawyer Requires Actually Being One

By John Reed | 03.14.2026

Here is the thing nobody in the generative AI debate wants to say out loud: you cannot automate expertise.

You can automate its appearance and replicate an air of authority, at least for a while. In some corners of the internet, that has been enough. Legal content is not one of those corners, and not a place where you should cut corners anyway.

The Three Goals of a Law Firm’s Website

Every law firm website is trying to do at least one of three things: prove you exist, demonstrate that you’re worth hiring over the competition, or show up in search results for people who don’t already know your name. Most firms are chasing two or three of those at once, with varying degrees of intention and budget.

If you’re only concerned with the first one, you either have a practice so full it runs itself, or you’re saving up for a new flip phone and fax machine.

For everyone else, content has real stakes—and real trade-offs. Sure, you want content tailored to your target audience, but depending on your priorities, you also need it to be resonant, engaging, and persuasive, and you need a lot of it. The question is what “a lot of it” actually buys you. The research finally has some answers.

The Skynet Is Falling

In the past few years, trumpets have heralded the arrival of generative AI as the answer to content production and expense. At the same time, alarms have warned of the havoc AI will wreak upon originality, accuracy, and the human element.

In the legal marketing community, two competing predictions emerged. One said Google would catch and punish firms publishing AI-generated pages. The other said firms not publishing AI content were already falling behind. Both turned out to be wrong, or at least not totally right.

What the actual research shows is more useful than either prediction, and it points directly at the thing AI has always struggled to replicate: genuine legal knowledge applied to a specific question by someone who actually knows what they are talking about.

The AI Content Debate Has Produced More Heat Than Light

Two studies have taken an empirical look at the relationship between AI-generated content and Google search rankings. Custom Legal Marketing (CLM) analyzed law firm websites specifically, measuring AI content usage across practice areas and correlating it against actual search performance. Ahrefs ran a broader analysis of 600,000 pages across industries. Both landed at essentially the same headline finding: the correlation between AI content percentage and ranking position is statistically negligible.

Break out your slide rules. Here comes the perfunctory math stuff.

  • The CLM correlation was r = 0.065, with a p-value of 0.138.
  • The Ahrefs correlation was r = 0.011.

If you do not speak statisticianese, let me translate: there’s no there there.
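For readers who want the translation in numbers: squaring a correlation coefficient gives the share of variance in one variable that the other explains. A minimal sketch using the r values reported above (the squaring step is ordinary statistics, not part of either study):

```python
# Interpreting the reported correlations: r-squared is the fraction of
# variance in ranking position that AI-content percentage accounts for.

def variance_explained(r: float) -> float:
    """Share of variance one variable explains in the other (r-squared)."""
    return r * r

clm_r = 0.065      # CLM: AI content % vs. search ranking
ahrefs_r = 0.011   # Ahrefs: same relationship, 600,000 pages

print(f"CLM:    r = {clm_r}, variance explained = {variance_explained(clm_r):.4%}")
print(f"Ahrefs: r = {ahrefs_r}, variance explained = {variance_explained(ahrefs_r):.4%}")
```

At r = 0.065, AI-content percentage explains roughly four-tenths of one percent of ranking variance; at r = 0.011, about one-hundredth of a percent. In practice, that is noise.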

Google is not rewarding or penalizing pages and blog posts based on how they were written. That is a defensible position for Google to take. Its job is to surface useful results, not audit production processes. But that’s the least interesting part of the data.

Google Doesn’t Penalize It. So What’s the Problem?

The CLM study found a strong negative correlation between AI content and readability. Pages with higher AI content consistently scored worse on Flesch-Kincaid readability measures. And readability shows a meaningful relationship with rankings, not as a direct algorithmic input but as a proxy for the content-quality signals that Google weighs.
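For the curious, the Flesch Reading Ease formula underlying measures like these is simple enough to sketch. The syllable counter below is a crude vowel-group heuristic, not the exact tooling the study used; higher scores mean easier reading:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups, discount a trailing silent "e".
    # Real readability tools use pronunciation dictionaries.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    # 206.835 - 1.015 * (words/sentence) - 84.6 * (syllables/word)
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(len(words), 1)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

plain = "The court denied the motion. The case moves forward."
dense = ("Notwithstanding the aforementioned adjudicatory determination, "
         "the litigation shall proceed in accordance with applicable procedures.")

print(round(flesch_reading_ease(plain), 1))   # short sentences, short words: high score
print(round(flesch_reading_ease(dense), 1))   # long sentence, polysyllabic legalese: low score
```

The formula rewards short sentences and short words, which is exactly where padded, filler-heavy prose loses ground.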

The only content-level variable with a statistically significant ranking correlation in CLM’s findings was word count. Substantive, well-researched writing runs longer. AI content tends either to run short or, when it does run long, to pad itself with the kind of plausible-sounding filler that deflates quality signals without anyone quite noticing why.

The accurate framing is not that AI content gets you penalized. It is that AI waters down the signals that move rankings while delivering nothing in return. When AI content fails, it’s not because Google objects to the tool, but because the output lacks what actually correlates with performance: depth, specific knowledge, and prose a person genuinely wants to read.

AI waters down the signals that move rankings while delivering nothing in return.

Hold On. Doesn’t Most Top-Ranking Content Already Use AI?

It’s a fair question, and one that deserves a real answer rather than short shrift.

The Ahrefs study found that 86.5% of top-ranking pages contain some AI-generated content. If that many high-ranking pages already include AI-assisted content, produced quickly and at scale, doesn’t that undercut the case for human-written work?

Only if you stop reading there. Of the pages Ahrefs analyzed, just 4.6% were fully AI-generated. The “AI content” in the rest of that dataset reflects human-led writing where AI may have helped with research, structured an outline, or aided the human writer with titles, subheadings, keywords, or word choice. That is different from raw AI output that a human only skimmed before publication. Those two workflows produce different results, and the data reflects it.

The pages that ranked highest in the Ahrefs study also had the highest average word counts. Human contributions like depth, argument, nuance, and judgment are doing the ranking work. AI as a research tool inside a human-led process is not the same as AI as a substitute for expertise. The data clearly draw that distinction, even if the headlines do not.

Human contributions like depth, argument, nuance, and judgment are doing the ranking work.

Rankings Matter, and Not Only for the Reasons You Might Think

The most direct case for search rankings is lead generation. Someone searches for an attorney, finds the firm, and calls. For many practice areas, that connection is immediate and measurable.

But rankings serve a second function that is often undervalued. Google is where professional relationships get verified. A referral source vetting an attorney before making an introduction runs a search. A prospective client who received a warm recommendation checks the firm’s content before returning the call. Opposing counsel, potential lateral hires, journalists working a legal angle on a developing story, and other interested parties and audiences use search as part of how they assess who they are dealing with. That should be a wake-up call to any firm that assumes a referral-driven practice doesn’t need to think about what Google finds when someone looks them up.

A well-ranked article on a regulatory change or a practice-area trend does more than generate clicks. It tells Google, and everyone who finds it, that the firm has genuine expertise and something worth saying.

That matters more now than it did even two years ago. You know those AI summaries Google and Bing display at the top of a search results page, above the first match? A Semrush study analyzing more than 10 million keywords found that Law & Government is one of the fastest-growing categories in AI summaries, and Google is increasingly synthesizing legal content directly in AI Overviews before users click anywhere.

The law firms most likely to be cited in those summaries are the ones Google already treats as authoritative. That authority is built through the same fundamentals that drive organic rankings: depth, readability, and genuine expertise. The bar for appearing in AI Overviews and the bar for ranking well are converging.

Google Is Holding Legal Content to a Higher Standard

As this Search Engine Land post explains, Google updated its Search Quality Rater Guidelines in January 2025 and directed approximately 16,000 human evaluators to identify AI-generated content and potentially rate it “Lowest Quality” when it lacked originality, effort, or genuine added value. This was the first time Google had explicitly defined generative AI in its guidelines and framed its misuse as a quality problem rather than a spam problem.

Legal content falls under the Your Money or Your Life (YMYL) moniker, a real category that Google holds to its highest standards for expertise, experience, authoritativeness, and trustworthiness (E-E-A-T). (Geez, will the acronyms never stop?) Filler content, inflated expertise claims, and scaled AI output now carry explicit consequences under these guidelines. The update also flagged mass-produced, scaled content abuse specifically: large volumes of pages generated with minimal effort or originality.

For any firm treating content as a numbers game, that is a direct line of exposure.

Justia’s 2025 legal SEO analysis puts it plainly: AI content without actual attorney insight, jurisdiction-specific analysis, and demonstrated experience will not meet the YMYL/E-E-A-T bar Google is actively raising. Content that appears to claim expertise it cannot demonstrate now draws scrutiny from human evaluators on top of algorithmic signals. In the legal community, that combination has teeth. Big sharp ones. Fangs.

The Practical Takeaway

The research does not say AI content is dangerous. It says AI content does not help rankings, consistently degrades the quality signals that matter, and cannot replicate the depth and specificity that makes legal content credible to Google or to anyone reading it.

Law firms doing this well are not treating it as a compliance exercise. They produce content that reflects genuine expertise and clear thinking because that is what quality professional content requires, and because their clients, their referral sources, and their reputations depend on it. Google has spent years building the machinery to distinguish between content that demonstrates knowledge and content that simulates it. The 2025 research suggests that machinery is working.

The AI content debate framed this as a choice between efficiency and quality. The data does not support that framing. What it supports is something simpler and less convenient: there is no shortcut to sounding like you know what you are talking about, because sounding like you know what you are talking about requires actually knowing what you are talking about. Google is getting better at telling the difference.

And so is everyone else reading your firm’s content.
