AI content & SEO · The mechanics
AI-detection tools and the ranking myths around them — what’s actually true.
The detectors you can buy don't work reliably. Google has said it doesn't use AI-detection to rank and doesn't care how content is made. And "AI content gets deindexed" is mostly a conflation with a different problem entirely. Here's what to stop worrying about — and the handful of things that actually do matter.
The detectors don’t work, Google doesn’t use them, and “deindexed for AI” is the wrong story.
A whole anxiety industry has grown up around AI-detection — tools that score your text for “AI-ness,” consultants who run your pages through them, scare posts about pages getting deindexed for failing some invisible AI test. Most of it rests on two false beliefs: that the detectors are accurate, and that Google uses something like them to rank. Neither is true. The detectors are unreliable enough that you shouldn’t trust their verdict, and Google has been clear it doesn’t rank by production method at all. Once you stop optimising for “looks human,” the question goes back to where it always belonged: is this the most useful answer to the search, and does someone credible stand behind it?
The detectors you can buy don’t work — and the failures aren’t subtle
The AI-detection tools on the market share a set of problems serious enough to make their output close to useless for any decision that matters:
- False positives on plain human writing. Clean, structured, well-edited prose — exactly what good writing looks like — is what these tools most often flag as machine-generated, because that’s the pattern they associate with AI. People have run famous human-written texts through detectors and gotten “AI” verdicts. If a tool can’t tell a careful human writer from a machine, its “AI” label means nothing.
- Trivially fooled. Light editing, a few rephrasings, running the text through another tool — the "score" moves around with changes that don't affect quality at all. A metric you can game by reshuffling sentences isn't measuring anything real. (A toy sketch after this list shows both of these failures in one statistic.)
- No agreed standard. Different detectors disagree with each other on the same text. There’s no benchmark, no validation everyone accepts, no reason to believe one tool’s verdict over another’s. It’s not a measurement; it’s a guess with a confidence percentage stapled to it.
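For the mechanically curious, the first two failures fall out of how these tools are widely believed to work: they score statistics like perplexity, i.e. how predictable the text is to a language model, and careful, well-edited prose is predictable by design. Below is a toy illustration of that statistic using the open gpt2 model via Hugging Face transformers. The model choice, the threshold, and the verdict logic are all assumptions for illustration, not any vendor's actual method.

```python
# Toy perplexity "detector": an illustration of the statistic many
# detectors are believed to lean on, NOT any vendor's actual method.
# pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprising' the text is to the model; lower = more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return float(torch.exp(loss))

# The threshold is made up for illustration. Clean, conventional prose
# (human or machine) tends to be predictable, so it lands below the line:
# the false-positive mode described above.
THRESHOLD = 40.0
for text in [
    "The meeting is scheduled for Tuesday at ten in the morning.",
    "Tuesday, ten-ish? Bring the weird spreadsheet, the broken one.",
]:
    score = perplexity(text)
    verdict = "flagged as AI-like" if score < THRESHOLD else "passes"
    print(f"{score:7.1f}  {verdict}  {text!r}")
```

Notice what the statistic rewards: tidy a messy paragraph and the score drops toward "AI"; reshuffle a few sentences and it jumps back up. Both failure modes, one number.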
So treating a detector’s output as a fact about your content — or anyone else’s — is a mistake. It tells you what one unreliable tool guessed. It doesn’t tell you whether the page is good, and it certainly doesn’t tell you what Google will do with it.
If a vendor’s pitch is “we’ll get your AI score down,” ask what that score is supposed to predict. It doesn’t predict rankings — Google doesn’t use it. It doesn’t predict quality — plain human writing fails it. You’d be paying someone to optimise a number that correlates with nothing you care about.
Google has said it doesn’t rank by how content was made
This is the part that ends the worry. Google’s position has consistently been that it doesn’t use an AI-detection signal to rank, and more broadly that how content is produced isn’t the ranking factor — whether it’s helpful is. Its guidance on people-first content, and the helpful content system that the March 2024 core update folded into the core algorithm, are all about whether a page genuinely satisfies the person who searched. Nowhere in any of that is “was a tool involved.” We go through the actual wording in what Google actually says about AI content — but the short version is that the search engine has been about as explicit as it gets: produce content for people, not search engines; the production method isn’t being judged; the quality and the intent are.
Which means optimising your content to “look human” is optimising for a referee who isn’t watching. The effort is better spent making the page the most useful answer to the search — which is the only thing the referee who is watching actually cares about. The deeper version of “what actually ranks” is on the topical-authority hub: helpful depth that matches intent, structured so a buyer and a search engine can both follow it.
You’re not being graded on whether the content looks human. You’re being graded on whether it helps a human. Optimise for the test you’re actually taking.
“AI content gets deindexed” — what that’s actually about
You’ll see this claim a lot, and it’s a conflation. Pages do get hit — by manual actions or algorithmic demotion — but the thing being actioned isn’t “AI.” It’s unhelpful content produced at scale primarily to manipulate rankings: the 2024 spam-policy line on “scaled content abuse,” which Google was careful to say applies regardless of how the pages are made — by automation, by humans, or both. The pages that get buried are thin, near-duplicate, padded, made-for-rankings pages. They’d get buried whether a person or a tool produced them — and plenty of all-human content farms have been, for exactly this. Calling it “the AI penalty” is like calling a speeding ticket “the red-car penalty” because the car that got pulled over happened to be red.
The classic version of this for service businesses isn’t AI at all: it’s spinning a thin landing page for every town within fifty miles — one template, the city name swapped, no local substance. That’s the “scaled content abuse” pattern in its purest form, and it would tank a hand-built version just as fast. We go through the right way to do service-area pages — and the wrong way — in service-area pages done right. The lesson generalises: the risk vector is thin-and-mass-produced, and “AI” is incidental to it.
The real risk vectors — none of which is “AI” per se
So what should you actually watch? The things that have always been risky, AI or no AI:
- Thin pages. Pages with nothing genuinely useful to say. An AI draft no one improved is usually thin; so is a human page no one improved. The fix is the same: don't ship it. (More on where that line sits in how much AI is too much.)
- Near-duplicate pages. Templates with the variable swapped and nothing else changed. This dilutes the quality signals across the whole domain — it drags the good pages down with the bad ones. Covered in will AI content hurt my existing rankings. (A rough self-check for this follows the list.)
- Mass production primarily for rankings. Auto-publishing pages at volume to game search rather than to help anyone — the "scaled content abuse" profile. Volume isn't the problem; volume without substance, produced for rankings, is.
- No accountability. No author, no expertise, no one who checked it — which fails the credibility test Google’s quality guidance is built around. The fix is a real byline and a real editor; see E-E-A-T when AI helped write it.
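On the near-duplicate point, you can sanity-check your own pages without any vendor tool. The sketch below uses word-shingle Jaccard overlap, a standard rough measure of textual near-duplication; it's a self-audit aid, not how any search engine scores pages, and the 0.8 cutoff is an illustrative assumption.

```python
# Rough self-audit for near-duplicate pages: word-shingle Jaccard overlap.
# A sketch for your own checking only, not how any search engine scores pages.
def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """All n-word windows in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of shingles: shared windows / all windows."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa and sb else 0.0

# Two "service-area pages" from one template, only the city swapped --
# the exact pattern described above.
template = ("Looking for a reliable plumber in {city}? Our licensed team handles "
            "burst pipes, blocked drains, water heater repairs and full bathroom "
            "installations, with same-day callouts and upfront pricing on every job.")
page_a = template.format(city="Springfield")
page_b = template.format(city="Shelbyville")

score = similarity(page_a, page_b)
print(f"similarity: {score:.2f}")
if score > 0.8:  # illustrative cutoff, not a known threshold
    print("near-duplicate: add distinct local substance or consolidate")
```

Run it pairwise across a set of templated pages: swapped-variable templates score far above what two genuinely distinct pages would, and that gap is the profile to fix with real local substance.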
Notice the through-line: every real risk is about quality and intent, and a real topical map is the structural defence — when every page has a distinct job and genuine substance, you’ve built the opposite of “scaled content abuse.” That’s why the agency’s AI-accelerated production sits on a deliberate topical structure with senior editing on every page rather than just running drafts out the door — the speed is real, but the discipline is what keeps the volume coherent. You can see it built at scale on the authority-site service.
“Detectors don’t work” is not “anything goes.” It means the detector isn’t the thing to worry about — the thing to worry about is whether your pages are genuinely useful. If you auto-publish unedited, generic, near-duplicate drafts at volume, you’ve built the exact profile that does get actioned, and “the detector said it was fine” won’t save you because the detector was never the issue. Stop optimising for “looks human.” Start optimising for “is the best answer here.”
What to stop worrying about — and what to actually do
Stop worrying about:
- AI-detector scores (they predict nothing you care about).
- Whether your content "reads as human" (Google isn't grading that).
- A vague sense that "AI content gets deindexed" (it's unhelpful, mass-produced content that gets actioned, AI or not).

Actually do:
- Make every page the most genuinely useful answer to the search it targets.
- Put a real practitioner's experience and a real byline on it.
- Structure it into a coherent topical map so the volume is depth, not padding.
- Measure it after publish and revise what underperforms.

That's the whole job, and it's the same job whether a person or a tool produced the draft. If you want it done across a site, that's the authority-site build — or send a URL for a free 5-minute content audit and we'll tell you whether your content is the leak, with no detector scores involved.
Common questions
On detection, specifically.
Should I run my pages through an AI-detector before publishing?
There’s not much point. The detectors are unreliable — false positives on plain human writing, trivially fooled, no agreed standard — so the score doesn’t tell you whether the page is good, and Google doesn’t use anything like it to rank, so it doesn’t tell you what Google will do either. Spend the time making the page the most useful answer to the search instead. More in can Google tell if content was written by AI.
Can Google detect AI-written content?
Probably to some degree — but it’s the wrong question. Google has said it doesn’t use AI-detection to rank and doesn’t care how content is made, only whether it’s helpful. So “can it tell?” matters far less than “is the page the best answer, with someone credible behind it?” Stop optimising for “looks human”; optimise for being the most useful result. Full version on can Google tell if content was written by AI.
But doesn’t AI content get deindexed?
What gets actioned is unhelpful content produced at scale primarily to manipulate rankings — the “scaled content abuse” line, which Google said applies regardless of how the pages are made. Thin, near-duplicate, made-for-rankings pages get hit; they’d get hit whether a person or a tool produced them, and all-human content farms have been. “AI” is incidental. The real risk vectors are in will AI content hurt my existing rankings.
So is AI content bad for SEO or not?
AI content produced lazily is bad for SEO — but so is human content produced lazily. AI content produced well is just content. The variable that decides it is quality and intent-match, not which tool typed it. Direct version on is AI content bad for SEO.

Q2 capacity · 4 builds · 2 slots remaining
Stop chasing a fake score. Be the best answer.
Send us your URL. We’ll send back a free 5-minute Loom — whether your content is the leak, what’s thin, and what we’d build to make every page the best answer to its search. AI-accelerated, human-directed. No detector scores. No call required.