AI content & SEO · The mechanics
What Google actually says about AI content — in plain English, not hot takes.
There’s the guidance Google has actually published, and there’s the version the internet decided it published. They’re not the same. Here’s the real wording on people-first content, the helpful content update, the March 2024 core update, and the spam-policy line that actually matters — translated into something a business owner can act on.
Read the actual guidance. It’s shorter and calmer than the panic.
If you’ve only encountered Google’s position on AI content through headlines, you’ve been handed a caricature. The headlines say “Google is cracking down on AI content.” The actual documentation says something narrower and more useful: produce content for people, not for search engines; how a page was produced isn’t the thing being judged — whether it helps the person who searched is. That distinction is the whole game. Once you see it clearly, most of the AI-content anxiety just evaporates, and what’s left is the work that always mattered.
“Helpful, reliable, people-first content” — the phrase to memorise
Google’s long-standing guidance document for creators is built around one idea repeated in different words: create helpful, reliable, people-first content. The self-assessment questions it offers are the kind a sensible editor would ask anyway — does this page give a satisfying answer? Does it demonstrate first-hand expertise? Would you trust the information enough to act on it? Was it made primarily to help people, or primarily to rank? Notice what isn’t on that list: whether a person or a tool was at the keyboard. Google has been explicit that using automation — including AI — to generate content is not against its guidelines when it’s used to help people. The guideline it does enforce is the one against content produced primarily to manipulate rankings, which has been a rule for as long as there have been rankings to manipulate.
So the operative test isn’t AI or human. It’s useful or not — and produced for whom. A human can fail that test badly (a thin “areas we serve” page with the city name swapped, written by an intern in ten minutes, helps nobody). A page drafted with AI and then directed, fact-checked, and structured by someone who knows the subject can pass it cleanly. The tool is invisible to the test. The intent and the quality are the whole thing.
The line that matters isn’t “184 pages built with AI” versus “184 pages built by hand” — it’s “184 pages a buyer would actually find useful” versus “184 pages that exist to pad a sitemap.” Same number, opposite outcomes, and the difference has nothing to do with who typed.
The updates people half-remember: September 2023 and March 2024
Two real, dated events get conflated into a vague sense that “Google did something about AI.” Worth being precise about both.
- The September 2023 helpful content update. A refresh of the system Google had introduced to demote sites that were largely unhelpful — content that felt like it was written for search engines rather than people. It wasn’t an “AI penalty.” It was a quality signal, and it would catch a site full of thin, search-engine-shaped human writing exactly as readily as a site full of thin AI writing. The lesson was never “stop using AI”; it was “stop publishing unhelpful pages.”
- The March 2024 core update. This is the one that mattered structurally: Google folded the helpful-content system into its core ranking algorithm rather than running it as a separate periodic system. In plain terms, “is this site genuinely helpful?” stopped being an occasional sweep and became part of how every result is ranked, continuously. Again — not aimed at AI. Aimed at unhelpful content, however it’s made.
If you take one thing from the timeline: Google’s direction of travel has been to weight helpfulness more, not to weight the production method at all. Which is good news if your content is genuinely useful, and bad news only if it isn’t — in which case the production tool was never your problem.
Google didn’t change the question. It just stopped accepting “but it looks like content” as an answer.
The 2024 spam-policy additions — and the one that actually applies
In 2024 Google updated its spam policies with two additions worth knowing by name, because this is where the real risk lives — and where it doesn’t.
- Scaled content abuse. The policy targets generating many pages primarily to manipulate search rankings rather than to help people — and Google was careful to say it doesn’t matter how those pages are created: by automation, by humans, or by a mix of both. Read that again, because it’s the sentence that ends the debate. The abuse is the mass production for rankings, not the AI. A person hand-spinning 300 near-identical city pages is doing the prohibited thing; an agency producing 180 pages that each carry genuine local substance and answer a real search is not. The variable Google names is intent and quality at scale, not tooling.
- Site reputation abuse. A different problem entirely — third-party content published on a reputable site mostly to ride that site’s ranking strength (the “parasite SEO” pattern). It’s in the news because of where it tends to happen, but it’s unrelated to whether your own content was drafted with AI. Worth knowing it exists; not something most service businesses are doing.
So: what’s against policy is mass-producing pages primarily to game rankings, and publishing parasitic third-party content on your domain. What’s fine — explicitly fine — is using AI to help produce content that helps people. The classic way service businesses trip the first wire isn’t AI at all; it’s the temptation to spin a thin landing page for every town within fifty miles. We go through exactly that failure mode in service-area pages done right — the pattern that gets sites buried is “one template, the city name swapped, no local substance,” and it would bury a human-written version just as fast.
None of this means “AI content carries no risk.” It means the risk isn’t the AI — it’s what people do with it. If you auto-publish unedited drafts at volume with nobody checking accuracy, nobody owning the angle, and nothing genuinely new to say, you’ve manufactured exactly the “scaled content abuse” profile the policy describes, and you’ve earned the result. The guidance protects content that helps people. It was never going to protect a page that doesn’t.
Google’s guidance vs. the internet’s version of it
The gap between what Google says and what gets repeated about it is wide enough to fall into, so it’s worth naming the most common distortions:
- “Google penalises AI content.” It doesn’t, and has said so. It acts against unhelpful content and content mass-produced to manipulate rankings — categories that include plenty of human-written work. See does Google penalize AI-written content for the short, direct version.
- “AI content gets deindexed.” What gets actioned is unhelpful, thin, mass-produced content — and a site full of bad human pages would face the same. The “AI” in that sentence is doing no work. We pull that one apart in AI-detection tools and the ranking myths.
- “You have to keep AI under some percentage.” There’s no percentage in any Google document; no such threshold exists. The test is whether the published page is accurate, original, and genuinely useful, and whether a knowledgeable person stands behind it — not what fraction was machine-drafted. More on that in how much AI is too much.
- “You must disclose AI use to rank.” Google’s ranking systems don’t require an “AI-assisted” label. Some sectors and editorial standards may; honesty may. What matters for both rankings and trust is a real human byline with accountability behind the page — see do I need to disclose AI content.
The honest summary of Google’s position, in one breath: it rewards content that genuinely helps the person who searched, regardless of how it was produced, and it acts against content made primarily to game rankings — at any scale, by anyone, with any tool. Everything else is commentary.
What this means for how you actually produce content
If the production method isn’t the thing being judged, the obvious move is to use AI to go faster on the parts where speed is just speed — the first draft — and to put real human weight on the parts that decide whether the page is any good: who owns the angle, who verifies every claim, who wires the internal links, who decides the page has nothing to say and kills it. That’s how Miss Pepper produces content — AI-accelerated, human-directed: senior people set the angle, do the editing and the fact-checking, build the structure, and own the result. The volume stays coherent because it sits on a real topical map — every page earns its place in a deliberate structure rather than padding a sitemap — which is exactly the line the “scaled content abuse” policy draws. We spell out the production side in the human-edit workflow that makes AI-produced content actually rank, the credibility side in E-E-A-T when AI helped write it, and you can see it built at scale on the authority-site service — or send a URL for a free 5-minute content audit and we’ll tell you whether your content is the leak before you commit to anything.
Common questions
On the guidance, specifically.
Has Google actually said it’s okay to use AI for content?
Yes — Google’s position has consistently been that using automation, including AI, to generate content isn’t against its guidelines when the content is made to help people. What’s against the guidelines is content produced primarily to manipulate rankings, which has always been the rule. The production method isn’t the ranking factor; helpfulness is. Short version on does Google penalize AI-written content.
What did the helpful content update and the March 2024 core update actually do?
The helpful content update (refreshed September 2023) demoted sites that were largely unhelpful — content that felt written for search engines, not people. The March 2024 core update folded that helpful-content system into Google’s core ranking algorithm, so “is this genuinely helpful?” became a continuous part of ranking rather than a periodic sweep. Neither was an “AI penalty”; both target unhelpful content however it’s produced.
What’s “scaled content abuse” and should I worry about it?
It’s the 2024 spam-policy line against mass-producing pages primarily to manipulate rankings — and Google explicitly said it applies regardless of how the pages are made: by automation, by humans, or both. You should worry about it only if that’s what you’re doing. Producing a real topical cluster where every page carries genuine substance and answers a real search isn’t that — see will AI content hurt my existing rankings and, for the classic version of the mistake, service-area pages.
Do I have to keep AI use under some percentage to be safe?
No — there’s no percentage in any Google document. The test is whether the published page is accurate, original, genuinely useful, and backed by someone who knows the subject. If AI did most of the typing but a senior person set the angle, verified the facts, and owns the result, that’s fine. If nobody actually checked it, even a little is too much. Full version: how much AI is too much.

Q2 capacity · 4 builds · 2 slots remaining
Read the guidance. Then build to it.
Send us your URL. We’ll send back a free 5-minute Loom on whether your content actually helps the people searching, and what we’d build on a real topical map. AI-accelerated, human-checked. No call required.