TL;DR
Google's 2025 Trust and Safety Report disclosed that the company removed 292 million policy-violating reviews last year, with 160 million of those caught specifically by generative-AI detection (Gemini-based content classification). Review deletion rates increased 600% between January and July 2025; at the peak, nearly 2% of all monitored business locations lost at least one review per week. Some of those removals are real reviews from real customers, caught by the same pattern detection that catches fakes. The three signals most likely to flag a legitimate review: short generic copy, posting clusters within minutes of an SMS click-through, and reviewer-account patterns the AI considers suspicious. The appeal flow works but requires specificity. Here's the playbook.
The 160M number, in context
Google's AI detection layer is operating at a scale no human moderation team could approach. Gemini-based classifiers process every new review submission against a multi-billion-parameter content model trained to identify:
- Linguistic patterns characteristic of ChatGPT, Claude, and other LLMs used to mass-produce fakes
- Generic phrasing ("Terrible service, avoid!") with no concrete detail
- Template language across multiple reviewer accounts
- IP/device clustering (multiple reviews from the same network or device emulators)
- Submission burst patterns following marketing-tool fingerprints
The 160M removed via AI is roughly 55% of total removals; the other 45% came from human reports and policy enforcement. The 600% jump in deletion rates between January and July 2025 coincided with the production rollout that expanded Gemini's classifier from English-only to multilingual.
Why real reviews get caught
The detection model is precision-tuned, not perfect. It produces a confidence score for each review: reviews scoring below a removal threshold are removed automatically, and those near the threshold are queued for evaluation against additional signals (a toy sketch of this routing follows the list below). False positives, meaning legitimate reviews wrongly removed, generally fall into three patterns:
1. Short generic content. A real review that says "Great service, will return!" looks identical to AI-generated copy. The classifier can't distinguish them on text alone, so it falls back on reviewer-account signals. If the reviewer has fewer than 3 prior reviews, no profile photo, and no Local Guides level, the short, generic review is flagged.
2. Posting clusters after a marketing send. When you send a review request SMS to 100 customers and 15 of them click through and write reviews in the same hour, that's a cluster. The classifier flags this as "potentially solicited." Soliciting reviews is actually allowed under Google policy as long as it isn't gated (i.e., you don't screen out unhappy customers before asking), but the flag still raises the review's threshold for staying up.
3. The "first-and-only-review" pattern. A reviewer account whose first and only Google review is for your business is a high-risk signal. The classifier doesn't know whether the customer just doesn't usually leave reviews (most people don't) or whether the account was created specifically to leave that review.
What gets your reviews wiped
Per analyses of public removal events and our own customer data across 600 SMBs:
| Trigger | Likelihood of removal | Typical recovery |
|---|---|---|
| AI-generated text patterns | 95% | Very low — review re-removed if reposted |
| Burst from same IP/Wi-Fi | 80% | Low without changes |
| Loyalty-incentive language ("loved earning my points!") | 70% | Moderate after policy cleanup |
| Staff-name mention with marketing template phrasing | 60% | Moderate |
| Short generic + low-trust reviewer account | 40% | Moderate via appeal |
| Owner appeals to reinstate | — | 30–45% reinstatement rate when documented |
The last row is the practical one. The appeal works often enough to be worth the effort, but only with documentation.
The appeal flow that works in 2026
Google's review appeal interface lives in Google Business Profile → Reviews → flagged review → Report a problem → "I think this review was removed in error." The form accepts free-text justification.
What gets appeals approved (~30–45% reinstatement rate when done right):
1. Concrete proof of the relationship. "Customer name [first name only is fine], invoice #12345, service date 2026-04-12, attached receipt." If your booking or POS system can produce a record, attach it.
2. Identify the trigger you're appealing. "We believe this review was removed by the AI-content classifier. The reviewer is a verified customer with a 14-year purchase history with our business." Naming the specific policy area you believe triggered the removal shows you understand the system (a sample appeal message follows this list).
3. Patience. Google's initial moderation decision on a review can take anywhere from 10 minutes to 30 days; appeals take 2–10 business days. Resubmitting the same appeal doesn't help; resubmissions just queue separately.
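Putting those elements together, a specific, documented appeal might read like the following. Every concrete detail here (the name, invoice number, and dates) is a placeholder; substitute your own records:

```
We believe this review was removed in error by the AI-content
classifier, likely because it is short and the reviewer has few prior
reviews. The reviewer is a verified customer: Maria, invoice #4471,
service date 2026-03-02 (receipt attached), with a purchase history
with our business going back to 2023. The review was not incentivized
or gated; no discount or reward was offered. Please re-evaluate and
reinstate.
```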
What gets appeals denied:
- Vague language ("this is unfair," "we lost a great review")
- No documentation of the customer relationship
- Appealing on grounds that are themselves policy violations ("the customer was offered a discount for the review")
- Appealing again right away when a previous review from the same customer was also removed
How to make your future reviews harder to remove
Three structural changes:
1. Encourage longer, more specific reviews. A 2-sentence review with a service name, a staff first name (yes, even after the April 2026 policy change — the customer mentioning a staff member is fine; you asking for it is the violation), and a specific outcome ("the cleaning revealed a cavity my old dentist missed") is far more durable than "Great place!"
2. Spread your review asks over time. A 100-customer batch SMS produces a cluster. Twenty asks per week, every week, produces a stream, and streams get higher trust scores than clusters (a minimal scheduling sketch follows this list).
3. Build reviewer-account quality indirectly. Customers who become Google Local Guides — by reviewing other businesses, uploading photos, answering questions — have far higher review trust scores. A small fraction of your customer base becoming Local Guides protects your entire review profile. Mentioning the Local Guides program in a non-pushy way (a 1-line "PS — if you write reviews regularly, Google's Local Guides program gives perks") seeds this organically.
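To make item 2 concrete, here is a minimal sketch of a drip scheduler that turns a one-shot customer list into a steady weekly stream. The function name is an assumption, and the batch size comes from the twenty-per-week cadence above; any CRM or SMS tool that supports scheduled sends can do the same thing:

```python
# Hypothetical drip scheduler: spread review asks into a steady weekly
# stream instead of one batch send.
from datetime import date, timedelta

def drip_schedule(customers: list[str],
                  per_week: int = 20,
                  start: date | None = None) -> dict[date, list[str]]:
    """Assign each customer a send date, `per_week` asks per week."""
    start = start or date.today()
    schedule: dict[date, list[str]] = {}
    for i, customer in enumerate(customers):
        send_date = start + timedelta(weeks=i // per_week)
        schedule.setdefault(send_date, []).append(customer)
    return schedule

# A 100-customer list becomes five weekly batches of 20 instead of one burst.
batches = drip_schedule([f"customer_{n}" for n in range(100)])
for day, batch in sorted(batches.items()):
    print(day, len(batch))
```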
What this means for your strategy
The era of "as many reviews as possible" is over. The detection layer punishes that. The new era rewards:
- Consistent monthly velocity (5–8 per month)
- Specific, detail-rich reviews
- Diversified reviewer base across multiple platforms
- Fast, public responses from the owner
- Clean compliance with the April 2026 policy update
In our customer data, businesses that switched from "review collection sprints" to "consistent slow review building" between 2024 and 2026 saw their review-removal rate drop from 4.1% to 0.6% — and their local pack rankings improve in parallel.
The 160 million removed reviews aren't a Google attack on small business. They're a quality filter that finally has the tools to enforce what was always policy. The businesses that adjusted to the new game are now visible above the businesses that didn't.
The good news is the new game is simpler. Be patient, be specific, be honest. Earn each review. Respond to each one. Repeat for 12 months.
It's not glamorous. It works.