
Ads That Actually Move the Needle: How to Tell If Google’s “Conversions” Are Real (and What to Do About It)

By Juxtaposed Tides — Practical strategy for small businesses that want results, not reports.


The short version (for busy owners)


Most ad platforms are great at taking credit for purchases that were going to happen anyway. Your job is to measure incremental impact—sales that happen because of the ad, not just after the ad.



If you only remember three things:


  1. Attribution ≠ Causation. A platform saying “we got 200 conversions” doesn’t prove it caused 200 sales.

  2. Measure lift, not just credit. Use holdouts (regions or audiences that see no ads) to estimate what would’ve happened without ads.

  3. Optimize for net new. Shift budget toward campaigns that move incremental revenue, not just “last-minute reminders.”


Keep reading for exactly how to do this—step by step, in plain language.



Why ads “look good” even when they aren’t


Ad platforms watch what people do on your site (via tags/pixels) and learn who is likely to buy. Then they show those people ads right before they purchase. When the purchase happens, the platform says, “That was us!”


That’s attribution. What you actually need to know is incrementality—how many of those sales would not have happened without the ad.


Analogy: Handing out coupons in the checkout line and then bragging that you “drove 500 purchases.” Sure—you handed a coupon to a shopper who was already paying. That’s not influence; it’s inference.


Jargon translator (so you can steer the ship)

  • Attribution: The way credit for a sale is assigned (e.g., last click, first click, data-driven).

  • View-through conversion: Someone saw an ad (didn’t click) and later bought. Easy to over-credit.

  • Look-back window: How far back the platform can claim influence (e.g., 30 days after an ad view).

  • Incrementality / Lift: The extra sales caused by ads vs. what would have happened anyway.

  • Holdout / Control: A group that is deliberately shown no ads so you can see the baseline.


The playbook: turn “credit” into proof


Step 1 — Get the measurement basics right


  • Use UTM tags on every link you control (email, social, affiliates). This prevents organic/brand searches from stealing credit or masking channel impact.

  • Define 1–3 real conversions (e.g., paid order, qualified lead, booked consult). Ignore vanity events.

  • Turn off or down-weight view-through conversions (or report them separately). They inflate credit with weak evidence.

  • Shorten look-back windows where practical (e.g., 7–14 days) so you don’t count stale influence.
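The UTM tagging above is easy to automate. A minimal Python sketch (the example URL, source, and campaign names are placeholders, not anything from your account):

```python
from urllib.parse import urlencode, urlparse, urlunparse

def add_utm(url, source, medium, campaign):
    """Append UTM parameters to a link so the channel keeps its credit."""
    parts = urlparse(url)
    utm = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    query = parts.query + ("&" if parts.query else "") + utm
    return urlunparse(parts._replace(query=query))

print(add_utm("https://example.com/offer", "newsletter", "email", "spring_promo"))
# https://example.com/offer?utm_source=newsletter&utm_medium=email&utm_campaign=spring_promo
```

Run every link you send (email footers, social bios, affiliate payouts) through a helper like this so "direct" and "organic" stop absorbing credit that belongs to a channel.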


Step 2 — Run a simple, honest lift test (you can!)


Pick one city/ZIP, audience, or time window to keep dark (no ads). Everything else runs as usual.

  • Duration: 2–4 weeks.

  • Keep everything else steady: pricing, promos, emails.

  • Compare: Sales rate per 1,000 users in “Ads ON” vs. “Ads OFF.”


If Ads ON yields $12,000 and Ads OFF yields $10,500, your incremental lift is $1,500 for that period—not the entire $12,000.

Rule of thumb: If your reported conversions are much larger than your measured lift, you’re paying a toll for sales you already owned.
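The arithmetic above, as a tiny helper using the same figures:

```python
def incremental_lift(revenue_ads_on, revenue_ads_off):
    """Extra revenue caused by ads: the ON period minus the no-ads baseline."""
    return revenue_ads_on - revenue_ads_off

# The example from the text: $12,000 with ads vs. $10,500 without.
lift = incremental_lift(12_000, 10_500)
print(lift)  # 1500 -> the credit you can defend, not the full $12,000
```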

Step 3 — Score every campaign by incremental ROAS

Make a simple table and update weekly:

| Campaign | Spend | Reported Revenue | Lift Estimate* | iROAS = Lift/Spend | Keep / Fix / Cut |
| --- | --- | --- | --- | --- | --- |
| Brand Search | $1,500 | $18,000 | $1,200 | 0.8x | Cut/Cap |
| Non-Brand Search | $2,000 | $9,500 | $4,200 | 2.1x | Scale |
| Remarketing | $800 | $6,100 | $1,000 | 1.25x | Tweak |

*Lift is estimated via holdouts, time-based pauses, or geo tests. Perfect isn’t required—directionally correct beats black-box guesses.
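The weekly scoring can be a ten-line script. This sketch uses the table's figures; the verdict thresholds (under 1.0x, 1.0–2.0x, over 2.0x) match the rules this post recommends:

```python
# Spend and lift-estimate figures from the table above.
campaigns = {
    "Brand Search":     {"spend": 1500, "lift": 1200},
    "Non-Brand Search": {"spend": 2000, "lift": 4200},
    "Remarketing":      {"spend":  800, "lift": 1000},
}

for name, c in campaigns.items():
    iroas = c["lift"] / c["spend"]
    verdict = "Cut/Cap" if iroas < 1.0 else "Tweak" if iroas <= 2.0 else "Scale"
    print(f"{name:18} iROAS {iroas:.2f}x -> {verdict}")
```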


[Bar chart: "Incremental Lift," comparing reported conversions to baseline sales]

Step 4 — Fix the usual suspects


  • Brand search eating budget? Cap bids on your own name unless competitors are poaching—then bid carefully and test.

  • Remarketing hogging credit? Tighten frequency caps, shorten windows (e.g., 7 days), exclude recent purchasers, and split “cart abandoners” from “site browsers.”

  • Smart/Performance Max feels magical? It’s likely mixing credit from brand + remarketing + shopping. Break it out. Test standard search/shopping vs. PMax with identical budgets and a proper holdout.

  • Landing pages leak? If bounce rate is high or time-to-value is slow, your “ad” problem is actually a site problem.


Three experiments any small business can run this month


1) Brand vs. No-Brand

  • Test: Pause branded keywords for 10 days while monitoring direct/organic traffic and competitor ads.

  • Watch: If total orders don’t drop much, brand ads were mostly taking credit. Re-enable with caps or only during competitor surges.


2) Remarketing Window Shrink

  • Test: Cut remarketing look-back from 30 days to 7.

  • Watch: If reported conversions fall slightly but total sales stay steady, you just stopped paying for already-decided buyers.


3) Geo Holdout

  • Test: Keep one ZIP code dark; run everything else normally.

  • Watch: Compare per-capita revenue. The difference is your lift. Reallocate budget accordingly.
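The per-capita comparison in the geo holdout reduces to one subtraction. All figures below are hypothetical, purely to show the shape of the calculation:

```python
def per_capita_lift(test_revenue, test_pop, holdout_revenue, holdout_pop):
    """Revenue per resident in ads-on geos minus the same rate in the dark ZIP."""
    return test_revenue / test_pop - holdout_revenue / holdout_pop

# Hypothetical: $50k across 40k residents (ads on) vs. $9k across 8k (dark ZIP).
lift_per_person = per_capita_lift(50_000, 40_000, 9_000, 8_000)
print(lift_per_person)  # 0.125 -> about 12.5 cents of incremental revenue per resident
```

Multiply that per-person lift by the population you actually advertise to, and you have the lift figure for the iROAS table.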


What a healthy reporting view looks like (copy this)


Weekly snapshot for the team (one screen):

  • Spend: By channel and campaign.

  • Incremental Lift (modeled): From your latest test or rolling baseline.

  • iROAS: Lift ÷ Spend.

  • True CAC: Spend ÷ incremental new customers.

  • Leading signals: Qualified leads, add-to-carts, consult bookings (not just clicks).

  • Friction flags: Page speed, form completion rate, checkout drop-off.


Rules:

  • If iROAS < 1.0 for 2–3 weeks → Fix or cut.

  • If iROAS 1.0–2.0 → Iterate (creative, audience, landing page).

  • If iROAS > 2.0 and capacity allows → Scale carefully (10–20% budget increases to avoid algorithm shock).
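The rules above as a small helper. The `weeks_below` input is an assumption added here to track the "2–3 weeks" condition; it is not something the platforms report for you:

```python
def weekly_action(iroas, weeks_below=0):
    """Map a campaign's iROAS to the keep/iterate/scale rules above."""
    if iroas < 1.0:
        # Only pull the trigger after a sustained miss, not one noisy week.
        return "Fix or cut" if weeks_below >= 2 else "Watch closely"
    if iroas <= 2.0:
        return "Iterate (creative, audience, landing page)"
    return "Scale carefully (10-20% budget steps)"

print(weekly_action(0.8, weeks_below=3))  # Fix or cut
print(weekly_action(1.4))                 # Iterate (creative, audience, landing page)
print(weekly_action(2.6))                 # Scale carefully (10-20% budget steps)
```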


Creative & funnel changes that boost actual impact


  • Message to moment: Match ads to where the buyer is.

    • Cold: Problem framing + proof (case study, before/after).

    • Warm: Comparison, FAQs, risk reversals.

    • Hot: Offer + urgency + friction-free checkout.

  • One promise, one page: Every ad should drive to a landing page that resolves exactly the promise in the ad headline. No menu maze.

  • Speed is a feature: Sub-2-second load times. Mobile first. Trim scripts and images.

  • Proof beats polish: Add real testimonials, star ratings, photos, and short demos above the fold.

  • Follow-up wins: If you capture email/phone with consent, your owned reminder (email/SMS) outperforms paying a platform to “remind” your buyer again.


The budget conversation (in plain dollars)

Let’s say the platform report shows $20,000 “conversion value” on $5,000 spend (ROAS 4.0). Looks great.


Your holdout test says the period would have produced $15,500 without ads. Incremental lift = $20,000 − $15,500 = $4,500. iROAS = $4,500 ÷ $5,000 = 0.9.


Decision: You’re paying $5k to get $4.5k back. Cut or retool until iROAS > 1.5–2.0 (your threshold may vary by margin and lifetime value).


A simple policy set we recommend to clients

  1. Always-on UTMs and a single source of truth dashboard.

  2. Quarterly lift tests (geo or time-based).

  3. View-through conversions reported separately, not mixed with clicks.

  4. Shorter look-backs (7–14 days) unless proven otherwise.

  5. Cap brand & remarketing unless tests prove high incremental lift.

  6. Documented creative → page promise mapping.

  7. Fail fast & reallocate—10–20% weekly budget moves based on iROAS.


Copy-and-paste: email to your team/boss

Subject: Switching our ad reporting to incremental results

I'm updating our ad measurement to focus on incremental lift (what ads actually cause), not just platform-reported conversions. We'll run a 3-week geo holdout to estimate baseline sales. We'll separate view-throughs and shorten look-back windows. Weekly, we'll score campaigns by iROAS (Lift/Spend) and reallocate budget accordingly. Goal: spend only where ads create net-new revenue. I'll share the first lift readout and a keep/fix/cut plan in two weeks.

What if you don’t have big budgets?


You still have options:

  • Time-based tests: Pause a campaign for 7 days; watch total sales (not just reported conversions).

  • Micro-segments: Run remarketing only for cart abandoners (not all site visitors).

  • Creative focus: Improve your offer/landing page first; it multiplies every channel.

  • Owned audiences: Use email + SMS for reminders instead of paying platforms to “remember” for you.


The Juxtaposed Tides take


Platforms are excellent at measurement that flatters themselves. Your advantage is clarity: define what you truly value (incremental revenue and real CAC), then test, simplify, and reallocate without sentimentality.


If you want help, we’ll:

  • Audit your current setup (tags, attribution, windows).

  • Design a right-sized lift test.

  • Build a one-screen incremental dashboard.

  • Refactor campaigns around net-new outcomes.


Because “great reports” don’t pay the bills. Real lift does.


