Choosing the Right Audit Timeframe
What each Catalyst Audit date range reveals, when to use each timeframe, and why audit frequency matters more than window length.
When you run a Catalyst Audit, one of the first decisions is the date range. Should you look at the last 30 days? 90? Something custom?
The answer depends on what you’re trying to learn. Each timeframe reveals different signals — and choosing the right one is the difference between a snapshot and a story.
Key terms used in this article:
| Term | What it means |
|---|---|
| CPA (Cost Per Acquisition) | How much you spend in ads to get one sale or lead |
| ROAS (Return on Ad Spend) | Revenue earned per dollar of ad spend — a ROAS of 3.0x means $3 back for every $1 spent |
| COGS (Cost of Goods Sold) | What your product actually costs to make or buy — the expense that comes before profit |
| Attribution | Connecting a sale back to the ad click that drove it |
| Smart Bidding | Google and Microsoft’s automated bid strategies that adjust bids per auction using conversion data |
| Quality Score | Google’s 1–10 rating of your keyword relevance, ad quality, and landing page experience |
How Trellis Approaches Your Data
Think of your ad account like a patient’s medical chart. A doctor doing a check-up doesn’t request every record from the last decade. They pull the last three months of vitals, then reference a one-page summary of your history. That’s how a Catalyst Audit works.
Trellis pulls detailed, campaign-level performance data for the period you select. We analyze that data — spend, conversions, CPA, ROAS, search terms, Quality Scores, and more — alongside your actual product margins, attribution data, and a historical baseline. Statistical models and rule-based methods run first, producing structured evidence and constraint flags. A language model then synthesizes these pre-computed findings into the narrative report, writing within the constraints the methodology has already established — not guesswork.
The result is an audit tuned to your business, not a generic report.
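For the technically curious, here is a minimal sketch of that flow in Python. Every name in it (`Finding`, `run_checks`, `draft_report`) is hypothetical and the 15-conversion threshold is illustrative; the point is the order of operations: deterministic checks first, narrative last.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One structured piece of evidence from the deterministic layer."""
    metric: str       # e.g. "conversions" or "CPA"
    observation: str  # what the statistical/rule-based pass found
    confident: bool   # whether the sample size cleared a threshold

def run_checks(campaign_rows):
    """Hypothetical stand-in for the statistical and rule-based pass."""
    findings = []
    total_conversions = sum(row["conversions"] for row in campaign_rows)
    findings.append(Finding(
        metric="conversions",
        observation=f"{total_conversions} conversions in window",
        confident=total_conversions >= 15,  # illustrative threshold
    ))
    # ...many more checks (spend, CPA, search terms, Quality Score)...
    return findings

def draft_report(findings):
    """Narrative step: in production a language model writes the prose,
    but only from these pre-computed findings and their flags."""
    return "\n".join(
        f"- {f.metric}: {f.observation}"
        + ("" if f.confident else " (below confidence threshold)")
        for f in findings
    )

print(draft_report(run_checks([{"conversions": 9}, {"conversions": 4}])))
```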
Available Timeframes
Last 7 Days and Custom 14-Day — Early Warning Checks
Short windows serve one purpose: monitoring recent changes. If you adjusted a bid strategy, launched new ad copy, or paused a campaign, a 7- or 14-day audit tells you whether the change is moving metrics in the right direction — or breaking something.
Google documents the Smart Bidding learning period as 7–14 days after a significant change. A 7-day check captures the initial response. A 14-day window covers two full weekly cycles, which removes day-of-week bias and spans the entire learning period.
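To see why complete cycles matter, consider a small illustration with made-up daily CPA readings. A window that starts mid-week over-weights weekends; two full weeks give every weekday equal representation.

```python
# Made-up daily CPA readings; weekends run hotter than weekdays.
week = [("Mon", 22.0), ("Tue", 21.0), ("Wed", 20.0), ("Thu", 21.5),
        ("Fri", 23.0), ("Sat", 35.0), ("Sun", 38.0)]
days = week * 2  # two full weekly cycles = 14 days

full_cycles = sum(cpa for _, cpa in days) / len(days)

# A 9-day span starting Saturday contains four weekend days:
partial = days[5:]
partial_window = sum(cpa for _, cpa in partial) / len(partial)

print(f"Two full weeks:        {full_cycles:.2f}")     # ~25.79
print(f"Sat-to-Sun 9-day span: {partial_window:.2f}")  # ~28.17, weekend-skewed
```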
What short windows can tell you:
- A sudden CPA spike or conversion drop after a change — early signal to investigate
- Whether budget pacing shifted after a bid strategy switch
- If a new ad variant is getting impressions and clicks at expected rates
What short windows cannot tell you:
- Whether a trend is real or just noise — trend detection requires multiple audit cycles, not a longer single window
- Whether a bid strategy change is “working” — Google and Microsoft both recommend waiting at least 6 weeks before judging automated bidding performance
- Profitability conclusions — conversion samples this small (often under 15) fall below the confidence threshold for COGS-adjusted analysis
Think of it like checking your temperature the day after starting a new medication. A fever tells you something needs attention. A normal reading tells you nothing went wrong. Neither tells you whether the medication is working — that takes weeks.
When to use: Run a 7-day audit within the first week of a significant account change. If something looks off, investigate immediately. If metrics look stable, wait for your next 30- or 90-day audit for the full picture.
Last 30 Days — Monthly Check-in (Recommended for active accounts)
Your routine health check. A 30-day audit catches:
- Wasted search terms draining budget without conversions
- Attribution gaps — how platform-reported conversions compare to actual orders
- COGS-adjusted profitability — which campaigns earn real profit after product costs, not just revenue
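That last check matters because revenue metrics can flatter a money-losing campaign. A minimal sketch of the arithmetic, with made-up numbers:

```python
# COGS-adjusted profit with made-up numbers: a positive ROAS
# does not guarantee profit once product costs are counted.
campaigns = [
    # (name, ad spend, revenue, COGS as a share of revenue)
    ("Brand",       1_000, 5_000, 0.40),
    ("Prospecting", 3_000, 6_000, 0.60),
]

for name, spend, revenue, cogs_rate in campaigns:
    roas = revenue / spend
    profit = revenue * (1 - cogs_rate) - spend  # gross margin minus ad cost
    print(f"{name}: ROAS {roas:.1f}x, profit after COGS {profit:+,.0f}")

# Brand:       ROAS 5.0x, profit after COGS +2,000
# Prospecting: ROAS 2.0x, profit after COGS -600  <- looks fine by ROAS, isn't
```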
For most active accounts, monthly 30-day audits are the recommended default. Here is why: Catalyst Audit℠ analysis compounds with each audit cycle. The history gate (the mechanism that unlocks deeper analysis as your audit history grows) advances from baseline to full analysis capability based on how many audits you have run, not on how long each individual window is. Monthly cadence reaches full analysis depth (Bayesian projections, control charts, budget impact modeling) in 6 months. Quarterly 90-day audits reach the same depth in 18 months.
A single 30-day audit gives you one data point. Six monthly audits give you a trend, a calibrated baseline, and statistical models that sharpen with each cycle.
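To make the count-based gate concrete, here is a toy illustration. The tier names and the 3-audit cutoff are assumptions, not Trellis’s actual schedule; only the 6-audit figure comes from the cadence math above.

```python
def analysis_depth(audits_run: int) -> str:
    """Illustrative history gate. Depth advances with the number of
    audits completed, not with the length of any single window.
    The intermediate tier and its cutoff are assumptions; full depth
    at 6 audits matches the cadence math above."""
    if audits_run >= 6:
        return "full: Bayesian projections, control charts, budget modeling"
    if audits_run >= 3:
        return "intermediate (assumed tier)"
    return "baseline"

# Monthly cadence: 6 audits in 6 months. Quarterly: 6 audits in 18 months.
for n in (1, 3, 6):
    print(f"{n} audits -> {analysis_depth(n)}")
```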
Last 90 Days — Quarterly Deep Dive
A 90-day window is the right choice in two specific situations: low-volume accounts that need to accumulate conversion data, and comprehensive reviews timed to seasonal boundaries. It is not the recommended default for active accounts running regular audits.
- Statistical confidence for low-volume accounts. Google’s documentation recommends 30 conversions per month for Target CPA and 50 for Target ROAS. Microsoft Ads requires 30 conversions in 30 days for automated bidding to optimize reliably. If your account generates fewer than 15 conversions per month, a 30-day window may fall below the threshold for confident analysis. A 90-day window accumulates enough conversion data to cross that bar.
- Bid strategy evaluation. Google recommends at least 6 weeks to evaluate a Smart Bidding change. A 90-day window captures the full evaluation period plus stabilization — enough to tell you whether that switch from Manual CPC to Target ROAS actually worked.
- Seasonal transitions. A quarter captures at least one seasonal boundary (winter to spring, summer to fall), revealing how demand shifts affect your campaigns.
- Budget pacing patterns. Three monthly cycles in a single window reveal whether you’re systematically under- or over-spending relative to your targets.
A 90-day Catalyst Audit also pulls in historical trend context and a year-over-year baseline comparison from summarized data outside the audit window. This layered approach gives you the depth of a long lookback without the noise.
Month to Date
Covers the current calendar month so far. Helpful for checking pacing mid-month.
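The pacing arithmetic is simple enough to sketch with hypothetical numbers:

```python
from datetime import date
import calendar

# Hypothetical month-to-date figures.
today = date(2025, 6, 18)
monthly_budget = 9_000
spend_to_date = 6_300

days_in_month = calendar.monthrange(today.year, today.month)[1]  # 30
expected = monthly_budget * today.day / days_in_month            # 5,400
pace = spend_to_date / expected

print(f"Expected spend by day {today.day}: {expected:,.0f}")
print(f"Actual spend: {spend_to_date:,.0f} ({pace:.0%} of pace)")  # 117%
```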
Previous Month
The full prior calendar month. A clean 28–31 day window useful for month-over-month comparison.
Custom Range (up to 90 days)
Pick your own start and end dates, capped at 90 days. Use this when you need a specific window — for example, isolating the four weeks before and after a campaign restructure, or measuring performance during a holiday promotion.
Why 90 Days Is the Cap
You might wonder: if 90 days is good, wouldn’t 180 or 365 be better?
Not necessarily. Here’s why.
More data doesn’t always produce better analysis. When you pour too much information into a single analysis pass, critical findings get buried under volume. Curated, well-scoped data produces higher-quality output than an exhaustive dataset. It’s the same principle behind a focused lab panel versus ordering every test in the catalog: precision beats volume.
Old search term data adds noise, not clarity. Search queries from nine months ago reflect a different competitive landscape, different seasonal demand, and potentially different campaign structures. Including them dilutes the findings that actually drive your next decision.
Mixing seasons distorts your averages. Combining July performance with January performance in the same analysis produces misleading numbers: averaging a 4.0x July ROAS against a 2.0x January ROAS yields 3.0x, a figure that describes neither month. July and January represent fundamentally different demand environments for most businesses, and blending them tells you nothing useful about either period.
Trellis already delivers long-range context without requiring a long date range. Every Catalyst Audit automatically layers in three additional data sources beyond the raw performance window:
- Historical trend summaries spanning up to 180 days from your data warehouse. These show the trajectory of key metrics (CPA, ROAS, conversion volume) month over month — so even a 90-day audit reveals whether your account has been improving or declining over the past half-year.
- Year-over-year baseline comparison using statistical models that calculate the probability that current performance has meaningfully changed from the same period last year (one way to frame that calculation is sketched after this list).
- Changelog context that tracks what changed in your account — bid strategy switches, budget adjustments, paused campaigns — and when those changes happened. This connects performance shifts to their likely causes.
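One common way to frame such a probability (not necessarily the exact model behind the Catalyst Audit baseline) is a Bayesian comparison of conversion rates across the two periods:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clicks and conversions for the same 30 days, a year apart.
clicks_last, conv_last = 4_000, 120   # last year: 3.0% conversion rate
clicks_now,  conv_now  = 4_200, 105   # this year: 2.5%

# Beta posterior over each period's conversion rate (uniform prior).
last = rng.beta(1 + conv_last, 1 + clicks_last - conv_last, 100_000)
now  = rng.beta(1 + conv_now,  1 + clicks_now  - conv_now,  100_000)

print(f"P(rate declined year over year) = {(now < last).mean():.0%}")
```

The closer that probability sits to 50%, the weaker the evidence of a real change.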
This layered architecture means a 90-day audit already delivers roughly 80% of the insight a 365-day raw data dump would provide — with significantly higher analytical quality.
A Note on Low-Volume Accounts
If your campaigns generate fewer than about 15 conversions per month, the 90-day window becomes more important, not less. Here’s why:
Both Google and Microsoft define practical minimums for their automated bidding to work:
| Platform | Bidding Strategy | Minimum Conversions (per 30 days) |
|---|---|---|
| Google Ads | Maximize Conversions | 15–20 |
| Google Ads | Target CPA | 30 |
| Google Ads | Target ROAS | 50 |
| Microsoft Ads | Automated bidding | 30 |
A campaign generating 10 conversions per month has only 10 data points in a 30-day audit — too few for confident analysis. That same campaign accumulates 30 conversions over 90 days, crossing the threshold where Trellis can apply statistical models with meaningful confidence. For low-volume accounts, the quarterly 90-day audit is often the only window that produces actionable insight.
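That window-selection logic can be sketched in a few lines. The helper below is hypothetical, but the 30-conversion floor is the documented platform figure from the table above.

```python
def recommend_window(monthly_conversions: int, minimum: int = 30) -> int:
    """Hypothetical helper: shortest window (in days, up to the 90-day
    cap) expected to accumulate at least `minimum` conversions.
    30 per 30 days is the documented floor for Google Target CPA and
    Microsoft automated bidding."""
    for days in (30, 60, 90):  # 60 would be a custom range
        if monthly_conversions * days / 30 >= minimum:
            return days
    return 90  # the cap: longer raw windows add noise, not clarity

print(recommend_window(40))  # 30 -- monthly audits have enough data
print(recommend_window(10))  # 90 -- needs a full quarter to reach 30
```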
Sources: Google Ads Help: Smart Bidding Learning Period, Google Ads Help: Smart Bidding with Shopping/Performance Max, Microsoft Learn: Budget and Bid Strategies
Choosing Your Timeframe: A Quick Guide
| If you need… | Choose | Why |
|---|---|---|
| Early read after a change | Last 7 Days or Custom 14-Day | Catches breakage, confirms nothing went wrong |
| A routine health check (recommended default) | Last 30 Days | Catches waste, confirms profitability, builds history gate with each cycle |
| Low-volume account analysis or seasonal review | Last 90 Days | Accumulates enough conversions for accounts below the monthly minimum; captures seasonal transitions |
| Mid-month pacing check | Month to Date | See where spend and conversions stand |
| Clean monthly comparison | Previous Month | Full calendar month, no partial data |
| A specific event window | Custom (up to 90 days) | Isolate a promotion, launch, or restructure |
Best Practices
- Build your routine around monthly 30-day audits. This is how Catalyst Audit℠ compounds value. The statistical models sharpen, the history gate advances, and baseline comparisons gain precision with each cycle. A monthly cadence reaches full analysis depth in 6 months.
- Run a 7-day check after any significant account change — bid strategy switches, budget shifts, or campaign restructures. Treat it as a smoke test, not a verdict.
- Use 90-day windows for low-volume accounts or seasonal reviews. When your account generates fewer than 15 conversions per month, extend to 90 days to accumulate enough data for confident analysis. Also appropriate when you need to capture a seasonal transition in a single window.
- Use custom ranges when you need to isolate a specific period, like a holiday promotion or the weeks following a major campaign change.
- Don’t chase longer windows for their own sake. The Catalyst Audit’s layered context — trend summaries, baselines, and changelogs — already provides the long-range perspective. Your selected timeframe controls the detailed, campaign-level analysis. The history gate advances by audit count, not by days in the window.
What’s Next
- Testing Timelines and Evaluation Windows — how long each type of account change needs before you can draw meaningful conclusions
- Reading Your Audit Report — how to read the analysis and act on recommendations
- Business Context and Your Economics — how your margins shape every recommendation