When and why
Testing Timelines and Evaluation Windows
How long each type of account change needs before you can draw meaningful conclusions — backed by Google and Microsoft documentation.
You changed something in your ad account. Now what? The hardest part of paid advertising isn’t making changes — it’s knowing how long to wait before deciding whether those changes worked. Catalyst Audit℠ respects the documented evaluation periods from Google Ads and Microsoft Ads so its recommendations account for whether a change has had enough time to settle.
The Short Version
- Bid strategy changes need 4-6 weeks on Google (50 conversions or 3 conversion cycles) and 2-4 weeks on Microsoft (30 conversions minimum) before you can draw meaningful conclusions.
- Budget changes reset the platform’s pacing cycle. Wait 7-14 days before evaluating.
- New Performance Max campaigns need at least 6 weeks on Google and 2-4 weeks on Microsoft to ramp up.
- Ad copy tests and new keyword additions need 14-30 days to accumulate enough data for reliable comparison.
- Catalyst tracks when each change was made and flags recommendations that would layer new changes on top of ones that haven’t settled yet.
The Evaluation Timeline Reference
This table consolidates the documented minimum evaluation periods from Google Ads and Microsoft Ads. These are platform-published guidelines, not rules of thumb.
| Change Type | Google Ads Minimum | Microsoft Ads Minimum | Why This Long |
|---|---|---|---|
| Bid strategy change (Smart Bidding) | 4-6 weeks | 2-4 weeks | Google requires ~50 conversion events or 3 conversion cycles to calibrate. Microsoft requires 30 conversions per 30-day period. |
| Budget change | 7-14 days | 7-14 days | Google’s monthly spending limit uses a 30.4-day pacing cycle. Mid-month budget changes recalculate remaining spend for the period. |
| New keyword or ad group | 14-30 days | 14-30 days | Adding keywords triggers a “composition change” in the bid strategy, restarting the learning process. The new traffic mix needs time to stabilize. |
| Ad copy A/B test | 14-30 days (4-6 weeks for formal experiments) | 14-30 days | Google’s Experiments framework recommends 4-6 weeks and discards the first 7 days as ramp-up. Informal tests need at least 14 days of data. |
| Performance Max campaign | 6 weeks minimum | 2-4 weeks (or 2-3 conversion cycles) | Google’s guidance is explicit: run PMax for at least 6 weeks before evaluating. Microsoft documents 2-4 weeks with 3-4 days to first impressions. |
| Campaign structure change | 30-60 days | 30-60 days | Restructuring campaigns resets historical performance signals. You need at least one full purchase cycle to establish a new baseline. |
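As a rough sketch, the reference table above can be encoded as a lookup. The dictionary keys and the choice of conservative (upper-bound) day counts are illustrative, not a published API:

```python
from datetime import date

# Minimum evaluation windows in days, read from the table above.
# Where the table gives a range, the conservative (upper) bound is used.
MIN_EVAL_DAYS = {
    "bid_strategy":  {"google": 42, "microsoft": 28},   # 4-6 wk / 2-4 wk
    "budget":        {"google": 14, "microsoft": 14},
    "new_keyword":   {"google": 30, "microsoft": 30},
    "ad_copy_test":  {"google": 30, "microsoft": 30},
    "pmax_campaign": {"google": 42, "microsoft": 28},
    "restructure":   {"google": 60, "microsoft": 60},
}

def days_remaining(change_type: str, platform: str,
                   changed_on: date, today: date) -> int:
    """Days left before the change can be evaluated (0 = ready)."""
    window = MIN_EVAL_DAYS[change_type][platform]
    elapsed = (today - changed_on).days
    return max(0, window - elapsed)

# A Google bid strategy changed on April 1 is not ready on April 10:
print(days_remaining("bid_strategy", "google",
                     date(2026, 4, 1), date(2026, 4, 10)))  # 33
```

The point of the conservative bound: it is cheaper to wait a few extra days than to act on data from a strategy that is still calibrating.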
The cost of cutting evaluation short. Evaluating a bid strategy change after 10 days instead of 45 is like checking whether a medication works after two doses. The platform is still learning your conversion patterns, your audience mix, and your competitive landscape. Early data is noisy — it reflects the calibration process, not the steady-state outcome.
What Triggers the Learning Period
Not every change triggers a full learning reset. The impact depends on what you changed and how much it shifts the data the bidding strategy relies on.
| Trigger | What Resets | Impact |
|---|---|---|
| New bid strategy (e.g., Manual CPC to Target CPA) | Full recalibration | High — 50 conversions or 3 conversion cycles to re-learn |
| Bid target change (e.g., Target CPA from $25 to $30) | Target recalibration | Moderate — shorter learning, but metrics will fluctuate |
| Budget change > 20% of daily budget | Pacing recalibration | Moderate — affects delivery patterns and auction participation |
| Adding or removing keywords | Composition change | Moderate — traffic mix shifts, bidding adjusts to new query profile |
| Conversion action change (add, remove, or change primary) | Signal recalibration | High — the metric the strategy optimizes toward has changed |
| Campaign status change (pause/enable) | Delivery reset | Low to moderate — depends on how long the campaign was paused |
Google Ads surfaces this directly in the UI through bid strategy statuses. When a campaign enters learning, you’ll see a “Learning” status on the bid strategy. During this window, key metrics like CPA and ROAS will fluctuate more than usual. This is expected behavior, not a sign of failure.
The Compounding Problem
The most common mistake in account management is layering changes before the previous change has settled. When you change the bid strategy on day 1 and adjust the budget on day 10, you’ve introduced two variables into the same evaluation window. If CPA rises on day 15, you can’t determine whether the bid strategy is still calibrating, the budget change disrupted delivery, or both.
Here’s a concrete example. Suppose you switch a campaign from Manual CPC to Target CPA on April 1. By April 10, CPA looks 30% higher than the prior period. The instinct is to intervene — maybe increase the budget to generate more conversions, or tighten the CPA target to bring costs down. But both actions would reset the learning period, extending the window of unstable performance.
The better approach: note the elevated CPA, verify the bid strategy status shows “Learning,” and wait. Check again at the 14-day mark for early signal, at 30 days for mid-assessment, and at 45 days for a conclusion. That’s three checkpoints, each building on more complete data.
One change at a time. If you need to make multiple changes, sequence them. Structural changes first (negative keywords, keyword pauses), then bid strategy changes once the traffic mix stabilizes, then budget adjustments after the strategy has calibrated. This approach is slower, but it produces data you can actually interpret.
How Catalyst Uses These Timelines
Catalyst Audit tracks when each account change was made through structured change records. Every audit checks these records before generating recommendations.
Stabilization awareness. If a bid strategy was changed 8 days ago, the audit notes it’s still in the learning period. Rather than recommending additional changes, it reports the current learning status and sets a monitoring checkpoint for the next audit.
Change collision detection. When multiple changes overlap in the same evaluation window, the audit flags the collision. “Budget was increased 15% on April 5, and a keyword was paused on April 8 — these changes are still interacting. Isolating impact requires waiting until April 19 at minimum.”
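The collision check reduces to interval overlap: two changes collide when each one's evaluation window starts before the other's ends. A minimal sketch, assuming illustrative per-change settle windows (these values are not Catalyst's actual internals):

```python
from datetime import date, timedelta

# Illustrative settle windows in days per change type (assumed values).
SETTLE_DAYS = {"budget": 14, "keyword_pause": 14, "bid_strategy": 42}

def find_collisions(changes):
    """Return (type_a, type_b, clear_date) for each pair of changes
    whose evaluation windows overlap.

    `changes` is a list of (change_type, date) tuples.
    """
    collisions = []
    for i, (type_a, day_a) in enumerate(changes):
        end_a = day_a + timedelta(days=SETTLE_DAYS[type_a])
        for type_b, day_b in changes[i + 1:]:
            end_b = day_b + timedelta(days=SETTLE_DAYS[type_b])
            # Two windows overlap when each starts before the other ends.
            if day_a < end_b and day_b < end_a:
                collisions.append((type_a, type_b, max(end_a, end_b)))
    return collisions

# The example from the text: budget raised April 5, keyword paused April 8.
print(find_collisions([("budget", date(2026, 4, 5)),
                       ("keyword_pause", date(2026, 4, 8))]))
```

With 14-day windows, the budget change clears on April 19 and the keyword pause on April 22, so the pair is reported with the later date as the earliest point at which their effects can be separated.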
Audit cadence alignment. A 14-21 day audit cadence provides natural checkpoints across longer evaluation windows. A bid strategy change that needs 45 days to fully evaluate gets 2-3 audits across that period:
| Audit | Timing | Purpose |
|---|---|---|
| First audit | Day 14 | Early signal — is the learning period progressing normally? Any red flags? |
| Second audit | Day 28-30 | Mid-assessment — conversion volume trends, CPA trajectory, budget pacing |
| Third audit | Day 42-45 | Conclusion — has the strategy stabilized? Compare against pre-change baseline. |
This cadence avoids the twin traps of checking too early (reacting to noise) and checking too late (missing a genuine problem that compounds over weeks).
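The checkpoint schedule above is simple date arithmetic. A sketch, using the day offsets from the table (the function name and return shape are illustrative):

```python
from datetime import date, timedelta

def audit_checkpoints(changed_on: date) -> dict:
    """Map a change date to the three checkpoint dates from the table above."""
    return {
        "early_signal":   changed_on + timedelta(days=14),
        "mid_assessment": changed_on + timedelta(days=30),
        "conclusion":     changed_on + timedelta(days=45),
    }

# Bid strategy switched on April 1:
for name, when in audit_checkpoints(date(2026, 4, 1)).items():
    print(name, when.isoformat())
```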
Platform-Specific Notes
Google Ads
The 30.4-day pacing formula. Google calculates your monthly spending limit as your average daily budget multiplied by 30.4 (the average number of days in a month). On any given day, spend can reach up to 2x your daily budget. Budget changes mid-month recalculate remaining spend for the rest of the period. This is why budget changes need 7-14 days to evaluate — you’re waiting for the pacing to stabilize within the new monthly calculation.
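The arithmetic is worth making concrete. For a hypothetical $50 average daily budget:

```python
# Google's documented pacing math: the monthly spending limit is the
# average daily budget multiplied by 30.4, and any single day may
# spend up to 2x the daily budget.
daily_budget = 50.00

monthly_limit = daily_budget * 30.4   # monthly spending limit
daily_ceiling = daily_budget * 2.0    # maximum spend on any one day

print(f"Monthly limit: ${monthly_limit:.2f}")   # $1520.00
print(f"Daily ceiling: ${daily_ceiling:.2f}")   # $100.00
```

So a $50/day budget caps the month at $1,520 even though individual days can spend up to $100 — the overdelivery on busy days is balanced against quieter ones within the pacing cycle.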
Google’s Experiments framework is the most rigorous option for structured A/B testing. Experiments run for 4-6 weeks, and the first 7 days of data are automatically discarded to account for ramp-up. The default confidence interval is 80%, configurable by the advertiser. Google uses jackknife resampling with two-tailed significance testing.
For Performance Max campaigns specifically, Google recommends allowing 3-4 weeks for asset performance insights to develop before adjusting your creative mix. Frequent changes to budget, bid strategy, or campaign status within the initial 6-week window can reset the learning phase entirely.
Microsoft Ads
The 30-conversion threshold. If a campaign falls below 30 conversions over any 30-day period, automated bidding strategies (Maximize Conversions, Target CPA, Target ROAS) stop actively optimizing bids. This is a hard minimum, not a recommendation. If your campaign consistently falls below this threshold, Microsoft suggests switching to a different strategy. Enhanced CPC has no minimum, making it a practical fallback for lower-volume campaigns.
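Because this is a hard threshold rather than a gradient, the eligibility check is a single comparison. A sketch (the function name is illustrative):

```python
def smart_bidding_active(conversions_last_30_days: int) -> bool:
    """Microsoft's documented hard minimum: automated bid strategies
    stop actively optimizing below 30 conversions in a 30-day period."""
    return conversions_last_30_days >= 30

print(smart_bidding_active(27))  # False -> consider Enhanced CPC instead
print(smart_bidding_active(45))  # True  -> Target CPA / ROAS can optimize
```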
Microsoft’s Performance Max learning period is 2-4 weeks or 2-3 conversion cycles, whichever is longer. First impressions typically appear within 3-4 days. Microsoft specifically warns against large or frequent target changes during the initial learning period and recommends starting with a target that’s slightly less restrictive than your existing campaigns.
Budget changes in Microsoft Ads take effect within approximately one hour. The monthly budget formula differs from Google’s: month-to-date spend plus daily budget multiplied by remaining days in the month. This means budget changes late in the month have less room to affect overall spending.
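The late-month effect follows directly from the formula. A sketch with hypothetical numbers (a $40/day campaign, 5 days left in the month):

```python
# Microsoft's documented monthly calculation:
# month-to-date spend + (daily budget x remaining days in the month).
def microsoft_monthly_budget(month_to_date_spend: float,
                             daily_budget: float,
                             days_left: int) -> float:
    return month_to_date_spend + daily_budget * days_left

# Raising the daily budget from $40 to $60 with only 5 days remaining:
print(microsoft_monthly_budget(1200.00, 40.00, 5))  # 1400.0
print(microsoft_monthly_budget(1200.00, 60.00, 5))  # 1500.0
```

A 50% increase to the daily budget moves the monthly total by only about 7% here, because most of the month's spend is already locked in — which is why late-month budget changes have limited room to affect overall spending.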
What Catalyst Doesn’t Do
- Catalyst does not enforce evaluation timelines. You can make changes whenever you want. The audit advises you on whether enough time has passed to draw conclusions, and flags when it hasn’t — but the decision is yours.
- Catalyst does not predict when the learning period will end for your specific account. Learning duration depends on your conversion volume, conversion cycle length, and the type of change you made. These timelines are documented minimums, not guaranteed endpoints.
- Catalyst is a periodic insight tool, not an experimentation platform. For controlled A/B tests with traffic splitting, use your ad platform’s native experiment features. Catalyst evaluates the outcomes of those experiments within the audit.
Sources
- Duration of the learning period — Google Ads Help — Published: Continuously updated | Verified: 2026-04-19
- Tips on measuring Smart Bidding performance — Google Ads Help — Published: Continuously updated | Verified: 2026-04-19
- About average daily budgets — Google Ads Help — Published: Continuously updated | Verified: 2026-04-19
- Optimization tips for Performance Max campaigns — Google Ads Help — Published: Continuously updated | Verified: 2026-04-19
- About the Experiments page — Google Ads Help — Published: Continuously updated | Verified: 2026-04-19
- About bid strategy statuses — Google Ads Help — Published: Continuously updated | Verified: 2026-04-19
- Budget and Bid Strategies — Microsoft Advertising API — Published: Continuously updated | Verified: 2026-04-19
- How to increase conversions with Performance Max — Microsoft Advertising — Published: 2024-11-01 | Verified: 2026-04-19
- The perfect time to set up Performance Max campaigns — Microsoft Advertising — Published: 2024-10-01 | Verified: 2026-04-19
What’s Next
- Choosing the Right Audit Timeframe — what each date range reveals and why 90 days is the recommended window
- Audit Evidence and Citations — how every recommendation cites its evidence
- Catalyst Audit vs. AI Chatbot Analysis — why structured methodology matters more than conversational flexibility