
How it works

Audit Evidence and Citations

Every recommendation cites its evidence. Here is how Catalyst sources, vets, and applies industry references.


When a Catalyst Audit℠ recommends a change to your campaigns, it doesn’t just say “trust me.” Every recommendation cites its evidence, labels every claim so you know where it came from, and includes the case against the recommendation so you can make an informed decision.

The Short Version

  • Every recommendation must be backed by at least two measured data points from your account. If the data isn’t there, Catalyst says so rather than guessing.
  • Every number in the audit is labeled: [FACT] for measured data, [PROJECTED] for estimates with stated assumptions, [INFERRED] for pattern recognition, and [INSUFFICIENT DATA] when the sample is too small to act on.
  • Each recommendation includes a counter-argument — the strongest reason not to follow it — along with a risk score and a monitoring plan with specific thresholds.
  • Confidence tiers gate what Catalyst is willing to recommend based on how much conversion data is available.
  • References come from ad platform documentation, verified industry research, and your own account’s historical performance.

Data Labels — Know What You’re Looking At

Every number in a Catalyst Audit carries a label that tells you how much weight to give it:

[FACT] — This number came directly from your ad platform or your order data. It’s measured, not estimated. CPCs, conversion counts, spend totals, and revenue figures are facts. You can act on these with confidence.

[PROJECTED] — This is a forward-looking estimate. “Pausing this keyword could save approximately $200/month.” The audit spells out the assumptions behind the projection and uses a conservative adjustment (estimates are typically discounted by 30% to account for real-world conditions that don’t match the model). Projections always show three scenarios: optimistic, expected, and conservative.

[INFERRED] — A pattern Catalyst identified based on your data combined with known industry behavior. These always use ranges (“15-25%”) rather than exact figures because the precision isn’t there for a point estimate. Treat these as hypotheses worth testing.

[INSUFFICIENT DATA] — There isn’t enough information to draw a reliable conclusion. The recommendation is always the same: wait, collect more data, then revisit. Catalyst will never invent a recommendation when the data doesn’t support one.

A good rule of thumb: act on [FACT]-tagged findings first, verify the assumptions behind [PROJECTED] estimates, and treat [INFERRED] insights as areas to monitor.
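
To make the labels concrete, here is a minimal sketch of the taxonomy and the three-scenario projection as code. The names (`DataLabel`, `LabeledMetric`, `project_savings`) are illustrative, and the way the conservative scenario is derived here is an assumption, not Catalyst's actual formula:

```python
from dataclasses import dataclass
from enum import Enum

class DataLabel(Enum):
    FACT = "FACT"                            # measured from platform or order data
    PROJECTED = "PROJECTED"                  # forward-looking estimate
    INFERRED = "INFERRED"                    # pattern recognition; ranges only
    INSUFFICIENT_DATA = "INSUFFICIENT DATA"  # sample too small to act on

@dataclass
class LabeledMetric:
    name: str
    value: str        # kept as text so [INFERRED] ranges like "15-25%" stay ranges
    label: DataLabel

cpa = LabeledMetric("CPA", "$40", DataLabel.FACT)
print(f"{cpa.name}: {cpa.value} [{cpa.label.value}]")  # CPA: $40 [FACT]

def project_savings(raw_estimate: float, discount: float = 0.30) -> dict:
    """Return the three [PROJECTED] scenarios with the 30% conservative discount.

    How the conservative scenario is derived here is an assumption."""
    expected = raw_estimate * (1 - discount)
    return {
        "optimistic": raw_estimate,                 # assumptions fully hold
        "expected": expected,                       # discounted estimate
        "conservative": expected * (1 - discount),  # discounted again (assumed)
    }

print(project_savings(200.0))  # roughly {'optimistic': 200.0, 'expected': 140.0, 'conservative': 98.0}
```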

The Evidence Requirement

Every recommendation in a Catalyst Audit must pass a series of evidence checks before it appears in the report. These checks are built into the methodology, not applied after the fact.

At least two data points. A recommendation citing only one metric can be misleading. “Your CPA is $40” is a fact, but it’s not a recommendation. Catalyst requires at least two measured data points to support any action. For example: “Your CPA is $40 [FACT] and your break-even CPA is $32.50 [FACT], placing this campaign in the marginal profitability tier.”
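
That check is simple enough to sketch directly. `passes_evidence_check` is an illustrative name, not Catalyst's actual code, and the labels are represented here as plain strings:

```python
def passes_evidence_check(data_points: list[tuple[str, str]]) -> bool:
    """data_points: (claim, label) pairs cited by a recommendation."""
    measured = [claim for claim, label in data_points if label == "FACT"]
    return len(measured) >= 2

cited = [("CPA is $40", "FACT"), ("break-even CPA is $32.50", "FACT")]
assert passes_evidence_check(cited)                          # two facts: passes
assert not passes_evidence_check([("CPA is $40", "FACT")])   # one metric: fails
```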

A counter-argument. For every recommendation, Catalyst presents the strongest reason not to follow it. This isn’t hedging — it’s intellectual honesty. If the recommendation is to pause a keyword, the counter-argument might be: “This keyword is in a learning phase after a recent match type change. Pausing too early could prevent the bidding strategy from optimizing.” You decide whether the counter-argument outweighs the recommendation.

A risk score. Each recommendation is scored 1-5 based on how difficult it would be to reverse and what could go wrong:

| Score | Level | What It Means |
| --- | --- | --- |
| 1 | Minimal | Easily reversible within minutes. Adding a negative keyword, minor bid adjustment. |
| 2 | Low | Reversible within 24 hours. Budget changes, match type adjustments. |
| 3 | Moderate | Requires a monitoring plan. Bid strategy changes, campaign restructuring. |
| 4 | High | Cite past precedent, specify safeguards. Multi-campaign changes. |
| 5 | Critical | Comparable to known incidents that caused significant performance drops. |

Recommendations scored 4 or 5 include specific safeguards and past account precedent where available.
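
One way to picture the rubric is as a mapping from score to the extra requirements it triggers. This is an illustrative encoding of the table above, not Catalyst's internal representation:

```python
# Extra requirements each risk score triggers (illustrative encoding).
RISK_REQUIREMENTS = {
    1: [],                                     # minimal: reversible in minutes
    2: [],                                     # low: reversible within 24 hours
    3: ["monitoring plan"],                    # moderate
    4: ["monitoring plan", "safeguards", "past precedent"],   # high
    5: ["monitoring plan", "safeguards", "past precedent",
        "comparison to known incidents"],      # critical
}

assert "safeguards" in RISK_REQUIREMENTS[4]
```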

A monitoring plan. Not “watch your CPA” (vague), but a specific, measurable plan: “If CPA exceeds $35 after 14 days, pause the campaign and revert to the previous bid strategy.” Every monitoring plan includes the metric to track, the threshold that triggers action, the date to evaluate, and the specific action to take if the threshold is breached.
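
The four required parts translate naturally into a small structure. `MonitoringPlan` and the example date are illustrative, using the CPA example above:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MonitoringPlan:
    metric: str        # what to track
    threshold: str     # the condition that triggers action
    evaluate_on: date  # when to check
    action: str        # what to do if the threshold is breached

plan = MonitoringPlan(
    metric="CPA",
    threshold="CPA exceeds $35",
    evaluate_on=date(2025, 7, 14),  # illustrative: 14 days after the change
    action="Pause the campaign and revert to the previous bid strategy.",
)
```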

Executable instructions. “Optimize your keywords” is not a recommendation. “In the Google Ads UI, navigate to Campaign X > Keywords tab > select keywords Y and Z > set bid to $0.90” is. Catalyst Audit recommendations include step-by-step actions you can execute directly.

Where References Come From

Catalyst draws from three categories of reference material:

Your account’s historical performance. Baseline metrics from prior periods, recent account changes and their measured impact, and longitudinal trends all inform the analysis. A recommendation to change a bid strategy carries more weight when the audit can cite how similar changes performed in your account previously.

Ad platform documentation. Google Ads and Microsoft Ads publish guidance on bid strategy behavior, learning periods, conversion thresholds, and match type dynamics. Catalyst references this documentation when evaluating whether a recommendation aligns with how the platform actually works. For example, automated bidding learning periods vary by strategy type — the audit respects these documented constraints.

Verified industry research. Sources are logged with their publication date, the date they were referenced, and a summary of how they informed the analysis. Each source must meet one of three criteria: it contradicted an assumption, it helped choose between competing approaches, or it revealed a platform limitation that constrained the recommendation. Routine lookups and generic best-practice articles don’t qualify.
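
A sketch of what a reference-log entry could look like under those rules. `SourceLogEntry` and `QUALIFYING_REASONS` are illustrative names, and the example values are placeholders; the three criteria come from the paragraph above:

```python
from dataclasses import dataclass
from datetime import date

QUALIFYING_REASONS = {
    "contradicted an assumption",
    "chose between competing approaches",
    "revealed a platform limitation",
}

@dataclass
class SourceLogEntry:
    title: str
    published: date    # publication date
    referenced: date   # date the source was consulted
    reason: str        # must be one of QUALIFYING_REASONS
    summary: str       # how it informed the analysis

    def __post_init__(self):
        if self.reason not in QUALIFYING_REASONS:
            raise ValueError("Routine lookups and generic articles don't qualify.")

entry = SourceLogEntry(
    title="(placeholder) platform doc on bid strategy learning periods",
    published=date(2024, 1, 1),    # illustrative dates
    referenced=date(2025, 3, 10),
    reason="revealed a platform limitation",
    summary="Constrained how soon after a bid strategy change performance is judged.",
)
```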

Confidence Tiers

Catalyst gates its recommendations based on how much conversion data is available. More data means higher confidence in patterns and projections.

| Tier | Conversions in Period | What Catalyst Will Do |
| --- | --- | --- |
| Very High | 200+ | Full analysis with statistical projections and narrow confidence intervals |
| High | 100-199 | Full analysis with probability statements and wider intervals |
| Medium | 50-99 | Analysis with conservative projections (optimistic/expected/conservative scenarios) |
| Low | 15-49 | Monitor-only recommendations for extreme signals (>50% performance gaps) |
| Insufficient | Under 15 | No recommendations. Data summary only. Everything tagged [INSUFFICIENT DATA]. |

This gating exists because small sample sizes produce unreliable patterns. A campaign with 8 conversions might show a 50% CPA increase, but that could be normal variance rather than a real problem. Catalyst won’t tell you to act on noise.
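
The gating amounts to a simple threshold function. This sketch mirrors the table above; the function name and return strings are illustrative:

```python
def confidence_tier(conversions: int) -> str:
    if conversions >= 200:
        return "Very High"    # full analysis, narrow confidence intervals
    if conversions >= 100:
        return "High"         # full analysis, wider intervals
    if conversions >= 50:
        return "Medium"       # conservative three-scenario projections
    if conversions >= 15:
        return "Low"          # monitor-only, extreme signals (>50% gaps)
    return "Insufficient"     # data summary only; everything [INSUFFICIENT DATA]

assert confidence_tier(8) == "Insufficient"  # the 8-conversion campaign above
```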

What Catalyst Won’t Do

Catalyst does not make guaranteed claims. You will never see “this change will improve ROAS to 3.5x” or “budget increase will generate $5,000 more revenue.” Guaranteed outcomes violate the estimation methodology.

Catalyst does not extrapolate trends into the future. “Based on this trajectory, Q2 revenue will be $50,000” is the kind of statement that sounds precise but is built on assumptions that rarely hold. The audit sticks to what the data actually shows.

Catalyst does not recommend optimizations when conversion tracking accuracy is below 80%. If the data feeding the analysis isn’t reliable, the analysis itself can’t be trusted. The audit flags tracking issues and recommends fixing them before making any performance-based changes.
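
That gate is a one-line guard in spirit. This sketch assumes tracking accuracy is expressed as a 0.0-1.0 fraction; the function name and the metric's representation are assumptions:

```python
def can_recommend_optimizations(tracking_accuracy: float) -> bool:
    """tracking_accuracy: assumed share of conversions captured, 0.0-1.0."""
    return tracking_accuracy >= 0.80

if not can_recommend_optimizations(0.72):
    print("Flag tracking issues and fix them before any performance-based changes.")
```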
