
Reading Your Audit Report

How to read data labels, confidence tiers, quality scores, and recommendations in your Catalyst Audit report.


A Catalyst Audit℠ report is designed to be read, not decoded. Every number is labeled so you know where it came from, every recommendation includes the case against it, and a quality score tells you how much confidence to place in the overall analysis.

The Short Version

  • Every data point carries a label: [FACT] for measured data, [PROJECTED] for estimates, [INFERRED] for pattern recognition, or [INSUFFICIENT DATA] when the sample is too small.
  • Confidence tiers gate what Catalyst is willing to recommend based on your conversion volume. More data means tighter estimates.
  • A quality score (0-100) rates the audit itself across seven dimensions, from data accuracy to recommendation quality.
  • Attribution verification and product cost data (COGS) sharpen the analysis — but Catalyst still works without them and tells you when gaps limit its conclusions.
  • Start with the highest-risk recommendations. Use the conservative scenario for projections. Check monitoring thresholds before acting.

What the Report Contains

Every audit produces a narrative analysis organized by topic — findings, context, and recommendations covering bid strategy, budget allocation, keyword performance, and conversion tracking. Each section leads with the data, then interprets it.

Recommendations are pulled out separately so you can scan them without reading the full narrative. Each recommendation includes the evidence basis, a risk score, a counter-argument, and a monitoring plan.

A quality score appears at the top of the report and reflects how much confidence to place in the analysis as a whole.

Data Labels: Know What You’re Looking At

Every number in a Catalyst Audit carries a label that tells you how much weight to give it.

| Label | What It Means | Example |
| --- | --- | --- |
| [FACT] | Measured directly from your ad platform or order data. No estimation involved. | "Your CPA was $18.40 over the last 30 days." |
| [PROJECTED] | Calculated from facts using stated assumptions. The math is shown. | "Eliminating $120 in wasted spend could recover 3-5 conversions per month." |
| [INFERRED] | Derived from your data combined with known industry behavior. Always given as a range. | "Quality Score improvement of ~15-25% based on landing page alignment." |
| [INSUFFICIENT DATA] | Not enough information to draw a reliable conclusion. | "Fewer than 15 conversions — monitor before optimizing." |

A good rule of thumb: act on [FACT]-tagged findings first, verify the assumptions behind [PROJECTED] estimates, and treat [INFERRED] insights as areas to monitor. When you see [INSUFFICIENT DATA], the recommendation is always the same — wait, collect more data, then revisit.
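As a mental model, a labeled finding is just the label paired with the statement it qualifies. Here is a minimal Python sketch of that structure (the class and field names are illustrative, not Catalyst's actual schema):

```python
from dataclasses import dataclass
from enum import Enum

class DataLabel(Enum):
    FACT = "[FACT]"
    PROJECTED = "[PROJECTED]"
    INFERRED = "[INFERRED]"
    INSUFFICIENT_DATA = "[INSUFFICIENT DATA]"

@dataclass
class Finding:
    label: DataLabel
    statement: str

    def render(self) -> str:
        # Prefix the statement with its label, as findings appear in the report.
        return f"{self.label.value} {self.statement}"

finding = Finding(DataLabel.FACT, "Your CPA was $18.40 over the last 30 days.")
print(finding.render())  # [FACT] Your CPA was $18.40 over the last 30 days.
```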

Confidence Tiers

Catalyst gates its recommendations based on how much conversion data is available. A 20% CPA shift means something different when it’s based on 200 conversions versus 12.

| Tier | Conversions in Period | What It Means | What You Should Do |
| --- | --- | --- | --- |
| Very High | 200+ | Small changes are detectable and reliable | Act with confidence |
| High | 100-199 | Changes of ~20% or more are statistically meaningful | Act on clear signals |
| Medium | 50-99 | Only larger shifts (~30%+) are distinguishable from noise | Act with a monitoring plan |
| Low | 15-49 | Only very large swings are meaningful | Monitor only — don't optimize yet |
| Insufficient | Fewer than 15 | Too few data points for reliable analysis | Gather more data before deciding |

These tiers directly affect what Catalyst recommends. A campaign with 150 conversions might receive a specific bid adjustment recommendation. The same percentage shift on a campaign with 12 conversions gets flagged as [INSUFFICIENT DATA] with a monitoring plan instead.

This gating exists because reacting to noise is one of the most common — and most expensive — mistakes in ad management.
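Because the tiers are plain conversion-count cutoffs, the gating can be pictured as a small lookup. This is a hypothetical sketch of that logic, not Catalyst's implementation:

```python
def confidence_tier(conversions: int) -> str:
    """Map conversion volume in the analysis period to a confidence tier,
    using the thresholds from the table above."""
    if conversions >= 200:
        return "Very High"
    if conversions >= 100:
        return "High"
    if conversions >= 50:
        return "Medium"
    if conversions >= 15:
        return "Low"
    return "Insufficient"

print(confidence_tier(150))  # High -> eligible for a specific recommendation
print(confidence_tier(12))   # Insufficient -> flagged with a monitoring plan
```

The two calls mirror the 150-conversion and 12-conversion campaigns described above.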

Quality Score

Every audit receives a composite quality score from 0 to 100, reflecting the depth and reliability of the analysis. The score is built from seven dimensions, each weighted by its importance to the overall audit:

| Dimension | Weight | What It Measures |
| --- | --- | --- |
| Data Accuracy | 25% | Conversion counts verified against actual order data |
| Analytical Depth | 20% | Whether findings reflect causal analysis, not surface-level reporting |
| Recommendation Quality | 20% | Recommendations are executable, evidence-backed, and risk-scored |
| Structure | 10% | Report is organized and flows logically from data to conclusions |
| Estimation Compliance | 10% | Proper use of data labels and confidence tiers throughout |
| Cross-Validation | 10% | Findings checked across multiple data sources |
| Verification Checks | 5% | Automated checks that confirm data consistency across sources |

A score of 70 or above indicates a reliable audit with actionable recommendations. Scores below 70 typically reflect data gaps — low conversion volume, missing order data, or unverified attribution — rather than problems with the analysis itself.

The score isn’t a grade on your advertising performance. A well-run campaign can produce a low-scoring audit if the data feeding it is incomplete.
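Since the seven weights sum to 100%, the composite is a weighted average of per-dimension scores. A sketch of that arithmetic, with illustrative dimension keys:

```python
# Weights from the table above; they sum to 1.0.
WEIGHTS = {
    "data_accuracy": 0.25,
    "analytical_depth": 0.20,
    "recommendation_quality": 0.20,
    "structure": 0.10,
    "estimation_compliance": 0.10,
    "cross_validation": 0.10,
    "verification_checks": 0.05,
}

def quality_score(dimension_scores: dict) -> float:
    """Weighted composite of per-dimension scores (each on a 0-100 scale)."""
    return round(sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS), 1)

# An audit scoring 80 on every dimension composes to 80 overall.
print(quality_score({d: 80 for d in WEIGHTS}))  # 80.0
```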

How Attribution and COGS Sharpen the Analysis

Two data sources improve every audit when available:

Attribution verification compares what your ad platforms report as conversions against what your store actually processed as orders. When tracking accuracy is 80% or above, the audit proceeds with full confidence. Below that threshold, Catalyst adjusts its analysis and may block certain recommendations until tracking is fixed.

Product cost data (COGS) shifts the analysis from revenue to real profitability. A campaign generating $5,000 in revenue at 3x ROAS looks strong — until you factor in that it’s primarily driving sales of low-margin products at 25% gross margin. With cost data, Catalyst catches this and reclassifies the campaign accordingly.

Neither is required, but both sharpen the picture. When they’re missing, the audit tells you what it can’t see.
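Both checks come down to simple arithmetic. The sketch below shows one plausible way to express them; the agreement formula and variable names are assumptions, and the COGS figures reproduce the worked example above:

```python
def tracking_accuracy(platform_conversions: float, store_orders: float) -> float:
    """Agreement between platform-reported conversions and actual orders.
    One simple formulation; the real verification may be more involved."""
    if max(platform_conversions, store_orders) == 0:
        return 1.0
    return min(platform_conversions, store_orders) / max(platform_conversions, store_orders)

print(tracking_accuracy(90, 100))  # 0.9 -> at or above the 80% threshold

# Margin-adjusted view of the $5,000-revenue, 3x-ROAS, 25%-margin campaign:
revenue, roas, gross_margin = 5_000.0, 3.0, 0.25
spend = revenue / roas                   # ~$1,666.67 of ad spend
gross_profit = revenue * gross_margin    # $1,250.00 before ad costs
profit_after_ads = gross_profit - spend  # ~-$416.67: unprofitable despite 3x ROAS
print(round(profit_after_ads, 2))
```

The arithmetic shows why the reclassification matters: a 3x ROAS still loses money once a 25% gross margin is applied.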

Acting on Recommendations

Each recommendation in a Catalyst Audit includes everything you need to decide whether to act:

  • Evidence basis — the specific data points supporting the recommendation
  • Risk score (1-5) — how difficult to reverse and what could go wrong
  • Counter-argument — the strongest reason not to follow the recommendation
  • Monitoring plan — the metric, threshold, and date to check

Start with the highest-risk recommendations — they represent the biggest potential impact and the most urgent decisions. For any [PROJECTED] estimates, use the conservative scenario rather than the optimistic one. And before acting on any recommendation, check whether recent account changes are still in their evaluation window. Layering new changes on top of unsettled ones makes it harder to know what’s working.
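Those four components give every recommendation a predictable shape, which also makes the highest-risk-first reading order easy to picture. A hypothetical Python sketch; field names and example values are illustrative:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Recommendation:
    action: str
    evidence_basis: list     # specific data points supporting the action
    risk_score: int          # 1-5; higher = harder to reverse
    counter_argument: str    # strongest reason not to act
    monitor_metric: str
    monitor_threshold: str
    monitor_date: date

def triage(recs):
    """Order recommendations highest-risk first, the suggested reading order."""
    return sorted(recs, key=lambda r: r.risk_score, reverse=True)

recs = [
    Recommendation("Pause zero-converting keyword", ["$120 spend, 0 conversions"],
                   2, "May assist conversions elsewhere", "CPA", "< $25", date(2025, 7, 1)),
    Recommendation("Switch bid strategy", ["CPA up 30% over 60 days"],
                   4, "Change restarts the learning phase", "CPA", "< $20", date(2025, 7, 15)),
]
print(triage(recs)[0].action)  # Switch bid strategy
```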

What Catalyst Doesn’t Do

  • Catalyst does not guarantee outcomes. Projections include ranges and stated assumptions. No audit claims a specific result will happen.
  • Catalyst does not make recommendations when conversion data is insufficient. You’ll see [INSUFFICIENT DATA] labels and monitoring plans instead.
  • Catalyst does not extrapolate trends into the future. It reports what the data shows and where the patterns point — not what next quarter’s revenue will be.

What’s Next