

How the Three Audit Types Work Together

Account Sync profiles your account, a Catalyst Audit analyzes performance, and an Investigation answers a specific question. Here is how they work and compound.


The short version

  • Trellis runs three distinct audit types, each with a different purpose: Account Sync profiles your account structure, a Catalyst Audit℠ analyzes performance against your business economics, and an Investigation answers a specific question about a single campaign or ad group.
  • Account Sync is automatic. It runs when you connect a platform and captures your account’s structure so every subsequent audit is calibrated to your context.
  • A Catalyst Audit℠ is the full profitability analysis — your margins, your baseline, the changelog, your statistical evidence — assembled into a scored report with phased recommendations.
  • An Investigation takes a finding from a Catalyst Audit℠ and examines one entity with the complete, uncompressed data the broader audit had to summarize.
  • The three types are accretive. Each builds on what the previous one established — the Account Sync profile feeds into Catalyst, and Catalyst findings feed into Investigations. The datamart accumulates context with every cycle.

Three audits, one system

Every advertiser’s relationship with Trellis follows the same sequence: connect, profile, analyze, investigate. The three audit types map to this progression.

| | Account Sync | Catalyst Audit℠ | Investigation |
| --- | --- | --- | --- |
| Purpose | Profile your account structure | Analyze performance against your economics | Answer one question about one entity |
| Trigger | Automatic on platform connect | You request it (or schedule it) | You select a question from a completed Catalyst Audit℠ |
| Data window | 60 days (fixed) | Your chosen date range | Parent Catalyst Audit℠ data |
| Data scope | Account structure and keyword summary | Full account, summarized | Single entity, complete and uncompressed |
| Output | Account profile (baseline, keyword summary) | Scored report with PDF, recommendations, and email notification | Mini-report (500–1,500 words) with finding, evidence, and recommendation |

The system is designed so that each audit type produces something the next one consumes. Account Sync creates the profile that calibrates a Catalyst Audit℠. A Catalyst Audit℠ produces the findings that an Investigation examines in depth.

Account Sync — the handshake

When you connect a Google Ads or Microsoft Ads account, Trellis runs an Account Sync automatically. There is no cost and no action required on your part.

What it does

Account Sync pulls 60 days of data and profiles your account’s structure — campaign count, keyword coverage, bid strategies in use, spend levels, and conversion volume. Trellis uses this profile to adapt analysis depth on every subsequent audit. An account with three campaigns and a single bid strategy gets a different level of detail than an account with twenty campaigns across multiple strategies.

What it creates

Account Sync produces three things that persist across every future audit (sketched in code after the list):

  1. Account profile — your campaign and keyword counts, bid strategies in use, performance fingerprint, and an executive summary of account structure.
  2. Performance baseline — a snapshot of your key metrics (spend, conversions, CPA, ROAS, conversion rate) that Catalyst Audits℠ compare against.
  3. Campaign configuration snapshot — the settings for each campaign at the time of sync (bid strategy, budget, status). When settings change later, Trellis detects the difference and records it in the changelog.
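
For concreteness, here is a minimal sketch of the three artifacts as data structures. The field names are illustrative, not Trellis's actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AccountProfile:
    """Structure profile captured at sync time."""
    campaign_count: int
    keyword_count: int
    bid_strategies: list[str]
    executive_summary: str

@dataclass
class PerformanceBaseline:
    """Key-metric snapshot that later Catalyst Audits compare against."""
    window_start: date
    window_end: date
    spend: float
    conversions: float
    cpa: float
    roas: float
    conversion_rate: float

@dataclass
class CampaignSnapshot:
    """Per-campaign settings at sync time, diffed later to feed the changelog."""
    campaign_id: str
    bid_strategy: str
    daily_budget: float
    status: str
```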

Validation

The profile goes through schema validation after the analysis completes. If any required fields are missing or malformed, Trellis fills defaults and logs a warning. If the classification confidence is below the threshold, Trellis defaults to the most conservative profile and flags the account for re-profiling when more data is available.
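
A minimal sketch of that fallback logic, assuming hypothetical field names and a made-up 0.7 confidence threshold:

```python
import logging

# Field names and the threshold are assumptions, not Trellis's actual schema.
REQUIRED_DEFAULTS = {"campaign_count": 0, "keyword_count": 0, "bid_strategies": []}
MIN_CONFIDENCE = 0.7

def validate_profile(profile: dict) -> dict:
    # Fill defaults for missing or malformed required fields, and log it.
    for name, default in REQUIRED_DEFAULTS.items():
        if profile.get(name) is None:
            logging.warning("profile field %r missing; using default", name)
            profile[name] = default
    # Low confidence: fall back to the most conservative profile and flag it.
    if profile.get("classification_confidence", 0.0) < MIN_CONFIDENCE:
        profile["conservative_fallback"] = True
        profile["needs_reprofile"] = True
    return profile
```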

Catalyst Audit℠ — the full analysis

A Catalyst Audit℠ is the core product. It reads your ad data through the lens of your actual margins, compares against your historical baseline, and delivers a scored report with evidence-backed recommendations.

What happens before the report is written

Most of what makes a Catalyst Audit℠ different from ad-hoc analysis happens before the report is written. Six layers of programmatic work prepare the data and constrain what the report can claim.

Attribution validation. Trellis compares what your ad platform reports as conversions against what your store actually processed as orders. When the gap is too large, the audit flags it and adjusts the analysis. When the gap is critical, the audit pauses until tracking is addressed.
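
As a sketch, the gap check might look like the following; the 20% and 50% thresholds are illustrative assumptions, not Trellis's published values:

```python
def attribution_gap(platform_conversions: float, store_orders: float) -> float:
    """Relative disagreement between ad-platform conversions and store orders."""
    if store_orders == 0:
        return float("inf")
    return abs(platform_conversions - store_orders) / store_orders

gap = attribution_gap(platform_conversions=180, store_orders=240)
if gap > 0.50:      # hypothetical critical threshold: pause the audit
    action = "pause_until_tracking_fixed"
elif gap > 0.20:    # hypothetical warning threshold: flag and adjust
    action = "flag_and_adjust_analysis"
else:
    action = "proceed"
print(round(gap, 2), action)   # 0.25 flag_and_adjust_analysis
```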

Drift detection. If your campaign count has changed significantly since the last Account Sync, Trellis flags the drift and schedules a re-profiling. An account that has added several campaigns since its last profile needs its analysis depth recalibrated.
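
A minimal drift check, with a 30% tolerance chosen purely for illustration:

```python
def structure_drifted(profiled_campaigns: int, current_campaigns: int,
                      tolerance: float = 0.30) -> bool:
    """True when campaign count moved enough to invalidate the profile."""
    if profiled_campaigns == 0:
        return current_campaigns > 0
    change = abs(current_campaigns - profiled_campaigns) / profiled_campaigns
    return change > tolerance

# 8 campaigns at profile time, 13 now: 62% growth, so re-profile.
print(structure_drifted(profiled_campaigns=8, current_campaigns=13))  # True
```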

Baseline comparison. Every Catalyst Audit℠ compares the current period against the performance baseline from your Account Sync (or a prior audit). The comparison includes a staleness advisory — if the baseline is from six months ago, the audit tells you how much weight to place on the comparison.
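
A sketch of how a staleness advisory could be derived from the baseline's age; the age bands are illustrative assumptions:

```python
from datetime import date

def baseline_staleness_advisory(baseline_date: date, today: date) -> str:
    """Map the baseline's age to a weighting advisory for the comparison."""
    age_days = (today - baseline_date).days
    if age_days <= 45:
        return "fresh: comparison carries full weight"
    if age_days <= 120:
        return "aging: weight the comparison moderately"
    return "stale: treat the comparison as directional only"

# A six-month-old baseline lands in the stale band.
print(baseline_staleness_advisory(date(2024, 3, 1), date(2024, 9, 1)))
```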

Trailing trends. Trellis computes 7-day, 14-day, and 30-day metric trajectories from your daily performance data. A CPA that looks high in isolation might be on a downward trajectory — the audit distinguishes between a problem getting worse and a problem resolving itself.
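
A small sketch of the trailing-window computation, comparing each window's mean to the window before it to separate a worsening metric from a recovering one:

```python
def window_mean(daily: list[float], days: int) -> float:
    return sum(daily[-days:]) / days

def trend(daily_cpa: list[float]) -> dict[int, str]:
    """Direction of the 7-, 14-, and 30-day windows vs. the window before each."""
    signals = {}
    for days in (7, 14, 30):
        recent = window_mean(daily_cpa, days)
        prior = window_mean(daily_cpa[:-days], days)
        signals[days] = "rising" if recent > prior else "falling"
    return signals

# 60 days of daily CPA: high early, trending down. High in isolation,
# but every trailing window says the problem is resolving itself.
daily_cpa = [55 - 0.3 * i for i in range(60)]
print(trend(daily_cpa))   # {7: 'falling', 14: 'falling', 30: 'falling'}
```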

Statistical evidence. For campaigns with sufficient conversion volume, Trellis runs Bayesian analysis to estimate the probability that metrics have genuinely shifted (not just noise). Control charts flag data points that fall outside historical bounds. The results are injected into the analysis as structured evidence with credible intervals — not left for the report to guess at.
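
A simplified stand-in for the control-chart check (the full method also applies trend rules; this shows only the 3-sigma bound):

```python
from statistics import mean, stdev

def out_of_control(history: list[float], new_points: list[float]) -> list[float]:
    """Flag points outside mu +/- 3*sigma of the historical window."""
    mu, sigma = mean(history), stdev(history)
    lower, upper = mu - 3 * sigma, mu + 3 * sigma
    return [x for x in new_points if x < lower or x > upper]

history = [42, 45, 44, 41, 43, 46, 44, 45, 43, 42]   # daily CPA, in control
print(out_of_control(history, [44, 47, 71]))          # [71] breaches the bound
```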

Deterministic analysis. Before the report is written, Trellis checks the changelog for recent changes that are still in their evaluation window. If a bid strategy was changed five days ago, the report is constrained from recommending additional changes on top of it. Prior audit recommendations are checked for follow-through — did the actions that were recommended last time actually happen, and what was the measured impact?
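
A sketch of the evaluation-window constraint, assuming a hypothetical 14-day window and a simplified changelog shape:

```python
from datetime import date, timedelta

EVALUATION_WINDOW = timedelta(days=14)   # illustrative window length

def change_still_settling(changelog: list[dict], campaign_id: str,
                          today: date) -> bool:
    """True while a recent change to this entity is still being evaluated."""
    return any(
        entry["campaign_id"] == campaign_id
        and today - entry["changed_on"] < EVALUATION_WINDOW
        for entry in changelog
    )

changelog = [{"campaign_id": "c-12", "field": "bid_strategy",
              "changed_on": date(2024, 9, 10)}]
# Changed five days ago: the report must not stack another change on top.
print(change_still_settling(changelog, "c-12", today=date(2024, 9, 15)))  # True
```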

Where the methodology meets the report

This is important to understand: the analytical work and the report authoring are separate layers.

The six layers described above — attribution validation, drift detection, baseline comparison, trailing trends, statistical evidence (Bayesian estimation and control charts), and deterministic analysis — are computed programmatically. Trellis runs conjugate Bayesian models (Gamma priors for CPA and ROAS, Beta-Binomial for conversion rate), Shewhart control charts with Western Electric trend rules, and rule-based claim gates. These are statistical and rule-based methods. They produce structured evidence: credible intervals, probability statements, trend signals, constraint flags.
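
For the conversion-rate model specifically, the conjugate update is simple enough to show in a few lines. This is a minimal sketch with illustrative priors and data, not Trellis's implementation:

```python
from scipy import stats

# conversions ~ Binomial(clicks, p), with a Beta prior on p.
alpha0, beta0 = 1.0, 1.0                 # flat prior before any audit history
clicks, conversions = 2400, 96           # current-period observations

alpha = alpha0 + conversions             # conjugate posterior update
beta = beta0 + clicks - conversions
posterior = stats.beta(alpha, beta)

lo, hi = posterior.ppf([0.025, 0.975])   # 95% credible interval, not a point
print(f"conversion rate: {posterior.mean():.4f} (95% CI {lo:.4f}-{hi:.4f})")
```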

The report is then authored by a language model that receives this pre-computed evidence as input — along with your data summaries, baseline comparison, changelog constraints, and business context. The model’s job is to synthesize these inputs into a narrative report with recommendations. It does not perform the statistical analysis. It writes from conclusions the methodology has already reached.

This separation matters. The statistical evidence has guardrails — estimation tiers gate what claims are possible, the deterministic layer blocks recommendations that conflict with recent changes, and claim gates constrain what the report can assert based on data freshness. The language model writes within these constraints, not around them.

After the report is written, a 7-dimension quality validator scores the output. If the report references data that was not in the input, makes claims the estimation tier does not support, or proposes recommendations that conflict with each other, the quality score reflects it. Reports that score below 70 are flagged.

This is not passing your data to a chatbot for an opinion. It is a structured pipeline where programmatic analysis produces the conclusions, a language model authors the narrative, and automated validation checks the result.

What the report contains

The analysis produces a narrative report organized by topic — account overview, campaign performance, budget allocation, keyword analysis, and recommendations. Every recommendation includes:

  • The finding and the specific data behind it
  • A risk score (1–5) reflecting how difficult the change is to reverse
  • A counter-argument — the strongest case against acting
  • A monitoring plan with a specific metric, threshold, and evaluation date

Quality scoring

Every Catalyst Audit℠ receives a composite quality score from 0 to 100, built from seven weighted dimensions: data accuracy, analytical depth, recommendation quality, structure, estimation compliance, cross-validation, and verification checks. A score of 70 or above indicates a reliable audit with actionable recommendations.
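
A sketch of how such a composite could be assembled; the weights here are invented for illustration:

```python
WEIGHTS = {
    "data_accuracy": 0.20, "analytical_depth": 0.15,
    "recommendation_quality": 0.20, "structure": 0.10,
    "estimation_compliance": 0.15, "cross_validation": 0.10,
    "verification_checks": 0.10,
}

def quality_score(dimension_scores: dict[str, float]) -> float:
    """Each dimension scored 0-100; returns the weighted composite."""
    return sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS)

# Strong everywhere except estimation compliance.
scores = {d: 82 for d in WEIGHTS} | {"estimation_compliance": 55}
composite = quality_score(scores)
print(round(composite, 1), composite < 70)   # roughly 77.9, not flagged
```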

Two modes

  • Standard — single-pass analysis. The default for routine health checks and scheduled audits.
  • Enhanced — adds an investigation phase before the report, where Trellis examines anomalies and cross-references historical patterns in more depth. Use Enhanced for monthly strategic reviews or when something looks off.

Both modes produce the same report format and quality scoring. Enhanced provides deeper investigative context behind the findings.

Investigation — the focused answer

An Investigation starts from a completed Catalyst Audit℠. You select a specific question about a specific entity — a campaign, an ad group — and Trellis answers it with the complete, uncompressed data for that entity.

How it works

A Catalyst Audit℠ summarizes your entire account. To fit everything into the analysis, the data is compressed — top keywords by performance, top search terms by traffic, aggregated device and geographic breakdowns. This compression is necessary for the full-account view, but it means some detail is lost.

An Investigation reverses this tradeoff. Instead of summarized data for the full account, it provides complete data for a single entity. Every keyword, every search term, every metric for that one campaign or ad group — nothing compressed, nothing omitted.

You select from a registry of predefined questions grouped by category:

  • Budget allocation — which campaigns earn their spend and which do not
  • Bid strategy — whether the current strategy fits the campaign’s data volume and goals
  • Negative keywords — where search term waste is concentrated
  • Quality score — what is dragging scores down and what would improve them
  • Performance analysis — why a specific metric shifted
  • Device performance — whether device-level patterns warrant bid adjustments

Questions are only available when the parent Catalyst Audit℠ has the required data. If no search term data exists, the negative keyword questions are disabled.

What the report contains

An Investigation produces a mini-report structured as four sections:

  1. Finding — what the data shows
  2. Evidence — the specific numbers, tagged with their source
  3. Recommendation — what to do about it
  4. Caveats — what assumptions the recommendation depends on and when they might not hold

How statistical modeling works across audit types

The three audit types share a statistical infrastructure, but they use it differently.

Estimation tiers

Trellis gates its recommendations based on how much conversion data is available. These tiers apply to Catalyst Audits℠ and Investigations:

| Tier | Conversions | What Trellis will do |
| --- | --- | --- |
| Very high | 200+ | Full statistical analysis with narrow credible intervals |
| High | 100–199 | Full analysis with wider intervals |
| Medium | 50–99 | Conservative projections with stated assumptions |
| Low | 15–49 | Monitor only — act on extreme signals |
| Insufficient | Fewer than 15 | No recommendations. Data summary only. |

Account Sync does not gate by estimation tier — it profiles structure and spend, not performance trends.
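
In code, the gate is a simple lookup. The boundaries below come from the table; the function and set names are illustrative, not Trellis's API:

```python
def estimation_tier(conversions: int) -> str:
    if conversions >= 200:
        return "very_high"
    if conversions >= 100:
        return "high"
    if conversions >= 50:
        return "medium"
    if conversions >= 15:
        return "low"
    return "insufficient"

# "low" monitors and acts only on extreme signals; modeled here as
# not eligible for full recommendations.
FULL_RECOMMENDATIONS = {"very_high", "high", "medium"}

print(estimation_tier(37))                            # low
print(estimation_tier(37) in FULL_RECOMMENDATIONS)    # False
```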

Bayesian estimation

For Catalyst Audits℠ with sufficient conversion volume, Trellis computes Bayesian posterior estimates for CPA, ROAS, and conversion rate. These estimates use conjugate prior models (Gamma for CPA and ROAS, Beta-Binomial for conversion rate) and produce credible intervals rather than point estimates.

The posteriors are stored between audits. Each subsequent Catalyst Audit℠ updates the prior with new data, so estimates sharpen over time. This is the accretive layer — the third audit for the same account has tighter intervals than the first, because the statistical model has more history to draw from.
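
A sketch of that accretion with the same Beta-Binomial model: each cycle's posterior becomes the next cycle's prior, and the credible interval narrows as history accumulates. The observation counts are invented:

```python
from scipy import stats

def update(prior: tuple[float, float], clicks: int,
           conversions: int) -> tuple[float, float]:
    """Conjugate Beta update: posterior parameters from prior plus new data."""
    a, b = prior
    return a + conversions, b + clicks - conversions

prior = (1.0, 1.0)                       # flat prior at the first audit
for clicks, conversions in [(2400, 96), (2600, 117), (2500, 105)]:
    prior = update(prior, clicks, conversions)       # posterior carries forward
    lo, hi = stats.beta(*prior).ppf([0.025, 0.975])
    print(f"95% CI width: {hi - lo:.4f}")            # narrows each cycle
```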

History gates

The number of prior audits — what Trellis calls audit depth — determines what analysis methods are available:

| Audit depth | Analysis available |
| --- | --- |
| 0 (first audit) | Observation and baseline recording |
| 1 | Period-over-period comparison |
| 3+ | Bayesian projections and elasticity estimates |
| 6+ | Control charts and budget impact modeling |

This gating prevents Trellis from making confident claims about trends when the history is thin. As your audit depth grows, the analysis gets deeper.
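
A minimal sketch of the gate, with method names invented for illustration and thresholds taken from the table:

```python
def available_methods(audit_depth: int) -> list[str]:
    """Cumulative capabilities unlocked as audit history deepens."""
    methods = ["observation", "baseline_recording"]
    if audit_depth >= 1:
        methods.append("period_over_period")
    if audit_depth >= 3:
        methods += ["bayesian_projection", "elasticity_estimate"]
    if audit_depth >= 6:
        methods += ["control_charts", "budget_impact_model"]
    return methods

print(available_methods(2))   # no projections yet: history is too thin
```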

How the three types compound

The value of Trellis grows with each audit cycle. Here is how the three types reinforce each other over time.

  1. Account Sync establishes the foundation. Your account structure profile, keyword summary, performance baseline, and campaign configurations are recorded. The changelog begins tracking changes from this point forward.

  2. The first Catalyst Audit℠ reads the baseline. It compares your current performance against the Account Sync snapshot, flags gaps, and produces recommendations. The report is solid, but limited — the Bayesian models have no prior data to draw from, so projections are blocked by the history gate.

  3. Investigations dig into specific findings. A Catalyst Audit℠ might flag a campaign as underperforming, but the summarized data limits how deep it can go. An Investigation pulls the complete data for that campaign and answers the specific question.

  4. The second Catalyst Audit℠ is sharper than the first. The Bayesian priors now have one audit cycle of history. The changelog shows what changed between audits and whether previous recommendations were acted on. The baseline comparison has a real prior period.

  5. By the third Catalyst Audit℠, projections are available. The statistical models have enough history for Bayesian projections and elasticity estimates. The analysis surfaces not just what happened, but what is likely to happen if current trends continue.

  6. By the sixth, the full analysis suite is available. Control charts detect anomalies against established historical bounds. Budget impact modeling estimates the effect of spend changes. The audit report at this stage draws on a depth of context that no starting-from-scratch analysis can match.

This compounding is why Trellis runs its own pipeline rather than treating each audit as a standalone analysis. The Bayesian posteriors, the changelog, the baseline comparisons — these are persistent infrastructure that accumulates value with every audit cycle. The sixth audit is fundamentally better than the first, not because the methodology changed, but because it has more to work with.