How Trellis Works

Ad insight built on your actual business economics.

Trellis connects your ad spend to what your store actually records — product costs, gross margins, verified orders. Every audit is tuned to your account's maturity, grounded in statistical evidence, and enriched with your first-party data. This is how it works.

Grounded in statistical evidence, not summaries.

Every recommendation in a Trellis audit passes through a confidence gate before it reaches your report. The gate has five estimation tiers, determined by your account's conversion volume: campaigns with 200 or more conversions can detect changes as small as 15% with a 95% credible interval. Campaigns below 15 conversions receive no numeric recommendations at all — Trellis labels them [INSUFFICIENT DATA] and explains what threshold they need to cross.
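For the technically minded, a minimal sketch of the gate in Python. Only the 200-conversion and 15-conversion thresholds come from the description above; the intermediate tier boundaries and labels are illustrative placeholders, not Trellis's actual implementation.

```python
# Minimal sketch of the conversion-volume confidence gate.
# Only the 200+ and below-15 thresholds come from the description
# above; the intermediate tiers are illustrative placeholders.

def confidence_tier(conversions: int) -> str:
    """Map a campaign's conversion volume to an estimation tier."""
    if conversions >= 200:
        return "TIER 1"  # detects changes as small as 15% with a 95% credible interval
    if conversions >= 100:
        return "TIER 2"  # placeholder boundary
    if conversions >= 50:
        return "TIER 3"  # placeholder boundary
    if conversions >= 15:
        return "TIER 4"  # directional guidance only (placeholder)
    return "INSUFFICIENT DATA"

def gate_recommendation(conversions: int, text: str) -> str:
    tier = confidence_tier(conversions)
    if tier == "INSUFFICIENT DATA":
        return f"[INSUFFICIENT DATA] needs {15 - conversions} more conversions"
    return f"[{tier}] {text}"

print(gate_recommendation(9, "lower target CPA"))
# [INSUFFICIENT DATA] needs 6 more conversions
```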

Behind the gate, the statistical methods match the data. Bayesian conjugate models — Gamma-Poisson for CPA, Beta-Binomial for conversion rate, Gamma for ROAS — produce credible intervals, not point estimates. Welch's t-tests validate group comparisons when sample sizes permit. Shewhart control charts detect trend violations using Western Electric Rules. Every claim is tagged with its evidence basis: [FACT], [BAYESIAN: 95% CI], [PROJECTED], [INFERRED], or [INSUFFICIENT DATA].
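As a concrete illustration, here is a minimal sketch of one of those conjugate models: a Beta-Binomial credible interval for conversion rate. The Beta(1, 1) prior is an assumption for illustration, not Trellis's documented prior.

```python
# Sketch: Beta-Binomial credible interval for conversion rate, one of
# the conjugate models named above. The Beta(1, 1) prior is an
# assumption for illustration, not Trellis's documented prior.
from scipy import stats

def conversion_rate_interval(conversions: int, clicks: int, level: float = 0.95):
    """Posterior credible interval under a Beta(1, 1) prior."""
    posterior = stats.beta(1 + conversions, 1 + clicks - conversions)
    tail = (1 - level) / 2
    return posterior.ppf(tail), posterior.ppf(1 - tail)

lo, hi = conversion_rate_interval(conversions=42, clicks=1100)
print(f"conversion rate: [{lo:.2%}, {hi:.2%}]")  # an interval, not a point estimate
```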

Every significant recommendation includes the strongest case against following it. Not as a disclaimer — as part of the methodology. A recommendation that hasn't survived its own counter-argument isn't ready for yours.

Every recommendation shows its evidence basis and the strongest case against following it.
Each audit compares against the last, surfacing what improved, what declined, and whether prior recommendations produced results.

Every audit builds on the one before it.

Your first audit establishes a baseline — the foundational snapshot of account performance. Trellis tags all claims with [OBSERVED — BASELINE] because there is no prior context to compare against. The second audit opens period comparison: directional changes (improved, declined, unchanged) with percentage shifts in ROAS, CPA, and profitability counts.

By the third audit, Bayesian projection and elasticity modeling become available. Credible intervals tighten. By the sixth, the full model suite is active — control charts track trend violations, budget impact models quantify reallocation scenarios, and projections carry the precision of a rich history. The system gates these methods automatically: no model runs until the data depth supports it.
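A sketch of that data-depth gate, with the unlock schedule taken from the progression above. The names and structure are illustrative, not the actual implementation.

```python
# Sketch of the data-depth gate: which methods unlock at which audit,
# following the progression described above. Names and structure are
# illustrative, not Trellis's implementation.
UNLOCK_SCHEDULE = {
    1: ["baseline snapshot"],
    2: ["period comparison"],
    3: ["bayesian projection", "elasticity modeling"],
    6: ["control charts", "budget impact models"],  # full model suite
}

def available_methods(audit_count: int) -> list[str]:
    """No model runs until the audit history is deep enough to support it."""
    methods: list[str] = []
    for depth in sorted(UNLOCK_SCHEDULE):
        if audit_count >= depth:
            methods.extend(UNLOCK_SCHEDULE[depth])
    return methods

print(available_methods(3))
# ['baseline snapshot', 'period comparison', 'bayesian projection', 'elasticity modeling']
```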

The changelog surfaces what changed, when, and whether it helped — organized by platform with impact and status tags.

Track every change. Cross-reference every impact.

The changelog captures bid strategy switches, budget shifts, keyword additions, campaign restructuring — any change that affects performance. Each entry carries platform, impact level, status, and categorical tags. Over time, this builds a decision trail that no platform UI offers.

What sets this apart: the audit recommendation pipeline reads your changelog. Active monitoring gates — such as "watch Fabric ROAS until it crosses 1.47x" — are extracted and enforced. If a recent change triggers a stabilization window, Trellis withholds related recommendations until the window closes. The result is not just "ROAS dropped" but "ROAS dropped 3 days after switching from Target ROAS to Maximize Conversions, and the stabilization window has 8 days remaining."
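A minimal sketch of such a stabilization-window check. The field names and the 14-day default window are assumptions; the dates are chosen to reproduce the eight-days-remaining scenario above.

```python
# Sketch of a stabilization-window check. Field names and the 14-day
# default are assumptions; dates are chosen to reproduce the
# "8 days remaining" scenario described above.
from datetime import date, timedelta

STABILIZATION_DAYS = 14  # assumed default window length

def window_status(change_date: date, today: date) -> str:
    closes = change_date + timedelta(days=STABILIZATION_DAYS)
    if today < closes:
        return f"HOLD: stabilization window has {(closes - today).days} days remaining"
    return "CLEAR: related recommendations may proceed"

bid_strategy_switch = date(2025, 3, 1)  # e.g. Target ROAS -> Maximize Conversions
print(window_status(bid_strategy_switch, today=date(2025, 3, 7)))
# HOLD: stabilization window has 8 days remaining
```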

Audit recommendations reference changelog entries as evidence — citing the entry title and date. Month one, the changelog is a log. Month six, it is the decision trail that makes every recommendation contextual.

Your margins. Your costs. Your actual profitability.

Ad platforms report revenue and ROAS. They do not report whether that revenue covered the cost of the products sold. A campaign with 4x ROAS at 20% gross margin is losing money, because break-even ROAS is 1 divided by gross margin (5x here), but the platform dashboard will never tell you that. Trellis connects your ad spend to actual product costs, computing contribution margin, break-even CPA, and true profitability per campaign.

You provide the economics: average order value, gross margin, target CPA, and target ROAS through your account settings. For deeper precision, upload COGS data at the SKU level — Trellis applies Tier 1 (SKU-level) margins where available and falls back to Tier 2 (category-level) margins for the rest. Break-even CPA is derived directly: AOV multiplied by gross margin. Every recommendation that follows is margin-aware.
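In code, the margin fallback and break-even derivation look roughly like this. The SKU, category, and margin values are hypothetical inputs.

```python
# Sketch of the two-tier margin fallback and break-even CPA derivation
# described above. SKU, category, and margin values are hypothetical.
SKU_MARGINS = {"SKU-1042": 0.42}      # Tier 1: uploaded COGS, SKU level
CATEGORY_MARGINS = {"apparel": 0.35}  # Tier 2: category-level fallback

def gross_margin(sku: str, category: str) -> float:
    """Prefer the SKU-level margin; fall back to the category margin."""
    return SKU_MARGINS.get(sku, CATEGORY_MARGINS[category])

def break_even_cpa(aov: float, margin: float) -> float:
    """Break-even CPA = AOV x gross margin (gross profit per order)."""
    return aov * margin

print(break_even_cpa(aov=80.0, margin=gross_margin("SKU-1042", "apparel")))
# 33.6 -- paying more than this per acquisition loses money
```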

This is what first-party data enrichment means in practice. The ad platform provides the spend data. Your store provides the order data. Trellis merges both, adds your cost structure, and produces the profitability picture that no platform UI will ever assemble for you.

Account-level business metrics — AOV, gross margin, target CPA, and target ROAS — calibrate every audit to your economics.
Campaigns are tiered by actual profitability: Star, Profitable, Marginal, or Unprofitable — based on your break-even ROAS.
Platform-reported conversions compared against verified order data. When the numbers diverge, Trellis surfaces the gap.

Uncover the gap between reported and actual.

Google Ads blends paid Shopping clicks with free product listing results into a single conversion count — over-reporting Shopping conversions by approximately 25%. Microsoft Ads bundles non-purchase goals (Add to Cart, page views) alongside actual purchases under "All conversions," inflating the number by up to 8.8x. These are not edge cases. They are structural features of how the platforms report.

Trellis runs attribution validation before every audit. It cross-references platform-reported conversions against your Shopify order data — the actual transactions your store recorded, attributed by UTM parameters. The check surfaces the gap, identifies which campaigns receive unearned credit, and gates audit recommendations on the accuracy of the underlying data.

This is validation, not attribution modeling. Trellis does not track customer journeys or build multi-touch models. It checks what the platform told you against what your store actually recorded. When those numbers diverge, you know before you optimize.
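A minimal sketch of that check. The campaign names and counts are hypothetical, and the 10% tolerance for flagging a gap is an assumption, not a documented threshold.

```python
# Sketch of the reported-vs-verified comparison. Campaign names and
# counts are hypothetical; the 10% flag tolerance is an assumption.
platform_reported = {"brand-search": 120, "shopping": 250}
utm_verified_orders = {"brand-search": 112, "shopping": 198}  # from Shopify

for campaign, reported in platform_reported.items():
    verified = utm_verified_orders.get(campaign, 0)
    if verified and (reported - verified) / verified > 0.10:
        gap = (reported - verified) / verified
        print(f"{campaign}: platform over-reports conversions by {gap:.0%}")
# shopping: platform over-reports conversions by 26%
```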

Built for where you are right now.

Solopreneur

$500–$3K/mo · Under $200K revenue

  • One clear answer: which campaigns are profitable after product costs.
  • Confidence gating ensures you only act on what the data supports.
  • Every claim tagged with its evidence basis — no guesswork disguised as insight.
  • Audits that compound: your third audit sees patterns your first one couldn't.
Start with one audit

Small Business

$3K–$10K/mo · $200K–$1M revenue

  • Audit reports rigorous enough to present to leadership with confidence.
  • Cross-platform visibility — Google and Microsoft in a single audit.
  • Changelog tracks every change and its impact, closing the accountability loop.
  • Break-even CPA derived from your actual gross margin, not platform assumptions.
Start with Core

Growth Business

$10K–$20K/mo · $1M–$3M revenue

  • SKU-level COGS integration reveals which products drive real margin, not just revenue.
  • Bayesian projections with credible intervals replace guesswork on budget reallocation.
  • 12 months of audit history gives context no quarterly freelance audit can replicate.
  • Attribution validation catches the 25% over-count before you scale the wrong campaign.
Start with Pro

Agency

3–20 client accounts · Per-account pricing

  • Consistent methodology across every client — same evidence rigor, zero manual calibration.
  • Per-client isolation: each account's economics, history, and changelog stay separate.
  • Changelog and recommendation tracking replace the 3–5 hours of manual work per client per audit.
  • Reports grounded in first-party data elevate your strategic conversations.
Talk to us

Trellis does not replace your ad platform. It reads what both platforms report, applies the context they cannot — your costs, your prior decisions, your verified orders — and produces the analysis that makes your next move informed. Every audit builds on the last. Every recommendation carries its evidence. Every change is tracked to its outcome.

This is what it means to audit with context.