April 2026 · ~12 min read

The Attribution Problem: can AI tell what's actually driving revenue?

Every meaningful business decision is a causal question: if we change this, what will happen to that? Most analytics tools answer a correlational question instead, and the gap between the two is where billions of dollars in marketing spend goes sideways. We built a controlled simulator of that gap and put 29 language models through it for a full simulated year, frontier and mid-tier, closed-source and open-weight, reasoning and not. Here is what they did, and what their reasoning traces reveal about how AI actually handles causality under uncertainty.

A live render of the simulated cafe the LLMs are allocating budget for. Same simulator, same scoring, same ten seeds, whether a human or a model is in the seat.

Why this matters: every business decision is really a causal one

How to split a marketing budget. Whether to hire two more engineers. Whether to cut a product's price. Each of these is a causal question in disguise: if we change this input, what happens to the output we care about? Acting well requires an answer. Acting poorly is what happens when the answer is wrong.

The trouble is that the data those decisions are made on (CRMs, ad reports, warehouses, dashboards) is almost entirely observational: it records what happened alongside what else was happening, with no clean separation between cause and consequence. Spend more on a channel during a busy quarter and the channel looks productive; run a campaign during a competitor's stumble and the campaign looks like it worked. Standard regression on observational data conflates the two, and most marketing and finance dashboards sit downstream of standard regression. The decision-maker walks away with a number that looks causal and isn't. The way out is deliberate variation: A/B tests, geo holdouts, on/off rotations, anything that breaks the natural correlation between treatment and outcome. Few teams actually do it.

Why marketing attribution is the cleanest example

This same pattern shows up wherever allocations meet measurements. Marketing attribution is the loudest version, for two reasons. First, the spend is enormous: US digital advertising is now ~$300B a year, global digital ad spend is north of $700B, and a misattributed channel keeps "earning" credit it didn't actually earn, so the error compounds with reinvestment.

Second, the measurement is uniquely confounded. Major channels like search and social are auctions, so bid prices respond to your own demand and spending more makes a channel look more efficient. Customers are exposed to multi-touch sequences (display Monday, search Tuesday, email Wednesday) and the same conversion can be credited to any of them depending on the attribution rule. Adstock (the lagged effect of spend on later revenue) means today's campaign reshapes next month's numbers, well after anyone is looking. The two main causal-inference fixes (marketing mix modeling and incrementality testing) both have well-known ways to fail.
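The adstock mechanic is worth pinning down, since it reappears in the simulator's analytics. A minimal sketch of the standard geometric form (the decay rate below is illustrative; the simulator's actual parameters are hidden per seed):

```javascript
// Geometric adstock: each day's effective spend is today's spend plus a
// decayed carryover of yesterday's effective spend. `lambda` is an
// illustrative decay rate, not the simulator's hidden value.
function adstock(spend, lambda) {
  const out = [];
  let carry = 0;
  for (const x of spend) {
    carry = x + lambda * carry;
    out.push(carry);
  }
  return out;
}

// A one-day burst of spend keeps "earning" for days afterwards:
adstock([100, 0, 0, 0], 0.5); // → [100, 50, 25, 12.5]
```

This is why today's campaign reshapes next month's numbers: the effective-spend series keeps paying out long after the budget line goes to zero.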

Where AI fits in

LLMs are increasingly being placed inside business decision loops as copilots, agents, advisors, and dashboards-with-chat. If they are going to advise on resource allocation, they have to navigate exactly this problem: read noisy aggregated data, infer causal structure under partial observability, and avoid conflating "the channel was up because we were up" with "spending more there will push us up." This study puts 29 LLMs (frontier and mid-tier, $0.20/run and <$0.01/run, reasoning and not) into a controlled simulator of that exact loop, scores every choice against a causal oracle, and reports what we found.

The game

Every month for a year, you (or a model in your seat) allocate a $300,000 marketing budget across three channels: Discovery (top-of-funnel / brand), Conversion (direct-response), and Social (influencer). At month-end you observe just two numbers (total revenue and total customer count) and nothing else. There is no per-channel revenue breakdown, no attribution report, no audit trail of which dollar produced which conversion. You make next month's call from those two numbers and whatever you can infer about what's working. Repeat twelve times.

Scores are oracle-normalized, which means we compare your total annual revenue against what the best possible fixed monthly allocation would have produced under that seed's true (hidden) channel elasticities and adstock parameters. A score of 100% means you matched that oracle; 0% means you matched a zero-budget baseline that just runs the business with no marketing spend. The same ten seeds are used for every model, the simulator is fully deterministic, and every run can be replayed turn-by-turn. You can play one yourself in about eight minutes.
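The normalization is a straight linear rescale between those two anchors; a sketch (the function name is ours):

```javascript
// Oracle-normalized score: where annual revenue lands between two anchors,
// the zero-budget baseline (0%) and the best fixed allocation (100%).
function oracleScore(revenue, baselineRevenue, oracleRevenue) {
  return 100 * (revenue - baselineRevenue) / (oracleRevenue - baselineRevenue);
}

oracleScore(7.5, 5, 10); // halfway between the anchors → 50
```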

Each model is run twice on every seed: once with no analytics, where it sees only the raw monthly revenue numbers, and once with analytics, where its prompt also includes three statistical summaries computed from the simulator's internal daily stream: a correlation matrix, an OLS regression with weather and event controls, and an adstock-aware regression. The with-analytics condition is the closest the model gets to a "real" attribution dashboard. Total across the lineup: 29 models × 10 seeds × 2 conditions = 580 full 12-month trajectories.

The leaderboard is surprisingly flat at the top

Before getting into the structure of how models behave, here is the headline result: averaging across both prompt conditions, the top ten models all sit inside a single ~9-point band, and the differences between them are not as large as the gaps between vendors or generations might suggest.

 #   Model              No-analytics   With-analytics   Overall   Cost/run
 1   claude-3.7-sonnet      83.1           87.4           85.3     $0.20
 2   gpt-5.4-nano           73.7           92.4           83.0     $0.016
 3   deepseek-v3.2          75.7           88.6           82.2     $0.014
 4   gpt-4o-mini            85.2           77.5           81.4     $0.009
 5   claude-haiku-4.5       72.5           88.1           80.3     $0.073
 6   gemini-2.5-flash       72.5           84.9           78.7     $0.024
 7   mixtral-8x22b          70.2           84.8           77.5     $0.143
 8   gemini-2.0-flash       73.9           80.1           77.0     $0.007
 9   qwen3-235b-a22b        67.0           86.2           76.6     $0.005
10   claude-3.5-haiku       66.1           86.2           76.1     $0.053
→ full 29-model leaderboard on the live site

A caveat on reading this table:

None of this means newer models are worse; it means this specific task (sequential allocation under noisy feedback) doesn't separate models the way you'd expect from MMLU or coding benchmarks. Capability differences between frontier generations get compressed when the task has a narrow optimum.

Analytics helps almost everyone

In the with-analytics condition, the model sees three regression summaries appended to its prompt before each monthly decision: a correlation matrix, an OLS fit with weather and event controls, and an adstock-aware regression. The effect across the 29-model lineup is remarkably consistent:

Bar chart ranking models by the percentage-point improvement in oracle-normalized score going from no-analytics to with-analytics conditions
Per-model analytics benefit, ranked. Each bar is a model's mean with-analytics score minus its mean no-analytics score, measured in percentage points of oracle-normalized score (10 seeds per condition).

This is the cleanest result in the dataset: a regression summary handed to the model at each step reliably turns mid-tier models into upper-tier models. It's less clear what it's doing to the already-strong ones: sometimes nothing, sometimes a small lift, occasionally a degradation.

Cost has nothing to do with how much analytics helps

Scatter plot of mean cost per run on the x-axis (log scale) against analytics benefit in percentage points on the y-axis
x: mean cost per 12-month run (log scale). y: analytics benefit (with-analytics minus no-analytics, pp). The two red points are the only models analytics does not help. The cluster is essentially flat across three orders of magnitude of cost.

If frontier models had a built-in advantage at using a regression summary, you'd expect the y-axis to trend up with cost. It doesn't. The biggest gains belong to the cheapest weak models (GPT-3.5-turbo, Qwen 3-30B, Llama 4-Maverick) because they have the most room to grow; the most expensive model on the chart (Claude 3.7 Sonnet) sees only a +4.3pp gain because it was already strong without analytics. The lift from regression summaries is roughly orthogonal to model price.

Cost barely predicts capability on this task

Cost-score Pareto frontier across 29 models
Mean cost per 12-month run (log scale) vs. mean oracle-normalized score, 29 models. Each model's score is averaged across both prompt conditions (no-analytics and with-analytics). The orange Pareto frontier contains 7 models. Reasoning models (o3-mini, DeepSeek R1) sit below the frontier at higher cost.

Fitting a log-linear regression gives a slope of roughly +4 points per 10× cost increase, with r = 0.36: a relationship that is present but weak. The Pareto frontier is only 7 models deep.

The jump from GPT-5.4-nano at ~1.6¢/run to Claude 3.7 Sonnet at ~20¢/run buys you 2.3 percentage points. Above that, there is nothing on the frontier; every more-expensive model we tested is dominated by at least one cheaper one.
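To put a number on how little that slope buys (the constant and helper below just restate the fit described above):

```javascript
// The fit from the scatter: score ≈ intercept + SLOPE * log10(cost),
// with SLOPE ≈ 4 oracle-normalized points per decade of cost.
const SLOPE = 4;
function predictedGain(cheapCost, expensiveCost) {
  return SLOPE * Math.log10(expensiveCost / cheapCost);
}

// nano (~$0.016/run) to Sonnet (~$0.20/run) is ~12.5x in cost, so the
// fit predicts ~4.4 points; the observed gap is 2.3.
predictedGain(0.016, 0.20);
```

Even taking the regression at face value, a 10× spend increase buys about half the width of the top-ten band.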

If you're deploying for production, the three models at around a cent per run (GPT-4o-mini, DeepSeek V3.2, GPT-5.4-nano) are probably where you start. Claude 3.7 Sonnet is a reasonable stretch if the last few points matter; anything above it becomes hard to justify on this task.

What going right looks like: Mixtral 8×22B, no analytics, 94.2%

Mixtral 8×22B is a 2024-era open-weight mixture-of-experts model from Mistral. Mid-pack on most modern general benchmarks, and not anyone's first pick when "best model" comes up. On this task, on seed 5, with no analytics, meaning it sees only the raw monthly revenue numbers and nothing more, it scored 94.2%. That is within striking distance of the optimal fixed allocation under the seed's true (and hidden) parameters. Here is the entire 12-month trajectory, lightly trimmed:

mistralai/mixtral-8x22b-instruct · seed 5 · no analytics · Score 94.2% (A+) · $4.27M revenue vs $4.41M oracle
M1
A $100k · B $100k · C $100k · rev $224k
"Starting with a balanced allocation to test market response across all channels. This diversified approach helps gauge which channels perform best without overcommitting to unproven strategies."
M2
A $150k · B $120k · C $30k · rev $253k
"Revenue ($224K) is below spend ($300K), suggesting inefficiency. Social spend underperformed last month, so I'll shift more budget to Discovery and Conversion, while reducing Social for a leaner test."
M3
A $180k · B $110k · C $10k · rev $312k
"Revenue improved in Month 2 ($253K vs $224K) with higher Discovery and Conversion spend. I'll further increase Discovery for broader reach."
M4
A $200k · B $90k · C $10k · rev $362k
"Revenue continues to grow ($362K in Month 3). To sustain momentum, I'll further increase Discovery for maximum brand reach, while keeping Social minimal to avoid dilution."
M5–M7
A ~$220k · B ~$80k · C $0 · rev $348k → $391k → $398k
"Discovery and Conversion allocations proving effective. I'll maintain high Discovery spend and keep Social at $0K to avoid wasted spend." Over three months, the model ratchets Discovery up from $200K toward $240K while cutting Social entirely.
M8–M12
A ~$200k · B ~$100k · C $0 · rev $414k → $362k → $403k → $404k → $397k
"Revenue growth is slowing in Month 7, suggesting Discovery may be approaching saturation. I'll reduce Discovery to $220K and shift budget to Conversion to capture more direct sales." The model settles into a stable ~$200K/$100K/$0 mix for the remaining months.

The mechanics are clean and readable even in hindsight, and they look exactly like a textbook adaptive policy:

  1. Month 1 is pure exploration. An even one-third split across all three channels with no prior claims about what works. The model is buying information.
  2. Month 2 is the first revision. The model notices that revenue ($224K) came in below spend ($300K) and reads that as inefficiency. It cuts Social, which it labels "experimental", and tilts toward Discovery and Conversion. The reasoning is vague ("Social spend underperformed last month") but directionally correct.
  3. Months 3–5 are compounding on signal. Each month, the model observes revenue rise after the previous reallocation, treats that as confirmation, and doubles down: Discovery share keeps creeping up, Social drops to zero, and Conversion holds in the middle.
  4. Months 6–8 are equilibrium-seeking. Small adjustments to Conversion when revenue dips, small pullbacks on Discovery when growth visibly slows. The model never swings hard, even when a month's revenue surprises it.
  5. Months 9–12 are maintenance. The mix has stabilized at roughly 67%/33%/0%, within spitting distance of the true oracle ratio (~73%/27%/0%). The model recognizes the plateau and stops trying to improve on it.

Everything the model "knows" about which channel actually drives revenue, it learned from the sequence of twelve monthly revenue numbers, without a regression, a correlation matrix, or any formal causal-inference machinery. The behavior isn't flashy: it is steady hypothesis revision, appropriately cautious, with large changes early (when there is the most to learn) and small changes late (after convergence). For a 2024 mid-tier open-weight model, that is a remarkably disciplined run.
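The loop the trace walks through, keep a change that paid off and undo one that didn't, can be caricatured in a few lines. Everything here (names, step logic, renormalization) is our illustrative reconstruction, not the model's actual procedure:

```javascript
// Caricature of the trace's policy: repeat last month's budget shift if
// revenue rose, reverse it if revenue fell, then renormalize to the
// fixed $300k monthly budget. Illustration only.
function nextAllocation(prev, lastChange, revenueDelta) {
  const dir = revenueDelta >= 0 ? 1 : -1; // keep going, or back off
  const raw = prev.map((b, i) => Math.max(0, b + dir * lastChange[i]));
  const total = raw.reduce((a, b) => a + b, 0);
  return raw.map(b => Math.round((b / total) * 300_000));
}

// e.g. from an even split, repeating a shift that revenue rewarded:
nextAllocation([100_000, 100_000, 100_000], [50_000, 20_000, -70_000], 29_000);
// → [150000, 120000, 30000]
```

A real adaptive policy would also shrink its step size over time, which is exactly the large-early, small-late pattern visible in the trace.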

What going wrong looks like: Gemini 2.5 Flash, with analytics, 45.1%

Gemini 2.5 Flash is a current-generation Google model and ranks #6 overall on the leaderboard. On most seeds it scores in the low 80s, well above mid-pack. On seed 6, with analytics turned on, it scored 45.1%, more than 30 points below its own average. And it wasn't a slow drift: the run went wrong in month 2 and the model never recovered.

What went wrong here is instructive, because it wasn't a reasoning failure in the usual sense. The model reasoned fluently throughout. It cited the regression outputs by name. It used phrases like "statistically significant" correctly. The problem is what it reasoned about.

google/gemini-2.5-flash · seed 6 · with analytics · Score 45.1% (C) · $3.08M revenue vs $4.42M oracle
M1
A $100k · B $100k · C $100k · rev $228k
"I will start with an even split across all three channels to gather initial performance data." Standard.
M2
A $150k · B $50k · C $100k · rev $242k
"The adstock regression shows Discovery having a significant positive impact… Conversion, however, consistently shows negative or negligible impact. I will significantly reduce Conversion." The model is reading a regression fit to 30 days of data and treating it as definitive. Conversion is actually the second-most-effective channel.
M3–M5
A $180k→$250k · B $20k→$10k · C $100k→$40k · rev $243k → $256k → $253k
"Conversion's coefficient is now positive in the adstock regression, but not significant, and still negative in the OLS with controls." The model keeps Conversion effectively zeroed out because the p-values aren't significant, then starts cutting Social too. Revenue plateaus.
M6
A $250k · B $0 · C $50k · rev $252k
"Conversion is now consistently negative in the OLS regressions, suggesting it's detrimental. Social shows a positive trend in the adstock regression, nearing significance. I will completely cut Conversion. I will increase Social to see if it can become significant and contribute positively." Two wrong turns in one month: kills the good channel, doubles the bad one.
M7
A $100k · B $0 · C $200k · rev $248k
"Discovery Adstock's significance has dropped… I will significantly reduce Discovery to test if the adstock effect is truly diminishing. I will allocate a substantial portion to Social to see if it can become a viable channel." The model reacts to noise in a month-to-month regression coefficient by performing the largest reallocation of the entire run, away from the only channel actually working.
M8–M9
A $150k→$200k · B $0 · C $150k→$100k · rev $252k → $244k
"The previous month's drastic shift to Social did not yield positive results. I need to try a more balanced approach." The model rotates back toward Discovery after Social fails to deliver, but still never touches Conversion.
M10–M12
A $250k–$300k · B $0 · C $0–$50k · rev $275k → $290k → $296k
"Discovery Adstock continues to be the only channel with a statistically significant positive impact… I will maximize investment in Discovery." The model converges to 300/0/0 in the final months, better than earlier, but with Conversion still zero, well off the oracle mix.

Step back and look at what actually happened in this run: every move was justified by a regression readout, every readout was fit to a handful of noisy months, non-significant coefficients were treated as evidence of absence, and the largest reallocation of the year was triggered by a wobble in a single coefficient's significance.

GPT-4o-mini's analytics-hurts pattern (#4 on the overall leaderboard, yet −7.7pp once analytics is added) is the same story at a smaller scale. Without analytics, the model has already settled on roughly the right mix from informal pattern-matching on revenue. Add regression summaries and the model starts over-reacting to small-sample fluctuations in coefficients, and ends up worse than it was before.

The common thread

Analytics helps models that are poor at inferring channel effectiveness from raw revenue feedback (GPT-3.5-turbo and Qwen 3-30B are the clearest examples). It does not reliably help models that are already good at that inference, and in a handful of cases it actively makes them worse by replacing a working heuristic with an over-confident reading of a noisy regression. The lesson is not "analytics is good" or "analytics is bad"; it is that analytics is a tool, and using it well requires understanding what its outputs mean and how uncertain they are. That is itself a causal-reasoning skill, not a separate one.

Takeaways

  1. The capability differences between frontier models on this kind of task are smaller than you'd think. The top ten models all fit inside a ~9-point band, and many of the gaps within that band are within per-seed confidence intervals. If you are choosing a model for an applied resource-allocation task, do not expect the spec sheet (context length, parameter count, headline benchmark numbers) to be a reliable guide. Run it on your actual problem.
  2. Giving the model statistical analytics is a large, broadly positive lever. A short regression summary handed to the model at each decision lifts the median model by ~12 points, and lifts several mid-tier models by 20–30. That is a bigger swing than almost any model swap we ran. If your AI pipeline includes this kind of decision, instrumenting the prompt with simple causal summaries is the cheapest large improvement you can make.
  3. Cost tracks capability only weakly on causal-reasoning tasks. Under 2¢ per run, three models score within 2 points of the overall leaderboard leader. Reasoning-effort models like o3-mini and DeepSeek R1, despite costing 5–50× more, do not sit at the top; their step-by-step thinking does not translate into a reliable advantage on a problem of this shape.
  4. How a model uses a statistical tool matters as much as whether it has one. The same set of regression summaries that rescued GPT-3.5-turbo from the bottom of the lineup sent Gemini 2.5 Flash into one of the worst runs in the dataset. The difference is not size, vendor, or generation. It is whether the model knows how to wait for evidence before changing its mind.
  5. This is the kind of reasoning real business decisions actually require. Sequential, noisy, partially-observable, and causal. Standard LLM benchmarks do not measure it. AttributionBench is one stylized data point on what the gap looks like, and the gap is real.

Try it yourself

The same simulator the models ran against is fully playable in the browser, with the same oracle, the same scoring rule, and the same ten seeds. A run takes about eight minutes. Your final score lands on the same leaderboard as every model in this writeup, directly comparable.

Play a run · Full leaderboard · Repo on GitHub

AttributionBench, April 2026. Code and data are MIT-licensed; the full 29-model sweep costs ~$118 of OpenRouter credits to reproduce, and a single mid-tier model takes ~$1.


Appendix: how this was built

The simulator

The environment is a deterministic, seeded simulator written in plain JavaScript. Each run advances day by day for twelve months; channel allocations decided once per month flow into a daily revenue and customer-count update. Stochasticity comes from a seeded RNG that drives weather, demand noise, and a competitor cafe's behavior. The same simulator runs in the browser (so a human can play it) and in Node (so an LLM can play it); the two engine files are byte-identical, which means a model's leaderboard score and a human's leaderboard score were produced by exactly the same code path.

The oracle is computed once per seed by an exhaustive search over fixed monthly allocations under that seed's hidden parameters. Scores are normalized to that oracle and to a zero-budget baseline, so 100% means matching the best possible fixed strategy and 0% means matching no spend at all.
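A sketch of that per-seed search on a coarse $10k grid; `simulateYear` stands in for the real deterministic simulator entry point, and its name (like the grid step) is ours:

```javascript
// Per-seed oracle: exhaustively try fixed monthly splits on a $10k grid
// and keep the best annual revenue. `simulateYear(seed, alloc)` is a
// stand-in for the real simulator entry point; the name is ours.
function findOracle(seed, simulateYear, step = 10_000, budget = 300_000) {
  let best = { revenue: -Infinity, alloc: null };
  for (let a = 0; a <= budget; a += step) {
    for (let b = 0; b <= budget - a; b += step) {
      const alloc = [a, b, budget - a - b]; // always sums to the budget
      const revenue = simulateYear(seed, alloc);
      if (revenue > best.revenue) best = { revenue, alloc };
    }
  }
  return best;
}
```

Because the simulator is deterministic per seed, this search only has to run once per seed and the result can be cached alongside the leaderboard.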

The LLM pipeline

All canonical leaderboard runs go through OpenRouter's OpenAI-compatible API. Routing every vendor (Anthropic, OpenAI, Google, Meta, Mistral, DeepSeek, xAI) through a single schema means every model faces an identical pipeline: same request format, same retry behavior, same provider pinning, same cost accounting. Cross-vendor comparisons stay clean.

A “run” is one full 12-month trajectory under one (model, seed, analytics-bucket) tuple, and counts as 12 model calls. A full canonical evaluation per model is 240 calls: 10 seeds × 2 analytics buckets × 12 months. The 29-model sweep behind this writeup is 580 runs (29 × 20). The model receives a system prompt with the rules, then a fresh state message each month containing year-to-date totals, last month's revenue and customer count, and (in the analytics bucket) three regression summaries computed from the simulator's internal daily stream. Conversation history is kept across all twelve months: the model sees everything it has previously said and observed. Decisions come back as strict JSON; parse failures are recorded and the run continues.
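A sketch of what the strict-JSON contract implies on the receiving end; the field names here are illustrative, and the canonical schema lives in the repo:

```javascript
// Sketch of validating a monthly decision. Field names are illustrative,
// not the benchmark's actual schema.
function parseDecision(text, budget = 300_000) {
  let d;
  try {
    d = JSON.parse(text);
  } catch {
    return { ok: false, reason: "unparseable JSON" };
  }
  if (typeof d !== "object" || d === null) {
    return { ok: false, reason: "not an object" };
  }
  const vals = [d.discovery, d.conversion, d.social];
  if (vals.some(v => typeof v !== "number" || v < 0)) {
    return { ok: false, reason: "missing or negative channel amounts" };
  }
  if (vals.reduce((a, b) => a + b, 0) !== budget) {
    return { ok: false, reason: "does not sum to budget" };
  }
  return { ok: true, allocation: vals };
}
```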

For reasoning-capable families (OpenAI's gpt-5 series, o-series, DeepSeek R1, Qwen3 thinking variants, Anthropic extended-thinking) the canonical setting is reasoning.effort = medium. For non-reasoning models, temperature is 0. Both choices are part of the canonical spec; overriding them produces non-canonical runs that don't pool into leaderboard averages.

Storage and verification

Run metadata, per-month turns, and full prompt/response traces land in Supabase (Postgres with row-level security). Every model's reasoning is queryable and replayable; the AI Replays page on this site streams the saved traces back month by month. Anyone with a clone of the repo and an OpenRouter key can re-run any model and check that the score reproduces.

Cost is tracked per run by diffing the OpenRouter /credits endpoint before and after. The per-generation cost field on individual completions is unreliable across upstreams; the credits delta is exact for a single runner, though it becomes ambiguous if multiple runners share a key.

Bucketing and the analytics view

The two leaderboard buckets, no analytics and with analytics, differ in exactly one thing: whether the monthly state message includes three regression summaries (correlations, an OLS regression with weather and event controls, and an adstock-aware regression). The information content matches what a human in the analytics-enabled condition sees on the live site. Both buckets are required for a model to count as a canonical entry.

Frontend, hosting, deployment

The site is plain HTML, ES-modules JavaScript, and CSS, no framework. The 3D cafe scene at the top of the play page uses Three.js with custom isometric sprite work. Hosting is GitHub Pages; data reads go directly from the browser to Supabase via the public anon key with read-only RLS. There is no application server. The leaderboard query filters strictly on the canonical-config flag in run metadata, so research sweeps and legacy runs sit in the database but never surface on the public table.

Reproducing the leaderboard

Adding a model is one command:

OPENROUTER_API_KEY=sk-or-... node sdk/run.mjs \
  --model anthropic/claude-sonnet-4.6 \
  --submitter "Your Name"

The runner is idempotent: re-running the same model skips already-completed (model, seed, bucket) tuples. Cost lands between $0.20 and $20 depending on tier; the 29-model sweep behind this writeup ran on roughly $118 of OpenRouter credits across about a day of wall-clock time. The full canonical spec (route, temperature, max tokens, reasoning effort, seeds, conversation policy, provider pinning) is in BENCHMARK.md.

Related work

AttributionBench sits at the intersection of three existing literatures. Static causal-reasoning benchmarks ask whether LLMs can produce text consistent with a correct causal argument. Interactive agent benchmarks measure multi-turn decision-making against typically deterministic state transitions. Contextual-bandit studies probe stochastic reward but usually at horizons of one or two steps. None of these simultaneously tests sequential decision-making, stochastic aggregated feedback, and an applied causal structure with a well-defined oracle. The works closest to ours along each of these axes:

References

  1. Chen, W., Koenig, S., & Dilkina, B. (2025). Solving Multi-agent Path Finding as an LLM Benchmark: How, How Good and Why. Transactions on Machine Learning Research. openreview.net/forum?id=8hAxEFRVQT.
  2. Felicioni, N., Maystre, L., Ghiassian, S., & Ciosek, K. (2024). On the Importance of Uncertainty in Decision-Making with Large Language Models. Transactions on Machine Learning Research. openreview.net/forum?id=YfPzUX6DdO.
  3. Kapoor, S., Stroebl, B., Siegel, Z. S., Nadgir, N., & Narayanan, A. (2025). AI Agents That Matter. Transactions on Machine Learning Research. openreview.net/forum?id=Zy4uFzMviZ.
  4. Kıcıman, E., Ness, R., Sharma, A., & Tan, C. (2024). Causal Reasoning and Large Language Models: Opening a New Frontier for Causality. Transactions on Machine Learning Research. openreview.net/forum?id=mqoxLkX210.
  5. Krishnamurthy, A., Harris, K., Foster, D. J., Zhang, C., & Slivkins, A. (2024). Can Large Language Models Explore In-Context? Advances in Neural Information Processing Systems (NeurIPS). arXiv:2403.15371.
  6. Liu, X., Yu, H., Zhang, H., Xu, Y., Lei, X., Lai, H., Gu, Y., Ding, H., Men, K., Yang, K., et al. (2024). AgentBench: Evaluating LLMs as Agents. International Conference on Learning Representations (ICLR). arXiv:2308.03688.
  7. Yang, S., Zhao, B., & Xie, C. (2025). AQA-Bench: An Interactive Benchmark for Evaluating LLMs' Sequential Reasoning Ability in Algorithmic Environments. Transactions on Machine Learning Research. openreview.net/forum?id=W22g6Ksmbi.
  8. Zečević, M., Willig, M., Dhami, D. S., & Kersting, K. (2023). Causal Parrots: Large Language Models May Talk Causality But Are Not Causal. Transactions on Machine Learning Research. openreview.net/forum?id=tv46tCzs83.