FAQ about Smart Recommender A/B Testing View

Why is there a discrepancy between Smart Recommender Analytics and the A/B Testing view?

The two dashboards use different logging logic and serve different purposes:

A/B Testing View (Campaign-level Analytics)

  • Focuses on comparing variants vs. the control group
  • Logs one impression per eligible session
  • Control group doesn't log product-level events
  • Conversion metrics include all site purchases after an impression
  • CTR and CR are based on session-level exposure

Smart Recommender Product-level Analytics

  • Focuses on detailed product-level performance
  • Logs impressions per product, only when 50% of the product is in view
  • No control group comparison; only variants and product-level behavior
  • Conversion metrics include only direct or assisted conversions tied to specific recommended products
  • CTR and CR are based on product-level engagement

Because of these differences:

  • You may see higher click or conversion counts in product analytics due to per-product logging.

  • A/B Testing impressions may appear lower, since they're logged once per session — not per product.

  • Revenue attribution in A/B Testing may include purchases unrelated to the recommended product (if “Purchases” is selected), whereas Smart Recommender Analytics filters revenue based on click attribution.

Use A/B Testing to evaluate campaign impact against a no-recommendation baseline (control).

Use Smart Recommender Analytics to drill into the behavior of recommended products, widgets, and strategies.
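To make the impression discrepancy concrete, here is a minimal sketch of the two logging models described above. The event structures, field names, and threshold handling are assumptions for illustration only, not the product's actual schema:

```python
# Minimal sketch of why the two dashboards count impressions differently.
# Field names and event shapes are hypothetical.

session_events = {"session_123": True}  # A/B Testing view: one impression per eligible session

product_events = [  # Smart Recommender Analytics: one impression per product card 50%+ in view
    {"session": "session_123", "product": "sku_1", "in_view_ratio": 0.80},
    {"session": "session_123", "product": "sku_2", "in_view_ratio": 0.60},
    {"session": "session_123", "product": "sku_3", "in_view_ratio": 0.30},  # below 50%, not logged
]

ab_impressions = sum(session_events.values())                                       # 1
product_impressions = sum(1 for e in product_events if e["in_view_ratio"] >= 0.5)   # 2

print(ab_impressions, product_impressions)  # the same session yields 1 vs. 2 impressions
```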

Why are there no impressions for the control group of my Smart Recommender campaign?

Control group users don’t see any recommendation widgets. As a result, they do not generate product-level impressions or clicks. Instead, the system tracks their behavior at the session level for comparison purposes (e.g., site-wide purchases, revenue).

Why is the impression count the same across multiple variants?

Impressions in A/B testing are logged per eligible session, not per product card. If traffic is split evenly across variants and users qualify at similar rates, you may see the same number of impressions per variant, especially in low-traffic Smart Recommender campaigns.

Why does my conversion rate seem low even though I see purchases?

Conversion Rate (CR) in this report is calculated as conversions divided by eligible sessions (impressions). If many users are eligible but don't engage with the Smart Recommender widget or convert, your CR may look low even with some purchases. To improve it, try narrowing your audience or increasing widget visibility.
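As a hypothetical worked example (the numbers below are made up), a campaign with plenty of purchases can still show a low CR when the eligible-session denominator is large:

```python
# Illustrative numbers only: 40 purchases out of 2,000 eligible sessions.
conversions = 40
eligible_sessions = 2_000   # every session that qualified for the campaign, engaged or not
cr = conversions / eligible_sessions
print(f"CR = {cr:.1%}")     # CR = 2.0%, low despite a meaningful number of purchases
```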

Why is “Purchases from Clicks” lower than “Purchases” for my Smart Recommender campaign?

"Purchases" includes all purchases on your site after a user is eligible to see a campaign. “Purchases from Clicks” includes only purchases from users who clicked the widget before buying. It helps you understand the direct influence of engagement vs. passive exposure.

Why is the control group showing revenue even though there are no impressions or clicks for my Smart Recommender campaign?

The control group doesn’t see the widget, but still navigates your site and may purchase. This is expected — they serve as a no-exposure baseline so you can measure how much additional value is driven by showing recommendations.

Why does my revenue change when I switch conversion criteria?

Revenue is recalculated based on your selected conversion logic:

  • “Purchases” includes all purchases made after an impression

  • “Purchases from Clicks” includes only purchases made after a click

This helps you evaluate both the direct and indirect impact of your Smart Recommender widget.
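The same idea applied to revenue, again with hypothetical order records: switching the criterion only changes which orders' values are summed.

```python
# Hypothetical order records; only the filter changes between the two criteria.
orders = [
    {"order_value": 120.0, "clicked_widget": True},
    {"order_value": 80.0,  "clicked_widget": False},  # bought without clicking the widget
]

revenue_purchases = sum(o["order_value"] for o in orders)                            # 200.0
revenue_from_clicks = sum(o["order_value"] for o in orders if o["clicked_widget"])   # 120.0

print(revenue_purchases, revenue_from_clicks)
```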

Why is my uplift or significance score low even though one variant of my Smart Recommender campaign performs better?

Uplift shows relative improvement, but significance depends on the number of impressions and the margin of difference. If your sample size is small or the uplift is marginal, statistical confidence will be low.
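As a rough illustration of why a visible uplift can still lack confidence, here is a plain two-proportion z-test on made-up numbers. This is a generic calculation, not necessarily the exact test the dashboard applies:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-score for the difference between two conversion rates (generic pooled formula)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# A 20% relative uplift (2.0% -> 2.4% CR) on only 1,000 sessions per variant:
z = two_proportion_z(20, 1000, 24, 1000)
print(round(z, 2))  # ~0.61, well below the ~1.96 needed for 95% confidence
```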

Why is AOV higher in the control group than in the variants of my Smart Recommender campaign?

Variants might encourage the discovery of lower-priced or add-on items, affecting average order value. A higher AOV in control doesn't necessarily mean better performance — review total revenue and conversion rate alongside AOV for the whole picture.
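A quick hypothetical calculation shows how this can happen: the variant adds smaller orders, which lowers its AOV even while total revenue goes up.

```python
# Made-up figures: AOV = revenue / orders.
control = {"revenue": 10_000, "orders": 100}   # AOV = 100.0
variant = {"revenue": 13_500, "orders": 150}   # AOV =  90.0, yet 35% more revenue

for name, group in (("control", control), ("variant", variant)):
    print(name, "AOV =", group["revenue"] / group["orders"])
```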

Can I compare campaign-level analytics results with the Smart Recommender analytics dashboard?

Not directly. The Campaign A/B Testing dashboard uses session-based logging and site-level conversions, which is especially relevant for its control-group logic. In contrast, the Smart Recommender Analytics dashboard uses product-level logs (impressions, clicks, direct/assisted conversions). Use them together, but interpret them separately.