Interpret the Smart Recommender Analytics Results


Smart Recommender provides detailed analytics that go far beyond vanity metrics. These insights allow you to break down your campaign performance at the variant, product, and category levels—giving you a full-funnel view of what’s working, what’s not, and where to optimize next.

This guide helps you understand how to:

  • Compare algorithm and placement performance across variants

  • Identify high-performing (or underperforming) products

  • Uncover the real business impact of different recommendation strategies

  • Tailor strategies by platform, season, and category

You’ll also find real-world use cases and optimization flows based on Smart Recommender data, which are actionable recipes for boosting engagement, improving conversion rates, and increasing revenue.

Whether you're fine-tuning a homepage carousel or building a high-intent campaign for a product detail page, this guide will help you make confident decisions backed by data.

Let’s explore how to put each analytics view to work—starting from campaign testing all the way down to product and category optimization.

Campaign Analytics

Use Case 1: Test Placement Impact (Homepage vs. Product Page)

Scenario: You’re running the same “User-Based” algorithm with the same design — but one campaign is placed on the Homepage, another on Product Pages.

What to check:

  • Product Impressions: Is one placement getting more visibility?

  • CTR & ATC Rate: Does one generate more engagement?

  • Direct Revenue: Which placement leads to more high-intent traffic?

How to act:

  • If the Product Page has fewer impressions but 3x the Conversion Rate, shift more high-margin inventory there.

  • If the Homepage gets more clicks but low CR, try changing the algorithm or layout to show best-sellers instead of personalized recs.
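To make the placement comparison concrete, here is a minimal sketch of how CTR and Conversion Rate are derived from raw counts. All figures are hypothetical, not pulled from a real account:

```python
# Illustrative placement comparison; every number below is made up.
placements = {
    "Homepage":     {"impressions": 120_000, "clicks": 3_600, "purchases": 72},
    "Product Page": {"impressions": 30_000,  "clicks": 1_200, "purchases": 72},
}

for name, m in placements.items():
    ctr = m["clicks"] / m["impressions"]  # CTR: clicks per impression
    cr = m["purchases"] / m["clicks"]     # CR: purchases per click
    print(f"{name}: CTR={ctr:.2%}, CR={cr:.2%}")
```

With these sample numbers the Product Page shows a 3x higher CR despite far fewer impressions — exactly the situation where shifting high-margin inventory to the Product Page pays off.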

Use Case 2: Compare Algorithms Side-by-Side

Scenario: You want to test User-Based vs. Most Popular Items on the same placement (say, category pages).

What to check:

  • Direct Revenue + Assisted Revenue: Which is actually driving sales?

  • CTR/PCTR (Product Click-Through Rate): Which algorithm generates more interest?

  • PCR (Product Conversion Rate): Which one converts better after a click?

How to act:

  • If User-Based has a lower CTR but higher PCR, it indicates that it shows fewer but more relevant products.

  • Declare the better performer the winner and scale it, or build hybrid logic (e.g., User-Based first row + Most Popular second row).

Use Case 3: Validate “Copy” Strategy (Same Logic, Different Design)

Scenario: You create a second variant with the same logic and placement but test a new widget design (e.g., 2-row layout, more spacing, badges).

What to check:

  • CTR & PATC (Product Add-to-Cart Rate): Did the visual change increase product interest?

  • Revenue: Did this turn into actual value?

  • Product Clicks vs. Purchases: Did more visibility create decision fatigue?

How to act:

  • If your new variant has a higher CTR but flat purchases, consider retesting the layout or A/B testing the number of visible products.

  • If you have a lower CTR but a higher PCR, your new design might be filtering noise, which is a win.

Use Case 4: Discover When Assisted Revenue Peaks

Scenario: You want to uncover which campaigns support the buyer journey, even if they don’t drive direct clicks to purchase.

What to check:

  • Assisted Revenue / Direct Revenue Ratio per variant

  • Variants with low CR but high assisted sales

How to act:

  • These are campaigns that create a halo effect (e.g., helping users discover categories or brands they eventually buy).

  • Keep them running even if they don’t “win” an A/B test in direct sales. They create depth in the funnel.
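One way to spot these halo campaigns is to compute the assisted-to-direct revenue ratio per variant and flag outliers. The variant data and the 2x threshold below are hypothetical; tune the cutoff to your own numbers:

```python
# Flag "halo" variants: low direct sales but strong assisted revenue.
# Variant figures and the 2.0 threshold are illustrative assumptions.
variants = [
    {"name": "A", "direct_revenue": 12_000, "assisted_revenue": 4_000},
    {"name": "B", "direct_revenue": 2_500,  "assisted_revenue": 9_000},
]

for v in variants:
    ratio = v["assisted_revenue"] / max(v["direct_revenue"], 1)
    v["halo"] = ratio >= 2.0  # assisted revenue at least 2x direct
    label = "halo candidate" if v["halo"] else "direct driver"
    print(f"Variant {v['name']}: assisted/direct={ratio:.2f} -> {label}")
```

Variants flagged this way are the ones worth keeping alive even when they lose a direct-sales A/B test.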

Use Case 5: Optimize Platform-Specific Strategies

Scenario: Your widget is running on both Mobile and Desktop with different user behaviors.

What to check:

  • Mobile: lower CTR, higher bounce? Try fewer products or a scroll layout.

  • Desktop: longer sessions? Show more products or test richer layouts (bigger images, badges).

How to act:

  • Customize each platform’s experience based on engagement trends.

  • Don’t force one widget to fit every platform — you have the data to tailor each experience.

Top 100 Product Analytics

This view helps you zoom in on individual product performance inside your recommendation campaigns — not just which campaigns perform, but which specific items drive engagement and revenue.

Use Case 1: Identify High-Value “Hero” Products

What to look for:

  • High Direct Revenue

  • High PCTR (Product Click-Through Rate) and PATC (Product Add-to-Cart Rate)

  • High PCR (Product Conversion Rate)

How to act:

  • Boost their visibility across more widgets and placements.

  • Anchor your homepage or cart page strategy around these top performers.

  • Feature these products in retargeting and email campaigns — they reliably convert.

Use Case 2: Spot "Clickbait" Products (High Clicks, No Sales)

What to look for:

High PCTR (Product Click-Through Rate) but low PATC (Product Add-to-Cart Rate) and low PCR (Product Conversion Rate)

What this tells you:

Users are interested but drop off post-click, possibly due to price, image mismatch, or stock issues.

How to act:

  • Check if PDP (Product Detail Page) is optimized.

  • Investigate price, availability, or UX issues.

  • Temporarily deprioritize these products in your widget to improve CR.
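A simple rule of thumb for surfacing these products programmatically is to flag anything with a high PCTR but a low PCR. The product rows and thresholds below are made-up examples, not recommended defaults:

```python
# Identify potential "clickbait" products: lots of clicks, few purchases.
# Product rows and the threshold values are hypothetical.
products = [
    {"sku": "SKU-1", "impressions": 50_000, "clicks": 4_000, "purchases": 10},
    {"sku": "SKU-2", "impressions": 40_000, "clicks": 1_600, "purchases": 80},
]

def is_clickbait(p, min_pctr=0.05, max_pcr=0.01):
    pctr = p["clicks"] / p["impressions"]  # Product Click-Through Rate
    pcr = p["purchases"] / p["clicks"]     # Product Conversion Rate
    return pctr >= min_pctr and pcr <= max_pcr

flagged = [p["sku"] for p in products if is_clickbait(p)]
print(flagged)  # SKU-1 attracts clicks but almost never sells
```

Products on the flagged list are the ones to audit for PDP, price, or stock issues before deprioritizing them in the widget.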

Use Case 3: Cold Products (High Impressions, Low Everything)

What to look for:

  • Very high product impressions

  • Very low PCTR (Product Click-Through Rate), PATC, PCR

What this tells you:

These items are shown frequently but generate almost no engagement or revenue.

How to act:

  • Exclude them from top placements or deprioritize with manual rules.

  • If you have significant inventory (e.g., new arrivals), consider testing new images, discounts, or badges.

Use Case 4: Retest with Improved Assets

What to look for:

Products that recently underperformed

How to act:

  • Update product titles, images, or metadata

  • Re-introduce in a different widget variant to measure uplift

  • If it rebounds, scale!

Category Analytics

This view helps you understand which product categories are driving results, revealing the deeper merchandising trends inside your recommendation engine.

Use Case 1: Prioritize Categories with High ROI

What to look for:

  • High Direct Revenue

  • Healthy CTR, CR

How to act:

  • Make sure these categories get prime placements (homepage, cart page, etc.)

  • Ensure top-performing products in those categories get surfaced with a higher rank weight.

Use Case 2: Test Performance by Gender or Season

What to look for:

  • Compare categories like “Male” vs. “Female”

  • See how “Winter Gear” performs vs. “Summer” or “Essentials”

How to act:

  • Shift campaigns dynamically based on seasonality.

  • Create split-logic widgets (e.g., show Men’s Shoes on men-focused traffic).

Use Case 3: Identify Stagnant Categories

What to look for:

High Impressions but very low CTR and Revenue

What this tells you:

These categories may be overexposed or a poor fit for the current placement logic.

How to act:

  • Reduce the frequency of these categories in default campaigns.

  • Consider refining targeting or changing layout/design for these categories only.

Use Case 4: Create Category-Level Strategies

What to look for:

Top-converting categories (even with low traffic volume)

How to act:

  • For high-converting but low-traffic categories, consider promoting more aggressively.

  • Create category-dedicated recommendation widgets.

  • Group categories by performance to apply different A/B strategies per group.

Example Optimization Flows

Optimization Flow 1: Top Product + Category Combo Test

Goal: Increase revenue by combining product-level and category-level insights

  1. Open Top Product Analytics

  • Sort by Direct Revenue

  • Flag Top 10 revenue-driving products

  • Flag Bottom 10 products with high impressions but low CR

  2. Open Category Analytics

  • Sort by Conversion Rate (CR)

  • Identify the Top 3 converting categories

  3. Create two new campaign variants

  • Variant A: Showcase Top 10 products from Top Product list

  • Variant B: Use dynamic algorithms (e.g., “Viewed Together”) filtered only to include the Top 3 converting categories

  4. Launch both variants for 7–14 days

  5. Review Variant Analytics

  • Compare Direct Revenue, CTR, and Conversion Rate

  • If Variant A drives higher direct revenue but Variant B has better CR, consider combining them or testing a platform split (e.g., A on desktop, B on mobile)
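The selection steps in this flow (steps 1 and 2) boil down to a couple of sorts and filters. This sketch uses invented field names and data purely to illustrate the logic:

```python
# Sketch of Flow 1's selection logic. All records, field names, and
# thresholds are hypothetical stand-ins for your exported analytics.
products = [
    {"sku": "A", "direct_revenue": 900, "impressions": 1_000, "cr": 0.05},
    {"sku": "B", "direct_revenue": 50,  "impressions": 9_000, "cr": 0.002},
    {"sku": "C", "direct_revenue": 400, "impressions": 2_000, "cr": 0.03},
]
categories = [
    {"name": "Shoes", "cr": 0.041},
    {"name": "Bags",  "cr": 0.028},
    {"name": "Hats",  "cr": 0.035},
]

# Step 1: top revenue drivers, plus high-impression / low-CR laggards.
top = sorted(products, key=lambda p: p["direct_revenue"], reverse=True)[:10]
laggards = [p for p in products if p["impressions"] > 5_000 and p["cr"] < 0.005]

# Step 2: top 3 converting categories.
top_categories = sorted(categories, key=lambda c: c["cr"], reverse=True)[:3]

print("Variant A pool:", [p["sku"] for p in top])
print("Deprioritize:", [p["sku"] for p in laggards])
print("Variant B category filter:", [c["name"] for c in top_categories])
```

The "Variant A pool" feeds the showcase variant, while the category filter list constrains the dynamic-algorithm variant.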

Optimization Flow 2: Maximize Assisted Revenue Impact

Goal: Leverage discovery behavior to increase assisted conversions

  1. Open Campaign Analytics

Filter for campaigns with high Assisted Revenue / low Direct Revenue

  2. Open Product Analytics

Flag products that generate clicks but rarely get purchased directly

  3. Adjust product presentation

  • Test different design placements for those products

  • Move them to mid-funnel placements (Category Pages instead of PDP)

  4. Create a dedicated campaign

Use “Contextual Recommendations” to surface these products in broader shopping contexts

  5. Track changes in assisted revenue and total order value over 2 weeks

If assisted revenue remains high, these campaigns are playing a strategic discovery role. Keep them running as assist campaigns.

Optimization Flow 3: Platform-Specific Experience Split

Goal: Improve performance across mobile and desktop

  1. Open Campaign Analytics

  • Use the Device Filter

  • Compare the same campaign's performance on Mobile vs. Desktop

  2. Note UI differences

  • Is CTR significantly lower on mobile?

  • Does Desktop outperform on AOV?

  3. Duplicate the campaign

  • Variant A for Desktop: Wide layout, more products per row

  • Variant B for Mobile: Scrollable carousel, fewer products per slide

  4. Monitor for 10 days

  • Use the platform filter to track CR, PATC, and revenue per user

  • Keep both variants if platform targeting improves results