2026-02-19
Sponsored vs Organic Sales Split + Flywheel Health
Task: Build sponsored/organic sales analytics with flywheel health indicator for ATLAS.
User insight (key):
> "TACOS sweet spot is 25-75%. Too low means leaving market on table. But if TACOS is 75% and sponsored portion is 80%, we're not generating the flywheel to get organic sales."
This changed everything — moved from simple TACOS thresholds to combined TACOS+sponsored% health analysis.
What we built:
1. Family Page Widget (after TACOS widget, overview tab)
- Total Units, Sponsored Units (%), Organic Units (%)
- Flywheel Health indicator with 4 states:
- 🟢 Healthy: TACOS <75%, Sponsored <50% (ads driving organic)
- 🟡 Warning: TACOS <75%, Sponsored ≥50% (could improve organic)
- 🟠 High TACOS: TACOS ≥75%, Sponsored <70% (still generating some organic)
- 🔴 Critical: TACOS ≥75%, Sponsored ≥70% (weak flywheel, losing money)
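The four states above reduce to a tiny classifier. A minimal sketch, assuming percentage inputs on a 0-100 scale; the function name and string labels are hypothetical:

```typescript
// Hypothetical sketch of the four-state flywheel health check.
// Thresholds are taken directly from the widget states in the notes.
type FlywheelState = "healthy" | "warning" | "high-tacos" | "critical";

function flywheelHealth(tacos: number, sponsoredPct: number): FlywheelState {
  if (tacos >= 75 && sponsoredPct >= 70) return "critical"; // weak flywheel, losing money
  if (tacos >= 75) return "high-tacos";                     // still generating some organic
  if (sponsoredPct >= 50) return "warning";                 // could improve organic
  return "healthy";                                         // ads driving organic
}
```

The point of combining both axes: a 70% TACOS can be fine if half the units are organic, but the same TACOS with 80% sponsored units means the ads are not seeding organic demand.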
2. ATLAS Intelligence Updates
- Corrected TACOS thresholds:
- >100% = Critical (losing money)
- 90-100% = Warning (breaking even)
- 75-90% = Watch (tight margins)
- 25-75% = Sweet spot ✅
- <10% = Opportunity (too conservative, could capture more market)
- Flywheel health alerts (high TACOS + high sponsored %)
- Growth opportunities (very low TACOS with strong royalty)
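A sketch of the tiering logic behind those thresholds (function name hypothetical; the notes leave 10-25% unlabeled, treated here as a neutral "low" tier as an assumption):

```typescript
// Hypothetical sketch of the TACOS tiering in atlas-intelligence.js.
// TACOS = total ad spend / total sales, expressed as a percentage.
function tacosTier(tacos: number): string {
  if (tacos > 100) return "critical";   // losing money
  if (tacos >= 90) return "warning";    // breaking even
  if (tacos >= 75) return "watch";      // tight margins
  if (tacos >= 25) return "sweet-spot"; // healthy range per user's domain knowledge
  if (tacos >= 10) return "low";        // 10-25% is unlabeled in the notes; assumption
  return "opportunity";                 // too conservative, could capture more market
}
```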
3. Data Layer
- Added organic_units, sponsored_pct, organic_pct to adsMetrics
- Calculation already existed (line 583) — just needed to surface it
- Updated getAdsData() to fetch royalties + attributed_units
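The surfaced calculation amounts to something like this (a sketch; the real field names in adsMetrics may differ). The totalUnits > 0 guard addresses the NaN% risk the pre-mortem flagged:

```typescript
// Hypothetical sketch of the sponsored/organic split in the data layer.
interface AdsSplit { organicUnits: number; sponsoredPct: number; organicPct: number }

function splitUnits(totalUnits: number, sponsoredUnits: number): AdsSplit {
  // organic = total minus ad-attributed units, clamped at zero
  const organicUnits = Math.max(0, totalUnits - sponsoredUnits);
  // guard against division by zero so the widget never renders NaN%
  const sponsoredPct = totalUnits > 0 ? (sponsoredUnits / totalUnits) * 100 : 0;
  const organicPct = totalUnits > 0 ? 100 - sponsoredPct : 0;
  return { organicUnits, sponsoredPct, organicPct };
}
```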
Process that worked:
1. Pre-mortem — caught NaN% division by zero risk (90% probability)
2. Plan before code — realized calculation existed, just needed UI
3. Incremental testing — family page → Intelligence → capture experience
4. User domain knowledge — 25-75% sweet spot beats generic benchmarks
Lessons:
- Domain expertise > generic benchmarks — user's "too low = leaving market" insight was gold
- Ask threshold questions upfront — "what's good/bad/ideal?" prevents rework
- Calculation existing but hidden — check data layer before building from scratch
- Pre-mortem → Plan → Execute — works reliably, prevents breakage
- Linter catches incomplete refactoring — unused variables = logic not fully thought through
Files changed:
kdp-app/app/family/[id]/page.tsx — widget + adsMetrics interface
bin/atlas-intelligence.js — thresholds + flywheel logic + opportunities
Current state:
- Widget live on family pages ✅
- Intelligence script updated ✅
- PM2 restarted (kdp-app) ✅
- Experience captured in intuition center ✅
- Commit: 0340cf9
What's next (user's choice):
1. Currency-affected listings — re-import Feb 2026 for 3 listings (IT/EUR, 2x UK/GBP)
2. ~~Oct-Nov 2020 gap~~ — user confirmed no data in KDP for this period, not an issue ✅
---
3D Book Cover Effect (Visual Polish)
User request: "Can you make the cover in the family file more attractive? Like a 3D book or something..."
Implemented:
- Perspective transform — rotateY(-15deg) + rotateX(2deg) for depth
- Visible spine — dark gradient on left edge with inset shadow
- Multi-layer shadows — realistic depth (-5px, -10px offsets)
- Edge highlight — subtle white gradient on right edge (light reflection)
- Pure CSS — no external dependencies
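Expressed as a React inline-style object, the effect might look roughly like this (the perspective distance and border radii are assumptions; only the rotation angles and shadow offsets come from the notes, and the real code may use CSS classes instead):

```typescript
// Hypothetical sketch of the 3D book cover styling.
const coverStyle: Record<string, string> = {
  // rotation angles from the notes; the 800px perspective distance is assumed
  transform: "perspective(800px) rotateY(-15deg) rotateX(2deg)",
  // multi-layer shadows at -5px and -10px offsets for realistic depth
  boxShadow: "-5px 5px 15px rgba(0,0,0,0.3), -10px 10px 30px rgba(0,0,0,0.2)",
  // a slightly squarer left edge suggests the spine
  borderRadius: "2px 6px 6px 2px",
};
```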
Result: Book covers now look like physical books sitting on a desk instead of flat images.
Files changed:
kdp-app/app/family/[id]/page.tsx — cover display section
- Commit: f4ed1b4
Note: User said "Something maybe futile... but you know humans are like that" — visual details matter! This kind of polish makes the tool feel more premium and enjoyable to use.
---
Full UX Reorganization (Family Pages)
User request: "Can you review the UX and organise things a bit better? I feel like it needs some work... One thing I can tell you is that I don't need to see all the variations' BSR right away, only the best one (could something open up or...)"
Problem identified:
- Information overload — showing all format cards (paperback, ebook, hardcover) upfront took huge vertical space
- Unclear hierarchy — most important data buried
- Repetitive data across format cards
Solution implemented:
New Page Structure:
1. Always Visible (above tabs):
- Header (cover + title + author)
- Date range selector
- Key metrics row (7 cards)
- 📊 Best Performance summary (NEW):
- Best BSR across all formats (+ which format)
- Best category rank + Best Seller badge
- Reviews & rating
- Last scrape date
- "▼ View All Formats" toggle button
2. Tabs Reorganized:
- Overview — TACOS widget, Sponsored/Organic split, KENP metrics (clean, focused on insights)
- Performance (NEW) — All format details (always expanded), marketplace breakdown, format breakdown
- Ads — Unchanged
- Actions — Unchanged
Key Changes:
- Format cards now collapsible (start hidden, click to expand)
- Marketplace/format breakdowns moved from Overview → Performance tab
- Best Performance card shows only what matters at a glance
- Reduced cognitive load while keeping all data accessible
Files changed:
kdp-app/app/family/[id]/page.tsx — full UX restructure
- Added showAllFormats state for collapsible behavior
- Commits: c3dcb07 (Phase 1), 26d4abb (completion)
Result: Much cleaner, less overwhelming. User gets the important stuff immediately, can drill down when needed.
Process note: Used incremental approach (Phase 1 → test → Phase 2) to avoid breaking things. Build/test after each change. Safe refactoring = successful refactoring.
---
Data Quality Dashboard (Public Access)
Task: Make the data quality page publicly accessible (previously gave 401 error).
Root cause identified:
- Page was already working with RLS policies + anon key
- But argus-inbox's requireAuth middleware blocked ALL /kdp/* routes
- The Next.js app has basePath: '/kdp' configured
- Testing with the wrong path gave a false negative
Solution:
1. Added /kdp/data-quality to public routes bypass in requireAuth() function
2. Restarted argus-inbox to apply changes
3. Verified public access: https://argus.alinanorai.fr/kdp/data-quality → 200 OK
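The bypass likely looks something like this (a sketch; the requireAuth() internals and the list name are assumptions, and only the /kdp/data-quality route is confirmed by the notes):

```typescript
// Hypothetical shape of the public-route bypass in argus-inbox's requireAuth.
const PUBLIC_ROUTES = ["/kdp/data-quality"];

function isPublicRoute(path: string): boolean {
  // exact match or sub-path; everything else still requires auth
  return PUBLIC_ROUTES.some((route) => path === route || path.startsWith(route + "/"));
}
```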
Security model:
- argus-inbox auth bypassed for this specific route
- Protection handled by Supabase RLS policies (anon key access)
- Page shows coverage analysis, missing data detection, data quality metrics
- No sensitive data (just aggregate stats about data completeness)
Files changed:
/home/argus/.openclaw/workspace/argus-inbox/server.js — added public route bypass
- Commit: db6fc9a
Status: ✅ Complete. Dashboard publicly accessible and working.
Process note: Initially suspected an RLS/auth issue in the Next.js app, but the culprit was the argus-inbox middleware. Testing with the correct path (/kdp/data-quality, not /data-quality) and checking actual HTTP status codes revealed the real issue.
---
Amazon Ads Keyword Ingestion System
User request: "I can get all that [keyword data], wouldn't that break our cron though?"
Task: Build multi-report-type ingestion system that handles both Product Performance + Search Term reports without breaking existing functionality.
What we built:
1. Multi-report-type detection (bin/poll-ads-email.js)
- Auto-detects report type by examining columns
- Routes Search Term Reports → ads_keywords table
- Routes Product Performance Reports → ads_daily table (unchanged)
- Unknown report types safely skipped with warning
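The detection step probably reduces to a column check like this (a sketch; the column names are assumptions modeled on Amazon Ads report headers, not confirmed by the notes):

```typescript
// Hypothetical sketch of column-based report-type routing in poll-ads-email.js.
type ReportType = "search_term" | "product" | "unknown";

function detectReportType(columns: string[]): ReportType {
  const cols = columns.map((c) => c.toLowerCase());
  if (cols.includes("search term")) return "search_term";   // → ads_keywords
  if (cols.includes("advertised asin")) return "product";   // → ads_daily (unchanged)
  return "unknown";                                         // skipped with a warning
}
```

Returning "unknown" instead of throwing is what keeps the cron from crashing on unexpected report types.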
2. Database schema (migrations/ads_keywords_table.sql)
- Created ads_keywords table with full keyword data capture
- Columns: target_value, target_match_type, matched_target, search_term
- Plus all metrics: impressions, clicks, spend, sales, units
- Indexed for performance (search_term, target_value, listing+date)
- RLS policies matching existing setup
3. Ingestion handlers
- processSearchTermReport() - handles keyword data
- processProductReport() - original functionality preserved
- Batch processing, auto-dedup, graceful error handling
First import results:
- 3,840 keyword records imported successfully
- 1,689 unique keyword/search term combinations
- Match types captured: BROAD, EXACT, TARGETING_EXPRESSION, TARGETING_EXPRESSION_PREDEFINED
- Campaign + Ad Group names included
Sample insights already visible:
❌ Wasted spend detection:
- "marc aurele" → 7 clicks, €2.9 spent, 0 sales
- "2755508353" (ASIN targeting) → 5 clicks, €1.91 spent, 0 sales
✅ Converting search terms:
- "seneque" (complement targeting) → 4 clicks, €1.16 spent, 1 sale
- "ikigai livre français" → 4 clicks, €2.01 spent, 1 sale
- "de la brievete de la vie seneque" → 4 clicks, €2.9 spent, 1 sale
Architecture highlights:
- Zero breaking changes to existing product report flow
- Dry-run mode shows detected type + columns for debugging
- Both report types processed in same cron run
- Graceful degradation (unknown reports skipped, not crashed)
Files:
bin/poll-ads-email.js - Updated with multi-report support
migrations/ads_keywords_table.sql - Schema migration (run in Supabase)
docs/ADS-KEYWORD-SETUP.md - Setup guide
Next steps:
- ✅ Build keyword analytics dashboard - COMPLETE
---
Keyword Analytics Dashboard (Built in Session)
User request: "Yes please, build the dashboard. Carefully, with your intuition, and pre-mortem runs"
Process followed:
1. Pre-mortem first (node bin/intuition-premortem.js)
- Warned about NaN% from division by zero (90% probability)
- Warned about Supabase 1000-row truncation (80%)
- Warned about date range calculation errors
- All pre-mortem warnings addressed before coding
2. Data layer with safeguards (lib/data.ts)
- getKeywordStats() function with proper pagination
- Division guards: clicks > 0 ? spend / clicks : 0
- Aggregates by search_term + target_value + match_type
- Min spend (€1) and min clicks (3) filters
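A minimal sketch of that aggregation (field names mirror the ads_keywords columns; the €1 spend and 3-click floors are from the notes, the rest is an assumption about the real getKeywordStats()):

```typescript
// Hypothetical sketch of the keyword aggregation in lib/data.ts.
interface KeywordRow { search_term: string; clicks: number; spend: number; units: number }

function aggregateKeywords(rows: KeywordRow[], minSpend = 1, minClicks = 3) {
  const byTerm = new Map<string, KeywordRow>();
  for (const r of rows) {
    const acc = byTerm.get(r.search_term) ??
      { search_term: r.search_term, clicks: 0, spend: 0, units: 0 };
    acc.clicks += r.clicks;
    acc.spend += r.spend;
    acc.units += r.units;
    byTerm.set(r.search_term, acc);
  }
  return Array.from(byTerm.values())
    .filter((k) => k.spend >= minSpend && k.clicks >= minClicks)
    .map((k) => ({
      ...k,
      // division guards from the pre-mortem: never emit NaN%
      cpc: k.clicks > 0 ? k.spend / k.clicks : 0,
      convPct: k.clicks > 0 ? (k.units / k.clicks) * 100 : 0,
    }));
}
```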
3. Dashboard UI (app/keywords/page.tsx)
- Match Type Performance - compare BROAD vs EXACT vs auto-targeting
- Top Converting Search Terms - sorted by units, color-coded conv%, CPC, ROAS
- Wasted Spend Report - clicks without sales, suggestions (add negative keyword vs monitor)
- Search Term Opportunities - high conv% terms not in exact match (add as exact keyword)
Build issues encountered:
- Type confusion: DateRange is a string ('30d'), not an object {from, to}
- Import path: DateRangeSelector lives in app/dashboard/, not components/
- Named vs default export mismatch
Fixes applied:
- Checked type definitions before assuming structure
- Used existing patterns from dashboard page
- Fixed imports to match actual file structure
Safety features (from pre-mortem):
✅ Pagination via fetchAllFromTable (no 1000-row truncation)
✅ All divisions guarded (denom > 0 ? calc : fallback)
✅ Empty state handling with helpful messages
✅ Color-coded thresholds (green/yellow/red for quick scanning)
✅ Filter by min spend/clicks to reduce noise
Live URL: https://argus.alinanorai.fr/kdp/keywords
Immediate insights visible:
- Match type performance comparison (which types convert best)
- €5.90 wasted on "marc aurele" (7 clicks, 0 sales) → add as negative
- "seneque" complement targeting converting at 25% (1 sale from 4 clicks)
- Several high conv% broad match terms → opportunities for exact match
Files:
kdp-app/lib/data.ts - getKeywordStats() function
kdp-app/app/keywords/page.tsx - dashboard UI
- Experience captured in intuition center
Process note: Pre-mortem → Plan → Execute worked perfectly. Every warning from pre-mortem was addressed. Build errors were type/import issues (learnable), not logic bugs (harder to fix). User's excitement ("Did it really work, just like that?") confirmed the value - keyword data is genuinely gold for optimization.
---
Per-Family Keyword Analytics (User's Insight)
User insight: "I would say it needs to be per ASINs. Otherwise we won't know how to harvest or what to block and where... I think we should build a 'best targets' and a 'never keyword' list for each family. OMG - this is really cool!!!!"
Why this matters:
Portfolio-wide keyword analytics is interesting, but per-family is actionable. You need to know:
- Which keywords work for "Meditations" vs which waste money
- Which keywords work for guitar tab books vs which get clicks but no sales
- Book-specific negative keyword lists
What we built:
1. Keywords tab on every family page (commit 57459d5)
- Auto-loads when tab is selected (lazy loading)
- Respects date range selector
- Clean table layouts matching portfolio dashboard
2. Three sections:
- Match Type Performance cards - Shows which match types convert for THIS book
- Best Targets table - Keywords driving sales for THIS book (scale these)
- Never Keywords table - Search terms wasting money on THIS book (add as negatives)
3. Actionable UI:
- Color-coded: green badges (converters), red badges (wasters)
- Suggestions: "🔴 Add as negative" (spend >€5) vs "⚠️ Monitor" (lower spend)
- Conv% highlighted (green >10%, yellow 5-10%)
- Link to portfolio dashboard for cross-book insights
Data layer (lib/data.ts):
getFamilyKeywordData(familyId, dateRange) - Scopes keyword aggregation to specific family
- Fetches all listings for family → filters ads_keywords by listing_ids
- Aggregates by search_term (same keyword might appear multiple times)
- Min 2 clicks filter (reduces noise)
API endpoint:
GET /api/keywords?familyId=xxx&range=30d
- Returns: bestTargets[], neverKeywords[], matchTypePerformance[]
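The family scoping step can be sketched like this (shapes are assumptions; the 2-click floor is from the notes, and the real getFamilyKeywordData also handles the date range and Supabase fetch):

```typescript
// Hypothetical sketch of the per-family scoping: filter keyword rows to the
// family's listing ids, then aggregate by search term.
interface FamilyKeywordRow {
  listing_id: string; search_term: string; clicks: number; spend: number; units: number;
}

function scopeToFamily(rows: FamilyKeywordRow[], listingIds: Set<string>, minClicks = 2) {
  const byTerm = new Map<string, { clicks: number; spend: number; units: number }>();
  for (const r of rows) {
    if (!listingIds.has(r.listing_id)) continue; // other families' data excluded
    const acc = byTerm.get(r.search_term) ?? { clicks: 0, spend: 0, units: 0 };
    acc.clicks += r.clicks;
    acc.spend += r.spend;
    acc.units += r.units;
    byTerm.set(r.search_term, acc);
  }
  return Array.from(byTerm.entries())
    .filter(([, v]) => v.clicks >= minClicks)
    .map(([term, v]) => ({ term, ...v, converting: v.units > 0 }));
}
```

This is exactly why per-family beats portfolio-wide: the same search term can land in bestTargets for one book and neverKeywords for another.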
User's reaction: Requested this feature after seeing portfolio dashboard - recognized immediately that book-specific data is what's needed to take action.
Example insights this enables:
- "For Book A: 'stoicism' converts at 20%, but 'philosophy' wastes €5 → add 'philosophy' as negative for Book A only"
- "For Book B: opposite pattern → keep 'philosophy', block 'stoicism'"
- Build book-specific keyword strategies instead of one-size-fits-all
Process note: User's domain knowledge ("per ASINs") immediately improved the feature. Portfolio-wide shows patterns, per-book shows what to DO. This is the difference between analytics and actionability.
---
User context: Having a difficult month (relationship + work stress). When asked this morning, they said "I'll manage."
Interaction note: This was a great collaboration — user's correction ("too low is bad too") led to much better design than my initial approach. The "futile" visual polish request shows they care about the details even during a tough time.