RD Station

Jan 2016 — Jan 2021

MarTech · SaaS · Analytics · UX

+50% satisfaction — analytics feature score: 2.8 → 4.2 in 3 months

A stable product with an engagement problem hiding in plain sight

RD Station is the dominant marketing automation platform in Brazil — email campaigns, lead scoring, CRM integration, landing pages. I joined in 2016 and spent five years as PM of the Analytics & Reports squad, owning the dashboards and metrics that marketers relied on to understand whether their campaigns were working.

By midway through my tenure, the most pressing technical problems had been resolved and the product had a stable foundation. But the engagement picture was troubling: only 20% of accounts were monthly active on the analytics features.

Active users who weren't actually satisfied

The 20% MAU figure split into two distinct groups when I looked more closely. The first group was analytics-dependent customers — mature accounts and marketing agencies who needed to prove ROI to their clients. They used what existed, but they complained loudly: the analytics were too shallow, they were constantly exporting raw data to Excel, and they kept requesting more export features.

The second group was data-immature customers — smaller accounts who simply didn't engage with analytics at all.

The intuitive move was to invest in the 80% non-engaged. Bigger number, bigger opportunity. But I had a different read: the 20% who were active weren't loyal — they were stuck. They returned because they had no alternative, not because the product served them well. Analytics had one of the worst NPS tag scores across the product, and customer success had flagged analytics complaints as one of the top churn risk drivers in the highest-paying accounts. What looked like retention was actually churn risk hiding in usage data.

Investing in growing the 80% would only have exposed more customers to a product that hadn't yet reached product-market fit with the segment that needed it most.

Measuring the right thing before building anything

The first challenge was measurement. NPS and churn data were too slow and too noisy to use as a feedback loop within a product iteration cycle. I needed something that would generate a signal within weeks.

I designed a two-question in-product survey, and the split was deliberate: one question asked about satisfaction with the result of landing page analysis, the other about satisfaction with the time spent getting that result. That distinction mattered. Users were nominally satisfied with the results; they had built Excel workarounds that eventually got them to the answer. But the effort cost was where the real frustration lived, and a single satisfaction question would have averaged it away.

Baseline on the landing page report (the most-used, most feature-poor): 2.8 out of 5.
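To make the averaging point concrete, here is a minimal sketch with invented response values (not the actual survey data): tracked separately, the result and effort dimensions tell two different stories, while a single blended question would report a middling score that hides the effort problem.

```python
# Illustrative sketch only: the response values below are invented, not the
# real survey results. It shows why the result/effort split matters.
from statistics import mean

# Each response: (satisfaction with the result, satisfaction with the time/effort spent),
# both on the same 1-5 scale used in the in-product survey.
responses = [
    (4, 2),  # got the answer, but only after exporting to Excel
    (4, 3),
    (5, 2),
    (3, 2),
]

result_score = mean(r for r, _ in responses)
effort_score = mean(e for _, e in responses)
blended = mean((r + e) / 2 for r, e in responses)

print(f"result satisfaction: {result_score:.1f} / 5")              # looks acceptable
print(f"effort satisfaction: {effort_score:.1f} / 5")              # where the frustration lives
print(f"single blended question would report: {blended:.1f} / 5")  # averages the gap away
```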

I recruited five heavy-user marketing agency accounts as standing design partners. These were the loudest complainers — which made them the most valuable signal source. I ran sessions with them to understand which specific capabilities were missing, used their feedback to sequence the subfeature roadmap, and ran a weekly delivery cadence: ship a subfeature, roll it out to the focus group, collect feedback, iterate, move to the next.

A focused feature package for the most-used report

Rather than a broad redesign, we built a targeted package of subfeatures for the landing page report — the one the analytics-dependent cohort used most and found most limiting. Each subfeature was sequenced by design partner priority, shipped incrementally, and validated before the next one started.

The full package was deployed to all customers three months after the first design partner interview.

The numbers that followed

+50% satisfaction score — landing page report score from 2.8 to 4.2 out of 5

+30% engagement (events per session) — users were doing more in the product rather than exporting to Excel

MAU grew from 20% to 25% (+5pp) — the PMF-first investment started pulling in the non-engaged cohort too

What I specifically did

  • Made the prioritization argument — invest in PMF with the 20% before growing the 80% — and built the case for it with engagement and NPS data
  • Designed the two-question survey instrument and the result/effort distinction that revealed what standard satisfaction metrics would have missed
  • Recruited and ran the design partner sessions, converting the loudest complainers into the most useful signal source
  • Owned the subfeature sequencing, using design partner feedback as the primary input rather than internal assumptions

Other significant work in this role

  • Analytics query latency — Resolved a critical performance issue: analytical queries running on a transactional PostgreSQL database were hitting p90 latency above 30 seconds, past the browser timeout, which caused reports to crash. First move was to benchmark competitors (if they had solved it at larger scale, we could too). I commissioned a research sprint; the team proposed migrating to a more performant analytics architecture. Before committing to a 3-month build, I proposed a prototype: 1-2 tables, the 4 most-used queries, 1 week of data (an illustrative sketch follows this list). The prototype confirmed the direction. Result: p90 from 30s+ to ~3s.
  • First-party web tracker — Proposed replacing the Google Analytics integration with a first-party event tracker. The GA integration created reliability issues for customers, but more importantly, owning raw pageview events was the difference between the product saying "20 visitors today" and "this lead in your CRM visited your pricing page 3 minutes ago." A dedicated squad was hired to build the tracker, and by the time I left it had reached over 95% precision compared to GA.
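The case study doesn't name the analytics architecture the prototype tested, so the sketch below is purely illustrative of the "prototype before committing" step: it assumes the candidate was a pre-aggregated daily rollup kept alongside the transactional tables on the same PostgreSQL instance, and compares one of the heavy queries against the raw table and the rollup. Table names, queries, and the connection string are all hypothetical.

```python
# Hypothetical sketch of the 1-week prototype idea, not RD Station's actual code.
# Assumption: the proposed architecture pre-aggregates raw events into a rollup
# table, and the prototype only needs 1-2 tables, the most-used queries, and a
# week of data to compare latencies before committing to the 3-month build.
import time
import psycopg2

ROLLUP_DDL = """
CREATE TABLE IF NOT EXISTS landing_page_daily AS      -- hypothetical rollup table
SELECT landing_page_id,
       date_trunc('day', created_at)      AS day,
       count(*)                           AS visits,
       count(*) FILTER (WHERE converted)  AS conversions
FROM   landing_page_events                             -- hypothetical source table
WHERE  created_at >= now() - interval '7 days'         -- one week of data
GROUP  BY 1, 2;
"""

# One of the "most-used queries", run against the raw transactional table and
# against the rollup, so latency can be compared on the same data.
RAW_QUERY = ("SELECT landing_page_id, count(*) FROM landing_page_events "
             "WHERE created_at >= now() - interval '7 days' GROUP BY 1;")
ROLLUP_QUERY = "SELECT landing_page_id, sum(visits) FROM landing_page_daily GROUP BY 1;"

def timed(cur, sql: str) -> float:
    """Execute a query, fetch the result set, and return wall-clock seconds."""
    start = time.perf_counter()
    cur.execute(sql)
    cur.fetchall()
    return time.perf_counter() - start

with psycopg2.connect("dbname=analytics_prototype") as conn:  # hypothetical DSN
    with conn.cursor() as cur:
        cur.execute(ROLLUP_DDL)
        print(f"raw table: {timed(cur, RAW_QUERY):.2f}s")
        print(f"rollup:    {timed(cur, ROLLUP_QUERY):.2f}s")
```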