What we did, why it worked
The platform had excellent content and tools, but discoverability assumed users knew what to search for. We replaced “search-first” with guided discovery—using intent signals to recommend what mattered most, while earning trust through transparency and control.
A library no one could find
The platform had years of white papers, insights, calculators, videos, and planning tools — all high quality, with low discovery. Search returned hundreds of results sorted by recency, and categories were too broad to be helpful.
- Only 12% of customers engaged with educational content in a given month
- High-value tools had <5% adoption despite prominent placement
- In surveys, customers asked for guidance but didn’t know where to start
In this domain, users often don’t know what to ask. They’re not searching for a regulation term — they’re trying to make a decision. The system needed to reduce cognitive load and help users take the next best step.
Flip the model: from search to guidance
The breakthrough was structural, not technical: stop making customers search; start surfacing what’s relevant based on intent signals. That required defining “relevance” in business terms, earning trust, and proving impact before scaling.
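To make the shift concrete, here is a minimal sketch of intent-based ranking replacing a recency sort. The signal names, weights, and scoring function are illustrative assumptions, not the production model:

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    title: str
    topics: set  # e.g. {"retirement", "tax"}

def relevance_score(item: ContentItem, intent_signals: dict) -> float:
    """Score an item by how strongly it overlaps the user's intent signals.

    intent_signals maps a topic to a weight, e.g. {"retirement": 0.9}.
    """
    return sum(intent_signals.get(topic, 0.0) for topic in item.topics)

def recommend(items: list, intent_signals: dict, k: int = 3) -> list:
    """Return the top-k items by intent relevance instead of recency."""
    ranked = sorted(items, key=lambda i: relevance_score(i, intent_signals), reverse=True)
    return ranked[:k]
```

The key design point is what changes at the call site: search results sorted by recency become a short, ranked shortlist driven by what the user is trying to do.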
The decisions that mattered
The differentiator wasn’t “we used AI.” It was the decision system: what we optimized for, what we refused to compromise, and how we proved value.
Build a system, not a feature
The system needed to be explainable, governable, and measurable. We designed it as a pipeline with explicit constraints, so recommendations stayed useful and defensible:
- Transparency: users can see why something was recommended
- Control: dismiss or mark “not relevant,” feeding that signal back to refine future recommendations
- Privacy: avoid surprising inferences; minimize intrusion
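The transparency and control constraints above can be sketched as a thin wrapper around any recommender. The class name, reason strings, and dismissal mechanics are hypothetical, shown only to illustrate the contract:

```python
class GuardedRecommender:
    """Wraps recommendations with a human-readable reason and user dismissals."""

    def __init__(self):
        # user_id -> set of topics the user marked "not relevant"
        self.dismissed = {}

    def explain(self, item_topic: str) -> str:
        """Every recommendation ships with a plain-language reason."""
        return f"Recommended because you recently explored {item_topic}."

    def dismiss(self, user_id: str, topic: str) -> None:
        """Record a dismissal so future recommendations respect it."""
        self.dismissed.setdefault(user_id, set()).add(topic)

    def filter(self, user_id: str, candidates: list) -> list:
        """Drop candidates whose topic the user has dismissed."""
        blocked = self.dismissed.get(user_id, set())
        return [c for c in candidates if c["topic"] not in blocked]
```

The design choice worth noting: dismissals are a hard filter, not a soft down-weight, so users see their control take effect immediately.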
How we proved value before scaling
We ran a controlled experiment with 10% of users: half saw recommendations, half saw the existing experience. We measured leading indicators (clicks, adoption) and lagging indicators (contribution over time).
- Engagement rate on recommended modules (and dismissal rate)
- Adoption of high-value tools (e.g., retirement planning)
- Satisfaction signals (helpful vs intrusive)
- Business impact over time (AUM contribution lift)
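A stable split is what makes those comparisons trustworthy. A minimal sketch of the 10% rollout, assuming a hash-based bucketing scheme (the function name and bucket math are illustrative, not the production assignment logic):

```python
import hashlib

def assign(user_id: str, exposure: float = 0.10) -> str:
    """Deterministically assign a user to holdout, treatment, or control.

    Hashing the user id gives a stable bucket, so a user always lands in
    the same group across sessions. Only `exposure` of users enter the
    experiment; that slice is split evenly into treatment and control.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000
    if bucket >= exposure * 10_000:
        return "holdout"  # the remaining 90% keep the existing experience
    return "treatment" if bucket % 2 == 0 else "control"
```

Determinism matters here: because assignment is a pure function of the user id, leading and lagging indicators can be joined back to the same cohorts months later without storing a separate assignment table.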
The recommendation cohort delivered 3× content engagement and +11% AUM contribution over six months. Only after that proof did we expand exposure.
What changed
The metrics mattered, but the deeper win was behavior change: users found and used tools that improved decision quality and increased contribution.
High-value tools got adopted
A flagship retirement tool moved from <5% adoption to 18% within six months by surfacing it to users approaching retirement—without requiring the right search query.
Engagement became durable
Average session duration increased 40%, and engaged users returned 2.5× more frequently—translating into measurable business lift.
A repeatable blueprint for responsible AI
The playbook became standard: problem-first, guardrails defined early, measurement in the product, and experiments before scale.
AI wins when the product does
Model sophistication is not the win. Adoption, trust, and measurement are. This worked because we built a product system that made AI useful and governable, and that proved its value.
- Trust compounds: transparency + control improved adoption more than “smarter” recommendations
- Outcomes over accuracy: engagement and contribution were the north stars
- Start small: controlled rollout created learning without reputational risk
Templates I use to ship AI responsibly
These are the kinds of lightweight artifacts that made the work repeatable. (Link to redacted versions if you want to share publicly.)