I led end-to-end feature development for a $35M+ ARR product in Instructure’s Canvas Catalog ecosystem, partnering with PMs and engineers from first concept through post-launch validation. My remit was to design AI-powered capabilities that were genuinely useful, fully accessible, and ethically safe for 2M+ users. I integrated AI-driven tools into our design process (Uizard, Galileo, Lovable) to accelerate prototyping and validation, and championed clear guardrails so AI felt transparent, reviewable, and easy to adopt at scale.
Operating at platform scale meant zero tolerance for “black-box” behavior: every AI suggestion had to be explainable, editable, and auditable. Instructors were drowning in static analytics; dashboards showed data but not “what to do next.” Accessibility and privacy requirements (higher-ed, healthcare, public sector) constrained interaction patterns. We also needed consistent AI UX across SKUs, resilient state design (empty, error, loading, and latency states), and a rollout plan that wouldn’t disrupt high-traffic usage.
We anchored use cases in jobs-to-be-done (draft an assignment, extract key takeaways, turn data into actions). From day one, we defined guardrails: human-in-the-loop review, clear AI labeling and disclosure, editable outputs, source cues, and recovery paths. We co-reviewed flows with Legal, Compliance, and Data Science, then codified reusable patterns (prompts, review states, confirmations, feedback loops). Prototyping with Uizard, Galileo, and Lovable shortened concept-to-test cycles; we instrumented adoption and completion metrics and released behind feature flags, iterating on evidence rather than hype.