Over a two-year period, a $35M+ ARR Instructure product was transformed into an AI-native solution and deployed at scale for more than 2 million users worldwide — introducing explainable and compliant AI into the learning ecosystem without compromising trust, accessibility, or operational stability.
AI Product Development and Design
Depot
I focused on revamping Depot's comprehensive CI/CD developer experience, covering everything from pipeline creation and secrets management to diagnostics, logs, artifacts, and governance. The goal was to minimize friction in essential processes, simplify intricate steps, and provide an accessible, scalable design system that could grow with the product.
AI Product Development and Design
Raiffeisen
Over 2 years the full mobile banking ecosystem was rebuilt and deployed across 16 countries, resulting in a +670% increase in app downloads and more than €300M in additional annual digital transaction volume — driven by restored trust, a new design system, and a redesigned end-to-end experience.
AI Product Development and Design
Bitpanda
Bitpanda is the trading platform that Raiffeisen Bank customers use when they buy and sell digital assets through the bank’s app. During the two-year redesign I worked on both the direct Bitpanda product and the bank-ready variant used inside Raiffeisen. That same system later contributed to 53% user growth (3.4M → 5.2M) and more than €140M in annual digital revenue across the period.
AI Product Development and Design
Benker
Benker is a digital banking platform built on blockchain and tailored for secure, multi-currency account management across Europe. As design lead and hands-on designer, I redefined onboarding and transaction flows, which increased digital transaction volume by 150%, improved KYC success rates by 30%, and reduced time to first successful transaction by 50%.
AI Product Development and Design
OnRobot
Built a tablet-first HMI in 12 weeks to eliminate teach-pendant scripting and spreadsheet math. In pilots it cut first-time setup by 42%, reduced input errors by 63%, slashed training time by 75%, and drove a +36 NPS among operators.
AI Product Development and Design
SportsGambit
SportsGambit is a decentralized prediction market leveraging advanced AI to give you an edge. The platform's core feature is the ability for users to build, train, and deploy custom prediction agents. These autonomous agents continuously learn and improve, analyzing specific sports or events to deliver a curated feed of high-probability outcomes directly to your dashboard. This eliminates manual research and allows for swift, one-tap wagering, combining AI-driven insights with the security and transparency of the blockchain.
My Role & Mission
Led end-to-end feature development for a $35M+ ARR product within the Canvas Catalog ecosystem, partnering with PMs and engineers from first concept through post-launch validation.
Challenges
Operating at platform scale allowed no “black-box” behavior. Every AI suggestion needed to be explainable, editable, and auditable. Instructors were overwhelmed by static analytics dashboards that showed “what happened,” not “what to do next.” Accessibility and privacy requirements (higher ed, healthcare, public sector) constrained interaction patterns. The work had to land as consistent AI UX across products, with resilient state design (empty/error/loading/latency) and a rollout plan that avoided disruption during high-traffic usage.
The Process
Use cases were anchored in Jobs to Be Done (e.g., draft an assignment, extract key takeaways, turn data into actions). Guardrails were set on day one: human-in-the-loop, clear AI labeling/disclosure, editable outputs, source cues, and recovery paths. Flows were co-reviewed with Legal/Compliance/Data Science, then codified into reusable patterns (prompts, review states, confirmations, feedback loops). Prototyping with Uizard/Galileo/Lovable shortened concept→test cycles. Adoption and completion were instrumented, and features shipped behind safe flags, iterating on evidence rather than hype.
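The “shipped behind safe flags, iterating on evidence” approach above can be sketched in code. This is a minimal illustration only: the flag name, rollout percentage, and `track` helper are assumptions for the sketch, not the production system described here.

```python
import hashlib

class FeatureFlag:
    """Percentage-based rollout flag with deterministic user bucketing."""

    def __init__(self, name: str, rollout_pct: int):
        self.name = name
        self.rollout_pct = rollout_pct  # 0..100

    def enabled_for(self, user_id: str) -> bool:
        # Hash the (flag, user) pair so each user stays in the same
        # bucket across sessions; no per-user state is stored.
        digest = hashlib.sha256(f"{self.name}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < self.rollout_pct

# Stand-in for a real analytics pipeline feeding adoption/completion metrics.
events: list[dict] = []

def track(event: str, **props) -> None:
    events.append({"event": event, **props})

# Hypothetical flag: expose the AI draft feature to ~10% of users first.
flag = FeatureFlag("ai_assignment_draft", rollout_pct=10)
for uid in (f"user-{i}" for i in range(1000)):
    if flag.enabled_for(uid):
        track("ai_draft_opened", user=uid)

print(f"{len(events)} of 1000 users in the rollout cohort")
```

The deterministic bucketing is the point: a user who sees the feature keeps seeing it, so adoption and completion numbers measure real behavior rather than flicker, and the percentage can be raised only when the evidence supports it.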
Who We Designed For
Instructors and admins who need actionable guidance, not raw charts
Learners who benefit from clear, accessible feedback and materials
Institutions with strict accessibility, privacy, and audit requirements
UX Methods & Why They Were Used
Contextual inquiry & task shadowing – to see real classroom/admin workflows and find the moments where “AI help” is truly needed.
JTBD & task mapping – kept scope focused on outcomes (draft → review → publish; insight → action), not features.
Heuristic reviews & cognitive walkthroughs – early, low-cost checks for feedback, error prevention, and safe defaults in AI flows.
Rapid prototyping (low → high fidelity) – compared prompt patterns, review states, and disclosure models before build.
Moderated usability tests – validated that AI guidance was understandable, editable, and trusted (no “magic,” clear next steps).
Accessibility audits (WCAG 2.1 AA) – ensured keyboard flow, semantic structure, and screen-reader clarity for all AI states.
Stakeholder co-reviews (Legal/Compliance/DS) – aligned privacy, licensing, data retention, and auditability from the outset.
Instrumentation planning – guaranteed that adoption, completion, and user sentiment were measurable post-launch.
Design Principles (AI UX)
Explain, then suggest. Show the “why” behind suggestions (signals, sources).
Human in control. Outputs are editable and reversible with clear review states.
State clarity over surprise. Empty, loading, error, and latency states are explicit.
One AI pattern, many contexts. Reusable prompt/review/confirm patterns across SKUs.
Privacy by default. Data handling is disclosed; opt-outs and retention rules are visible.
Accessible by design. All interactions meet WCAG 2.1 AA and work great with keyboard/screen readers.
What Was Shipped
AI pattern library (prompting, sources, review/confirm, feedback loops)
Guardrail framework (human in the loop, editability, audit, recovery)
Accessible component states (empty/error/loading/latency) across SKUs
Design tokens & content guidelines for AI disclosures/help
Instrumentation hooks and survey triggers for ongoing evaluation
Behind the Decisions: Reflections & Trade-offs
Most teams start AI projects asking, “What can we automate?” This one started with a different question: “What must stay human?”
We could have gone all-in on automation and had spectacular demos. But education, public-sector, and healthcare customers don’t buy demos; they buy trust. That’s why we drew a very hard line: no invisible decisions, no uneditable outputs, no “just trust the model.” Every AI suggestion had a label, an edit state, and a way to back out.
The messy part was not technical, it was ethical. We had to design for failure modes: What if the AI is wrong? Biased? Misused? Rather than pretending those cases wouldn’t happen, we mapped them explicitly and built flows around them. Data Science, Legal, and Compliance weren’t blockers at the end; they were co-authors of the guardrails from the start.
If you want AI that looks impressive in a slide deck, this isn’t that story. If you want AI that real institutions can roll out to millions of people and sleep at night, this is the kind of product thinking I brought.
AI-assisted interpretation on top of performance and engagement data to reduce cognitive load and response time.
Raw data converted into insights with immediate recommended next steps for instructors and admins.
AI-extracted competency profile that builds a personalized skill graph without manual labeling overhead.
Skill-level alignment maps course content to competencies, turning curricula into measurable progress graphs.
Reference state before AI: static legal material without summarization, classification or guided interpretation.
AI applied to regulated content with explicit disclosure, human-override, and audit-ready controls.
AI-assisted page and assignment generation embedded directly into the LMS with human-review safeguards.