I'm a product and engineering leader. I build consumer products and AI infrastructure at global scale. Currently Senior Director of Engineering at Coinbase, where I lead Wallet, Base App, and AI acceleration across 1,000+ engineers. Previously Uber and Qualcomm.
Husband, father, lover of wine, pizza, and bikes (human powered).
Coinbase · Uber · Qualcomm
The interface layer between humans and AI is the highest-leverage design problem of our time. I want to build the surfaces where people meet Claude.
Every new user's first experience with Claude happens through the surfaces this team owns. Their mental model of what AI can do, whether it's trustworthy, whether it's worth coming back to, is shaped by latency, layout, and the quality of a first response. I've spent 12+ years earning trust at that layer, and I want to do it at the company I trust most to steward it.
Vignettes on why I'm a fit for Anthropic.
Trust, once lost, is expensive to rebuild. I know this from building products where a single bad experience doesn't just cost a user — it costs them money, their safety, or their belief that technology is on their side.
At Coinbase, I led the launch of a self-custody wallet and trading app to millions of users. Not just the engineering — the trust and safety systems, content moderation policies, and protective UX that kept people safe from pig-butchering scams, phishing, and fraud. I was in the trenches with Legal, T&S, and compliance building policies from scratch because we were first to market and no rulebook existed.
At Uber, I learned that reliability is trust, expressed as uptime. When your system serves every rider and driver on earth, downtime is measured in revenue per second and human frustration per millisecond. You build a culture of operational rigor or you lose users. There's no middle ground.
This is why the Claude product surfaces role is one of the highest-leverage jobs in AI right now. Every new user's mental model of what AI can do gets formed in the first 30 seconds. That model determines whether they come back. Whether they trust it with real work. Whether they tell someone else to try it.
I've spent my career at that exact junction — where product quality, safety, and trust are inseparable. At Coinbase, we didn't have the luxury of treating them as separate workstreams. The wallet launch wasn't just an engineering milestone. It was a promise: your money is safe, your keys are yours, and we built every surface to make that legible without requiring you to understand cryptography.
That same instinct — build fast, but build responsibly — is what draws me to Anthropic. Not AI for its own sake, but AI that earns the right to be used by making the experience trustworthy from the first interaction.
Everyone talks about AI transformation. Very few people have actually done it inside a company with 1,000+ engineers, layered compliance frameworks, and production systems that serve millions of users holding real money.
I have, and here's what it looked like.
At Coinbase, I led the vision and team to build our in-house agentic harness, powered by Claude. We called it ClaudeBot -- I promise we are picking a new name, this just stuck at the time. It was an infrastructure bet: build the right primitives, make them good enough that engineers adopt voluntarily, and let the results talk.
The adoption curve told the real story. Engineers didn't use it because they were told to. They used it because it worked where they worked -- in Slack and async. It proved effective enough that Brian Armstrong uses it to ship to production. Not a demo, not a sandbox, a real commit. Zero onboarding required.
But adoption inside a high-trust enterprise isn't just about making a good tool. It's about navigating organizational reality. Coinbase doesn't just have security policies — it has sprawling attack surfaces, regulatory obligations across dozens of jurisdictions, and internal processes that evolved over years to protect millions of users and billions of dollars.
When introducing any new system into that environment, it means understanding not just technical integration, but how teams make decisions, where risk tolerance lives, and who owns what guardrail. I've done that work repeatedly — bridging the gap between what a product can do and what an enterprise will actually adopt, given everything we have to protect.
And because of ClaudeBot, a new class of internal tooling emerged: Mux for parallel agents, Agent Control Plane as a Linear-driven dashboard, and CI-Watcher to auto-fix failing PRs. The primitives were right, so the internal ecosystem grew organically.
Ticket-to-production time dropped from ~19 days to 1.8 days. For Coinbase Wallet, we shipped 240+ features and 1,200+ bug fixes across 104 mobile app releases in 2025. We migrated $7B of AUC with zero incidents. We achieved a fundamentally different operating velocity for our enterprise.
This is the operating model I wrote and am deploying at the company. These are the core principles:
If an agent can't build it, the spec is wrong — not the agent. Every spec must have user journeys, edge cases, and eval criteria to pass. The spec is the agent's blueprint. The quality of your output is determined by the quality of your input.
Any feature should ship in one day. Agents work 24 hours. Humans get 2-3 hours of deep work. That's a 10x leverage difference. If you're not moving 10x faster, you're using the system incorrectly.
Humans should not review code line-by-line. Four agents review in parallel — Bar Raiser, Security, Performance, QA — each with veto power. Humans validate outcomes, not implementations. Focus on scaling quality without scaling headcount.
Invest more energy in product clarity, design, and critical decisions — not coding. Handoffs are failure points. The build is truth, everything else is an expression of intent. The most valuable engineers are the ones who think the hardest before the agent starts.
Old: Figma → PRD → Code → PRD → Code → Figma → Code → Polish → Code → Ship.
New: Specs + Tests → Working Build → Design Polish → Ship.
Optimize for immediate prototypes. Polish after. Bring design into code, not code into design.
The majority of people don't know what code is, nor do they care. Non-technical people are shipping because they focus on outcomes. The superpower is building systems that scale, thinking rigorously, and shipping things that work. The medium has changed, but the craft hasn't.
Most enterprises are still deploying copilots — autocomplete for code, chat wrappers for docs. That's layering AI into processes that were never designed for AI. I see the future differently:
Authoring code effectively has zero cost now, but human review can't scale with it. An approach we've taken is agent councils: four specialized agents (Bar Raiser, Security, Performance, QA) reviewing in parallel, each with veto power. We've started deploying this, and engineers rate the reviews as 99% better than human reviews. It also frees the team to reserve human review for the most sensitive parts of the code.
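The council pattern is simple to sketch. Here is a minimal, illustrative Python version, assuming each reviewer is a function returning an approve/reject verdict; in practice each reviewer would be an agent with its own prompt and context, and the names are the only detail taken from the text:

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Review:
    reviewer: str
    approved: bool
    notes: str = ""

def run_council(diff: str, reviewers: dict) -> tuple[bool, list]:
    """Run every reviewer concurrently; the PR passes only if all approve."""
    with ThreadPoolExecutor(max_workers=len(reviewers)) as pool:
        futures = {name: pool.submit(fn, diff) for name, fn in reviewers.items()}
        reviews = [Review(name, *f.result()) for name, f in futures.items()]
    # Any single rejection is a veto.
    return all(r.approved for r in reviews), reviews

# Stand-in reviewers -- each would be a specialized agent in the real system.
reviewers = {
    "bar_raiser": lambda d: (len(d) > 0, "non-empty diff"),
    "security": lambda d: ("eval(" not in d, "no dynamic eval"),
    "performance": lambda d: (True, "ok"),
    "qa": lambda d: ("TODO" not in d, "no unfinished work"),
}

passed, reviews = run_council("def add(a, b):\n    return a + b\n", reviewers)
```

The key design choice is that the gate is conjunctive: parallelism buys speed, but no reviewer's approval can outvote another's rejection.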
Because engineers were spending their time writing and reviewing code, there was no bandwidth to identify the real bottleneck: bad specs. A great spec — with user journeys, edge cases, eval criteria, and acceptance tests that an agent can fully validate — becomes the entire input. The agent does everything else. This is a different discipline, where the spec is the product.
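"The spec is the product" can be made concrete as a gate: an agent only gets a spec that carries everything it needs to self-validate. The field names below come from the text; the structure itself is an illustrative assumption, not the actual internal format:

```python
from dataclasses import dataclass, field

@dataclass
class Spec:
    title: str
    user_journeys: list = field(default_factory=list)
    edge_cases: list = field(default_factory=list)
    eval_criteria: list = field(default_factory=list)
    acceptance_tests: list = field(default_factory=list)  # agent-runnable checks

    def ready_for_agent(self) -> bool:
        """If any section is empty, the spec -- not the agent -- is wrong."""
        return all([self.user_journeys, self.edge_cases,
                    self.eval_criteria, self.acceptance_tests])

# Hypothetical example spec for illustration only.
spec = Spec(
    title="Apple Pay funding flow",
    user_journeys=["fund wallet with Apple Pay in under 60s"],
    edge_cases=["card declined", "region unsupported"],
    eval_criteria=["p95 funding latency under 5s"],
    acceptance_tests=["test_funding_happy_path", "test_declined_card"],
)
```

The point of the gate is that an incomplete spec is rejected before any agent time is spent, which pushes the human effort to the front of the process.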
ClaudeBot autonomously fixed 181 TypeScript errors across our full monorepo in a single run — 40,586 files changed, running for 8 hours without human intervention, iterating, compiling, fixing, recompiling. Claude Code subagents ran a BFS-style algorithm across 90+ repos for company-wide migrations. We are in the early innings, but are seeing exponential returns.
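The BFS-style fan-out across repos can be sketched as a breadth-first walk of the dependency graph: start from the repos that own the changed interface, then migrate downstream dependents level by level, so each repo is touched only after everything it depends on. The graph below is illustrative, not Coinbase's actual repo topology:

```python
from collections import deque

def migration_order(dependents: dict, roots: list) -> list:
    """Return repos in breadth-first order starting from the root repos."""
    order, seen, queue = [], set(roots), deque(roots)
    while queue:
        repo = queue.popleft()
        order.append(repo)          # dispatch a migration agent for this repo
        for dep in dependents.get(repo, []):
            if dep not in seen:     # visit each downstream repo exactly once
                seen.add(dep)
                queue.append(dep)
    return order

# Illustrative fan-out: repo -> repos that depend on it.
dependents = {
    "protos": ["wallet-api", "trading-api"],
    "wallet-api": ["wallet-mobile"],
    "trading-api": ["wallet-mobile"],
}

order = migration_order(dependents, ["protos"])
```

BFS guarantees the shared library is migrated before any of its consumers, and repos at the same depth can be handed to subagents in parallel.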
A native Apple Pay funding flow — a feature previously scoped at 2.5 months — shipped in a day and a half using a Ralph loop and a custom /prd skill. Speed without sacrificing quality should be the new baseline.
This is why I want to be at Anthropic. The surfaces and infrastructure that make this possible for every developer, not just the ones inside Coinbase, are the highest-leverage product work in AI.
Coinbase is not a startup that moves fast and breaks things. It's a publicly traded financial services company operating under SEC oversight, with money transmission licenses in nearly every US state, serving millions of users who trust us with real money. Every feature I ship passes through compliance review, security audit, legal sign-off, and trust & safety analysis.
Shipping AI into that environment is a different kind of problem than most people in AI have ever encountered.
When I built the agentic harness, the question wasn't "can Claude write good code?" It was: Can we prove to our security team that agent-generated code meets the same bar as human-authored code? Can we demonstrate to compliance that automated PRs don't introduce regulatory risk? Can we assure legal that the system handles customer data appropriately?
I built trust by building results. The agent review council — four agents reviewing every PR in parallel with veto power — gave security exactly what they needed: automated enforcement of standards, audit trails, and a system that catches more issues than human reviewers. Not less safe. More safe. That's how you get institutional buy-in.
This experience is directly relevant to Anthropic because Claude's enterprise customers operate under similar constraints. Banks, healthcare companies, government agencies — they all want AI, and they all have layered compliance frameworks, sprawling attack surfaces, and internal processes that resist change by design. I know how to navigate that. I know how to earn trust from security, legal, compliance, and engineering leadership simultaneously. And I know that the path isn't around the guardrails. It's through them.
Anthropic's willingness to make hard decisions — including walking away from revenue when values are at stake — tells me this is a company that understands the same thing I've learned over 20 years: trust is the product. Everything else is a feature.
The Monday morning quarterbacks of the AI industry have a model fixation. Every week, a new benchmark. Every quarter, a new long-running-task milestone. But the users I've watched interact with AI don't care about benchmarks. They care about whether the thing helps them.
Whether it helps them is a design question.
At Coinbase, we launched Coinbase Wallet to millions of users with zero downtime. What made it work was the product design. A social trading app built on self-custody, where the complexity of blockchain technology is invisible to the user. They see a simple interface that helps them. We see the massive infrastructure underneath. That gap — between what's technically true and what's experientially real — is where leaders earn their keep.
Claude has the same challenge. The model is extraordinary. But the surfaces where people meet Claude — the chat interface, the API, the integrations — are where the experience is won or lost. And right now, most AI products feel like technology demos, not tools people love. Claude is the exception.
The surfaces where people meet Claude, and how they talk about Claude among their friends, are where the user is won or lost. How does AI meet the user where they are? How does Claude turn a skeptical user into a power user?
The TAM for AI is not just the entire human population, but an infinite number of agents that haven't even been summoned yet. But only if the products are good enough that people actually use them.
I've spent 20 years making complex systems feel simple and safe for millions of users. LTE infrastructure that wirelessly moves bits for most humans on the planet. Real-time ride systems that serve every rider and driver on earth. A crypto wallet that lets you trade, send, and store without understanding a single technical concept. Each time, the challenge was the same: take something powerful and make it legible. Make it trustworthy. Make it feel like it was built for you.
That's what I want to do for Claude.
My career has a through-line: take complex systems and make them work for people at scale. Each chapter taught me something different about what that requires.
Twelve years of progressive depth. I started as an engineer and eventually led mass-scale deployments for the world's biggest events, bridging what our R&D department dreamed of with the physical realities of mass-scale wireless deployments. There is nothing more powerful than going from a demo working in a lab to scaling the tech and systems for 100K+ people packed into a stadium who all want to do the same thing at once: take a video of Bad Bunny during the halftime show and post it on social media. I led the scale-out for Super Bowls 48, 49, and 50 and World Cups to make what's possible actually real.
Qualcomm taught me that technical judgment has to run deep, not just wide. And that the distance between a working system and a system that works at scale is where most people give up.
I led teams building high-availability, real-time systems under global SLAs. Productionized ML-driven workflows with data science. Built the reliability culture and operational rigor that comes from operating at a scale where downtime is measured in revenue per second.
Uber taught me that reliability is trust expressed as uptime. And that the best engineering cultures are built around accountability, not process.
I lead product engineering for Coinbase Wallet and Base App — our flagship self-custodial social trading app. I lead AI acceleration company-wide and am redefining how the company builds from first principles. 10M+ users, $80M+ in direct org revenue ownership, and the agentic infrastructure that makes high shipping velocity possible.
Coinbase taught me that you can move at startup speed inside a public company if you build the right systems and hire the right people. And that trust & safety aren't constraints on velocity -- they are the difference between keeping or losing your users.
Every chapter prepared me for this. I've moved bits wirelessly for most humans on earth. Built real-time systems for most riders and drivers on earth. Launched a consumer app to millions while maintaining the highest security bar in fintech. And rewired a 1,000-person engineering org to be AI-first, powered by Claude.
Now I want to build the surfaces where the rest of the world meets Claude. The interface layer is where trust is earned. That's where I've spent my career. And Anthropic is where that work matters most.