The standard software product team - PM, engineers, designer - needs meaningful additions when you're building AI products. The additions aren't just headcount; they change how decisions get made, who has authority over what, and how the team communicates internally and externally.

I've led and worked within AI product teams across multiple organizations and industries. The team configurations that consistently ship differ from those that consistently struggle in specific, identifiable ways.

The Core AI Product Team

For a standalone AI product or AI feature within a larger product, you need these roles:

Product Manager (AI-literate)

The PM in an AI product team needs to understand enough ML to translate between technical and business language. This doesn't mean writing model code - it means understanding what it costs to improve accuracy by 5%, why the model behaves differently on edge cases, and what "this will take more data" actually implies for the roadmap.

The specific AI literacy gaps I've seen hurt PMs most:

  • Not understanding the training vs inference cost distinction (leads to wrong business cases)
  • Treating model evaluation metrics as equivalent to product success metrics
  • Not accounting for labeling time in roadmap estimates
  • Making accuracy commitments without understanding what's technically feasible
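The first of those gaps - training vs inference cost - is worth making concrete. Training cost is paid per experiment; inference cost recurs with usage and often dominates the business case at scale. A back-of-envelope sketch, with purely hypothetical rates and volumes:

```python
# Back-of-envelope cost model separating training from inference spend.
# All rates and volumes below are hypothetical placeholders.

def training_cost(gpu_hours_per_run: float, runs_per_quarter: int,
                  gpu_hourly_rate: float) -> float:
    """Paid per experiment: scales with iteration speed, not with users."""
    return gpu_hours_per_run * runs_per_quarter * gpu_hourly_rate

def inference_cost(requests_per_day: int, cost_per_1k_requests: float,
                   days: int = 90) -> float:
    """Recurring: scales with usage and dominates at high volume."""
    return requests_per_day * days * cost_per_1k_requests / 1000

quarterly_training = training_cost(200, 12, 2.50)    # 6,000
quarterly_inference = inference_cost(500_000, 0.40)  # 18,000
```

Even with these made-up numbers, the shape of the mistake is visible: a business case built on the training bill alone misses the larger recurring line item.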

ML Engineer or Data Scientist

The distinction between these two roles matters. Data scientists are typically stronger at exploration, model selection, and statistical reasoning. ML engineers are typically stronger at production deployment, optimization, and MLOps infrastructure. Small teams often combine them; larger teams should separate them.

The most important quality: ability to communicate uncertainty and limitations clearly. An ML engineer who presents results as "the model performs at 87%" without explaining what that means for edge cases, what the error distribution looks like, or what inputs break it is creating a knowledge gap that will cause problems downstream.
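One way to close that gap is to report performance per input slice instead of a single aggregate number. A minimal sketch - slice names and records here are illustrative:

```python
# A single "87% accurate" headline hides where the model fails.
# Reporting per-slice accuracy makes edge-case behavior visible.
from collections import defaultdict

def per_slice_accuracy(records):
    """records: iterable of (slice_name, y_true, y_pred) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for slc, y_true, y_pred in records:
        totals[slc] += 1
        hits[slc] += int(y_true == y_pred)
    return {slc: hits[slc] / totals[slc] for slc in totals}

records = [
    ("common_case", 1, 1), ("common_case", 0, 0), ("common_case", 1, 1),
    ("rare_input", 1, 0), ("rare_input", 0, 0),
]
print(per_slice_accuracy(records))
# -> {'common_case': 1.0, 'rare_input': 0.5}
```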

Software Engineer

AI features require backend engineers who can build the infrastructure around the model: data pipelines, API endpoints, caching layers, monitoring, and the user-facing integration. This is different from model development. Many AI teams underinvest in this role, leading to models that work in notebooks but never make it to production.
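To make the scope of that role concrete, here is a framework-free sketch of the serving layer's shape - request handling, a cache, and a monitoring counter. Class and parameter names are assumptions for illustration, not a prescribed design:

```python
# Minimal shape of the infrastructure around a model: a predict entry
# point, a TTL cache, and a request counter for monitoring.
import time

class ModelService:
    def __init__(self, model, cache_ttl_s: float = 60.0):
        self.model = model
        self.cache = {}            # features -> (result, timestamp)
        self.cache_ttl_s = cache_ttl_s
        self.request_count = 0     # export this to your monitoring system

    def predict(self, features: tuple):
        self.request_count += 1
        now = time.monotonic()
        hit = self.cache.get(features)
        if hit and now - hit[1] < self.cache_ttl_s:
            return hit[0]          # serve cached result, skip the model
        result = self.model(features)
        self.cache[features] = (result, now)
        return result
```

None of this is model development, and all of it has to exist before the model reaches users.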

Domain Expert (often overlooked)

For domain-specific AI - clinical AI, legal AI, financial AI, industrial AI - you need someone who deeply understands the domain and can evaluate whether model outputs make sense. This person is often not a full-time team member; they might be an embedded physician, a compliance consultant, or a subject matter expert who reviews outputs periodically. But they need to be in the loop.

In clinical AI work at HCLTech, the clinical advisory input was the difference between a model that was technically accurate on the benchmark and one that actually fit into clinical workflows. The model would sometimes suggest treatments that were medically valid but contraindicated given patient history patterns that the clinical experts immediately recognized. Without that input, we would have shipped something that worked in evaluation and failed in use.

UX Designer (AI-aware)

Designing for AI requires specific skills that general UX designers may not have: how to communicate uncertainty to users, how to design fallback states when the model is low-confidence, how to build interfaces that collect implicit feedback, how to manage user trust calibration. If your UX designer hasn't worked on AI products before, invest in onboarding them on AI-specific design patterns early.
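One of those patterns - confidence-banded presentation with a low-confidence fallback state - can be sketched in a few lines. The bands and copy below are illustrative assumptions, not recommended values:

```python
# Map raw model confidence to a user-facing state, including a
# fallback when confidence is too low to show the output at all.
# Thresholds and wording are hypothetical; calibrate per product.

def presentation_state(confidence: float) -> dict:
    if confidence >= 0.9:
        return {"state": "confident", "copy": "Suggested answer"}
    if confidence >= 0.6:
        return {"state": "hedged", "copy": "Possible answer - please verify"}
    return {"state": "fallback",
            "copy": "We're not sure - try rephrasing",
            "show_model_output": False}
```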

Decision Rights in AI Product Teams

Decision rights in AI teams are more complex than in traditional software teams because more decisions have both technical and business dimensions that aren't clearly separable.

Be explicit about who decides what:

| Decision | Authority | Input From |
| --- | --- | --- |
| Accuracy threshold for shipping | PM (with business case) | ML Engineer, Domain Expert, Stakeholders |
| Model architecture and training approach | ML Engineer | PM (constraints), Software Engineer (infra implications) |
| Which errors to optimize for (precision vs recall) | PM (business impact analysis) | ML Engineer, Domain Expert |
| Data labeling guidelines | Domain Expert (with ML input) | ML Engineer, PM |
| When to roll back a model in production | ML Engineer (pre-agreed thresholds) | PM notified, not required to approve |
| User-facing uncertainty communication | PM + UX Designer | ML Engineer (calibration data) |
| Model infrastructure choices | Software Engineer + ML Engineer | PM (cost constraints) |
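The rollback row only works if the thresholds really are agreed in advance. A minimal sketch of what "pre-agreed thresholds" can look like in code - metric names and limits here are illustrative:

```python
# Pre-agreed rollback thresholds let the ML engineer act immediately,
# without convening a meeting. Values below are placeholders.
ROLLBACK_THRESHOLDS = {
    "accuracy": 0.85,        # floor: roll back if live accuracy drops below
    "p95_latency_ms": 800,   # ceiling: roll back if latency rises above
}

def should_rollback(live_metrics: dict) -> bool:
    if live_metrics.get("accuracy", 1.0) < ROLLBACK_THRESHOLDS["accuracy"]:
        return True
    if live_metrics.get("p95_latency_ms", 0) > ROLLBACK_THRESHOLDS["p95_latency_ms"]:
        return True
    return False
```

The point is not the code itself but that the numbers were negotiated with the PM before the incident, not during it.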

The ambiguous cases are usually around accuracy thresholds and precision/recall trade-offs. These need explicit PM leadership - they're fundamentally business decisions about what kind of errors you're willing to make, even though they feel like technical decisions.
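To see why this is a business decision, note that precision and recall move against each other as you slide the decision threshold. A small self-contained example with made-up scores and labels:

```python
# Precision/recall as a threshold choice: the same model yields
# different error mixes depending on where you cut. Data is invented.

def precision_recall(scores, labels, threshold):
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

scores = [0.95, 0.90, 0.70, 0.60, 0.40, 0.20]
labels = [1,    1,    0,    1,    0,    0]

print(precision_recall(scores, labels, 0.8))  # strict: precision 1.0, recall ~0.67
print(precision_recall(scores, labels, 0.5))  # loose: precision 0.75, recall 1.0
```

Choosing between those two operating points is choosing which errors users will see - which is exactly why the PM, not the model, owns the call.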

Communication Rhythms for AI Teams

Standard sprint ceremonies need modification for AI products.

Model Review (Weekly)

15-30 minutes. The ML engineer presents current performance metrics vs targets, walks through error analysis on a sample of recent failures, and lays out the plan for the next iteration. This is technical, but the PM must attend - the performance trajectory is the most important signal of whether you'll hit your ship threshold.

Alignment Check (Bi-weekly)

30 minutes. PM, domain expert, and ML engineer together review a sample of model outputs. The goal is catching alignment issues before they become user problems - cases where the model is technically accurate by the metric but behaviorally wrong for the use case. This is where the domain expert adds the most value.

Stakeholder Update (Monthly)

30 minutes. Translate model performance into business language. Use the status update format: current performance vs threshold, what we learned, what we're changing, risks. Avoid model-specific jargon.

Hiring for AI Product Teams

The most important hiring criteria by role:

  • ML Engineer: Communication clarity above model sophistication. A 9/10 communicator with 7/10 ML skills beats the reverse every time on cross-functional teams.
  • PM: Intellectual honesty about uncertainty. PMs who are comfortable saying "we don't know yet" are rare and invaluable in AI product work.
  • Software Engineer: MLOps experience or willingness to develop it. Building model serving infrastructure is different from building microservices.
  • UX Designer: Portfolio that includes uncertainty communication or AI-adjacent products. Or demonstrated willingness to learn AI design patterns.

My take

Add ML engineer, domain expert, and an AI-aware designer to your standard product team. Make decision rights explicit - especially around accuracy thresholds and precision/recall trade-offs. Run a weekly model review that the PM attends, plus a bi-weekly alignment check with the domain expert. Hire ML engineers for communication clarity above model sophistication.

