The Operating Room Is a Leadership Laboratory
Pressure makes the diamond, or so we told ourselves. But surgical residency was more than the intense hours and high demands dramatized on TV. It was a years-long immersion in high-stakes decision-making under uncertainty and resource constraints, and that training applies directly to leading a healthcare organization through technology adoption or operational redesign.
I completed my plastic and reconstructive surgery training at Stanford, an accelerated six-year program that selected three candidates per cycle. The technical training was demanding: learning to work with your hands with a high degree of precision. The less obvious curriculum was learning to think clearly when the consequences of a wrong decision are permanent.
How Surgical Decision-Making Maps to Executive Strategy
In the operating room, you make irreversible decisions with incomplete information. You assess tissue viability, vascular anatomy, and wound tension in real time. You commit to a flap design knowing that once you elevate it, the blood supply is redirected and there is no undo button.
Compare that to what happens when a health system evaluates a new clinical AI tool. The data is incomplete. The vendor's accuracy claims were tested on populations that may not reflect your patient mix. Implementation will disrupt existing workflows, and reversing course after system-wide deployment is expensive. The cognitive framework is identical: assess the anatomy of the problem, identify what you can verify, commit, execute, and adjust when reality diverges from expectation. You move quickly through the routine steps and slowly through the delicate ones; that is how you optimize for time overall.
The ACGME structures residency around six core competencies. The practical output is a physician who can synthesize ambiguous data, act decisively, and own the outcome.
From Microsurgery to Medical AI
My career arc, from Stanford residency to reconstructive surgery at the Palo Alto VA to private practice in the Bay Area to medical AI executive leadership, looks like a series of pivots on paper. In reality it was the cross-pollination that strengthens our biological neural networks for generalization and innovation.
Reconstructive surgery requires you to think in three dimensions about tissue transfer, vascular anatomy, and functional restoration. You plan backward from the desired outcome and sequence every step to preserve optionality. When I began evaluating clinical AI systems, I found myself doing the same thing: working backward from the patient outcome that mattered, mapping dependencies, and identifying failure modes the sales team would never mention. Thousands of clinical procedures across a decade built pattern recognition that now informs how I assess whether an AI tool's outputs will integrate into a physician's workflow or simply create noise.
The Credentials Gap in Health Tech
Many health tech executives bring strong backgrounds in software or data science. Few have personally managed a deteriorating free flap at 2 AM or navigated a reconstruction where the margin between functional recovery and permanent disability was measured in millimeters. This matters because clinical AI does not operate in the abstract; it sits inside a workflow where a physician makes a decision that affects a specific patient.
The American Medical Association's augmented intelligence principles emphasize that AI design should include practicing physicians at every stage. I think the principle does not go far enough. Physicians should not just be consulted. They should lead.
The Through Line
As a Stanford-trained surgeon and medical executive, I see surgical training and leadership as two expressions of the same discipline: make good decisions with imperfect information, execute precisely, and take responsibility for the outcome. Publishing peer-reviewed research, including work on scar management in Neligan's Plastic Surgery and curriculum design in Plastic and Reconstructive Surgery, taught me to interrogate methodology and distinguish between statistically significant and clinically meaningful results. That rigor applies directly to evaluating vendor claims.
Technology changes. The weight of a clinical decision does not.
Frequently Asked Questions
How did Dr. Sina Bari's Stanford residency shape his approach to healthcare leadership?
Dr. Bari completed an accelerated six-year plastic and reconstructive surgery residency at Stanford, where training emphasized decision-making under uncertainty in high-consequence environments. He applies the same framework of rapid assessment, commitment under incomplete information, and real-time adaptation to evaluating clinical technology and leading health system strategy.
Why should clinical AI companies have physicians in executive roles?
Physicians who have managed complex clinical workflows understand failure modes that non-clinical executives often miss, including alert fatigue, documentation burden, and accuracy metrics that do not translate to real patient encounters. The AMA recommends physician involvement at every stage of AI design, but deployment benefits most when physicians hold decision-making authority rather than advisory roles alone.
What is the difference between statistical significance and clinical significance in AI validation?
A model can show statistically significant accuracy improvements while producing clinically meaningless results in practice. For example, a 2% improvement in sensitivity may come at the cost of a 15% increase in false positives, translating to unnecessary procedures and wasted clinician time. Prospective validation on representative patient populations is the only reliable way to bridge this gap.
How does reconstructive surgery experience inform technology evaluation?
Reconstructive surgery requires planning backward from a desired functional outcome while accounting for anatomical constraints and preserving optionality at each step. This maps directly to assessing whether a clinical AI tool will integrate into existing workflows or create friction. Surgeons are trained to identify where a plan will fail before committing, which is precisely the skill needed for health technology evaluation.