AI in surgery triggers two reflexes: breathless talk of "the inevitable future" and disdain born of fear. Hope that it will solve complexity; fear that it will dehumanize care. Both reflexes miss the real story.
AI will not replace orthopaedic decision-making — because orthopaedic decision-making isn’t just calculation. It includes judgment about function, tolerance, growth, biology, and human goals. What AI can do is make parts of the workflow more precise, more reproducible, and less wasteful.
What clinical decision-making actually is
Let’s use an example to break it down. In limb deformity work, the decision process includes:
- interpreting imaging and measurements,
- understanding the patient’s function and goals,
- predicting growth, remodeling, and response,
- weighing risks of options,
- and finally, executing a plan with adaptability.
AI has strengths in the first of these (pattern recognition, measurement).
Humans dominate the rest (values, tradeoffs, strategy).
So the goal isn’t replacement. It’s augmentation.
Where AI already helps (quietly)
Even today, many parts of practice use “AI-like” automation:
- automated image registration,
- noise reduction,
- semi-automated segmentation,
- templating in arthroplasty.
These tools don’t feel threatening because they don’t pretend to be surgeons. They do narrow tasks.
The future is more of that: narrow AI that removes friction.
The three best roles for AI in orthopaedics:
1. Automation of repetitive interpretation
Segmentation and landmarking don’t require human creativity. They require careful consistency. Automating them frees expertise for what matters.
2. Reproducible measurement
Two clinicians should get the same mechanical axis, MAD, and joint line orientation from the same dataset. AI can enforce that reproducibility while still allowing editable correction.
3. Predictive simulation
AI-assisted simulation can compare “what if” scenarios quickly:
- hinge position changes,
- opening angle changes,
- multi-level correction interaction.
The surgeon chooses the plan; AI accelerates the exploration.
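As a toy illustration of the measurement and "what if" roles together, the sketch below computes mechanical axis deviation (MAD) from three frontal-plane landmark points and re-runs it for a few opening angles about an osteotomy hinge. All coordinates, the hinge position, and the sign convention are invented for illustration; real planning works on calibrated 3D imaging.

```python
import math

def mad_mm(hip, knee, ankle):
    """Mechanical axis deviation: signed perpendicular distance (mm) from
    the knee joint centre to the hip-ankle line (2D frontal plane)."""
    (hx, hy), (kx, ky), (ax, ay) = hip, knee, ankle
    dx, dy = ax - hx, ay - hy
    # signed distance via the 2D cross product (sign convention is arbitrary here)
    return ((kx - hx) * dy - (ky - hy) * dx) / math.hypot(dx, dy)

def rotate_about(point, hinge, deg):
    """Rotate a point about a hinge by deg degrees (models the distal
    segment swinging as an osteotomy opens)."""
    t = math.radians(deg)
    px, py = point[0] - hinge[0], point[1] - hinge[1]
    return (hinge[0] + px * math.cos(t) - py * math.sin(t),
            hinge[1] + px * math.sin(t) + py * math.cos(t))

# Illustrative coordinates in mm (x = medial-lateral, y = proximal-distal)
hip, knee, ankle = (0.0, 0.0), (12.0, 400.0), (5.0, 800.0)
hinge = (30.0, 420.0)  # hypothetical hinge position

print(f"baseline MAD: {mad_mm(hip, knee, ankle):.1f} mm")
for deg in (2.0, 4.0, 6.0):  # candidate opening angles to compare
    new_ankle = rotate_about(ankle, hinge, deg)
    print(f"opening {deg:.0f} deg -> MAD {mad_mm(hip, knee, new_ankle):.1f} mm")
```

The point of the loop is the workflow, not the numbers: the surgeon still chooses the plan; the computation just makes each candidate cheap to evaluate.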
What AI should not do
AI should not:
- issue final surgery recommendations,
- hide uncertainty,
- obscure its method,
- or operate without surgeon override.
Black-box “diagnostic AI” in a high-variance surgical context is risky. Research-grade systems must earn trust through transparency. Validation is not optional. In surgery, failure has real consequences.
So AI validation has to be:
- multi-dataset (different scanners, protocols),
- multi-population (age, anatomy, pathology),
- stress-tested for rare shapes and artefacts,
- and compared to human inter-observer variability.
A useful question is:
Is AI at least as consistent as two good surgeons measuring the same case?
If yes — it’s valuable. If no — it belongs in the lab.
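That question can be made concrete. The sketch below uses invented measurements (mechanical axis angles in degrees, six cases) and a deliberately simple agreement metric, mean absolute disagreement, to compare two surgeons against each other and against an AI model; a real validation study would use larger samples and formal statistics such as intraclass correlation.

```python
# Hypothetical measurements on the same 6 cases -- illustrative numbers only
surgeon_a = [178.5, 182.0, 175.5, 180.0, 184.5, 177.0]
surgeon_b = [178.0, 182.5, 176.0, 179.5, 184.0, 177.5]
ai_model  = [178.4, 182.2, 175.8, 179.8, 184.3, 177.2]

def mean_abs_diff(x, y):
    """Mean absolute disagreement between two raters across cases."""
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

human_gap = mean_abs_diff(surgeon_a, surgeon_b)
ai_gap = max(mean_abs_diff(ai_model, surgeon_a),
             mean_abs_diff(ai_model, surgeon_b))

print(f"surgeon-vs-surgeon disagreement: {human_gap:.2f} deg")
print(f"worst AI-vs-surgeon disagreement: {ai_gap:.2f} deg")
print("at least as consistent" if ai_gap <= human_gap else "back to the lab")
```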
The surgeon-editable model: the safest architecture
Clinically safe AI will look like:
- AI proposes segmentation + landmarks + measurements.
- Surgeon edits/approves.
- System recomputes.
- Plan exports.
This keeps:
- speed from AI,
- control from the surgeon.
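A minimal sketch of that loop, with hypothetical function names and toy 2D landmarks (real systems operate on segmented 3D imaging, not three points):

```python
import math

def ai_propose_landmarks(scan_id):
    # Stand-in for a model's landmark proposal (2D frontal plane, mm)
    return {"hip": (0.0, 0.0), "knee": (12.0, 400.0), "ankle": (5.0, 800.0)}

def recompute(lm):
    """Recompute measurements from the current landmarks (here, just MAD)."""
    (hx, hy), (kx, ky), (ax, ay) = lm["hip"], lm["knee"], lm["ankle"]
    dx, dy = ax - hx, ay - hy
    mad = ((kx - hx) * dy - (ky - hy) * dx) / math.hypot(dx, dy)
    return {"MAD_mm": round(mad, 1)}

# 1. AI proposes
landmarks = ai_propose_landmarks("case-001")
plan = recompute(landmarks)            # 2. system computes from the proposal

# 3. Surgeon edits a landmark; nothing is final until approved
landmarks["knee"] = (10.5, 401.0)
plan = recompute(landmarks)            # 4. system recomputes from the edit

# 5. Only the surgeon-approved plan is exported
approved = {"landmarks": landmarks, "measurements": plan}
print(approved)
```

The design choice worth noticing: the AI output is an editable draft that feeds a deterministic recompute step, so every exported number traces back to landmarks the surgeon has seen and can change.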
It is the core design philosophy behind useful AI research sandboxes: not to replace, but to amplify planning clarity.
Why patients will benefit
The benefit chain is practical:
- faster planning,
- more consistent measurement,
- clearer simulation,
- better communication.
Patients don’t care if the tool is “AI.”
They care if the plan is right.
The cultural shift we need
If clinicians treat AI as a threat, they won’t shape it.
If they treat it as a partner, they will.
Orthopaedics is a field of geometry and function. That makes it one of the most natural places for AI to add value — as long as surgeons remain the authors of decisions.
Closing thought
The future is not “AI surgeons.”
It’s AI-assisted surgeons who plan better, measure more consistently, and communicate more clearly.
That’s a future worth building.
By Dr. Yaser Jabbar