Why 75% of Medical Device Manufacturers Are Disappointed with Their AI Results
The Gap
"The results we're seeing from AI are only as good as the data and structure we put behind it. Most manufacturers haven't done that work yet." — Marc Miller, Division President, TransPerfect Medical Device
Artificial intelligence has arrived in medical device operations (sort of). A recent survey of more than 100 clinical and regulatory professionals across the industry found that nearly half have already implemented or are actively evaluating process AI. That sounds pretty good, right?
Yet only about 20 percent of those using AI say it's performing at or above expectations. Roughly 75 percent describe results that fall short of what they hoped for. Still, the industry keeps investing, keeps deploying, and keeps getting underwhelmed.
The question worth asking isn't whether AI works. It's why it isn't working for most of the people using it.
Three Patterns Behind the Gap
Most people using AI don’t fully understand it
Nearly 80 percent of survey respondents rated themselves very unfamiliar or only somewhat familiar with AI, even though a substantial portion already have it in their workflows. That is using a tool without reading the manual, at scale.
The downstream consequences are predictable: unguided use, limited results, and growing frustration that the technology isn't living up to the hype. In regulated environments like clinical and regulatory affairs, that frustration carries real operational risk.
Most implementations skip the first foundational step
Structuring data prior to implementing AI, regardless of the workflow, use case, or team doing it, is a critical investment. It creates the very foundation for AI systems to perform at their best, grounded in an organization’s own content: indexed, pre-digested, and fed through architectures like retrieval-augmented generation (RAG).
However, 75 percent of manufacturers admitted they are not using structured data to support their AI.
This could be hugely problematic, particularly in such a heavily regulated space. Without structured data, a large language model is essentially working from scratch every time—whether that’s a submission document, a clinical evaluation report, or a medical device translation workflow—drawing on its general training rather than a specific regulatory history, clinical portfolio, or submission record.
The result? Outputs that are generic at best and inaccurate at worst.
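To make the RAG idea concrete, here is a minimal, purely illustrative sketch of the retrieval step. A production system would index content with vector embeddings and a real search layer; simple keyword overlap stands in here, and every document and function name is invented for the example.

```python
# Minimal illustration of retrieval-augmented generation (RAG):
# rather than asking a model to answer "from scratch," relevant
# passages from the organization's own indexed content are retrieved
# and prepended to the prompt, grounding the output in the company's
# actual regulatory history instead of the model's general training.

def score(query: str, passage: str) -> int:
    """Count query terms that appear in the passage (toy relevance score)."""
    terms = set(query.lower().split())
    words = set(passage.lower().split())
    return len(terms & words)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages most relevant to the query."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a prompt that grounds the model in retrieved context."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Illustrative internal content, already structured and indexed.
corpus = [
    "Device X 510(k) cleared 2021; predicate was Device W.",
    "CER for Device X updated after 2023 literature review.",
    "Office holiday schedule for the Boston site.",
]
print(build_prompt("What predicate was cited for Device X?", corpus))
```

The point of the sketch is the shape of the pipeline, not the scoring: with no indexed corpus to retrieve from, the prompt is empty of company context, and the model falls back to generic training data.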
Many are using AI they haven’t paid for
More than a third of survey respondents are using free versions of AI tools for operational work. Free models can't be trained on your content, can't be architecturally grounded in your data, and (critically) have data privacy implications that most manufacturers haven't fully thought through.
If you're putting confidential product information or early-design meeting notes into a free public AI tool, you've lost control of where that data goes: many free tiers reserve the right to retain inputs and use them to train future models. For medical device companies, that kind of IP exposure carries real, potentially catastrophic, risk.
What the 20% Are Doing Differently
Manufacturers that report strong results from AI share a common approach. They've treated implementation as a project with defined phases, not as a tool rollout. That typically means:
- Investing in a paid, configurable AI environment
- Structuring internal content before training the model
- Establishing a formal AI policy that governs what can and cannot be put through the system
- Keeping humans in the loop to review, verify, and validate outputs
- Planning for ongoing model maintenance, not treating AI as a one-time deployment
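The third item on that list, a formal AI policy, ultimately has to be enforced somewhere in the workflow. As a hypothetical sketch of what that enforcement can look like, content is classified before it reaches any model and blocked when the destination isn't permitted. The classification levels and rules below are invented for illustration, not a recommended scheme.

```python
# Hypothetical policy gate: content is classified first, and the
# policy table says which AI destinations each classification may
# reach. Confidential material never leaves via any AI tool.

POLICY = {
    "public": {"free_tool", "paid_environment"},
    "internal": {"paid_environment"},
    "confidential": set(),  # blocked from all AI destinations
}

def check_submission(classification: str, destination: str) -> bool:
    """Return True only if policy allows this content in this tool."""
    allowed = POLICY.get(classification)
    if allowed is None:
        raise ValueError(f"unknown classification: {classification}")
    return destination in allowed
```

A gate this simple still captures the governing idea: the decision about what can and cannot be put through the system is written down once, centrally, rather than left to each user's judgment.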
One manufacturer quoted in the survey data described the unexpected challenge of their implementation: the ongoing training and adjustment of the AI model turned out to be more involved than anticipated.
That's not a failure. That's what a functioning AI program looks like. It requires upkeep. Models drift. Data changes. Regulatory guidance evolves. The manufacturers who build maintenance into the plan are the ones who sustain good results.
Where the Biggest Wins Are Coming From
In clinical operations, the most active current use is AI-assisted literature search for clinical evaluation reports. The volume of published literature required for CER development is enormous, and AI tools are showing real ability to identify, surface, and pre-summarize relevant studies. That said, the extraction still requires human verification. Study populations, endpoints, and reporting methodologies vary too widely for AI to reliably standardize without review.
In regulatory content, the clearest immediate impact is in labeling—specifically in AI-assisted medical device translation, where a trained model generates a draft that is then reviewed and finalized by expert human translators. TransPerfect has been implementing this model for years, and the efficiency gains are substantial. The same pattern (AI draft, human verification) is expected to expand into other regulatory content types as structured data strategies mature.
It's important to note, however, that AI and automation are most valuable not as a replacement for expertise but as a way to redirect it. When automation handles volume and AI handles initial drafts, the clinical writers, regulatory specialists, and quality engineers on your team get their time back, not to do less work, but to focus on higher-value tasks.
Post-market surveillance is another area gaining traction: AI's ability to process large, unstructured datasets from electronic health records and published literature makes it well suited to the continuous monitoring demands of MDR and IVDR compliance.
The Bottom Line
AI works. And it can be a very powerful tool.
But functional AI requires infrastructure. Most of the disappointment in the industry right now is not a product of AI's limitations; it's a product of implementations that skipped the foundational steps.
Structured data. A clear policy. A paid, configurable environment. Human oversight. Ongoing maintenance. These aren't optional add-ons to an AI strategy. They are the strategy.
The manufacturers who treat AI as a project, with requirements, planning, and governance, are the ones seeing results. The rest are still waiting for the tool to do the work on its own.
For teams looking to move from experimentation to consistent results, that shift starts with the right foundation.
TransPerfect Medical Device helps MedTech manufacturers structure their content and architect AI environments built for clinical and regulatory operations. Contact us to discuss your content strategy.