TL;DR: Philips is deploying AI literacy training across its 70,000-person workforce through a staged adoption approach—toy to tool to transformation—with executive leadership trained hands-on and grassroots idea challenges accelerating experimentation whilst maintaining strict healthcare safety standards.
The 134-year-old healthcare technology company is transforming AI from a specialised product capability into organisation-wide literacy, targeting the administrative burden that currently sees clinicians spending as much time documenting care as delivering it.
Strategic Adoption Curve
Philips had already embedded specialised AI and machine-learning systems in products across personal health, diagnostics, image-guided therapy, and patient monitoring. What changed was the requirement for AI capability beyond expert teams: making every employee confidently AI-literate.
The company is intentionally moving employees along a progression: Toy → Tool → Transformation. Employees were already using OpenAI tools privately, so curiosity existed; the challenge was channelling it into productive work.
The rollout combined top-down and bottom-up momentum: executives trained hands-on first to lead by example, a company-wide challenge invited use case proposals, and access to enterprise-grade ChatGPT increased both demand and momentum.
Trust-Building Through Controlled Experimentation
As a healthcare technology company, Philips operates under strict safety, privacy, and regulatory expectations. Trust and responsible AI use are foundational requirements, not optional enhancements.
The company built confidence incrementally: it began with low-risk internal workflows, encouraged controlled experimentation, and formalised responsible AI principles around transparency, fairness, and human oversight, allowing confidence and skill to grow before AI touched patient-impacting workflows.
“You can’t just implement AI as technology,” Patrick Mans, Head of Data Science & AI Engineering, explained. “You have to shift the culture—how people think, and how they trust.”
Reducing Clinical Administrative Burden
Philips’ immediate priority targets administrative burden in clinical environments, where time directly correlates with patient outcomes.
Mans cited a hospital visit where a clinician spent 15 minutes saving a life—then spent another 15 minutes documenting it. “He could have saved two lives in that same time,” Mans noted.
The company’s focus is explicit: give clinicians time back to care for patients. This represents the fastest path to meaningful impact, addressing a universal healthcare pain point where administrative tasks compete with patient care.
Implementation Lessons
Philips’ approach yields specific lessons for enterprise AI deployment:
- Train leadership hands-on so they model usage rather than mandate it.
- Fuel bottom-up momentum by giving people ways to propose, test, and own their use cases.
- Align stakeholders upfront so momentum becomes an advantage rather than a blocker.
- Make responsible AI principles operational through transparency and human oversight.
- Focus where time matters most.
The company is now moving from individual productivity gains to workflow-level automation and agent-supported processes, with AI policy and responsible AI principles firmly established.
“We want to deliver better care for more people,” Mans stated. “AI is one of the most powerful tools we have to do that.”
Source: OpenAI