TL;DR
BNY, one of the world’s largest financial institutions managing $57.8 trillion in assets, has deployed AI agents across its entire organisation. With 20,000 employees actively building agents and 125 live use cases in production, the firm demonstrates how enterprise AI can scale safely through robust governance frameworks.
Major Financial Institution Goes All-In on AI
When ChatGPT launched in late 2022, BNY made a decisive move to embrace generative AI enterprise-wide. Rather than limiting experimentation to technologists, the firm created a centralised AI Hub and launched an internal platform called Eliza to train employees on responsible AI use.
“Our mantra is ‘AI for everyone, everywhere, and in everything,’” says Sarthak Pattanaik, Chief Data and AI Officer at BNY. “This technology is too transformative, and we decided to take a platform-based approach for execution.”
The platform now supports over 125 live use cases, with 99% of the workforce trained on generative AI.
Governance as an Enabler, Not a Barrier
What sets BNY’s approach apart is how governance enables rather than restricts innovation. The firm established cross-disciplinary review boards, including a data use review board and an AI release board, with an Enterprise AI Council providing senior oversight.
“Some might see AI governance as a barrier, but in our experience, it’s been an enabler,” says Watt Wanapha, Deputy General Counsel. “Good governance has allowed us to move much more quickly.”
Early wins include a Contract Review Assistant that reduces legal review time by 75%, from four hours to one, across 3,000+ annual vendor agreements. The firm has also introduced “digital employees” — advanced AI agents with identities, access controls, and dedicated workflows.
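The idea of a "digital employee" with an identity and explicit access controls can be sketched in code. The class, agent ID, role, and action names below are all hypothetical illustrations, not BNY's actual implementation; the sketch only shows the deny-by-default permission pattern the description implies.

```python
from dataclasses import dataclass

# Hypothetical sketch: a "digital employee" modelled as an agent identity
# carrying an explicit allow-list of workflow actions.

@dataclass(frozen=True)
class DigitalEmployee:
    agent_id: str
    role: str
    allowed_actions: frozenset  # explicit allow-list of workflow actions

    def can_perform(self, action: str) -> bool:
        """Deny by default: only explicitly granted actions are permitted."""
        return action in self.allowed_actions

# Example: an agent scoped to contract-review work only.
contract_reviewer = DigitalEmployee(
    agent_id="agent-0001",
    role="contract-review",
    allowed_actions=frozenset({"read_contract", "draft_summary"}),
)

print(contract_reviewer.can_perform("read_contract"))    # True
print(contract_reviewer.can_perform("approve_payment"))  # False
```

The key design choice is that permissions are granted per agent, not inherited broadly, so each digital employee's reach is auditable from its identity record alone.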
Looking Forward
BNY’s success offers a blueprint for enterprise AI deployment. Key lessons include leveraging existing risk frameworks rather than creating AI-specific governance from scratch, building shared responsibility through cross-functional councils, and making governance visible through platform-level enforcement.
“With AI, we are all encountering new questions that have not been answered,” Wanapha notes. “So it’s very important to have the right partner and an open channel of communication.”
Source: OpenAI