Anthropic Launches Claude for Excel as AI Finance Battle Intensifies

TL;DR: Anthropic has embedded its Claude AI assistant directly into Microsoft Excel whilst securing data partnerships with major financial information providers including LSEG, Moody’s, and Aiera. Norway’s $1.6 trillion sovereign wealth fund reports roughly 20% productivity gains from Claude, and Anthropic aims to capture share of a finance AI market projected to reach $97 billion by 2027, competing directly with Microsoft Copilot and OpenAI.

Anthropic is making its most aggressive push into the trillion-dollar financial services industry, unveiling tools that embed its Claude AI assistant directly into Microsoft Excel and connect it to real-time market data from influential financial information providers.

The San Francisco-based AI startup announced the release of Claude for Excel on Monday, allowing financial analysts to interact with the AI system directly within their spreadsheets, the quintessential tool of modern finance. Beyond Excel, select Claude models are also available in Microsoft Copilot Studio and the Researcher agent, expanding Anthropic’s integration across Microsoft’s enterprise AI ecosystem.

Excel as Strategic Battleground

The decision to build directly into Excel addresses a key source of friction in finance AI adoption. Excel remains the lingua franca of the industry, the digital workspace where analysts construct financial models, run valuations, and stress-test assumptions. By embedding Claude into this environment, Anthropic meets financial professionals where they work rather than requiring them to switch applications.

Claude for Excel Capabilities:

  • Sidebar interface for reading, analysing, modifying, and creating workbooks
  • Full transparency through change tracking and cell-level explanations
  • Formula dependency preservation during modifications
  • Cell-level debugging
  • Template population with new data
  • Construction of new spreadsheets from scratch

This transparency feature addresses a persistent anxiety around AI in finance: the “black box” problem. When billions of dollars ride on a financial model’s output, analysts need to understand not just the answer but how the AI arrived at it. By showing its work at the cell level, Anthropic attempts to build the trust necessary for widespread adoption in an industry where careers and fortunes can turn on a misplaced decimal point.
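The article does not describe how Anthropic represents these cell-level explanations internally; as a purely hypothetical sketch of the concept, a change log might pair every proposed edit with the formula it replaces and a plain-language rationale an analyst can audit:

```python
from dataclasses import dataclass

@dataclass
class CellChange:
    """One proposed workbook edit, with the reasoning attached (hypothetical format)."""
    cell: str       # e.g. "DCF!B14"
    before: str     # original formula or value
    after: str      # proposed formula or value
    rationale: str  # plain-language explanation an analyst can review

proposed = [
    CellChange(
        cell="DCF!B14",
        before="=B12*0.08",
        after="=B12*Assumptions!B3",
        rationale="Replace the hard-coded 8% discount rate with the WACC assumption cell "
                  "so sensitivity tables update automatically.",
    ),
]

for change in proposed:
    print(f"{change.cell}: {change.before} -> {change.after}")
    print(f"  why: {change.rationale}")
```

Whatever the actual representation, the design goal is the same: every modification remains inspectable and reversible rather than silently applied.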

Data Partnership Moat

Perhaps more significant than the Excel integration is Anthropic’s expansion of its connector ecosystem, which now links Claude to live market data and proprietary research from financial information giants. The company added six major new data partnerships spanning the entire spectrum of financial information that professional investors rely upon.

New Data Partnerships:

Aiera: Real-time earnings call transcripts and investor event summaries (shareholder meetings, presentations, conferences), plus a Third Bridge data feed providing expert interviews, company intelligence, and industry analysis from specialists and former executives

Chronograph: Private equity operational and financial information for portfolio monitoring and due diligence, including performance metrics, valuations, and fund-level data

Egnyte: Secure search across permitted internal data rooms, investment documents, and approved financial models whilst maintaining governed access controls

LSEG (London Stock Exchange Group): Live market data including fixed income pricing, equities, foreign exchange rates, macroeconomic indicators, and analysts’ estimates

Moody’s: Proprietary credit ratings, research, and company data covering ownership, financials, and news on more than 600 million public and private companies for compliance, credit analysis, and business development

MT Newswires: Latest global multi-asset class news on financial markets and economies

These partnerships amount to a land grab for the informational infrastructure that powers modern finance. They build on integrations announced in July with S&P Capital IQ, Daloopa, Morningstar, FactSet, PitchBook, Snowflake, and Databricks.

Together, these connectors give Claude access to virtually every category of financial data an analyst might need: fundamental company data, market prices, credit assessments, private company intelligence, alternative data, and breaking news.

Strategic Rationale: The quality of AI outputs depends heavily on the quality of inputs. Generic large language models trained on public internet data cannot compete with systems that have direct pipelines to Bloomberg-quality financial information. By securing these partnerships, Anthropic builds a moat around its financial services offering that competitors will find difficult to replicate.

This is a direct challenge to the “one AI to rule them all” approach favoured by some competitors: Anthropic is betting instead that domain-specific AI systems with privileged access to proprietary data will outcompete general-purpose assistants.
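Anthropic has not published the schemas behind these connectors, but the general pattern is familiar tool use: Claude decides it needs data, requests it from a connector, and incorporates the result. The sketch below illustrates that flow with the Anthropic Python SDK and a hypothetical get_market_quote tool; the tool name, schema, and model ID are assumptions for illustration, not the actual LSEG or Moody’s integrations.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical market-data tool definition; the real connectors are pre-built by the providers.
quote_tool = {
    "name": "get_market_quote",
    "description": "Fetch the latest price and key statistics for a ticker from a market data feed.",
    "input_schema": {
        "type": "object",
        "properties": {
            "ticker": {"type": "string", "description": "Exchange ticker symbol, e.g. 'AZN.L'"},
        },
        "required": ["ticker"],
    },
}

response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model ID; check current model names
    max_tokens=1024,
    tools=[quote_tool],
    messages=[{"role": "user", "content": "Summarise today's move in AstraZeneca and flag anything unusual."}],
)

# If Claude needs live data, it returns a tool_use block; the caller runs the lookup
# against the data provider and sends the result back in a follow-up message.
for block in response.content:
    if block.type == "tool_use":
        print("Claude requested:", block.name, block.input)
```

The new connectors effectively package this request-and-return loop for each provider, so analysts get governed access to live data without wiring up tools themselves.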

Pre-Configured Workflows for Analyst Tasks

The third pillar of Anthropic’s announcement involves six new “Agent Skills”—pre-configured workflows for common financial tasks. These skills productise the workflows of entry-level and mid-level financial analysts who spend their days building models, processing due diligence documents, and writing research reports.

New Agent Skills:

  1. Discounted Cash Flow Models: Complete free cash flow projections, weighted average cost of capital calculations, scenario toggles, and sensitivity tables (a worked sketch of the underlying arithmetic follows this list)
  2. Comparable Company Analysis: Valuation multiples and operating metrics that can be easily refreshed with updated data
  3. Data Room Processing: Convert documents into Excel spreadsheets populated with financial information, customer lists, and contract terms
  4. Company Teasers and Profiles: For pitch books and buyer lists
  5. Earnings Analysis: Extract important metrics, guidance changes, and management commentary from quarterly transcripts and financials
  6. Initiating Coverage Reports: Industry analysis, company deep dives, and valuation frameworks

Performance Benchmark: Anthropic’s Claude Sonnet 4.5 model now tops the Finance Agent benchmark from Vals AI at 55.3% accuracy, a metric that tests AI systems on tasks expected of entry-level financial analysts. Whilst 55% accuracy might sound underwhelming, it represents state-of-the-art performance and highlights both the promise and the limitations of AI in finance.

The technology can handle sophisticated analytical tasks but isn’t yet reliable enough to operate autonomously without human oversight—a reality that may reassure both regulators and analysts whose jobs might otherwise be at risk.

Trillion-Dollar Client Validation

Anthropic’s financial services strategy appears to be gaining traction with marquee clients. The company counts among its clients AIA Labs at Bridgewater, Commonwealth Bank of Australia, American International Group, and Norges Bank Investment Management—Norway’s $1.6 trillion sovereign wealth fund, one of the world’s largest institutional investors.

Client Results:

NBIM (Norway Sovereign Wealth Fund): CEO Nicolai Tangen reported approximately 20% productivity gains, equivalent to 213,000 hours, with portfolio managers and risk departments now able to “seamlessly query our Snowflake data warehouse and analyse earnings calls with unprecedented efficiency.”

AIG: CEO Peter Zaffino said the partnership has “compressed the timeline to review business by more than 5x in our early rollouts whilst simultaneously improving our data accuracy from 75% to over 90%.”

These aren’t pilot programs or proof-of-concept deployments; they’re production implementations at institutions managing trillions of dollars in assets and making underwriting decisions that affect millions of customers. Their public endorsements provide the social proof that typically drives enterprise adoption in conservative industries.

Regulatory Uncertainty Creates Both Opportunity and Risk

Anthropic’s financial services ambitions unfold against heightened regulatory scrutiny and shifting enforcement priorities. In 2023, the Consumer Financial Protection Bureau released guidance requiring lenders to “use specific and accurate reasons when taking adverse actions against consumers” involving AI, and issued additional guidance requiring regulated entities to “evaluate their underwriting models for bias.”

However, according to a Brookings Institution analysis, these measures have since been revoked, with related work stopped or eliminated at a downsized CFPB under the current administration, creating regulatory uncertainty.

The pendulum has swung from the Biden administration’s cautious approach, exemplified by an executive order on safe AI development, toward the Trump administration’s “America’s AI Action Plan,” which seeks to “cement U.S. dominance in artificial intelligence” through deregulation.

Regulatory Flux Impact: This creates both opportunities and risks. Financial institutions eager to deploy AI now face less prescriptive federal oversight, potentially accelerating adoption. But the absence of clear guardrails also exposes them to potential liability if AI systems produce discriminatory outcomes, particularly in lending and underwriting.

The Massachusetts Attorney General recently reached a $2.5 million settlement with student loan company Earnest Operations, alleging that its use of AI models resulted in “disparate impact in approval rates and loan terms, specifically disadvantaging Black and Hispanic applicants.” Such cases will likely multiply as AI deployment grows, creating a patchwork of state-level enforcement even as federal oversight recedes.

Human-in-the-Loop Emphasis

Anthropic appears acutely aware of regulatory risks. In an interview with Banking Dive, Jonathan Pelosi, Anthropic’s global head of industry for financial services, emphasised that Claude requires a “human in the loop.” The platform is not intended for autonomous financial decision-making or to provide stock recommendations that users follow blindly.

During client onboarding, Anthropic focuses on training and understanding model limitations, putting guardrails in place so people treat Claude as helpful technology rather than a replacement for human judgement.

Competitive Landscape Intensifies

Anthropic’s financial services push comes as AI competition intensifies across the enterprise. OpenAI, Microsoft, Google, and numerous startups are all vying for position in what may become one of AI’s most lucrative verticals. Goldman Sachs introduced a generative AI assistant to its bankers, traders, and asset managers in January, signalling that major banks may build their own capabilities rather than rely exclusively on third-party providers.

The emergence of domain-specific AI models like BloombergGPT—trained specifically on financial data—suggests the market may fragment between generalised AI assistants and specialised tools. Anthropic’s strategy appears to stake out a middle ground: general-purpose models enhanced with financial-specific tooling, data access, and workflows.

Partnership Strategy: The company’s partnership strategy with implementation consultancies including Deloitte, KPMG, PwC, Slalom, TribeAI, and Turing is equally critical. These firms serve as force multipliers, embedding Anthropic’s technology into their own service offerings and providing the change management expertise that financial institutions need to successfully adopt AI at scale.

The integration with Microsoft is particularly noteworthy given that Microsoft has its own Copilot AI assistant embedded across its Office suite and counts OpenAI—Anthropic’s direct competitor—as its largest investment. This positions Anthropic to compete directly with Microsoft whilst simultaneously partnering with it, a delicate strategic balance.

CFO Trust Gap Remains Challenge

The broader question is whether AI tools like Claude will genuinely transform financial services productivity or merely shift work around. The PYMNTS Intelligence report “The Agentic Trust Gap” found that chief financial officers remain hesitant about AI agents, with “nagging concern” about hallucinations where “an AI agent can go off script and expose firms to cascading payment errors and other inaccuracies.”

“For finance leaders, the message is stark: Harness AI’s momentum now, but build the guardrails before the next quarterly call—or risk owning the fallout,” the report warned.

A 2025 KPMG report found that 70% of board members have developed responsible use policies for employees. Other popular initiatives include implementing recognised AI risk and governance frameworks, developing ethical guidelines and training programmes for AI developers, and conducting regular AI use audits.

Industry Risk Management Confidence

Speaking at the Evident AI Symposium in New York last week, Ian Glasner, HSBC’s group head of emerging technology, innovation and ventures, struck an optimistic tone about the sector’s readiness for AI adoption:

“As an industry, we are very well prepared to manage risk. Let’s not overcomplicate this. We just need to be focused on the business use case and the value associated.”

Market Position and Valuation

The financial services industry faces a delicate balancing act: move too slowly and risk competitive disadvantage as rivals achieve productivity gains; move too quickly and risk operational failures, regulatory penalties, or reputational damage.

Anthropic’s latest moves suggest the company sees financial services as a beachhead market where AI’s value proposition is clear, customers have deep pockets, and the technical requirements play to Claude’s strengths in reasoning and accuracy. By building Excel integration, securing data partnerships, and pre-packaging common workflows, Anthropic reduces the friction that typically slows enterprise AI adoption.

The $61.5 billion valuation the company commanded in its March fundraising round—up from roughly $16 billion a year earlier—suggests investors believe this strategy will work.

Looking Forward

The expansion comes just three months after Anthropic launched its Financial Analysis Solution in July, and it signals the company’s determination to capture market share in an industry projected to spend $97 billion on AI by 2027, up from $35 billion in 2023.

Financial services may prove to be AI’s most demanding proving ground: an industry where mistakes are costly, regulation is stringent, and trust is everything. If Claude can successfully navigate the spreadsheet cells and data feeds of Wall Street without hallucinating a decimal point in the wrong direction, Anthropic will have accomplished something far more valuable than winning another benchmark test. It will have proven that AI can be trusted with the money.

The real test will come as these tools move from pilot programmes to production deployments across thousands of analysts and billions of dollars in transactions.
