TL;DR: HPE’s deployment of AI-ready networking infrastructure at the 2025 Ryder Cup demonstrates the critical role of high-performance networks in real-time AI applications. The system processed data from 67 AI-enabled cameras and over 650 access points, highlighting how inference-ready networks are becoming essential for turning AI potential into performance. Survey data shows 84% of organizations are reevaluating deployment strategies due to AI growth.
Modern AI applications demand infrastructure fundamentally different from traditional enterprise networks. While email and file sharing tolerate latency, AI inference workloads require ultra-low latency and lossless throughput—where even half-second delays can cascade into system-wide bottlenecks.
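To make the cascade concrete, here is a minimal queueing sketch (all numbers are illustrative, not measurements from any real deployment): a pipeline that comfortably keeps pace with arriving inference requests falls roughly fifty requests behind after a single half-second stall, and takes seconds to drain the backlog.

```python
# Minimal sketch of how one stall cascades; all numbers are illustrative,
# not measurements from the Ryder Cup deployment.
from collections import deque

ARRIVAL_MS = 10          # one inference request every 10 ms (100 req/s)
SERVICE_MS = 8           # normal per-request service time: keeps pace easily
STALL_AT_MS, STALL_MS = 500, 500   # a single half-second network stall

queue, backlog, next_free = deque(), [], 0
for t in range(0, 5_000, ARRIVAL_MS):        # simulate 5 seconds of traffic
    queue.append(t)                          # a request arrives
    if t == STALL_AT_MS:
        next_free = max(next_free, t) + STALL_MS   # link stalls for 500 ms
    while queue and next_free <= t:          # serve whatever fits in this tick
        queue.popleft()
        next_free += SERVICE_MS
    backlog.append(len(queue))

print("peak backlog:", max(backlog), "requests")   # ~50 requests pile up
```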
The 2025 Ryder Cup at Bethpage Black provided a real-world stress test for AI networking at scale. HPE deployed a two-tiered architecture spanning more than 650 WiFi 6E access points, 170 network switches, and 67 AI-enabled cameras across the sprawling venue. The front-end layer captured live video and movement data, while a back-end on-site data center linked GPUs and servers in a high-speed, low-latency configuration.
“Disconnected AI doesn’t get you very much; you need a way to get data into it and out of it for both training and inference,” explains Jon Green, CTO of HPE Networking. The system ingested data from ticket scans, weather reports, GPS-tracked golf carts, concession sales, and spectator queues—providing tournament staff with instantaneous operational intelligence.
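As a rough illustration of that kind of multi-source fusion (the event schema, source names, and weights below are assumptions for the sketch, not details of HPE's actual system), disparate feeds can be rolled up into a single per-zone operational signal:

```python
# A rough sketch of multi-source event fusion; the schema, source names,
# and weights are assumptions for illustration, not HPE's actual system.
from dataclasses import dataclass
from collections import Counter

@dataclass
class Event:
    source: str    # "ticket_scan", "concessions", "cart_gps", ...
    zone: str      # venue zone the event applies to
    value: float   # source-specific measurement (headcount, sales, ...)

def zone_pressure(events: list[Event]) -> Counter:
    """Roll heterogeneous events up into a crude per-zone crowding score."""
    weights = {"ticket_scan": 1.0, "concessions": 0.5, "cart_gps": 0.2}
    score: Counter = Counter()
    for e in events:
        score[e.zone] += weights.get(e.source, 0.0) * e.value
    return score

feed = [Event("ticket_scan", "gate_3", 120.0),
        Event("concessions", "gate_3", 40.0),
        Event("cart_gps", "hole_17", 25.0)]
print(zone_pressure(feed).most_common(1))   # busiest zone first
```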
Physical AI Drives On-Premises Return
The rise of physical AI, where applications move off screens and onto factory floors and into autonomous vehicles, is fueling a wave of operational repatriation: workloads once destined for cloud infrastructure are returning on-premises for speed, security, and cost reasons.
“Physical AI is one of the use cases that we believe will bring a lot of IT back on-prem,” predicts Green, citing the example of AI-infused factory floors where cloud round-trips would be too slow to safely control automated machinery. By the time cloud processing completes, the machine has already moved.
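The arithmetic behind that claim is simple. With illustrative figures (a 100 Hz control loop and typical round-trip times, none of which come from the article), a cloud hop blows the deadline while a same-site hop fits easily:

```python
# Back-of-the-envelope check on cloud round-trips for closed-loop control.
# All figures are illustrative assumptions, not numbers from the article.
CONTROL_RATE_HZ = 100                     # a typical industrial control loop
budget_ms = 1000 / CONTROL_RATE_HZ        # 10 ms to sense, decide, actuate

for path, rtt_ms in [("cloud round-trip", 60.0), ("on-prem, same site", 1.0)]:
    verdict = "fits the budget" if rtt_ms < budget_ms else "misses the deadline"
    print(f"{path}: {rtt_ms:.0f} ms vs {budget_ms:.0f} ms budget -> {verdict}")
```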
Research from Enterprise Research Group supports this shift: 84% of respondents are reevaluating application deployment strategies due to AI growth. IDC forecasts the AI infrastructure market will reach $758 billion by 2029.
Self-Driving Networks on the Horizon
Networks themselves are becoming AI-enabled. HPE’s platform processes over a trillion telemetry points daily from billions of connected devices, analyzing anonymized data to identify performance trends and refine network behavior over time.
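Trend-spotting over a telemetry firehose often reduces to maintaining a running baseline per metric and flagging sharp deviations. A minimal sketch, with made-up latency samples and thresholds rather than anything from HPE's pipeline:

```python
# Minimal baseline-and-deviate sketch for spotting telemetry anomalies;
# the metric, samples, and threshold are made up, not HPE's pipeline.
def ewma_anomalies(samples: list[float], alpha: float = 0.1,
                   threshold: float = 3.0) -> list[int]:
    """Flag indices where a sample deviates sharply from a running baseline."""
    mean, var, flagged = samples[0], 0.0, []
    for i, x in enumerate(samples[1:], start=1):
        dev = x - mean
        if var > 0 and abs(dev) > threshold * var ** 0.5:
            flagged.append(i)                # sharp departure from the trend
        mean += alpha * dev                  # update the moving baseline
        var = (1 - alpha) * (var + alpha * dev * dev)
    return flagged

latency_ms = [5, 5, 6, 5, 5, 5, 48, 5, 6, 5]   # one degraded reading
print(ewma_anomalies(latency_ms))              # -> [6]
```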
Today’s AIOps systems surface insights as administrator recommendations. Tomorrow’s “self-driving networks” will autonomously handle repetitive, error-prone tasks—detecting and fixing port issues, misconfigurations, and connectivity problems without human intervention.
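In code, that closed loop might look like the sketch below; get_port_status and bounce_port are hypothetical stand-ins for switch telemetry and remediation calls, not a real vendor API:

```python
# Sketch of a detect-and-remediate loop; get_port_status and bounce_port
# are hypothetical stand-ins, not a real switch or vendor API.
def get_port_status(switch: str, port: int) -> dict:
    """Placeholder: poll switch telemetry (SNMP, gNMI, vendor REST, ...)."""
    return {"flap_count": 0}

def bounce_port(switch: str, port: int) -> None:
    """Placeholder: apply the low-risk fix (disable, then re-enable, the port)."""

def remediation_pass(switch: str, ports: range, max_flaps: int = 5) -> None:
    """Scan ports and fix link flapping without a human in the loop."""
    for port in ports:
        if get_port_status(switch, port).get("flap_count", 0) > max_flaps:
            bounce_port(switch, port)
            print(f"{switch} port {port}: auto-remediated link flapping")

remediation_pass("edge-sw-01", range(1, 49))   # one poll-and-fix cycle;
# a production agent would run this continuously, gated by policy and audit
```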
“AI isn’t coming for the network engineer’s job, but it will eliminate the tedious stuff that slows them down,” notes Green. The vision: network administrators directing high-level strategy while AI handles 130-switch configuration runs and automatic fault remediation.
As enterprises move toward distributed, real-time AI applications, the network has emerged as the third critical leg alongside models and data readiness. More than half of organizations still struggle to operationalize real-time data pipelines, an improvement on 93% a year earlier but a clear measure of the infrastructure gap that AI-ready networking must close.
Source: MIT Technology Review