HPE to Offer AMD Helios AI Racks Globally by 2026
HPE plans to offer AMD Helios AI racks globally by 2026, integrating EPYC CPUs and Instinct GPUs for high-density AI infrastructure.

Background
AMD introduced Helios as a rack‑scale AI architecture to compete with existing vendor offerings, providing an open, scalable alternative that integrates AMD compute, accelerator, and networking assets for large‑model training and inference. HPE and AMD have positioned Helios as a continuation of their long‑standing HPC partnership, targeting cloud service providers (CSPs), “neoclouds,” research institutions, and enterprises requiring turnkey, high‑density AI infrastructure. HPE showcased the platform at HPE Discover Barcelona 2025 and plans to offer Helios globally in 2026.
Key Features and Technical Highlights
- Helios integrates AMD Instinct MI455X GPUs and EPYC “Venice” CPUs, supporting a 72-GPU rack with double‑wide, liquid‑cooled chassis for high GPU counts and thermal efficiency.
- HPE’s Helios rack is specified to deliver aggregated scale‑up bandwidth of 260 TB/s, up to 2.9 exaFLOPS of FP4 compute per rack, 31 TB of HBM4 memory, and 1.4 PB/s of memory bandwidth for demanding AI/HPC workloads.
- Unlike many NVLink/NVIDIA-centric designs, Helios uses an Ethernet-based scale‑up fabric developed with Broadcom, integrated with HPE’s networking, to optimize AI performance over standard Ethernet switches.
- The stack includes AMD Pensando for advanced networking offload and ROCm as the open software layer for GPU acceleration and deployment portability.
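The rack‑level figures above can be sanity‑checked by dividing by the 72‑GPU count. A minimal sketch; the rack totals are the quoted numbers, while the per‑GPU splits are simple derived arithmetic, not vendor‑confirmed specifications:

```python
# Back-of-the-envelope per-GPU figures for a 72-GPU Helios rack.
# Rack totals are the published aggregates; per-GPU values are derived
# by plain division and are illustrative only.

GPUS_PER_RACK = 72

rack_totals = {
    "fp4_exaflops": 2.9,   # quoted peak FP4 compute per rack
    "hbm4_tb": 31.0,       # quoted total HBM4 capacity per rack
    "mem_bw_pb_s": 1.4,    # quoted aggregate memory bandwidth per rack
}

per_gpu = {
    # 2.9 EF / 72 GPUs -> petaFLOPS per GPU (1 EF = 1000 PF)
    "fp4_petaflops": rack_totals["fp4_exaflops"] * 1000 / GPUS_PER_RACK,
    # 31 TB / 72 GPUs -> GB per GPU (1 TB = 1000 GB)
    "hbm4_gb": rack_totals["hbm4_tb"] * 1000 / GPUS_PER_RACK,
    # 1.4 PB/s / 72 GPUs -> TB/s per GPU (1 PB = 1000 TB)
    "mem_bw_tb_s": rack_totals["mem_bw_pb_s"] * 1000 / GPUS_PER_RACK,
}

for name, value in per_gpu.items():
    print(f"{name}: {value:.1f}")
# -> roughly 40.3 PFLOPS FP4, 430.6 GB HBM4, and 19.4 TB/s per GPU
```

The derived per‑GPU numbers are in the same range as publicly discussed figures for AMD’s MI400‑series accelerators, which suggests the rack aggregates are straightforward sums over 72 GPUs.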
Commercial Rollout and Partners
HPE is the first major OEM to commit to offering Helios racks, promising worldwide availability in 2026. The integration includes Broadcom (Tomahawk‑class silicon), Juniper components, and HPE’s networking and software for turnkey deployment to CSPs and neoclouds. HPE also announced the Herder system for the High‑Performance Computing Center Stuttgart, based on AMD Instinct MI430X GPUs and Venice CPUs and slated for delivery in the second half of 2027.
Market and Competitive Implications
- Helios positions AMD/HPE as a competitor to NVIDIA’s rack-scale systems by promoting an open‑standards, Ethernet‑centric approach.
- Helios’ performance metrics aim to attract operators training large models, but success depends on OEM deployments, customer adoption cycles, and delivery timing in 2026–2027.
- This move could invigorate competition in AI infrastructure and reduce single‑vendor lock‑in, though it raises execution risks around supply chain, liquid‑cooling integration, and software ecosystem maturity.
Context and Analyst Perspective
Analysts view Helios as AMD’s strategic move beyond chips to deliver an integrated hardware/software architecture, capturing more value in the AI infrastructure stack. While Helios strengthens AMD’s product narrative amid accelerating AI adoption, investors should monitor commercial milestones before assuming significant revenue impact.
Implications and Outlook
- Short term: Helios rollout announcements and vendor partnerships enhance AMD’s credibility in large‑scale AI racks.
- Medium term: Market impact will depend on delivery timing, system integration success, and ecosystem adoption of ROCm and Ethernet scale‑up fabrics.
- Long term: Broad adoption could lead to higher‑margin system revenues, but the conversion of design wins into sustained sales remains to be demonstrated.
What to Watch Next
- HPE customer announcements and pilot deployments referencing Helios in 2026.
- Benchmarks and real‑world training runs validating the claimed 2.9 exaFLOPS FP4 and memory‑bandwidth figures.
- ROCm ecosystem growth and software parity with competitor toolchains.
Sources include HPE’s Helios press release and coverage from TechRadar, ITPro, and datacenter media.



