AMD touts Instinct MI430X, MI440X, and MI455X AI accelerators and Helios rack-scale AI architecture at CES — full MI400-series family fulfills a broad range of
The Instinct MI430X, MI440X, and MI455X accelerators are expected to feature Infinity Fabric alongside UALink for scale-up connectivity, thus making them the first accelerators to support the new interconnect. However, practical UALink adoption will depend on ecosystem partners such as Astera Labs, Auradine, Enfabrica, and Xconn.

If these companies deliver UALink switching silicon in the second half of 2026, Helios machines interconnected via UALink should follow. Without such switches, UALink-based systems will either fall back to UALink-over-Ethernet (not how the interconnect was originally meant to be used) or stick to traditional mesh or torus configurations rather than large-scale switched fabrics.

As for scale-out connectivity, AMD plans to offer its Helios platform with Ultra Ethernet. Unlike UALink, Ultra Ethernet can run over existing network adapters, such as AMD’s Pensando Pollara 400G and the forthcoming Pensando Vulcano 800G cards, enabling advanced connectivity in data centers built on current Ethernet infrastructure.


Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

Stomx: My impression is that by the next CES, whenever somebody starts talking about AI, the audience will throw up.

Stomx: By the way, can somebody explain to me how they plan to increase AI performance 10,000x over the next 5 years without at least a 1,000x increase in power consumption, when during the last 5 years they were not able to even double Ryzen single-core or multi-core performance?

George³:
Stomx said: By the way, can somebody explain to me how they plan to increase AI performance 10,000x over the next 5 years without at least a 1,000x increase in power consumption, when during the last 5 years they were not able to even double Ryzen single-core or multi-core performance?
It's easy with zero-precision calculations like FP0 and INT0. /s

Penzi:
George³ said: It's easy with zero-precision calculations like FP0 and INT0. /s
Can’t seem to get the :LOL: instead of the (y)… ah well 🤷‍♂️

bit_user:
Stomx said: By the way, can somebody explain to me how they plan to increase AI performance 10,000x over the next 5 years without at least a 1,000x increase in power consumption
Fair question. The usual answers are things like:
1. Compute-in-memory (or near-memory), as most energy in computers is spent on moving data around.
2. Newer data formats that increase information density and are cheaper to process.
3. Possibly more use of compression, although that overlaps with number 2.
4. Dataflow-like processing architectures that don't suffer the inefficiencies of a cache hierarchy or the latency penalties of random access.
5. Exotic forms of computation.
I've lost track of which, but one of the HBM4 variants is supposed to make progress on compute/memory integration. We've already seen several iterations of new data formats, so I'm not sure there's too much more mileage in numbers 2 & 3. Number 4 is something AMD just started dabbling with, in the PS5 and RDNA4. The upcoming PS6 and UDNA architecture should go a lot further in this direction. I think that's what "Neural Arrays" is getting at: https://www.tomshardware.com/video-games/console-gaming/sony-and-amd-tease-likely-playstation-6-gpu-upgrades-radiance-cores-and-a-new-interconnect-for-boosting-ai-rendering-performance
Stomx said: when during the last 5 years they were not able to even double Ryzen single-core or multi-core performance?
Improving single-thread performance is hard. As for multi-core, they really did, if you look at their server or workstation CPUs. Not desktop, but those have artificially restricted core counts, based on the usage patterns and needs of typical desktop users and apps. However, by focusing on CPUs, you're really missing the point. This isn't about CPUs; you should look at GPU compute performance instead. That said, I think all of this isn't nearly enough to cover the gap cited in your question.
