AMD ROCm CES 2026 press Q&A roundtable transcript — 'ROCm from 2023 is completely unrecognizable to ROCm today,' the company says, as it seeks to break down barriers



So the standard question I try to ask at conferences that are not completely technical is, "How many of the attendees use some AI tool five times a day?" Ten or fifteen hands come up. Then I ask them, "How many of you actually start a browser, like Google Chrome or something like that?" And of course, everybody raises a hand. They used Gemini by doing that, but they don't know it.

Journalist 1: So I remember in the high-level slides there was a comment about ROCm 7.2 becoming a "common platform." Does that mean the binaries are the same between Windows and Linux for ROCm?

Andrej Zdravkovic (AMD): Yes. Well, the operating systems are different, so the underlying kernel-level components need to be different, but it is the same source, compiled for Windows and for Linux. It's kind of interesting, and I don't know if that's us or you, but we tend to go into these numbers, like ROCm 7.2, and there was ROCm 7, and 6.5; there will be a ROCm 8. I don't think it really matters very much. I think what matters is what functionality is offered, and each and every release is going to give you a little bit more functionality. What we are trying to do is to link that functionality on the big system to functionality on the PC to functionality on each individual platform.

Journalist 1: You actually just preempted one of my questions with the different GFX versions. So Strix Halo being GFX1150, versus RDNA 4 being GFX1200, versus MI350 being GFX950; these are all disparate IPs with different capabilities, and something that you don't have, at least that's mainline or mainstream at the moment, is a [NVIDIA] PTX equivalent. So you can't just write a program on Strix Halo and run it on MI350 because of things like different matrix instructions. What are you doing to improve that beyond just AMDGCN and SPIR-V?

Andrej Zdravkovic (AMD): Well, at this point in time, you're right; this is a recompile for the target product. When we devised ROCm, the idea was: we want the fastest possible way from the application, from the user, to the hardware, and any introduction of an intermediate layer, like SPIR-V or PTX, to a certain extent slows down that path. When we designed ROCm, it was originally designed for high-performance computing. In view were systems like El Capitan, these huge research machines where it doesn't really matter how long the compile takes; once you do it, you then solve the weather patterns for North America.

So, as it started from that and started expanding, we are looking at various options. SPIR-V is one of the options to actually get that portability. At this point in time, we're just not offering it; we are looking at what the right options are. We found that at the user level. So what we're trying to do right now on Windows and Linux is the AI bundle. The idea is you have your Ryzen Max notebook, or you have the new, what's the name?

Andrej Zdravkovic (AMD): The Halo box. Sorry, I'm an engineer; I don't make product names. So you will be able to have a system that is pre-configured for either Linux or Windows, with the right version of ROCm already installed. So we will abstract the difference for somebody who is going to use it at the application layer. But yes, if you are programming at the ROCm layer, it will still require a recompile for some time. Great question; it's just that I don't have a perfect answer. I wish I had, but it is always a race, and a decision: what is more important? Because the resources are limited, our technology is moving fast, and the use of our technology moves fast. So somewhere in between you have this layer of ROCm and everything, and you have to make the decisions. What is most important at this point?
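Editor's note: since ROCm ships architecture-specific binaries rather than a PTX-style portable intermediate, a deployed program has to carry (or rebuild) one compiled kernel per gfx target it supports. A minimal sketch of that per-target dispatch, using the architecture IDs cited above; the file names and function here are purely illustrative, not a real ROCm API:

```python
# Illustrative only: with no PTX-like portable IR, an app must bundle one
# pre-compiled kernel per GPU architecture it intends to run on.
KERNEL_BINARIES = {
    "gfx1150": "kernel_strix_halo.hsaco",  # Strix Halo APU (as cited above)
    "gfx1200": "kernel_rdna4.hsaco",       # RDNA 4 discrete GPU
    "gfx950": "kernel_mi350.hsaco",        # Instinct MI350
}

def select_kernel(gfx_arch: str) -> str:
    """Return the pre-compiled kernel for this GPU, or fail if none was shipped."""
    try:
        return KERNEL_BINARIES[gfx_arch]
    except KeyError:
        raise RuntimeError(f"no kernel compiled for {gfx_arch}; a recompile is required")
```

In practice, hipcc can embed several such targets into one fat binary via repeated `--offload-arch` flags, but each target still has to be compiled ahead of time, which is the recompile barrier discussed here.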

Journalist 1: If you want my opinion on it, it would be ease of development for developers to develop on AMD, and recompiling is a barrier to that, especially if you're developing on — let's say, you're a startup, and you're developing on an MI300 box, then deploying to Strix Halo users, that's — you're going to hit sort of a barrier there in terms of your deployment. So that's why I was bringing it up.

Andrej Zdravkovic (AMD): Yeah, at a minimum, you have that one step, and it does happen sometimes that you have to touch up the code. Well, you probably don't know, but I actually used to be an engineer. Nobody allows me to code anything anymore, but I tend to know how things work reasonably well. So yeah, I agree, and thank you for that. But again, that's the balance of providing the next level, maybe the next format, or the next something, versus pausing to enable that compatibility, which we at this time haven't finished yet.

Journalist 2: In the bundle, the AI bundle, is that focused more on developers or focused more on, like, an end user?

Andrej Zdravkovic (AMD): We're talking about a two-step approach. The first step, the one that we're releasing right now, is on Windows, and on Windows you basically have an option, as you download the new Adrenalin driver for your notebook, to click there saying that [you want the optional AI bundle]. What you're going to get there is PyTorch with ComfyUI on top of it, you're going to get Amuse, you're going to get LM Studio, and you're going to get Ollama. The idea is that it strikes a bit of a balance. Because, of course, Amuse and ComfyUI are more creator- or entertainment-type applications, and then you have LM Studio, a little more user-friendly, and then Ollama, a little more developer-friendly.

So, on top of the PyTorch, you can actually load the models and train some of the models; the memory is quite high. As we move forward and look at the Ryzen Halo and the new box, there is the Windows approach, where I think we need to go with that balance of user and developer, because Windows is not the most-used developer platform in the AI world; we know that. I mean, there is a Windows and gaming guru in our team. But as you go to Linux, what we are going to do is basically pre-install. Most of the distributions right now already have ROCm in some form, in the box or similar, so it will either be pre-installed or you'll have the ability to go to the distribution and get ROCm onto the system. So with that, we're enabling developers out of the box; it starts working without them worrying about the right version of ROCm for the system […] and then there will be tools on top of that.

AMD Representative: I just want to add a little bit more, because you said "is it for developers or consumers?" For me, that's the traditional way I would look at the world. What I believe is happening, and I've seen it, is a new field called 'practitioners,' like people that are not developers, but — anyone can mess around with AI. You don't need coding, you don't need engineering, to mess around with AI. So we're trying to find a way to get these people that have the curiosity to get up and running with no barrier to entry, because what we've found is that some of these things that Andrej talked about would require — and we have it in our slides — it requires, like, 17 steps and configurations and pointing to GitHubs and this and that. So what we've done, essentially, our value-add, is we've taken all that away, and it's just like, optional: install or don't install, and everything's up and running.

In my mind, for developers, probably not so much; they wouldn't find this interesting, and for the moms and pops, probably not either. But then there's the kind of enthusiast who asks, "What can I do with AI?" Everyone's done the image generation; that's kind of a cheesy thing you do, and that's kind of the end of it. But what if you can develop your own Tetris game or something like that, with these tools, just with five or six commands? Those are the people we want to show that Ryzen and Radeon are absolutely perfect for that, and we're going to get you up and running in one click.

Andrej Zdravkovic (AMD): Well, in any innovation, you have an idea of why you're doing it, but what ends up very interesting with all these different things is that the use of it usually surprises you. There were various ideas about why electricity was going to be important, but one of the most important factors for industry was that you no longer had to build your factory next to the water that supplied your running power. So it's a huge difference. What I think is going to happen is applications like, "I'm going to do my taxes." I don't want to do my taxes by asking ChatGPT, because then all my tax information is in the cloud, and I don't know if [Revenue Canada or the IRS] are watching that. I don't know. People are paranoid. But if I know that I disconnected my computer from the Internet, now I can ask for help with, whatever, "How do I pay less tax?", right? That's the cool thing. I think a lot of people will find these kinds of applications very, very interesting over time. All of us are trying to do something more, and as he said, we are trying to get across the barrier. So this is really about AI adoption, and it's the thing that's going to get people to understand AI and this revolution better.

Journalist 1: So on that, you said that PyTorch would be installed, or that's an option. I assume that would also install ROCm and all the prerequisites? [AMD representative nods.] Okay. Just double-checking.

Andrej Zdravkovic (AMD): The whole story, because […] We would want all our systems to be linked to PyTorch stable. They're not yet, but they are linked to the latest and greatest; but in order to do the installation, you have to go to PyTorch, and then you need to pick the right version of ROCm. So that's what we are abstracting from you.
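Editor's note: for context, the manual flow being abstracted here looks roughly like the following. The index-URL pattern comes from pytorch.org's install selector; the rocm6.2 suffix is only an example and has to match the ROCm version already installed on the system:

```shell
# Hypothetical example of the manual step the bundle removes: installing a
# PyTorch wheel built against a specific ROCm release (the suffix must match
# the locally installed ROCm version, e.g. rocm6.2).
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.2
```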

Journalist 1: Okay. I'm asking because the laptop that I've been using is the HP ZBook, the G1A, so I've been playing with it and seeing how the ROCm experience has developed in the past six months or so. It's come a lot further on Linux than it has on Windows, and for developers, I think that's probably the correct move. But my audience is much more technical, sort of the developer community, and something that I see as lacking is documentation. While this abstraction is good for the people that you're talking about, for the people that do AI daily and develop for AMD, the documentation is spread around 15 different places, and there's no [collated] source where I can just go, "bookmark this page," and reference that page.

Andrej Zdravkovic (AMD): Great feedback. Well, the only thing I can say is that, about six months ago, it was 30 places. So I think we are moving in the right direction.
