
We cannot definitively say that these delays are caused entirely by huge numbers of people buying these devices to run their own AI models; Apple CEO Tim Cook has acknowledged that the company is chasing memory supply to meet high customer demand. However, additional pressure from the consumer side certainly won't help with a memory chip shortage that is currently driven primarily by AI hyperscalers and institutional buyers.
Jowi Morales, Contributing Writer: Jowi Morales is a tech enthusiast with years of experience working in the industry. He's been writing for several tech publications since 2021, where he's been interested in tech hardware and consumer electronics.
ezst036 It is good to see that people are getting Macs instead of Winspyware boxes. Still, OpenClaw runs just fine on Linux as well.
hwertz I'm not a Mac fan, but this is a good way to get a high-memory system for GPU compute. Intel GPUs support large shared memory too (at least in Linux), but… well, the modern Intel GPUs are a lot better than the old ones, but I don't think they're near as fast as the GPUs in the M series.
Daniel15 ezst036 said: "Still, OpenClaw runs just fine on Linux as well." It does, but it's a lot harder to get 128GB+ of VRAM (or unified memory) on a Linux system, and you'll need at least that much for a powerful local AI model. In theory you could get something like the Framework Desktop, which uses those Ryzen AI chips with unified memory, but I'm not sure how well the AI models work on AMD.
ThisIsMe The rationale here is a bit of a stretch. More likely this is about general supply availability, since getting fast, higher-capacity RAM is just all-around difficult for every OEM at the moment. I'm not saying the clear speculation presented here is completely wrong; I'm sure it plays a small part in this minor issue. Hopefully that's all it is, just innocent conjecture, and not some weird fallout from somebody's sunk-cost-fallacy issues where they try to convince others that something is a great deal and they need to urgently get one too before they're all gone! Even though prices are stupidly sky-high and nobody should currently be buying any of it, precisely to force a market correction and give the power back to the buyers. …but I digress.
ezst036 Daniel15 said: "It does, but it's a lot harder to get 128GB+ VRAM (or unified memory) on a Linux system, and you'll need at least that much for a powerful local AI model. In theory you could get something like the Framework Desktop which uses those Ryzen AI chips with unified memory, but I'm not sure how well the AI models work on AMD." Which Macs have 128+GB UMA? I genuinely do not know; the Mac world isn't where I live. Also, what's that going to cost?
Daniel15 ezst036 said: "Which Macs have 128+GB UMA? I genuinely do not know; the Mac world isn't where I live. Also, what's that going to cost?" The Mac Mini goes up to 64GB, and the Mac Studio goes up to 512GB of RAM. A 128GB Mac Studio is currently $3,500 new in the USA, but I wouldn't be surprised if the price goes up soon due to the memory shortages. The 512GB model is around $10,000. I'm usually not a Mac person, but honestly I don't know of any cheaper way to get that much unified memory in one system.
dada_dave ezst036 said: "Which Macs have 128+GB UMA? I genuinely do not know; the Mac world isn't where I live. Also, what's that going to cost?" M4 Max machines can be equipped with up to 128GB, while the M3 Ultra goes up to 512GB. Minimum cost for an M4 Max with 128GB is $3,500. Minimum cost for a binned/full M3 Ultra with 96GB of RAM is $4,000/$5,500. A binned/full Ultra with 256GB of unified memory is $5,600/$7,100. Only the full Ultra comes with 512GB, which costs a minimum of $9,499. Memory bandwidth is the same on binned and full Ultra models. It's unclear whether Apple will keep its pricing the same when the M5 variants arrive (the Ultra is thought to be skipping the M4).
hwertz ezst036 said: "It is good to see that people are getting Macs instead of Winspyware boxes. Still, OpenClaw runs just fine on Linux as well." Daniel15 said: "It does, but it's a lot harder to get 128GB+ VRAM (or unified memory) on a Linux system, and you'll need at least that much for a powerful local AI model. In theory you could get something like the Framework Desktop which uses those Ryzen AI chips with unified memory, but I'm not sure how well the AI models work on AMD." To be honest, years back I ran Linux on a DEC Alpha, a PA-RISC, a PowerMac, and an IBM POWER, and I think I threw it onto a 68K system one time… I just assumed they were talking about Asahi on an M series (but realize now they probably weren't). It's not THAT hard to find PCs with 128GB of RAM. That said, Macs are not cheap, but neither are these systems. The likes of Dell have always marked up their top-shelf systems anyway (you won't get the models that can take 128GB without also paying for a 16- or 18-inch screen and probably a 1TB or larger SSD, for instance), and I'm sure with RAMPocalypse as an excuse: on Dell's site I see an already-$2,700 system (you save $75 by getting Ubuntu instead of Windows…) where they charge, with a straight face, an additional $3,100 to bump it from 16GB to 128GB of RAM.
Daniel15 hwertz said: "It's not THAT hard to find PCs with 128GB RAM" It's not 128GB of RAM that's the hard part; it's 128GB of VRAM. The Mac Studio's unified memory means the RAM and VRAM are one pool, so you can use a small(-ish) amount for the system and the majority for AI models on the GPU/TPU. Some PCs with soldered RAM can do this too (like the Framework Desktop), but otherwise you need several expensive Nvidia GPUs to load large AI models. This is also why the Framework Desktop uses soldered RAM: DIMMs/SODIMMs just aren't fast enough at the moment, nor do they support such a wide bus (256 bits).
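To put rough numbers on the "128GB+ for a powerful local model" point above, here is a back-of-envelope sketch. All figures are illustrative assumptions (a hypothetical 120B-parameter model, a flat 1.2× overhead factor for KV cache and runtime), not benchmarks of any real model:

```python
# Back-of-envelope estimate of unified memory needed to host an LLM locally.
# Weight footprint ≈ parameter count × bits-per-weight / 8; a flat multiplier
# stands in for KV cache and runtime overhead (assumed, not measured).

def model_memory_gb(params_billion: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Approximate total memory (GB) for a quantized model's weights
    plus an assumed fixed overhead factor."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A hypothetical 120B-parameter model:
q8 = model_memory_gb(120, 8)  # 8-bit quantization ≈ 144 GB
q4 = model_memory_gb(120, 4)  # 4-bit quantization ≈ 72 GB
print(f"Q8: {q8:.0f} GB, Q4: {q4:.0f} GB")
```

Even aggressively quantized, a model in that class overflows a 64GB machine once the OS and runtime take their share, which is why the 128GB+ unified-memory configurations are the ones seeing demand.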