
OpenAI today launched GPT-5.2, which it describes as its most capable model series yet for professional knowledge work. The model was trained and deployed on NVIDIA infrastructure, including NVIDIA Hopper and GB200 NVL72 systems.
GPT-5.2 achieves the top reported scores on industry benchmarks such as GPQA-Diamond, AIME 2025 and Tau2 Telecom. On leading benchmarks targeting the skills required to develop AGI, such as ARC-AGI-2, GPT-5.2 sets a new bar for state-of-the-art performance.
It’s the latest example of how leading AI builders train and deploy at scale on NVIDIA’s full-stack AI infrastructure.
AI models are getting more capable thanks to three scaling laws: pretraining, post-training and test-time scaling.
Reasoning models, which apply additional compute during inference to tackle complex queries, often with multiple networks working together, are now everywhere.
But pretraining and post-training remain the bedrock of intelligence. They’re core to making reasoning models smarter and more useful.
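The test-time scaling idea above can be sketched in a few lines. This is an illustrative toy, not OpenAI's or NVIDIA's method: `noisy_model` is a hypothetical stand-in for a real model, and the majority-vote strategy is one simple form of spending extra inference compute (sometimes called self-consistency).

```python
import random
from collections import Counter

def best_of_n(generate, query, n=8):
    """Test-time scaling sketch: draw n candidate answers from the model
    and return the most common one (simple majority vote)."""
    answers = [generate(query) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Toy stand-in for a model: answers correctly ~70% of the time,
# otherwise returns a random single digit. Seeded for reproducibility.
def noisy_model(query, _rng=random.Random(0)):
    return "42" if _rng.random() < 0.7 else str(_rng.randint(0, 9))

# A single sample may be wrong; 32 samples make the majority answer reliable.
print(best_of_n(noisy_model, "What is 6 * 7?", n=32))
```

Spending more samples (larger `n`) trades inference compute for accuracy, which is why pretraining and post-training quality still matter: voting can only amplify a model that is right more often than not.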
And getting there takes scale. Training frontier models from scratch isn’t a small job.
It takes tens of thousands, even hundreds of thousands, of GPUs working together effectively.
That level of scale demands excellence across many dimensions: world-class accelerators; advanced networking across scale-up, scale-out and, increasingly, scale-across architectures; and a fully optimized software stack. In short, a purpose-built infrastructure platform that delivers performance at scale.
Compared with the NVIDIA Hopper architecture, NVIDIA GB200 NVL72 systems delivered 3x faster training performance on the largest model tested in the latest MLPerf Training industry benchmarks, and nearly 2x better performance per dollar.
And NVIDIA GB300 NVL72 delivers a more than 4x speedup compared with NVIDIA Hopper.
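As a rough illustration of what those factors mean for development cycles, the sketch below scales a hypothetical 90-day Hopper-class training run by the reported speedups. The baseline duration is an assumption for illustration only, not a figure from the benchmarks.

```python
def scaled_training_days(baseline_days, speedup):
    """Estimated wall-clock training time after applying a speedup factor."""
    return baseline_days / speedup

# Hypothetical 90-day baseline run on Hopper-class systems, scaled by the
# reported MLPerf factors: 3x for GB200 NVL72, and 4x as a lower bound
# for GB300 NVL72 (the reported speedup is "more than 4x").
hopper_days = 90
print(scaled_training_days(hopper_days, 3.0))  # GB200 NVL72 estimate
print(scaled_training_days(hopper_days, 4.0))  # GB300 NVL72 lower-bound estimate
```

Under these assumptions, a 3x speedup cuts the run from 90 to 30 days and a 4x speedup to 22.5 days, which is the sense in which faster systems shorten the cycle from training to deployment.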
These performance gains help AI developers shorten development cycles and deploy new models more quickly.
The majority of today’s leading large language models were trained on NVIDIA platforms.
NVIDIA supports AI development across multiple modalities, including speech, image and video generation, as well as emerging areas like biology and robotics.
For example, models like Evo 2 decode genetic sequences, OpenFold3 predicts 3D protein structures and Boltz-2 simulates drug interactions, helping researchers identify promising candidates faster.
On the clinical side, NVIDIA Clara synthesis models generate realistic medical images to advance screening and diagnosis without exposing patient data.
Companies like Runway and Inworld train on NVIDIA infrastructure.