
Launched today, NVIDIA Nemotron 3 Super is a 120‑billion‑parameter open model with 12 billion active parameters designed to run complex agentic AI systems at scale.
Available now, the model combines advanced reasoning with efficient execution, completing tasks with high accuracy for autonomous agents.
AI-Native Companies: Perplexity offers its users access to Nemotron 3 Super for search and as one of 20 orchestrated models in Computer. Companies offering software development agents like CodeRabbit, Factory and Greptile are integrating the model into their AI agents alongside proprietary models to achieve higher accuracy at lower cost. And life sciences and frontier AI organizations like Edison Scientific and Lila Sciences will use the model to power agents for deep literature search, data science and molecular understanding.
Enterprise Software Platforms: Industry leaders such as Amdocs, Palantir, Cadence, Dassault Systèmes and Siemens are deploying and customizing the model to automate workflows in telecom, cybersecurity, semiconductor design and manufacturing.
As companies move beyond chatbots and into multi‑agent applications, they encounter two constraints.
The first is context explosion. Multi‑agent workflows generate up to 15x more tokens than standard chat because each interaction requires resending full histories, including tool outputs and intermediate reasoning.
Over long tasks, this volume of context increases costs and can lead to goal drift, where agents lose alignment with the original objective.
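The amplification from resending full histories can be sketched with simple arithmetic. The numbers below (40 turns, 2,000 new tokens per turn) are hypothetical, chosen only to show that total tokens sent grow quadratically with turn count when every turn replays the entire prior conversation:

```python
# Illustrative sketch (hypothetical numbers): why resending the full
# history each turn inflates multi-agent token usage.

def tokens_sent(turns: int, tokens_per_turn: int) -> int:
    """Total tokens sent when each turn resends the entire prior history."""
    total = 0
    history = 0
    for _ in range(turns):
        history += tokens_per_turn  # this turn's new messages and tool output
        total += history            # the whole accumulated history is sent again
    return total

# A 40-turn agent loop at 2,000 new tokens per turn:
flat = 40 * 2_000                     # 80,000 tokens if history were never resent
with_resend = tokens_sent(40, 2_000)  # 1,640,000 tokens actually sent
print(f"amplification: {with_resend / flat:.1f}x")  # amplification: 20.5x
```

The amplification factor scales with the number of turns, which is why long-running agent workflows hit this constraint harder than short chat sessions.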
The second is the thinking tax. Complex agents must reason at every step, but using large models for every subtask makes multi-agent applications too expensive and too slow to be practical.
Nemotron 3 Super has a 1‑million‑token context window, allowing agents to retain full workflow state in memory and preventing goal drift. The model has set new standards, claiming the top spot on Artificial Analysis for efficiency and openness, with leading accuracy among models of its size.
The model also powers the NVIDIA AI-Q research agent to the No. 1 position on DeepResearch Bench and DeepResearch Bench II leaderboards, benchmarks that measure an AI system’s ability to conduct thorough, multistep research across large document sets while maintaining reasoning coherence.
Nemotron 3 Super uses a hybrid mixture‑of‑experts (MoE) architecture that combines major innovations to deliver up to 5x higher throughput and up to 2x higher accuracy than the previous Nemotron Super model.
Hybrid Architecture: Mamba layers deliver 4x higher memory and compute efficiency, while transformer layers drive advanced reasoning.
MoE: Only 12 billion of its 120 billion parameters are active at inference.
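The idea behind "12 billion active of 120 billion total" can be illustrated with a generic top-k mixture-of-experts sketch. This is not NVIDIA's implementation; the expert count, routing scheme and dimensions below are made up, and real experts are full MLP blocks rather than weight vectors. What the sketch shows is the core mechanism: a gate routes each token to only a few experts, so most expert parameters sit idle per token:

```python
# Generic top-k mixture-of-experts forward pass (illustrative sketch,
# not NVIDIA's implementation). With 10 experts and top-1 routing,
# only ~10% of expert parameters are touched per token.
import math
import random

random.seed(0)
DIM, NUM_EXPERTS, TOP_K = 4, 10, 1

# Each "expert" here is just a weight vector; real experts are MLPs.
experts = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]
gate = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def moe_forward(x):
    """Route x to the TOP_K highest-scoring experts and mix their outputs."""
    logits = [dot(g, x) for g in gate]
    top = sorted(range(NUM_EXPERTS), key=lambda e: logits[e], reverse=True)[:TOP_K]
    # Softmax over the selected experts only.
    z = [math.exp(logits[e]) for e in top]
    weights = [v / sum(z) for v in z]
    out = [sum(w * experts[e][i] for w, e in zip(weights, top)) for i in range(DIM)]
    return out, top

_, active = moe_forward([1.0, -0.5, 0.3, 0.8])
print(f"active experts: {active} -> {TOP_K / NUM_EXPERTS:.0%} of expert params used")
```

The same routing principle, at a vastly larger scale, is what lets a 120-billion-parameter model run with the per-token compute cost of a 12-billion-parameter one.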