
Launched today, NVIDIA Nemotron 3 Super is a 120‑billion‑parameter open model with 12 billion active parameters designed to run complex agentic AI systems at scale.
Available now, the model combines advanced reasoning with efficient execution, enabling autonomous agents to complete complex tasks with high accuracy.
AI-Native Companies: Perplexity offers its users access to Nemotron 3 Super for search and as one of 20 orchestrated models in Computer. Companies offering software development agents, including CodeRabbit, Factory and Greptile, are integrating the model into their AI agents alongside proprietary models to achieve higher accuracy at lower cost. And life sciences and frontier AI organizations like Edison Scientific and Lila Sciences will use the model to power their agents for deep literature search, data science and molecular understanding.
Enterprise Software Platforms: Industry leaders such as Amdocs, Palantir, Cadence, Dassault Systèmes and Siemens are deploying and customizing the model to automate workflows in telecom, cybersecurity, semiconductor design and manufacturing.
As companies move beyond chatbots and into multi‑agent applications, they encounter two constraints.
The first is context explosion. Multi‑agent workflows generate up to 15x more tokens than standard chat because each interaction requires resending full histories, including tool outputs and intermediate reasoning.
Over long tasks, this volume of context increases costs and can lead to goal drift, where agents lose alignment with the original objective.
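The compounding cost of resending full histories can be sketched with a toy token count. The numbers below are illustrative placeholders, not measurements of any real workflow:

```python
# Toy illustration: cumulative tokens when each agent turn resends the
# full history, versus sending only the new message. The per-turn token
# count is a hypothetical value chosen only to show the growth pattern.

def tokens_resend_full_history(turns: int, tokens_per_turn: int) -> int:
    """Each turn's prompt includes every prior turn, so cost grows quadratically."""
    total = 0
    history = 0
    for _ in range(turns):
        history += tokens_per_turn   # history grows every turn
        total += history             # the whole history is resent as context
    return total

def tokens_incremental(turns: int, tokens_per_turn: int) -> int:
    """Baseline: only the new message is sent each turn."""
    return turns * tokens_per_turn

full = tokens_resend_full_history(20, 500)
incremental = tokens_incremental(20, 500)
print(full, incremental, full / incremental)  # 105000 10000 10.5
```

Even in this small example, twenty turns of resent history cost roughly 10x the tokens of the messages alone, which is why long multi-agent runs inflate so quickly.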
The second is the thinking tax. Complex agents must reason at every step, but using large models for every subtask makes multi-agent applications too expensive and sluggish for practical applications.
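One common mitigation this implies is routing: send easy subtasks to a small, cheap model and reserve a large reasoning model for hard steps. A minimal sketch follows; the model names, costs and the complexity heuristic are all hypothetical:

```python
# Hypothetical router for a multi-agent system: cheap model for simple
# subtasks, large reasoning model for complex ones. The model names,
# per-token costs and heuristic below are placeholders, not real pricing.

SMALL_MODEL = ("small-8b", 0.10)    # (name, cost per 1M tokens) -- hypothetical
LARGE_MODEL = ("large-120b", 1.00)  # hypothetical

def route(subtask: str) -> tuple[str, float]:
    """Crude heuristic: long or planning-heavy prompts go to the big model."""
    needs_reasoning = len(subtask) > 200 or "plan" in subtask.lower()
    return LARGE_MODEL if needs_reasoning else SMALL_MODEL

model, _ = route("Extract the invoice date from this text.")
print(model)  # small-8b
model, _ = route("Plan a multi-step migration of the database schema.")
print(model)  # large-120b
```

In practice the heuristic would be replaced by a learned classifier or the orchestrator's own judgment, but the cost structure is the same: only subtasks that need deep reasoning pay the large-model price.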
Nemotron 3 Super has a 1‑million‑token context window, allowing agents to retain full workflow state in memory and preventing goal drift. The model has set new standards, claiming the top spot on Artificial Analysis for efficiency and openness, with leading accuracy among models of its size.

The model also powers the NVIDIA AI-Q research agent to the No. 1 position on DeepResearch Bench and DeepResearch Bench II leaderboards, benchmarks that measure an AI system’s ability to conduct thorough, multistep research across large document sets while maintaining reasoning coherence.
Nemotron 3 Super uses a hybrid mixture‑of‑experts (MoE) architecture whose innovations deliver up to 5x higher throughput and up to 2x higher accuracy than the previous Nemotron Super model.
Hybrid Architecture: Mamba layers deliver 4x higher memory and compute efficiency, while transformer layers drive advanced reasoning.
MoE: Only 12 billion of its 120 billion parameters are active at inference.
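The sparse-activation idea behind the MoE bullet can be sketched with a toy top-k routing layer. Dimensions and the number of experts below are toy values (chosen to mirror the roughly 12-of-120 active ratio), not Nemotron's actual configuration:

```python
import numpy as np

# Minimal sketch of top-k expert routing in a mixture-of-experts layer:
# a router scores all experts per token, but only the top-k experts run,
# so only a fraction of total parameters is active per token. All sizes
# here are toy values, not Nemotron's real architecture.

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 10, 1   # 1 of 10 experts active (~12/120 ratio)

router_w = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    """x: (d_model,) token vector. Runs only the top-k scoring experts."""
    scores = x @ router_w                    # (n_experts,) router logits
    chosen = np.argsort(scores)[-top_k:]     # indices of the top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                 # softmax over the selected experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

out = moe_layer(rng.normal(size=d_model))
print(out.shape)  # (64,)
```

The compute saving comes directly from `top_k`: the forward pass touches only the chosen experts' weights, while the full parameter count stays available for the router to draw on per token.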