
A 1 million-token context window allows the model to retain far more information for long, multistep tasks.
Nemotron 3 Super is a high-accuracy reasoning model for multi-agent applications, while Nemotron 3 Ultra is designed for complex AI applications. Both are expected to be available in the first half of 2026.
NVIDIA today also released an open collection of training datasets and state-of-the-art reinforcement learning libraries. Nemotron 3 Nano fine-tuning is available through Unsloth.
Download Nemotron 3 Nano now from Hugging Face, or experiment with it through llama.cpp and LM Studio.
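For a sense of what an Unsloth-based workflow looks like, below is a minimal LoRA fine-tuning sketch in Python. The Hugging Face repo id, the local train.jsonl file and the hyperparameters are placeholder assumptions rather than values from this article; consult the Unsloth documentation for the exact Nemotron 3 Nano recipe, as TRL argument names also vary between versions.

```python
# Minimal Unsloth LoRA fine-tuning sketch (illustrative; model id and data are assumptions).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit to keep the memory footprint small.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="nvidia/nemotron-3-nano",  # placeholder repo id; replace with the real checkpoint
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Expects a local JSONL file with a "text" column (hypothetical example data).
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```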
DGX Spark enables local fine-tuning and brings incredible AI performance to a compact desktop supercomputer, giving developers access to far more memory than a typical PC offers.
Built on the NVIDIA Grace Blackwell architecture, DGX Spark delivers up to a petaflop of FP4 AI performance and includes 128GB of unified CPU-GPU memory, giving developers enough headroom to run larger models, work with longer context windows and handle more demanding training workloads locally.
- Larger model sizes: Models with more than 30 billion parameters often exceed the VRAM capacity of consumer GPUs but fit comfortably within DGX Spark’s unified memory (see the sizing sketch after this list).
- More advanced techniques: Full fine-tuning and reinforcement-learning-based workflows, which demand more memory and higher throughput, run significantly faster on DGX Spark.
- Local control without cloud queues: Developers can run compute-heavy tasks locally instead of waiting for cloud instances or managing multiple environments.
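To make the headroom argument concrete, here is a back-of-the-envelope sizing sketch in Python. The parameter count and precisions are illustrative assumptions; real memory use is higher because activations, gradients, optimizer state and the KV cache come on top of the weights.

```python
# Rough estimate of how much memory the model weights alone require.
def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight memory in gigabytes for a given precision."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

for label, bits in [("BF16", 16), ("FP8", 8), ("FP4", 4)]:
    print(f"30B parameters at {label}: ~{weight_memory_gb(30, bits):.0f} GB of weights")

# Prints roughly 60 GB at BF16, 30 GB at FP8 and 15 GB at FP4.
# A 30B model at 16-bit already exceeds the 16-24 GB of VRAM on typical
# consumer GPUs, yet sits comfortably inside DGX Spark's 128GB of unified memory.
```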
DGX Spark’s strengths go beyond LLMs. High-resolution diffusion models, for example, often require more memory than a typical desktop can provide. With FP4 support and large unified memory, DGX Spark can generate 1,000x1,000-pixel images in just a few seconds and sustain higher throughput for creative or multimodal pipelines.
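As a point of reference, here is a minimal text-to-image sketch using the Hugging Face diffusers library. The library, the model id and the BF16 precision are assumptions for illustration only; they are not named in this article, and an FP4 deployment would go through a separate quantization path.

```python
# Minimal high-resolution text-to-image sketch with diffusers (assumed library).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # example model id (assumption)
    torch_dtype=torch.bfloat16,                  # BF16 roughly halves memory vs. FP32
)
pipe.to("cuda")  # on DGX Spark the GPU draws from the 128GB unified memory pool

image = pipe(
    prompt="a macro photo of a dew-covered leaf",
    height=1024,
    width=1024,
).images[0]
image.save("leaf.png")
```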
The table below shows performance for fine-tuning the Llama family of models on DGX Spark.
As fine-tuning workflows advance, the new Nemotron 3 family of open models offers scalable reasoning and long-context performance optimized for RTX systems and DGX Spark.
Learn more about how DGX Spark enables intensive AI tasks.