NVIDIA has announced that leading Taiwanese system manufacturers will soon begin producing the innovative NVIDIA DGX Spark and DGX Station™ systems. This expansion in partnerships with top companies like Acer, GIGABYTE, and MSI will enhance the availability of DGX Spark and DGX Station—cutting-edge personal AI supercomputers designed to deliver unparalleled performance and efficiency for developers, data scientists, and researchers worldwide.
As demand for high-performance systems grows, enterprises, software vendors, government agencies, startups, and research institutions now require desktop-based solutions that can offer the power and functionality of an AI server. These systems must also protect data privacy, support proprietary models, and scale quickly without compromise.
The increasing prominence of agentic AI, which can autonomously make decisions and perform tasks, further heightens the need for these powerful systems. With the advanced NVIDIA Grace Blackwell platform, DGX Spark and DGX Station empower developers to seamlessly prototype, fine-tune, and deploy AI models—from desktop environments to large-scale data centers.
“AI has transformed every layer of the computing stack, from hardware to software,” said Jensen Huang, founder and CEO of NVIDIA. “The DGX Spark and DGX Station are the next evolution of the DGX-1 system, which sparked the AI revolution, and are specifically engineered to drive the future of AI research and development.”

DGX Spark Is All About Innovation
Powered by NVIDIA’s cutting-edge GB10 Grace Blackwell Superchip with fifth-generation Tensor Cores, DGX Spark packs up to 1 petaflop of AI performance and 128 GB of unified memory. Models train locally, then migrate effortlessly to NVIDIA DGX™ Cloud or any GPU-accelerated data center or public cloud.
The result? A space-saving AI platform that empowers developers, researchers, data scientists, and students to supercharge generative-AI projects and speed breakthroughs across every industry.
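To make that desktop-to-cloud workflow concrete, here is a minimal sketch of what local prototyping could look like in standard PyTorch. The model, data, and hyperparameters are placeholders for illustration, not NVIDIA-supplied code; the point is simply that the same script runs unchanged on the desktop GPU or on a larger accelerated system.

```python
# Minimal sketch: fine-tune a small model locally on the desktop GPU using
# ordinary PyTorch. The model and data below are synthetic placeholders,
# not an NVIDIA-provided example.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch standing in for a real fine-tuning dataset.
inputs = torch.randn(64, 512, device=device)
labels = torch.randint(0, 10, (64,), device=device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.4f} on {device}")
```

Because nothing in the script is tied to the desktop, the same code can later be pointed at DGX Cloud or any other GPU-accelerated environment.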
DGX Station Brings Data-Center Muscle to the Desk
Engineered for today’s most intensive AI workloads, the DGX Station pairs the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip with a massive 784 GB of unified memory, unleashing up to 20 petaflops of AI performance. An integrated NVIDIA ConnectX-8 SuperNIC delivers lightning-fast networking up to 800 Gb/s, making multi-station scaling effortless.
Use it as a powerhouse personal workstation or share it as an on-demand compute hub: NVIDIA Multi-Instance GPU support lets you split the system into as many as seven isolated GPU instances, each with dedicated high-bandwidth memory, cache, and cores—your own private AI cloud.
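For teams planning to share a DGX Station, here is a hedged sketch of how a single process might be pinned to one MIG slice from Python. The MIG UUID shown is a placeholder; the real identifiers come from `nvidia-smi -L` once an administrator has partitioned the GPU.

```python
# Sketch: expose only one MIG instance to CUDA so this process stays inside
# its isolated slice. The UUID below is a placeholder; the real value is
# reported by `nvidia-smi -L` after the MIG partitions have been created.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-00000000-0000-0000-0000-000000000000"

import torch  # imported after setting the env var so the selection takes effect

if torch.cuda.is_available():
    print("Running on:", torch.cuda.get_device_name(0))
    x = torch.randn(1024, 1024, device="cuda")
    print("Allocated a tensor on the isolated MIG instance:", tuple(x.shape))
else:
    print("No visible CUDA device; check the MIG UUID and driver setup.")
```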
DGX Spark and DGX Station share the same NVIDIA DGX OS and pre-loaded AI software stack found in enterprise-grade AI factories. With built-in access to NVIDIA NIM™ microservices and NVIDIA Blueprints, teams can prototype in familiar tools like PyTorch, Jupyter, and Ollama, then deploy seamlessly to NVIDIA DGX™ Cloud or any GPU-accelerated data center.
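As a rough illustration of prototyping against that pre-loaded stack, the snippet below queries a locally hosted NIM LLM microservice over its OpenAI-compatible HTTP endpoint. The port and model name are assumptions that depend on which NIM container is actually running on the workstation.

```python
# Sketch: call a locally hosted NIM LLM microservice through its
# OpenAI-compatible chat endpoint. The port and model id are assumptions;
# both depend on the specific NIM container deployed on the machine.
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "meta/llama-3.1-8b-instruct",  # placeholder model id
        "messages": [
            {"role": "user", "content": "Summarize what unified memory means for AI workloads."}
        ],
        "max_tokens": 200,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```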
Major OEMs are on board. Dell Technologies is among the first to ship DGX Spark and DGX Station, answering growing enterprise demand for localized, high-performance AI. Michael Dell highlights how new Dell Pro Max systems—powered by GB10 and GB300—let organizations conquer large-scale AI tasks. HP Inc. echoes the vision with its HP ZGX lineup, bringing data-center-class AI power to the desktop.
Availability
- DGX Spark: Shipping in July from Acer, ASUS, Dell Technologies, GIGABYTE, HP, Lenovo, MSI, and channel partners. Pre-order now on NVIDIA.com.
- DGX Station: Rolling out later this year from ASUS, Dell Technologies, GIGABYTE, HP, and MSI.
The future of AI is here. Unlock next-generation AI performance wherever you work—desktop, lab, or scaled across the cloud.
For more AI and tech news and updates, stay tuned to mortentechnologies.com!