Project DIGITS
NVIDIA caused a bit of a stir at CES 2025 when it announced Project DIGITS, a tiny computer for AI development. It combines a 20-core ARM CPU with a Blackwell-generation GPU/AI accelerator and 128 GB of unified memory, at a price starting at US$ 3,000. The main draw is the large amount of memory, which enables running large AI models. Compare this to an RTX 5090 with 32 GB of RAM and a street price that isn’t much lower, and you can see why this looks like an attractive proposition. Most of the buzz has come from the large language model crowd, but it could be equally interesting to people working with point clouds, where many models also require considerable amounts of memory.
DGX Spark
At GTC 2025, NVIDIA announced more details. DIGITS has been renamed DGX Spark, and the model with 4 TB of storage is available for pre-order for US$ 4,000 (€3,689 in Germany). A (much) more powerful big sibling called DGX Station has also been announced. No price has been mentioned yet, but given its 784 GB of memory, it would not be surprising if it lands in the US$ 50,000 range.
Partner models
Somewhat unexpected to me was the announcement that essentially the same system as the DGX Spark will also be available from Asus, Dell, and HP. It is unclear to me whether this is just a repackaging of the same hardware in a different shell, or whether there are actual differences between the various brands’ models. But considering that this is a relatively low-volume product, I would be somewhat surprised if the differences were large. It seems more likely that this approach was chosen so companies can buy the hardware from their usual supplier.
The Asus offering is called the Ascent GX10 and can be pre-ordered on the NVIDIA website for US$ 3,000 (€2,760) for a 1 TB model. Not much information is available on the Asus website. The same is true for the Dell offerings, which are called Dell Pro Max with GB10 for the DGX Spark equivalent and Dell Pro Max with GB300 for the Dell version of the DGX Station. At least here the website states that the latter’s RAM will be divided between 496 GB of CPU memory and 288 GB of HBM GPU memory, and that FP4 performance will be 20 PFLOPS vs. the 1 PFLOPS of the smaller system.
Even less is known about the HP version, which is called the ZGX Nano AI Station G1n. Here the naming gets confusing, as HP calls its Spark equivalent a “Station”, a name NVIDIA uses for the much more powerful bigger model.
Alternatives
I’ve seen two major points of criticism (besides price) leveled at the DGX Spark. The first is that, being an ARM system, it will not run Windows and is thus not suitable for gaming and general-purpose computing. I understand this, but I think it misses the point: this really is a computer aimed at AI development. The second is that the memory bandwidth is limited to 273 GB/s. This is clearly a result of using relatively slow LPDDR5X RAM, which was probably the only way to achieve the US$ 3,000 price point; the goal was clearly lots of RAM rather than fast RAM. This means, however, that performance will be limited. By comparison, the RTX 5090’s memory bandwidth is 1792 GB/s, i.e. more than 6.5 times as fast.
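For those curious where such figures come from, peak memory bandwidth is just bus width times transfer rate. A quick sketch in Python; the bus widths and LPDDR5X/GDDR7 speed grades used here are my assumptions rather than figures from the spec sheets, but they reproduce the quoted numbers:

```python
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: int) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer times mega-transfers/s."""
    return bus_width_bits / 8 * transfer_rate_mts / 1000

# Assumed configurations (not official spec sheets):
print(peak_bandwidth_gbs(256, 8533))   # DGX Spark, 256-bit LPDDR5X-8533: ~273 GB/s
print(peak_bandwidth_gbs(512, 28000))  # RTX 5090, 512-bit GDDR7 at 28 GT/s: 1792 GB/s
```

The 6.5x gap between the two systems thus comes both from the twice-as-wide bus and the much higher transfer rate of GDDR7.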
So if 32 GB of VRAM is sufficient for your applications, an RTX 5090 will probably be the better choice, provided you don’t mind the much higher power draw. If 24 GB is sufficient and you’re on a budget, go looking for a used RTX 3090. But if 128 GB of VRAM or unified memory is a must, you have to look beyond NVIDIA for alternatives.
The first thing that comes to mind is the new generation of AMD Strix Halo Ryzen AI MAX+ 395 CPUs/GPUs. Specs-wise it actually looks quite similar to the DGX Spark, with a maximum of 128 GB of unified memory and a 256-bit memory bus, resulting in a comparable memory bandwidth of 256 GB/s. Its main advantages over the DGX Spark are that it’s an x86 CPU that will run Windows as well as Linux, and a lower price point. Not many systems have been announced officially yet, with the Framework Desktop being the most prominent. Prices for the 128 GB model start at €2,329 without storage, but storage can be added at a reasonable price, or you can bring your own. The problem here is that much of the AI world revolves around NVIDIA’s CUDA API. While the situation has improved for AMD’s ROCm API, with support available e.g. in PyTorch, performance still appears to be quite a bit lower than with CUDA. Once cheaper offerings become available (Strix Halo-equipped Chinese mini PCs have been spotted), the price difference might well be worth it.
The other alternative is the Apple Mac Studio. The M4 Max model can be had with 128 GB of RAM and 512 GB of storage for around €4,400. Its memory bandwidth, at 546 GB/s, is double that of the DGX Spark and Strix Halo. This choice is of course limited to Apple’s ecosystem of macOS, so no Windows or Linux here (yet). I don’t know how well suited Apple Silicon is to AI work, or how difficult it is to get software made for Linux to run. I do know that the Metal API is supported by PyTorch through MPS. Performance does seem to be lower than with NVIDIA GPUs, but then again, those don’t offer this amount of RAM. So especially for existing Mac users, a Mac Studio might be the sensible choice if being able to run large AI models is more important than training or inference speed.
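For what it’s worth, switching between these platforms is fairly painless on the PyTorch side: ROCm builds of PyTorch expose AMD GPUs through the regular CUDA API, and Apple Silicon is reached via the MPS backend, so the same model code can target all three. A minimal sketch of backend selection:

```python
import torch

def pick_device() -> torch.device:
    """Pick the best available accelerator backend.

    Note: ROCm builds of PyTorch report AMD GPUs through the CUDA
    API, so torch.cuda.is_available() covers them as well.
    """
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # Apple Silicon via Metal
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
x = torch.randn(4, 4, device=device)  # tensor lives on whichever backend was found
print(x.device)
```

Whether the chosen backend then performs well is of course a different question, as discussed above.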