Of course, Altman is referring to chonky enterprise-grade GPUs like those used in the Nvidia DGX B200 and DGX H200 AI platforms—the latter of which OpenAI was the first to take delivery of last year.
You wouldn't be using these for gaming (well, not of the 3D graphics sort).
They run in the tens of thousands of dollars each, as I recall.
Probably more correct to call them "parallel compute accelerator" cards than "GPUs". I don't think they even have a video output.
What they do have is a shit-ton of on-board RAM.
EDIT: Oh, apparently those are whole servers containing multiple GPUs.
The NVIDIA DGX B200 is a physical server containing 8 Blackwell GPUs with 1,440GB of GPU memory (HBM) between them, plus 2 Intel Xeon CPUs and up to 4TB of system RAM. It draws up to 14.3kW at max load.
For comparison, the most powerful electric space heater I have draws about a tenth of that (~1.5kW).
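Quick back-of-envelope on those numbers, if you're curious (the $0.15/kWh electricity rate is just an assumption I picked for illustration, not from anything official):

    # Rough math on the DGX B200 figures above.
    TOTAL_GPU_MEMORY_GB = 1440   # across the whole server
    NUM_GPUS = 8
    MAX_POWER_KW = 14.3
    SPACE_HEATER_KW = 1.5        # a typical household unit on its max setting
    ASSUMED_RATE_USD_PER_KWH = 0.15  # assumed rate, varies a lot by region

    per_gpu_memory = TOTAL_GPU_MEMORY_GB / NUM_GPUS            # 180 GB per GPU
    heater_ratio = MAX_POWER_KW / SPACE_HEATER_KW              # ~9.5 space heaters
    daily_cost = MAX_POWER_KW * 24 * ASSUMED_RATE_USD_PER_KWH  # kWh/day * rate

    print(f"{per_gpu_memory:.0f} GB of HBM per GPU")
    print(f"~{heater_ratio:.1f}x a {SPACE_HEATER_KW} kW space heater")
    print(f"~${daily_cost:.2f}/day in electricity at full load")

That works out to 180GB per GPU, roughly nine and a half space heaters, and something like $50/day in electricity if you ran one flat out.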
DGX H200 systems are currently available for $400,000–$500,000. BasePOD and SuperPOD systems must be purchased directly from NVIDIA. There is currently a waitlist for DGX B200 systems.