Story Highlights
  • Nvidia’s next-gen flagship GB202 GPU could use a 512-bit memory bus.
  • According to leaks, these graphics cards will utilize GDDR7 memory.
  • A reliable insider states that, below the flagship, the memory bus widths largely carry over from the last generation.

Information about what to expect from Nvidia's GeForce RTX 50-series GPU launch is now slowly making its way to the public. A reliable source states that the GB202, expected to be used in the GeForce RTX 5090, will be based on a 512-bit bus.

This would translate to 24GB of 28Gbps GDDR7 memory.

GPU leaker kopite7kimi on X speculates that Nvidia's next-generation RTX 50-series "Blackwell" GB203 and new GB205 dies will have memory bus widths identical to those of Nvidia's existing RTX 40-series AD103 and AD104 GPU dies.

Those dies are found in some of the best graphics cards, such as the RTX 4080 Super and the RTX 4070 Super. According to kopite7kimi, the memory interface on Nvidia's Blackwell GPUs will also skip the 384-bit bus.

The Blackwell lineup will probably use 192-bit and 256-bit buses for GB205 and GB203, respectively, while the GB202 should come with a 512-bit-wide memory bus. The GB202 will likely power the GeForce RTX 4090's successor.

This setup would be a step up from the GeForce RTX 4090, even if the memory capacity ends up identical. The move to GDDR7 should also help the GeForce RTX 5090 outperform its predecessor, thanks to the newer memory standard's higher data rates.
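
For a rough sense of the potential gap, the sketch below compares peak memory bandwidth using the simple formula bus width (bits) ÷ 8 × data rate (Gbps). The RTX 4090 figures are its published specs (384-bit bus, 21Gbps GDDR6X); the RTX 5090 figures combine the leaked 512-bit bus with an assumed 28Gbps GDDR7 data rate, so treat the output as back-of-the-envelope speculation rather than a confirmed spec.

```python
# Back-of-the-envelope peak memory bandwidth: bus width (bits) / 8 * data rate (Gbps).
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

rtx_4090 = peak_bandwidth_gb_s(384, 21)  # published spec: 384-bit, 21Gbps GDDR6X -> ~1008 GB/s
rtx_5090 = peak_bandwidth_gb_s(512, 28)  # leaked bus width + assumed 28Gbps GDDR7 -> ~1792 GB/s

print(f"RTX 4090 (GDDR6X): {rtx_4090:.0f} GB/s")
print(f"Rumored RTX 5090 (GDDR7): {rtx_5090:.0f} GB/s, roughly {rtx_5090 / rtx_4090:.2f}x")
```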

Based on these leaks, and assuming 28Gbps GDDR7, the next generation could come with the following configurations:

  • GB202 (RTX 5090): 512-bit bus, 24GB memory, 1792 GB/s bandwidth
  • GB203: 256-bit bus, 16GB memory, 896 GB/s bandwidth
  • GB205: 192-bit bus, 12GB memory, 672 GB/s bandwidth
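
The bandwidth column above is simply that formula applied to each rumored die. The minimal Python sketch below reproduces it, again assuming 28Gbps GDDR7; the data rate is not confirmed, and the die-to-card pairings remain speculation.

```python
# Reproduce the rumored bandwidth figures from bus width alone.
ASSUMED_GDDR7_GBPS = 28  # assumption based on the leak; final data rates are unconfirmed

rumored_bus_widths = {
    "GB202 (RTX 5090?)": 512,  # bits
    "GB203": 256,
    "GB205": 192,
}

for die, bits in rumored_bus_widths.items():
    bandwidth_gb_s = bits / 8 * ASSUMED_GDDR7_GBPS
    print(f"{die}: {bits}-bit bus -> {bandwidth_gb_s:.0f} GB/s")
```
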
Nvidia GeForce RTX 4070 Founders Edition

With the release of the RTX 50-series, Nvidia is expected to switch to the GDDR7 graphics memory standard. GDDR7's higher data rates let Nvidia significantly increase bandwidth while reusing, or even narrowing, the memory interface widths of its outgoing GPUs on the next generation.

Memory interface widths on both Nvidia and AMD GPUs have shrunk in recent generations because faster memory has been paired with much larger L2/L3 caches. Together, newer memory technology and bigger caches allow Nvidia to increase effective memory bandwidth while decreasing bus width.
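
As a rough illustration of that trade-off: when a larger L2 cache catches more memory requests on-die, only the misses consume GDDR bandwidth, so the bandwidth the shader cores effectively see scales roughly with DRAM bandwidth ÷ miss rate. The hit rates below are hypothetical placeholders chosen for illustration, not measured values for any real GPU.

```python
# Simple hit/miss traffic model: only cache misses go out to GDDR, so
# effective bandwidth ~= DRAM bandwidth / (1 - cache hit rate).
def effective_bandwidth_gb_s(dram_bw_gb_s: float, cache_hit_rate: float) -> float:
    return dram_bw_gb_s / (1.0 - cache_hit_rate)

# Hypothetical hit rates, for illustration only.
wide_bus_small_cache = effective_bandwidth_gb_s(1008, 0.25)  # 384-bit GDDR6X, modest L2
narrow_bus_big_cache = effective_bandwidth_gb_s(896, 0.50)   # 256-bit GDDR7, much larger L2

print(f"Wide bus, small cache: ~{wide_bus_small_cache:.0f} GB/s effective")
print(f"Narrow bus, big cache: ~{narrow_bus_big_cache:.0f} GB/s effective")
```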

Industry insiders already expect Blackwell GPUs aimed at high-performance computing (HPC) to be unveiled at the GTC 2024 event next week, while a formal announcement of the gaming family could still be many months away.

