DeepSeek has released a new paper, with co-founder Liang Wenfeng credited as a contributor, detailing how its latest large language model DeepSeek-V3 achieves efficient training and inference using only 2,048 H800 GPUs, far fewer than the tens of thousands typically required. The team attributes this efficiency to four key innovations: memory optimization through multi-head latent attention (MLA), computational savings via a Mixture-of-Experts (MoE) design with FP8 precision, communication improvements using a multi-plane network topology, and faster inference through multi-token prediction (MTP). With MLA, KV cache memory usage is cut to just 70KB per token, as little as one-seventh that of competing models. The MoE architecture activates only 37 billion of the model's 671 billion parameters per forward pass, reducing training costs by 90% compared to dense models. FP8 training further halves compute and memory usage with minimal accuracy tradeoff. Beyond the model, the paper also outlines five future directions for AI hardware design, advocating tighter integration between software and hardware to address memory, compute, and networking bottlenecks. [36Kr, in Chinese]
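To make the sparse-activation idea concrete: in an MoE layer, a small gating network routes each token to only a few experts, so most parameters sit idle on any given forward pass. The sketch below is a minimal toy illustration of top-k expert routing, not DeepSeek-V3's actual implementation; the `ToyMoELayer` class, the expert count, and the dimensions are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Toy top-k Mixture-of-Experts layer: only k of the experts run per token."""
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)  # router producing per-expert scores
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, dim)
        scores = F.softmax(self.gate(x), dim=-1)           # routing probabilities
        weights, idx = scores.topk(self.top_k, dim=-1)     # keep only the top-k experts per token
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                   # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = ToyMoELayer()
tokens = torch.randn(4, 64)
print(layer(tokens).shape)  # torch.Size([4, 64]); each token touched only 2 of 8 experts
```

At DeepSeek-V3's reported scale, the same principle means roughly 37B of 671B parameters (about 5.5%) are active per forward pass, which is where the claimed compute savings over an equally large dense model come from.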