Supermicro Expands Its NVIDIA Blackwell System Portfolio with New Direct Liquid-Cooled (DLC-2) Systems, Enhanced Air-Cooled Models, and Front I/O Options to Power AI Factories
Supermicro (NASDAQ: SMCI) has expanded its NVIDIA Blackwell portfolio with new cooling solutions for AI infrastructure. The company introduced a 4U DLC-2 liquid-cooled NVIDIA HGX B200 system and an 8U front I/O air-cooled system, both designed for large-scale AI training and cloud-scale inference workloads.
The DLC-2 liquid cooling solution delivers up to 40% data center power savings, 40% reduced water consumption, and up to 98% system heat capture. The systems feature dual-socket Intel Xeon 6700 Series processors, support for 32 DIMMs with up to 8TB of memory, and include 8 NVIDIA ConnectX-7 NICs and 2 NVIDIA BlueField-3 DPUs.
Both systems are optimized for the NVIDIA HGX B200 8-GPU configuration with 1.4TB of HBM3e GPU memory per system, delivering up to 15x faster real-time inference and 3x faster training compared to Hopper-generation GPUs.
- Requires specialized liquid-cooling infrastructure for DLC-2 system deployment
Insights
Supermicro's new liquid-cooled systems deliver up to 40% data center power savings while expanding its Blackwell AI portfolio, positioning the company strongly in the high-growth AI infrastructure market.
Supermicro has strategically expanded its NVIDIA Blackwell portfolio with two key systems: a 4U DLC-2 liquid-cooled system and an 8U front I/O air-cooled system. Both support the NVIDIA HGX B200 platform and are designed for forward compatibility with upcoming HGX B300 systems, providing customers important investment protection.
The direct liquid cooling (DLC-2) implementation is particularly significant, enabling up to 40% data center power savings and 40% reduced water consumption compared to traditional cooling approaches. This translates to substantial operational expenditure (OPEX) reductions at scale—critical as AI deployments grow and energy costs rise.
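To put the claimed savings in context, the back-of-the-envelope sketch below estimates annual energy-cost savings per rack. The rack power draw and electricity rate are hypothetical assumptions used only for illustration; the "up to 40%" figure is the only number taken from the announcement.

```python
# Hypothetical inputs for illustration; not Supermicro or NVIDIA figures.
RACK_POWER_KW = 100.0            # assumed average draw of one AI rack (IT + cooling)
ELECTRICITY_USD_PER_KWH = 0.10   # assumed blended utility rate
HOURS_PER_YEAR = 24 * 365
CLAIMED_SAVINGS = 0.40           # "up to 40% data center power savings"

baseline_cost = RACK_POWER_KW * HOURS_PER_YEAR * ELECTRICITY_USD_PER_KWH
potential_savings = baseline_cost * CLAIMED_SAVINGS

print(f"Baseline annual energy cost per rack: ${baseline_cost:,.0f}")
print(f"Potential annual savings at 40%:      ${potential_savings:,.0f}")
```

Under these assumptions a single rack would save roughly $35,000 per year, and a deployment of hundreds of racks would reach the multi-million-dollar range, which is why the efficiency claim matters at AI-factory scale.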
The front I/O design addresses key deployment challenges in large-scale AI environments by relocating networking components, storage bays, and management interfaces to the cold aisle. This architectural improvement significantly simplifies cabling, maintenance, and thermal management while supporting eight 400G NVIDIA ConnectX-7 NICs and two NVIDIA Bluefield-3 DPUs for high-bandwidth connectivity.
The technical specifications are impressive: 32 DIMM slots supporting up to 8TB memory capacity, Intel Xeon 6700 processors up to 350W, and NVIDIA GPUs with 180GB HBM3e memory per GPU. This configuration eliminates potential bottlenecks between CPU and GPU components for AI workloads.
Supermicro's expanding portfolio, now spanning eight different Blackwell-based system configurations, demonstrates its agility in addressing diverse customer requirements and its ability to adapt quickly to the rapidly evolving AI infrastructure market.
Supermicro's expanded Blackwell portfolio strengthens its competitive position in the high-margin AI infrastructure market with unique liquid cooling advantages.
Supermicro's expansion of its NVIDIA Blackwell portfolio represents a strategic move to capture higher-margin business in the rapidly growing AI data center market. By offering both liquid-cooled and air-cooled options with front I/O configurations, the company is positioning itself to serve a broader range of customers with varying infrastructure requirements.
The DLC-2 liquid cooling technology is particularly noteworthy as a competitive differentiator. With claims of up to 40% data center power savings, Supermicro addresses one of the most significant cost concerns for AI infrastructure deployments—power consumption. For hyperscale customers building massive AI factories, these efficiency gains could translate to millions in operational savings annually.
The company's "Building Block" architecture approach allows for rapid customization to meet specific customer requirements, potentially accelerating sales cycles and improving inventory management. This flexibility should help Supermicro compete effectively against larger rivals in capturing AI infrastructure spending.
From a product lifecycle perspective, ensuring compatibility with future NVIDIA HGX B300 platforms creates a compelling upgrade path for customers, potentially leading to recurring revenue opportunities. The expanded portfolio also demonstrates Supermicro's strong partnership with NVIDIA, a critical relationship as AI deployments accelerate.
While specific pricing details weren't disclosed, these specialized high-performance systems typically command premium margins compared to standard servers. The technical innovations in cooling efficiency, front I/O design, and memory expansion capabilities provide multiple vectors for value-based pricing in competitive bids.
- New DLC-2 4U front I/O liquid-cooled system provides up to 40% data center power savings
- 8U front I/O air-cooled system delivers enhanced system memory configuration flexibility, density, and cold aisle serviceability
- The new front I/O air-cooled or liquid-cooled configurations expand customer choice and serve broader AI Factory environments
"Supermicro's DLC-2 enabled NVIDIA HGX B200 system leads our portfolio to achieve greater power savings and faster time to online for AI Factory deployments," said Charles Liang, CEO and president, Supermicro. "Our Building Block architecture enables us to quickly deliver solutions exactly as our customers request. Supermicro's extensive portfolio now can offer precisely optimized NVIDIA Blackwell solutions to a diverse range of AI infrastructure environments, whether deploying into an air- or liquid-cooled facility."
For more information, please visit
Supermicro's DLC-2 represents the next generation of Direct Liquid Cooling solutions, engineered to meet the escalating demands of AI-optimized data centers. This comprehensive cooling architecture delivers significant operational and cost benefits for high-density computing environments.
- Up to 40% data center power savings
- Faster time-to-deployment and reduced time-to-online by providing an end-to-end, data center-scale liquid cooling solution
- Up to 40% reduced water consumption with warm water cooling at an inlet temperature of up to 45°C, reducing the necessity of chillers
- Up to 98% system heat capture by liquid-cooling CPUs, GPUs, DIMMs, PCIe switches, VRMs, power supplies, and more (see the illustrative sketch below)
- Enabling quiet data center operation at a noise level as low as 50dB
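As a rough illustration of what a 98% heat-capture rate means for facility air handling, the sketch below splits a hypothetical per-system heat load between the warm-water loop and room air. The 50 kW figure is an assumption for illustration only, not a Supermicro specification.

```python
# Illustrative only: the 50 kW per-system heat load is an assumed figure;
# the 98% capture rate is the only number taken from the announcement.
SYSTEM_HEAT_KW = 50.0   # assumed total heat output of one 8-GPU system
LIQUID_CAPTURE = 0.98   # "up to 98% system heat capture"

to_liquid = SYSTEM_HEAT_KW * LIQUID_CAPTURE
to_air = SYSTEM_HEAT_KW - to_liquid

print(f"Heat removed by the warm-water loop: {to_liquid:.1f} kW")
print(f"Residual heat left for room air:     {to_air:.1f} kW")
```

At 98% capture, only about 1 kW of a 50 kW system would need to be handled by conventional room air conditioning, which is what enables the reduced reliance on chillers noted above.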
Supermicro now offers one of the broadest portfolios of NVIDIA HGX B200 solutions, with the two new front I/O systems and six rear I/O systems allowing customers to choose the CPU, memory, networking, storage, and cooling configuration best optimized for their workloads. The new 4U and 8U front I/O NVIDIA HGX B200 systems build on proven solutions for large-scale AI training and inference deployment by addressing major pain points of deployment, including networking, cabling, and thermals.
"Advanced infrastructure is accelerating the AI industrial revolution for every industry," said Kaustubh Sanghani, vice president of GPU product management at NVIDIA. "Based on the latest NVIDIA Blackwell architecture, Supermicro's new front I/O B200 systems equip enterprises to deploy and scale AI at unprecedented speed—delivering breakthrough innovation, efficiency, and operational excellence."
Modern AI data centers demand high scalability that requires substantial node-to-node connections. The system's 8 high-performance 400G NVIDIA ConnectX®-7 NICs and 2 NVIDIA BlueField®-3 DPUs are moved to the front of the system to allow for the configuration of networking cables, storage drive bays, and management all from the cold aisle. The NVIDIA Quantum-2 InfiniBand and the Spectrum™-X Ethernet platform are fully supported to ensure the highest-performing compute fabric.
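For scale, the following sketch computes the per-node compute-fabric bandwidth implied by eight 400G ConnectX-7 NICs. The two BlueField-3 DPUs add further connectivity for storage and management traffic, but their port speeds are not quantified here, so they are left out of the arithmetic.

```python
# Aggregate east-west bandwidth per node from the eight 400G NICs alone.
NUM_NICS = 8
NIC_SPEED_GBPS = 400             # gigabits per second per ConnectX-7 port

total_gbps = NUM_NICS * NIC_SPEED_GBPS
total_gb_per_s = total_gbps / 8  # convert gigabits to gigabytes

print(f"Per-node GPU fabric bandwidth: {total_gbps} Gb/s (~{total_gb_per_s:.0f} GB/s)")
```

That works out to 3.2 Tb/s (about 400 GB/s) per node, or one 400G link for each of the eight GPUs, consistent with the common one-NIC-per-GPU design used to keep the Quantum-2 or Spectrum-X fabric from becoming a bottleneck in multi-node training.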
In addition to system architecture improvements, Supermicro has fine-tuned components to maximize efficiency, performance, and cost savings for AI data center workloads. Upgraded memory expansion with 32 DIMM slots delivers greater flexibility for system memory configuration, enabling large-capacity memory implementations. Large system memory complements the NVIDIA HGX B200's HBM3e GPU memory by eliminating CPU-GPU bottlenecks, optimizing large workload processing, enhancing multi-job efficiency in virtualized environments, and accelerating data preprocessing.
All Supermicro 4U liquid-cooled systems and 8U or 10U air-cooled systems are optimized for NVIDIA HGX B200 8-GPU with each GPU connected via 5th Generation NVLink® at 1.8TB/s, providing a combined total of 1.4TB of HBM3e GPU memory per system. NVIDIA's Blackwell platform delivers up to 15x faster real-time inference performance and 3x faster training for LLMs compared to the Hopper generation of GPUs. The new front I/O systems with dual-socket CPUs supporting up to 350W Intel® Xeon® 6 6700 Series processors deliver high performance and efficiency for a wide range of AI workloads.
The newly introduced 4U front I/O liquid-cooled system features front-accessible NICs, DPUs, storage, and management components. It utilizes dual-socket Intel® Xeon® 6700 Series processors with P-cores up to 350W and NVIDIA HGX B200 8-GPU configuration (180GB HBM3e per GPU). The system supports 32 DIMMs with up to 8TB capacity at 5200MT/s or up to 4TB at 6400MT/s DDR5 RDIMM, plus 8 hot-swap E1.S NVMe storage drive bays and 2 M.2 NVMe boot drives. Network connectivity includes 8 single-port NVIDIA ConnectX®-7 NICs or NVIDIA BlueField®-3 SuperNICs and two dual-port NVIDIA BlueField®-3 DPUs.
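The memory figures above are internally consistent; the minimal sketch below checks the arithmetic. The 256 GB and 128 GB per-DIMM capacities are inferred from the stated totals rather than listed explicitly in the announcement.

```python
# Sanity-check of the quoted memory totals. Per-DIMM sizes are inferred
# (assumptions), not taken from the announcement.
DIMM_SLOTS = 32
GPU_COUNT = 8
HBM3E_PER_GPU_GB = 180  # per the HGX B200 configuration described above

configs = {
    "DDR5-5200 RDIMM (assumed 256 GB/DIMM)": 256,
    "DDR5-6400 RDIMM (assumed 128 GB/DIMM)": 128,
}

for name, dimm_gb in configs.items():
    print(f"{name}: {DIMM_SLOTS * dimm_gb / 1024:.0f} TB system memory")

hbm_tb = GPU_COUNT * HBM3E_PER_GPU_GB / 1000
print(f"HBM3e per system: {hbm_tb:.2f} TB (quoted as 1.4 TB)")
```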
Supermicro designed this liquid-cooled system as the building block for densely populated AI factories that can reach cluster sizes beyond thousands of nodes, delivering up to 40% data center power savings.
The 8U front I/O air-cooled system shares the same front-accessible architecture and core specifications while providing a streamlined solution for AI factories without liquid-cooling infrastructure. It features a compact 8U form factor (compared to Supermicro's 10U system) with a reduced-height CPU tray while maintaining the full 6U-height GPU tray to maximize air-cooling performance.
About Super Micro Computer, Inc.
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure.
Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.
All other brands, names, and trademarks are the property of their respective owners.
SOURCE Super Micro Computer, Inc.