At Supercomputing 2024 (SC24), Enfabrica Corporation unveiled a milestone in AI data center networking: the Accelerated Compute Fabric (ACF) SuperNIC chip. This 3.2 Terabit-per-second (Tbps) Network Interface Card (NIC) SoC redefines large-scale AI and machine learning (ML) operations by enabling massive scalability, supporting clusters of over 500,000 GPUs. Enfabrica also raised $115 million in funding and is expected to launch the ACF SuperNIC chip in Q1 2025.
Addressing AI Networking Challenges
As AI models grow increasingly large and complex, data centers face mounting pressure to connect large numbers of specialized processing units, such as GPUs. These GPUs are essential for high-speed computation in training and inference but are often left idle due to inefficient data movement across existing network architectures. The challenge lies in effectively interconnecting thousands of GPUs to ensure optimal data transfer without bottlenecks or performance degradation.
Traditional networking approaches can link roughly 100,000 AI computing chips in a data center before inefficiencies and slowdowns become significant. According to Enfabrica's CEO, Rochan Sankar, the company's new technology supports up to 500,000 chips in a single AI/ML system, enabling larger and more reliable AI model computations. By overcoming the limitations of conventional NIC designs, Enfabrica's ACF SuperNIC maximizes GPU utilization and minimizes downtime.
Key Innovations in the ACF SuperNIC
The ACF SuperNIC boasts a number of industry-first features tailored to modern AI data center needs:
- High-Bandwidth, Multi-Port Connectivity: The ACF SuperNIC delivers multi-port 800-Gigabit Ethernet to GPU servers, quadrupling the bandwidth compared to other GPU-attached NICs. This setup provides unprecedented throughput and enhances multipath resiliency, ensuring robust communication across AI clusters (a quick arithmetic check of these figures appears after this list).
- Efficient Two-Tier Network Design: With a high-radix configuration of 32 network ports and up to 160 PCIe lanes, the ACF SuperNIC simplifies the overall architecture of AI data centers. This efficiency allows operators to build massive clusters using fewer tiers, reducing latency and improving data transfer efficiency across GPUs.
- Scaling Up and Scaling Out: With its high-radix, high-bandwidth, concurrent PCIe/Ethernet multipathing and data mover capabilities, the Enfabrica ACF SuperNIC can uniquely scale up and scale out four to eight latest-generation GPUs per server system. This significantly increases AI clusters' performance, scale, and resiliency, ensuring optimal resource utilization and network efficiency.
- Integrated PCIe Interface: The chip supports 128 to 160 PCIe lanes, delivering speeds over 5 Tbps (also checked in the sketch after this list). This design allows multiple GPUs to connect to a single CPU while maintaining high-speed communication with data center spine switches. The result is a more efficient and flexible layout that supports large-scale AI workloads.
- Resilient Message Multipathing (RMM): Enfabrica's proprietary RMM technology boosts the reliability of AI clusters. By mitigating the impact of network link failures or flaps, RMM prevents job stalls, ensuring smoother and more efficient AI training runs. Sankar notes the importance of this feature, especially in large deployments where GPU-to-switch link failures become frequent.
- Software-Defined RDMA Networking: This distinctive feature gives data center operators full-stack programmability and debuggability, bringing the benefits of software-defined networking (SDN) into Remote Direct Memory Access (RDMA) deployments. It allows customization of the transport layer, which can optimize cloud-scale network topologies without sacrificing performance.
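For readers who want to sanity-check the headline numbers, the quoted figures line up with simple arithmetic. The sketch below is a rough back-of-the-envelope check; it assumes four 800 GbE ports and PCIe Gen5 signaling (neither the port count behind the 3.2 Tbps total nor the PCIe generation is spelled out in the announcement, so both are assumptions).

```python
# Back-of-the-envelope check of the headline bandwidth figures.
# Assumptions (not stated explicitly in the announcement): the Ethernet side is
# 4 x 800 GbE ports, and the PCIe lanes run at Gen5 signaling (32 GT/s per lane).

ETH_PORTS = 4                 # multi-port 800 GbE toward the network
ETH_PORT_GBPS = 800
eth_total_tbps = ETH_PORTS * ETH_PORT_GBPS / 1000
print(f"Ethernet side: {eth_total_tbps:.1f} Tbps")   # 3.2 Tbps, the headline figure

PCIE_LANES = 160              # upper end of the 128-160 lane range
PCIE_GEN5_GBPS_PER_LANE = 32  # raw signaling rate per lane, per direction
pcie_total_tbps = PCIE_LANES * PCIE_GEN5_GBPS_PER_LANE / 1000
print(f"PCIe side: ~{pcie_total_tbps:.1f} Tbps")      # ~5.1 Tbps, matching "over 5 Tbps"
```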
Enhanced Resiliency and Efficiency
Traditional systems typically require one-to-one connections between GPUs and various components, such as PCIe switches and RDMA NICs. However, as the number of GPUs in a system increases, the risk of link-to-switch failures grows, with potential disruptions occurring as often as every 23 minutes in setups with over 100,000 GPUs, according to Sankar.
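The 23-minute figure is easy to reproduce with a simple reliability model. The sketch below assumes link failures are independent and uses an illustrative per-link mean time between failures (MTBF) of about 4.4 years; that per-link figure is a hypothetical value chosen for the example, not one quoted by Enfabrica.

```python
# Illustration of why link failures become routine at large scale.
# Assuming independent failures, the cluster-wide mean time between failures
# (MTBF) is roughly the per-link MTBF divided by the number of links.
# The per-link MTBF below is an illustrative assumption, not an Enfabrica figure.

MINUTES_PER_YEAR = 365 * 24 * 60

num_links = 100_000                    # one GPU-to-switch link per GPU
per_link_mtbf_years = 4.4              # assumed per-link MTBF (optics + NIC + cable)
per_link_mtbf_min = per_link_mtbf_years * MINUTES_PER_YEAR

cluster_mtbf_min = per_link_mtbf_min / num_links
print(f"Expected time between link failures: ~{cluster_mtbf_min:.0f} minutes")  # ~23 minutes
```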
The ACF SuperNIC addresses this concern by enabling multiple connections from GPUs to switches. This redundancy minimizes the impact of individual component failures, boosting system uptime and reliability.
The SuperNIC also introduces a Collective Memory Zoning feature, which supports zero-copy data transfers and optimizes host memory management. By lowering latency and enhancing memory efficiency, this technology maximizes the floating-point operations per second (FLOPS) utilization of GPU server fleets.
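To make the FLOPS-utilization claim concrete, the toy model below shows how time spent on avoidable copies or memory stalls directly lowers delivered throughput. Every number in it is an illustrative assumption, not a measurement of Collective Memory Zoning.

```python
# Toy model of how data-movement stalls erode effective FLOPS utilization.
# All values are illustrative assumptions; the point is only that shaving
# copy/stall time per step directly raises delivered throughput.

peak_pflops = 2.0          # assumed peak throughput of one GPU server, in PFLOPS
step_compute_s = 0.80      # assumed useful compute time per training step, seconds
stall_copy_s = 0.20        # assumed per-step time lost to extra copies / stalls
stall_zero_copy_s = 0.05   # assumed stall time with zero-copy, lower-latency transfers

def effective_pflops(compute_s: float, stall_s: float, peak: float) -> float:
    """Delivered throughput = peak * (useful compute time / total step time)."""
    return peak * compute_s / (compute_s + stall_s)

print(f"With copies:    {effective_pflops(step_compute_s, stall_copy_s, peak_pflops):.2f} PFLOPS")
print(f"With zero-copy: {effective_pflops(step_compute_s, stall_zero_copy_s, peak_pflops):.2f} PFLOPS")
```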
Scalability and Operational Benefits
The ACF SuperNIC's design isn't only about scale but also about operational efficiency. It offers a software stack that integrates with standard communication and RDMA networking operations through existing interfaces. This compatibility ensures efficient deployment across diverse AI compute environments composed of GPUs and accelerators (AI chips) from different vendors. Data center operators benefit from streamlined networking infrastructure, reducing complexity and enhancing the flexibility of their AI data centers.
Availability and Future Prospects
Enfabrica's ACF SuperNIC will be available in limited quantities in Q1 2025, with both the chips and pilot systems now open for orders through Enfabrica and selected partners. As AI models demand higher performance and larger scale, Enfabrica's innovative approach could play a pivotal role in shaping the next generation of AI data centers designed to support frontier AI models.