Dell Expands AI Server Portfolio in Collaboration with AMD for Enhanced Generative AI Capabilities


Dell and AMD Join Forces to Elevate Generative AI Capabilities: Unveiling the PowerEdge XE9680 Server

Introduction

To better support AI workloads, Dell is introducing a new server built in collaboration with AMD and designed specifically for large language models (LLMs). The system complements Dell’s Nvidia-powered AI infrastructure options and adds diversity to its accelerator lineup.

As part of a calculated strategy to expand their high-performance computing offerings for artificial intelligence applications, AMD and Dell have partnered to launch the PowerEdge XE9680 server. Equipped with AMD Instinct MI300X accelerators, this state-of-the-art addition delivers impressive performance: more than 21 petaFLOPS and 1.5TB of high-bandwidth memory (HBM3). The server is aimed primarily at companies that want to create and manage large language models (LLMs) in-house.

Technical Details of PowerEdge XE9680 Rack Server

  • Processor: Two 4th Generation Intel® Xeon® Scalable processors with up to 56 cores per processor
  • Operating systems: Red Hat® Enterprise Linux; Canonical® Ubuntu® Server LTS
  • Chipset: Intel® C741 chipset
  • Accelerators: 8 fully interconnected NVIDIA HGX H100 80GB 700W SXM5 GPUs or 8 fully interconnected NVIDIA HGX A100 80GB 500W SXM4 GPUs (the configuration discussed in this article uses 8 AMD Instinct MI300X accelerators)
  • Memory DIMM speed: Up to 4800 MT/s
  • Memory type: RDIMM
  • Memory slots: 32 DDR5 DIMM slots
  • Maximum RAM: 4 TB
  • Front storage bays: Up to eight 2.5-inch NVMe SSDs, for a maximum capacity of 122.88 TB
  • Storage controllers: Boot Optimized Storage Subsystem (NVMe BOSS-N1) with 2 x M.2 SSDs in hardware RAID 1; S160 software RAID

Infrastructure for AI in a New Era: Scalability and Connectivity

The PowerEdge XE9680 stands out for its scalability and potent accelerators. By utilizing the global memory interconnect (xGMI) standard, users can scale their systems without friction. Additionally, the AMD GPUs can be connected through the Dell PowerSwitch Z9664F-ON over an Ethernet-based AI fabric, providing a stable and adaptable AI infrastructure solution. This follows Dell’s earlier release of an XE9680 configuration with Nvidia H100 GPUs.
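As a quick way to confirm that the accelerators in a node really are fully interconnected, a ROCm build of PyTorch (where the torch.cuda API maps to HIP) can query peer-to-peer reachability between devices. This is a minimal sketch for illustration, not part of Dell’s or AMD’s tooling, and it assumes PyTorch for ROCm is already installed:

    import torch

    # Assumes a ROCm build of PyTorch; torch.cuda maps to HIP on AMD GPUs.
    count = torch.cuda.device_count()
    print(f"Visible accelerators: {count}")

    # Check pairwise peer access, which should hold on a fully
    # interconnected (xGMI / Infinity Fabric) eight-GPU topology.
    for i in range(count):
        for j in range(count):
            if i != j and not torch.cuda.can_device_access_peer(i, j):
                print(f"GPU {i} cannot directly access GPU {j}")
    print("Peer-access check complete.")

On a fully meshed system the loop should print nothing before the final line; any missing pair would indicate a topology or driver issue worth investigating before scaling a training job.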

Setting the Standard: Dell Validated Design for Generative AI with AMD

Together with AMD, Dell has released the Dell Validated Design for Generative AI, which gives enterprises a thorough framework for integrating the hardware and networking design behind their LLM deployments. The design emphasizes AMD’s open-source ROCm software stack and supports well-known AI frameworks such as PyTorch, TensorFlow, and OpenAI Triton, all of which run natively on the AMD-powered PowerEdge XE9680.
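For teams evaluating the ROCm software path, a small sanity check in PyTorch can confirm that the framework was built against ROCm and that the AMD accelerators are visible. This is a minimal sketch under the assumption that a ROCm build of PyTorch is installed; it is not part of the Validated Design itself:

    import torch

    # torch.version.hip is set only on ROCm builds of PyTorch (None otherwise).
    print("ROCm/HIP runtime:", torch.version.hip)
    print("Accelerators available:", torch.cuda.is_available())

    for i in range(torch.cuda.device_count()):
        # On ROCm, device names report the AMD Instinct part.
        print(f"Device {i}: {torch.cuda.get_device_name(i)}")

    # Run a small half-precision matmul on the first accelerator as a smoke test.
    x = torch.randn(1024, 1024, dtype=torch.float16, device="cuda:0")
    y = x @ x
    print("Matmul OK, result shape:", tuple(y.shape))

Because ROCm reuses the familiar torch.cuda interface, existing PyTorch code written for Nvidia GPUs typically runs on the MI300X-equipped XE9680 without source changes.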

An Open Networking Approach: Breaking from the Norm

Unlike Nvidia, Dell is committed to standards-based networking, as seen in its participation in the Ultra Ethernet Consortium (UEC). AMD and Dell back open Ethernet for AI, allowing switches from several vendors to work together in the same system. To support internal generative AI models, Dell advises companies to embrace an open approach that spans computing, fabric, and storage components.

Conclusion

To sum up, the partnership between AMD and Dell represents a major advancement in AI infrastructure. The launch of the AMD Instinct MI300X-powered PowerEdge XE9680 server demonstrates a dedication to variety and innovation, and Dell’s focus on scalability, open standards, and validated designs for generative AI reflects a progressive approach. Businesses are preparing for the first-half rollout of these solutions, paving the way for a new era of robust, adaptable, and interoperable AI infrastructure that meets the changing demands of advanced computing.

FAQ

What is Dell PowerEdge XE9680?

The Dell PowerEdge XE9680 is a high-performance server introduced by Dell in collaboration with AMD. It is designed specifically for handling AI workloads, particularly large language models (LLMs).

What is the use of the Dell PowerEdge server?

Businesses can use the PowerEdge server as a flexible platform for demanding computing workloads. In the context of AI, it offers the processing capacity and scalability required for creating and running internal generative AI models. For enterprises involved in AI research, development, and deployment, the server’s design, scalability characteristics, and support for multiple AI frameworks, including PyTorch, TensorFlow, and OpenAI Triton, make it an excellent option. Its connectivity options, which include support for an Ethernet-based AI fabric and the global memory interconnect (xGMI) standard, also add to its adaptability and flexibility in a variety of AI infrastructure configurations.
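As an illustration of what running an internal generative AI model on such a system might look like, the following sketch loads an open LLM with Hugging Face Transformers and spreads it across the available accelerators. The model name is only a placeholder, and the snippet assumes ROCm-enabled PyTorch plus the transformers and accelerate packages; it is not a Dell- or AMD-specific recipe:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Placeholder model; swap in whichever open LLM your organization uses.
    model_name = "meta-llama/Llama-2-7b-hf"

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # device_map="auto" (via accelerate) shards the weights across all visible GPUs.
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,
        device_map="auto",
    )

    prompt = "Summarize the benefits of open, Ethernet-based AI fabrics:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))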

What are the advantages of Dell servers?

Dell servers are a great option for companies with a variety of computing demands, including AI workloads, because they combine performance, scalability, versatility, and reliability.

Are HP servers better than Dell servers?

There is no one-size-fits-all answer; the choice between HP and Dell servers depends on your company's particular requirements, preferences, and budget. It is advisable to conduct in-depth research and, if needed, consult IT specialists to make an informed choice based on your specific needs.

Dharmendra is a blogger, author, expert in IT services, and the admin of DJTechnews. He has solid experience in software development and loves writing articles to share knowledge and experience with others. He has deep knowledge of multiple technologies and is always ready to explore new research and developments.
