Networking for Nerds

How AI Changes Your Network Infrastructure Requirements



Traditional networking must evolve to run AI workloads and transmit data across distributed GPU clusters efficiently and reliably

As companies progress through the various stages of AI maturity, they continuously discover new infrastructure requirements. One such requirement is transforming their networking infrastructure to run AI workloads on GPUs. Given their significant investment in acquiring and managing GPUs, companies need to ensure these servers are constantly running—without connectivity interruptions, latency challenges or bandwidth issues.

Traditionally, Ethernet has been the go-to choice for CPU networking. However, the high-performance computing demands of processing AI workloads across large, distributed GPU clusters have raised the bar for performance, scalability and efficiency. Applications such as natural language processing, computer vision, advanced driver-assistance systems (ADAS), virtual assistants and medical diagnostics all require low-latency, high-bandwidth networks that can efficiently handle complex workloads. If the network cannot supply data to the GPUs fast enough, they sit underutilized, leaving the hardware unable to deliver the expected value for its cost.
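To make the underutilization risk concrete, here is a back-of-envelope sketch in Python. All figures are hypothetical assumptions (model size, GPU count, link speed, per-step compute time), and it uses the standard ring all-reduce traffic formula while ignoring overlap of communication and compute, so it illustrates the sensitivity to link bandwidth rather than predicting any real cluster's behavior.

```python
# Back-of-envelope check: can the network feed the GPUs fast enough?
# All numbers below are illustrative assumptions, not measurements.

def allreduce_time_s(model_params: float, link_gbps: float, num_gpus: int) -> float:
    """Approximate time for one ring all-reduce of fp16 gradients."""
    grad_bytes = model_params * 2                         # fp16 = 2 bytes/param
    # A ring all-reduce sends 2*(N-1)/N of the gradient data over each link.
    traffic_bytes = 2 * (num_gpus - 1) / num_gpus * grad_bytes
    link_bytes_per_s = link_gbps * 1e9 / 8                # Gb/s -> bytes/s
    return traffic_bytes / link_bytes_per_s

# Hypothetical 7B-parameter model on 8 GPUs, 250 ms of compute per step.
compute_s = 0.250
for gbps in (100, 400):
    comm_s = allreduce_time_s(7e9, gbps, 8)
    utilization = compute_s / (compute_s + comm_s)        # assumes no overlap
    print(f"{gbps} Gb/s link: sync {comm_s*1000:.0f} ms, "
          f"GPU utilization ~{utilization:.0%}")
```

Under these assumptions, quadrupling link bandwidth roughly quadruples the fraction of time the GPUs spend computing rather than waiting on gradient synchronization, which is why network capacity planning sits alongside GPU procurement.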

Proven technologies, including InfiniBand™ and Remote Direct Memory Access over Converged Ethernet (RoCE), are emerging as top choices for networking infrastructure in AI-ready data centers. Another driver of evolving network technology is the Ultra Ethernet Consortium (UEC), a neutral body developing high-speed networking protocols and specifications based on Ethernet technologies, which will be of significant interest to companies planning future deployments. Many leading companies that develop AI hardware or software participate in the organization at various membership levels.

Network infrastructure technologies for AI will continue to evolve and will play a significant role in enabling AI workloads to run in high-performance data centers.

