
How adding capacity to a network could reduce IT costs
Higher network capacity means less congestion. Some consider it an old-fashioned idea that a faster LAN connection to users and servers will simply attract more traffic and congest the links behind it, but that reasoning gets the causality backwards. “Applications drive traffic,” noted one CIO. “The network does not suck data into the interface. The apps push it.” A faster connection means less congestion, which means fewer complaints; it also leaves alternative routes free of delay and packet loss, which reduces complaints further. In fact, anything that results in packet loss, outages, or even delay generates complaints, and handling complaints is a large source of operating cost. The complexity arises because network speed affects the user and application experience in many ways beyond the obvious effects of congestion.
When a data packet passes through a switch or router, two factors can delay it. Congestion is one; “serialization delay” is the other. This complicated-sounding term simply means that a device cannot switch a packet until it has received all of it, so every packet is delayed until it is completely received. The length of that delay is determined by the speed of the link it arrives on, so faster interfaces always deliver lower latency, and the latency a given packet experiences is the sum of the serialization delays of every interface it passes through.
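The arithmetic behind this is simple: serialization delay is packet size in bits divided by link speed, summed over every interface in the path. A minimal sketch (the packet size and link speeds below are illustrative, not from the article):

```python
def serialization_delay_us(packet_bytes: int, link_bps: float) -> float:
    """Microseconds needed to clock an entire packet onto a link."""
    return packet_bytes * 8 / link_bps * 1_000_000

def path_delay_us(packet_bytes: int, links_bps: list[float]) -> float:
    """Total serialization delay: the sum over every interface in the path."""
    return sum(serialization_delay_us(packet_bytes, bps) for bps in links_bps)

# A 1500-byte packet crossing three hops: 1 Gbps links vs. 10 Gbps links.
slow = path_delay_us(1500, [1e9] * 3)   # 36.0 us
fast = path_delay_us(1500, [10e9] * 3)  # 3.6 us
print(f"{slow:.1f} us at 1 Gbps vs {fast:.1f} us at 10 Gbps")
```

Note that the 10x faster links cut this component of latency by a factor of ten regardless of how loaded the network is, which is why faster interfaces help even in the absence of congestion.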
Application design, component costs, and AI are changing the way we think about network capacity
You may wonder why businesses are starting to address capacity now rather than years or decades ago. The answer lies on both the demand side and the supply side.
On the demand side, application componentization, including splitting component hosting between the data center and the cloud, has radically increased the complexity of application workflows. Monolithic applications have simple workflows: input, processing, output. Componentized applications must move messages between components, and each of those movements rides on a network connection, so the network is more tightly coupled to application availability and performance. Not only that, but complex workflows make it harder to figure out what has gone wrong and how to fix it. Finally, remember serialization delay? Every component interface adds its own, eating into the latency budget inherent in every application.
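To see how componentization eats a latency budget, consider a hedged sketch in which the hop counts, per-hop delay, and overall budget are all hypothetical numbers chosen for illustration:

```python
# Hypothetical end-to-end latency budget for one user transaction.
BUDGET_MS = 100.0

def remaining_budget_ms(hops: int, per_hop_ms: float,
                        budget_ms: float = BUDGET_MS) -> float:
    """Budget left for actual processing after network hops take their cut."""
    return budget_ms - hops * per_hop_ms

PER_HOP_MS = 1.5        # assumed network + serialization cost per hop

monolith_hops = 2       # one request in, one response out
componentized_hops = 12 # inter-component messages plus data-center/cloud links

print(remaining_budget_ms(monolith_hops, PER_HOP_MS))       # 97.0 ms left
print(remaining_budget_ms(componentized_hops, PER_HOP_MS))  # 82.0 ms left
```

The per-hop cost is fixed here for simplicity; in practice faster interfaces shrink it, which is exactly why capacity upgrades help componentized applications more than monoliths.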
On the supply side, the cost of network adapters in systems and of network device interfaces does not scale linearly with speed. One network engineer noted that the cost per bit of an interface typically falls as speed increases up to a certain point and then starts to rise. The point where this curve turns upward has shifted as technology has improved, so building extra capacity is more practical today. Ethernet standards have also been enhanced to better handle multiple paths between switches (a feature popular with enterprises that add capacity to reduce operating costs) and to better handle different traffic priorities.
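The shape of that cost-per-bit curve is easy to illustrate. The port prices below are made-up numbers, not vendor quotes; they merely reproduce the pattern the engineer describes, where cost per Gbps falls with speed and then rises again at the high end:

```python
# Hypothetical list prices for switch ports at various speeds (USD).
port_cost_usd = {1: 50, 10: 120, 25: 200, 100: 700, 400: 4000}

def cost_per_gbps(speed_gbps: int) -> float:
    """Dollars per Gbps of capacity for a port of the given speed."""
    return port_cost_usd[speed_gbps] / speed_gbps

for speed in sorted(port_cost_usd):
    print(f"{speed:>4} Gbps: ${cost_per_gbps(speed):.2f}/Gbps")
```

With these illustrative prices the sweet spot sits at 100 Gbps; as components improve, the minimum of the curve moves toward higher speeds, which is the supply-side shift the article describes.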
Then there’s AI. Interestingly, most of the enterprises now actively building local networks with the necessary bandwidth are also early adopters of self-hosted AI. AI in general, and model training in particular, generates a great deal of inter-server traffic, so congestion and the risk of delay or packet loss are high. Most agree that AI will require lower latency and higher network capacity, especially during training, and that because the volume and nature of AI-generated traffic are impossible for the AI user to understand, congestion-related problems will generate more complaint calls. AI traffic may also affect other applications. AI hosting is therefore a good reason to think seriously about expanding data center network capacity.