
IoT, Bandwidth, and Latency

When we think about network connections, our focus is usually on bandwidth. Bandwidth is the metric that matters in everyday use, because most connections stay within a local site, where latency is very low. There are a few specific cases in which this is not true and latency becomes the primary target: high-performance computing (HPC) is one, and inter-site connections are usually another. As soon as a connection touches the Internet, though, most thought of latency goes out the window. Too many factors lie beyond the enterprise’s control, and latency is usually not the most important factor anyway.

Bandwidth, as it relates to network connections, is the throughput a connection can sustain: the number of bits per second that can be pushed through the interface. Modern data centres work in the realm of 10 Gb, 40 Gb, and even 100 Gb links, with some 1 Gb legacy links still around.

Latency, by contrast, is the time it takes data to travel across the network. It is usually measured as round-trip time (RTT): the time a packet takes to get from source to destination and back. Within the data centre, latency is measured in milliseconds (ms) and is generally under 5 ms. Over the Internet, a good rule of thumb is 25 ms within a given country, 100 ms within a given continent, and 150 ms intercontinental. These figures sit close to the hard floor set by the speed of light, a limit no engineering can remove.

The final consideration, more esoteric than either of these, is the number of packets an interface is able to process per second (PPS). This is a figure that switches are rated on, and it tends to be in the millions of packets per second.
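To put rough numbers on these limits, here is a minimal Python sketch of both calculations. The 5,600 km transatlantic distance is an illustrative assumption, and the helper names are my own; the 200 km/ms figure is the approximate speed of light in optical fibre (about two-thirds of c), and the 20 bytes of per-frame overhead are the standard Ethernet preamble, start delimiter, and inter-frame gap.

    # Rough back-of-the-envelope figures; distances are illustrative assumptions.

    C_FIBRE_KM_PER_MS = 200  # light travels roughly 200 km/ms in optical fibre

    def min_rtt_ms(distance_km: float) -> float:
        """Physical lower bound on round-trip time over fibre,
        ignoring routing, queuing, and serialisation delays."""
        return 2 * distance_km / C_FIBRE_KM_PER_MS

    def max_pps(link_bps: float, frame_bytes: int = 64) -> float:
        """Theoretical packets-per-second ceiling for an Ethernet link.
        Each frame on the wire also carries a 7-byte preamble, a 1-byte
        start delimiter, and a 12-byte inter-frame gap (20 bytes total)."""
        wire_bits = (frame_bytes + 20) * 8
        return link_bps / wire_bits

    # New York to London is roughly 5,600 km as the cable runs (assumed figure):
    print(f"Transatlantic RTT floor: {min_rtt_ms(5600):.0f} ms")  # ~56 ms

    # A 10 Gb link saturated with minimum-size (64-byte) frames:
    print(f"10 Gb line rate: {max_pps(10e9) / 1e6:.2f} Mpps")     # ~14.88 Mpps

The ~56 ms floor shows why a real transatlantic RTT of 70-80 ms is already close to the physical limit, and the 14.88 Mpps result is the classic line-rate figure quoted for 10 Gb Ethernet with minimum-size frames.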