True or false: high bandwidth equals high speed?
Since I posed the question like that, you’re probably second-guessing yourself. And that’s good, because there’s a lot of confusion orbiting the topic of network speed, and I want to clear up the muck.
In this post I’m going to explain the difference between two critical networking terms. Then I’ll wrap up with a summary and my opinion.
The truth about network speeds
A quick visit to Comcast’s site touts the following promise:
Download speeds up to 50Mbps
Time Warner Cable tells me I can get up to 50Mbps down for $64.99 per month.
By the way, note the key phrase “up to“.
“Up to” means maximum, not average; most people don’t realize that after the glitter and confetti stops.
Monolithic data service providers such as Comcast and Time Warner promise you up to 50Mbps download speeds, but what exactly does that mean? 50Mbps is pretty fast, right?
Bandwidth vs Latency
Whenever you think about network speed there are at least two elements in play: bandwidth and latency.
Bandwidth is measured in bits per second (bps) and refers to how many bits reach your computer every second. The best analogy I can give for bandwidth is a highway.
Think of each car as a data packet and each lane as a “band” that data can traverse. Adding more bandwidth is like adding more lanes: the more lanes you have (the more bandwidth), the more cars (network traffic) can travel the link simultaneously. In essence, adding more bandwidth lets you transmit more data in parallel. Bandwidth refers to how “wide” the communication path is.
Alright, so how should you construe the grandiose promises of rapacious data providers who are famished for your hard-earned dollars?
Does this mean more bandwidth is always better? Not really; let me show you why.
Latency is the other half of the speed equation. If bandwidth is analogous to how many lanes you have on the highway, then latency is analogous to the length of the highway.
How many miles are there between New York City and Anchorage, Alaska, compared with New York City and Atlanta? The first distance is significantly longer than the second.
Looking at latency
Imagine with me that you’re the President of the United States and you’re thinking about improving the national infrastructure. You say to yourself:
We already have a three-lane road connecting New York City to both Atlanta and Anchorage, but people are still complaining about traffic. I’ll go ahead and add an extra lane on both highways to ease the traffic burden.
What you just did was add more bandwidth. You upgraded your network connection from 25Mbps to 50Mbps, but you did nothing to decrease the distance between the cities. Latency refers to the time it takes a single car to drive from point A to point B. As we’ll see in a moment, each car has a theoretical top speed of 125,000 miles per second… but that travel time still adds up.
The main thing I want you to take from this section is that latency is delay. Just remember that the word “late” is in there.
Adding more bandwidth to your network does absolutely nothing to decrease the time it takes for each packet to travel from your home, through the labyrinth of the internet, across all the routers in the world, and finally to its destination.
Yes, adding more bandwidth means you can send more data concurrently but it doesn’t decrease the time required to send each nugget (packet) of data. The speed of data has hard physical limitations that no amount of bandwidth can ever allay.
An example with some math
According to Einstein’s special theory of relativity, nothing can travel faster than the speed of light, which in a vacuum is about 300,000 km per second (aka kickass fast)…
Now, we don’t currently have a reliable means of transmitting data as photons through a vacuum, so we use the next best transmission medium: fiber optics. Super smart people tell me it takes a single photon about 5 milliseconds to travel 1,000 km in fiber optic cable; that breaks down to a blood-curdling speed of 200,000 km per second, or, said differently, 125,000 miles per second.
The straight line distance from New York City to Anchorage is 5,409 km (3,361 miles).
So let’s figure out how long it takes a single photon of light to get from NYC to Anchorage. We’ll assume this is a perfect world with a dedicated fiber optic cable connecting the two end points.
5,409 km / 200,000 km/s = 0.027045 seconds
Alright, so now we need to convert that to milliseconds, so multiply that value by 1,000.
0.027045 s x 1,000 = 27.045 ms
But that’s only one direction. Usually when you send data, such as an HTTP web request, you expect something back, so we need the round-trip time (RTT):
27.045 ms x 2 = 54.09 ms
In this example, 54.09 milliseconds is the theoretical minimum time you would have to wait for the web server to send back the HTTP response containing your web page.
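If you want to play with the numbers, the arithmetic above is easy to sketch in a few lines of Python (using the same 200,000 km/s approximation for light in fiber):

```python
# Best-case latency over fiber, as computed above.
SPEED_IN_FIBER_KM_PER_S = 200_000  # roughly 2/3 the speed of light in a vacuum
distance_km = 5_409                # straight-line NYC to Anchorage

one_way_ms = distance_km / SPEED_IN_FIBER_KM_PER_S * 1_000
rtt_ms = one_way_ms * 2            # round trip: request out, response back

print(f"One way: {one_way_ms:.3f} ms")  # One way: 27.045 ms
print(f"RTT:     {rtt_ms:.2f} ms")      # RTT:     54.09 ms
```

Swap in any distance you like; the point is that the RTT floor scales with distance no matter how much bandwidth you buy.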
But that’s all hypothetical because in reality such a dedicated, direct link from New York to Alaska doesn’t exist. Furthermore, there are actually dozens of routers separating your computer from the web server in Alaska and each of those routers introduces more latency into the equation.
This is why it doesn’t matter if you have a 50Mbps link, a 100Mbps link or even a 1000Mbps link because all you’re doing is adding more lanes or increasing the girth of the pipe.
You haven’t done anything to decrease delay, because delay is a function of immutable factors such as distance, physics, and the network equipment required to deliver your data to its destination.
Let’s get practical
So math and theory are good, but what does this mean practically? It means that if you have a high-bandwidth link with high latency (high delay), then when you click a bookmark to a website such as fixedbyvonnie.com, there’s a brief pause because of the latency, but after a moment all the web elements, such as the images, text, and layout, appear at once (high bandwidth).
Conversely, with low bandwidth and low delay, clicking fixedbyvonnie.com would start loading almost instantly (because of the low delay), but then you would see each web element slowly load one by one on the screen (low bandwidth).
It’s often easier to add more bandwidth than it is to decrease latency because latency is a function of not only your data service provider but also your home router and the network path between your home and all the routers on the web.
What should I do?
On a side note, a slow network might just be the result of a slow web browser and have nothing to do with latency. If you’re using Firefox, Chrome or, God forbid… Internet Explorer, you should try a few things to speed it up first.
That being said, I’ll assume you have copious amounts of bandwidth but your network connection is still boringly slow, so the first thing you should do is get a hard number on your latency.
On a Windows box you can perform a quick check using a popular troubleshooting tool called ping.
Click Start and type cmd to open the Command Prompt.
Next, type ping followed by the web address of the website that’s taking forever to load.
You’ll see the IP address of the host and how many bytes were sent. There’s a bunch of other stuff in the output too, but for the purposes of this article we only care about one number: the time value (for example, time=8ms).
That’s the latency. In my case, the ping utility sent four packets to google.com and each round trip took about 8 milliseconds, which is pretty fast.
If you ping your home router, for example by typing ping 192.168.1.1 (or whatever your home router’s IP is), your latency should be extremely low, usually 1 ms, since the distance is so short.
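If you ever want to pull that latency number out of ping’s output programmatically, here’s a rough Python sketch. The sample text below is an illustration of Windows-style ping output (the exact format varies by operating system, so treat the regular expression as an assumption):

```python
import re

# Illustrative Windows-style ping output; real output varies by OS and locale.
sample_output = """
Reply from 142.250.65.78: bytes=32 time=8ms TTL=117
Reply from 142.250.65.78: bytes=32 time=9ms TTL=117
Reply from 142.250.65.78: bytes=32 time=8ms TTL=117
Reply from 142.250.65.78: bytes=32 time=8ms TTL=117
"""

# Grab every "time=Nms" value and average them.
times_ms = [int(t) for t in re.findall(r"time[=<](\d+)ms", sample_output)]
avg_latency = sum(times_ms) / len(times_ms)
print(f"Average latency: {avg_latency:.2f} ms")  # Average latency: 8.25 ms
```

Handy if you want to log your latency over time instead of eyeballing the Command Prompt.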
A really fun command to use is pathping.
It’s like ping on steroids because it pings each hop along the path and analyzes the results over a period of time. You can actually see how many routers your packets had to pass through.
There are even some programs out there, such as Open Visual Traceroute, that let you see all kinds of cool stuff: a 3D map of the globe, Gantt charts, and basic route trace data. I haven’t used Open Visual Traceroute myself, but you can check it out on SourceForge.
Ultimately, when it comes to latency, you can measure it with ping and pathping, use a wired connection, and, if possible, try to access servers close to you, but there isn’t much more you can do to decrease it.
Network service providers never advertise latency; instead, they just throw big numbers at you, slap Mbps on the end, and exclaim: get more broadband! Upgrade to 75Mbps for a low additional $9.95 per month!
But now you know that getting more bandwidth only deals with half the problem; the ideal solution is both more bandwidth and less latency.
Wrapping things up
I just want to make sure you really understand the distinction between bandwidth and latency so I’m going to give you one more scenario.
Let’s say you have two network connections with equal latency, but connection A has really crappy bandwidth (56Kbps, fewer lanes, narrow pipe) while connection B has high bandwidth (50Mbps, more lanes, wider pipe).
The time it takes for the first byte to arrive is identical on both connections because the latency is the same.
But connection B has more bandwidth, so once it gets the first byte it finishes downloading the entire file before connection A does. Because connection B has more lanes, it can pull down more data at a time: 50 million bits per second beats 56 thousand bits per second.
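To put rough numbers on that scenario, here’s a quick sketch comparing how long a hypothetical 5 MB file takes on each connection once the first byte arrives (the file size is my assumption; the bandwidth figures are from the scenario above):

```python
FILE_SIZE_BITS = 5 * 1_000_000 * 8   # a hypothetical 5 MB file, in bits

connection_a_bps = 56_000            # 56 Kbps: fewer lanes, narrow pipe
connection_b_bps = 50_000_000        # 50 Mbps: more lanes, wider pipe

# With equal latency, transfer time after the first byte is just size / bandwidth.
time_a = FILE_SIZE_BITS / connection_a_bps
time_b = FILE_SIZE_BITS / connection_b_bps

print(f"Connection A: {time_a:.1f} seconds")  # Connection A: 714.3 seconds
print(f"Connection B: {time_b:.1f} seconds")  # Connection B: 0.8 seconds
```

Same latency, wildly different finish times: that gap is pure bandwidth.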
Bandwidth is inextricably linked to latency. The two work together to determine your true internet speed.
The next time a sales guy tries to sell you an internet bundle, ask him about average latency. I bet you’ll get a blank stare.
By the way, if you really find this stuff interesting, you should read Stuart Cheshire’s excellent article about latency. It’s a little old (it was published in May 1996), but it’s relatively short and an entertaining take on latency and bandwidth. Enjoy!