The latency difference a modem/router can make on ADSL max

Started by esh, Oct 23, 2010, 16:26:59


esh

Some of you may remember I collect large quantities of data on my line statistics out of idle curiosity. Well, recent updates (over the past year) mean I can now show you some concrete results, should you be interested, on how changing hardware can affect your latency.

The statistics are gathered via pings to various sites (ones that are always up unless something disastrous happens, e.g. Google), and this is done every 5 minutes, mostly so I don't annoy site owners :) As such, 5 minutes is not really enough time resolution to get a feel for packet loss, but it is enough to get a good idea of line quality, especially over long periods. Those of you who like numbers will realise that sampling every 5 minutes gives 288 measurements per day. From these we get two useful numbers: the mean latency (the average, i.e. the sum of the samples divided by their number) and the standard deviation, which gives a feel for how much the latency varies.
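For the curious, the sampling side looks roughly like the sketch below. This is not my actual script; the target list, the five-ping loop, and the use of the Linux ping command are purely illustrative.

import re
import statistics
import subprocess

TARGETS = ["google.com", "bbc.co.uk"]  # illustrative targets, not my real list

def ping_ms(host):
    # Send a single ICMP echo and pull the round-trip time (ms) out of Linux ping output.
    out = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                         capture_output=True, text=True).stdout
    match = re.search(r"time=([\d.]+) ms", out)
    return float(match.group(1)) if match else None

# A full day at 5-minute intervals is 288 samples per target; a handful is
# enough here to show how the two summary numbers fall out of the data.
samples = [s for s in (ping_ms(TARGETS[0]) for _ in range(5)) if s is not None]
mean_latency = statistics.mean(samples)   # the daily mean (black line)
std_dev = statistics.stdev(samples)       # the spread (red variance bar)
print(f"mean {mean_latency:.1f} ms, stddev {std_dev:.1f} ms over {len(samples)} samples")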

In the plot below, the top graph shows the mean latency (black line) for a *single* target site, with the upper standard deviation bar in red. The green, blue, and red dashed lines indicate the 90%, 95%, and 99% confidence margins over the entire series of data ("90% confidence" meaning there is a 90% chance that the ping on any given day will be equal to or less than that value).
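In Python terms, those dashed lines are just percentiles taken over the whole series of daily means; something like this, where the numbers are made up purely for illustration:

import statistics

# daily_means would hold one mean ping per day since measurements began;
# these values are invented just to show the calculation.
daily_means = [31.2, 30.8, 33.5, 45.1, 30.9, 31.4, 52.0, 31.1, 30.7, 34.9]

# quantiles with n=100 returns the 1st..99th percentile cut points, so
# indices 89, 94 and 98 are the 90%, 95% and 99% lines on the plot.
cuts = statistics.quantiles(daily_means, n=100, method="inclusive")
p90, p95, p99 = cuts[89], cuts[94], cuts[98]
print(f"90% of days <= {p90:.1f} ms, 95% <= {p95:.1f} ms, 99% <= {p99:.1f} ms")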

Ping will of course be affected not just by the router and the modem, but also by how heavily the line is used and by BT/IDNet. Averaging over the entire day should knock out most of the 'end user' variation. The line is not saturated at all: the odd hour here and there of uploads, downloads and backups, and the rest of the time it is fairly lightly used (communication, email, web). It might be worth removing 'peak' hours from the quality computation, an exercise I will leave for the future.

It should be noted that over an entire day's sampling, the latency graph for the other sites is almost identical. Nonetheless, I wanted to condense this data into a single measurable number, which I term "line quality" (the bottom plot), where 1.0 would indicate perfect theoretical performance based on the numbers measured to date. This is a logarithmic plot, so small variations are magnified greatly. Each target ping site is weighted *equally* in the line quality plot, and it also takes into account the variance of the ping over the measured period. The main use of the 'line quality' number is that it immediately means something to me: an LQ of 0.0 means an 'average' day (average over the length of the measurement period, in this case since 06-2008), numbers less than zero indicate things are worse than usual, and numbers greater than zero mean things are good. It is, however, still an arbitrary number.
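I won't reproduce the exact formula, but a very rough stand-in for the shape of the calculation is sketched below. The weighting and every figure in it are invented purely for illustration; the real calculation differs in the details.

import math
import statistics

def line_quality(day_stats, baseline):
    # day_stats: {site: (mean_ms, std_ms)} for the day in question.
    # baseline:  {site: (mean_ms, std_ms)} long-run averages per site.
    # Each site is weighted equally; a higher mean or spread than the baseline
    # pulls the score below zero, a better-than-usual day pushes it above.
    ratios = []
    for site, (mean_ms, std_ms) in day_stats.items():
        base_mean, base_std = baseline[site]
        ratios.append((base_mean + base_std) / (mean_ms + std_ms))
    return math.log10(statistics.mean(ratios))   # 0.0 == an 'average' day

baseline = {"google.com": (32.0, 2.0), "bbc.co.uk": (30.0, 1.5)}   # invented figures
today    = {"google.com": (45.0, 9.0), "bbc.co.uk": (41.0, 7.0)}   # a bad day
print(f"line quality today: {line_quality(today, baseline):+.2f}")  # prints a negative number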



You can see how my new DSL line starts off very nicely in 2008 before being interleaved within the first couple of weeks, until someone finally tips me off to this and IDNet chide BT for their foolishness. The congestion is incredibly nasty. I think IDNet flagged this with BT on both occasions, and a reset of the DSL line cleaned it up quite well.

Now for the hardware side of things.

Point (a)
Up until point (a) I am using an old (2002-era) US Robotics DSL modem, which served me quite well. It is a 66MHz, 8MB RAM job with a custom version of Linux. At this point it is swapped out for a rather expensive Netgear firewall+VPN solution (266MHz and 64MB RAM). You can see the ping fluctuation drops a large amount and the line quality becomes very steady; whether this is helped by the line reset performed at the same time is hard to tell, but the change clearly affects the variance of the ping, though not so much its average. The new router initially fails to open secure websites, which is fixed by a firmware update.

Point (b)
This is a little odd. Around the end of the year I install the new storage network and the Netgear seems to take a turn for the worse: ping times rise but keep their fairly flat profile. This may not be related and could just be BT, but the line is reset frequently over the following months as more Netgear troubles arise: random crashes, and a DSL modem that can be crashed 100% of the time by uploading large files via FTP/SSH. Logging in to the web interface also starts to randomly crash it, as does wireless. I start a long discussion with Netgear support about this, and small hordes of firmware updates and tweaks are applied throughout this period (which may be responsible for the latency increase).

Point (c)
Server upgrade. The old Win2k server is retired and the monitoring is moved to a 'real' machine, having previously run on an old Linux VM server. You can see this has a small but measurable effect on latency. While unrelated, all LAN machines are also swapped over to the storage network instead of using local hard drives, resulting in a 10-100 fold increase in local traffic.

Point (d)
Netgear finally admit that the DSL modem in their router is 'not compatible' with 'some BT lines'. I resort to using the external WAN port on the Netgear to hook a separate DSL modem in. Massive decrease in latency and a lot less variance.

Point (e)
Netgear is abandoned in favour of a router VM running pfSense (2000 MHz, 256MB RAM), connected via gigabit. It seems somewhat bizarre using a massive blue Netgear box as a switch, but at least it can do that. You will probably notice on the upper ping plot that the red upper variance arm appears to vanish except for the odd peak. This isn't so; it's just too small to plot now.

Brief conclusions

  • Upgrading routers tends to lower ping time, but it's mostly the DSL modem that affects the variance in that latency
  • Faster routers (in MHz) give results closer to the optimum ping: in 2008 the mean ping is about 20ms higher than the minimum ping recorded (not shown). With the new pfSense VM router, the mean ping is effectively equal to the minimum ping.
  • Maybe it's me, but the line quality in 2008 shows far fewer 'large spikes'. I guess this coincides with BT's exchanges getting further stretched

Anybody else have any interesting experiences to share?
CompuServe 28.8k/33.6k 1994-1998, BT 56k 1998-2001, NTL Cable 512k 2001-2004, 2x F2S 1M 2004-2008, IDNet 8M 2008 - LLU 11M 2011

Rik

Great post, esh, I agree with your last conclusion very strongly.
Rik
--------------------

This post reflects my own views, opinions and experience, not those of IDNet.

Technical Ben

I wonder if processing speed and the like has any effect on a router's latency?
It's all well and good having it compatible with "fibre optic lines", but if it's running on Atari processors, it's going to lag behind.
I used to have a signature, then it all changed to chip and pin.

Rik

There's certainly been evidence of early Netgear fibre routers not having enough processing power.
Rik
--------------------

This post reflects my own views, opinions and experience, not those of IDNet.

Fox

The network card in your PC can also have an effect on latency and in-game frame rates.

http://www.bit-tech.net/hardware/networking/2010/05/01/killer-xeno-pro-gaming-network-card-review/1

Worth a read if you have a spare 10 minutes.
True power doesn't lie with the people who cast the votes, it lies with the people who count them



Technical Ben

No idea what 128MB of memory would do for it... but the tests seem to show it works. But is it any different from a normal PCI Express network card? I'd agree that on-board is probably cheap/cheerful and software-driven (so slower). But I'm not sure it's worth shelling out for one with a massive buffer.
I used to have a signature, then it all changed to chip and pin.

esh

I think you'll find the last point (i.e. switch->PC) is one of those you will worry about least unless you are an intensive data user. Yes, you can shave a millisecond off here and there, but what will truly make a difference in gaming is the stability of your latency, which I have shown is mostly dictated by your DSL modem.

Yes, the router's processor is most definitely a factor, as I commented on above. Basically, the faster the processor, the closer your mean ping gets to the minimum ping for the line.

Edit: think about it this way. If you peaked on ADSL Max, that is 8Mbit (okay, 7.1Mbit, but let's say 8). 8Mbit, meaning "mega"bit, meaning 8x10^6 bits/sec, 8 million bits per second. My first DSL modem has a 66MHz processor. Neglecting any router OS overhead, this means the processor can do

66 MHz / 8 Mbit/s ≈ 8 CPU cycles per bit

In the crudest sense this is moving data from one location to another (i.e. from the WAN port to the LAN port), and some of these operations are most likely offloaded to the ethernet chip. But you can clearly see that 24Mbit would leave this processor with under three cycles per bit, which I imagine is stretching it somewhat. If it needs more cycles per bit than that, data starts to be dropped and you never reach peak performance. With 100Mbit down, you clearly want a processor of at least a few hundred MHz, especially if you are doing stateful packet inspection.
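Or as a back-of-envelope table, using the clock speeds mentioned in this thread and some round line rates (again ignoring all OS overhead and anything offloaded to the ethernet chip):

# Rough cycles-per-bit figures: CPU clock divided by line rate.
# The MHz and Mbit/s prefixes cancel, so plain division is enough.
cpus_mhz = {"old USR modem": 66, "Netgear": 266, "pfSense VM": 2000}
lines_mbit = {"ADSL Max": 8, "ADSL2+": 24, "FTTx": 100}

for cpu_name, mhz in cpus_mhz.items():
    for line_name, mbit in lines_mbit.items():
        cycles_per_bit = mhz / mbit
        print(f"{cpu_name:>14} on {line_name:<8}: {cycles_per_bit:6.2f} cycles per bit")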
CompuServe 28.8k/33.6k 1994-1998, BT 56k 1998-2001, NTL Cable 512k 2001-2004, 2x F2S 1M 2004-2008, IDNet 8M 2008 - LLU 11M 2011

pctech

I'd say the processor clock has a lot of effect on ping and latency because of packet processing speed.