WINDING DOWN
An idiosyncratic look at, and comment on, the week's net, technology and science news
by Alan Lenton
Just one piece this week - the second (and final) part of my analysis of the problems with Net Neutrality in the form in which it is currently being debated. It came out somewhat longer than I intended; I hope you'll forgive me for that, but a set of slogans is no substitute for an explanation.
Next week we are having a weekend off, but we will be back with a news round-up the week after, on July 10.
Until then...
Analysis: Net Neutrality
Part 2: The Internet Promise
So, given that the Internet doesn't, unlike a phone system, give you a dedicated link to your destination, what does it promise to do?
The promise is very specific: it will make its best effort to deliver your data packets to their destination. In fact it's very good about that - if something stops working between you and the destination, it will try to send your data by an alternative route! Of course the alternative route may take longer, but, usually, the data will get there.
When you send and receive stuff over the Internet, the software involved breaks it down into packets, usually of around 1,500 bytes, each with a header giving information about the packet, including its destination address. The format is specified by the Internet Protocol (also known as IP). An IP packet either gets delivered or, for some reason, it doesn't, and that's that. Either way there is no communication about what happened to the packet.
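For those who like to see things written down, here's a rough Python sketch of the sort of information a packet carries. The field names and values are invented for illustration - this is not the real on-the-wire format:

from dataclasses import dataclass

@dataclass
class Packet:
    # A much-simplified picture of an IP packet - illustrative only.
    source_address: str       # where the packet came from
    destination_address: str  # where it should end up
    ttl: int                  # hop count left before the packet is discarded
    payload: bytes            # the data itself, up to roughly 1,500 bytes per packet

# Best effort means exactly that: a packet like this is handed to the
# network and either arrives at its destination or silently doesn't.
packet = Packet("192.0.2.10", "198.51.100.7", ttl=64, payload=b"hello, world")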
Some Internet services just use IP, but most of the ones your software uses rely on a protocol built on top of the basic IP packet. It's called the Transmission Control Protocol (usually referred to as TCP). The whole ensemble is referred to as TCP/IP. Packets using this protocol contain information designed to make delivery reliable and to make it possible for the receiver to reassemble the packets in the correct order. One of the things TCP does to ensure reliable delivery is to insist that the receiver acknowledges receipt of the packets. If the sender doesn't receive an acknowledgement within a certain time, it will assume the packet was lost, or somehow corrupted, and retransmit it.
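If you fancy seeing the acknowledge-or-retransmit idea in miniature, here's a toy Python sketch. The 'network' below just loses packets at random - it illustrates the principle, it is not real TCP code:

import random

def lossy_network_deliver(packet):
    # Pretend network: delivers (and acknowledges) the packet 70% of the time.
    return random.random() < 0.7

def reliable_send(packet, max_attempts=5):
    for attempt in range(1, max_attempts + 1):
        if lossy_network_deliver(packet):
            print(f"attempt {attempt}: acknowledged")
            return True
        # No acknowledgement within the time limit: assume the packet was
        # lost or corrupted and send a duplicate.
        print(f"attempt {attempt}: no acknowledgement, retransmitting")
    return False

reliable_send("some data")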
OK. So now let's look at what's physically happening. You may remember that in part one I defined the Internet as a giant digital computer optimised for copying digital data from one distributed element to another. Well, now it's time to look at the components - the individual bits that do the computing. Generically we can call them routers, although in practice there are specialist versions with different names. Routers are dedicated, specialised computers which take a packet from their input queue and copy it to the next router in the chain between the sender and the destination.
On the incoming side, the router takes in packets from loads of different places and places them in a queue in its memory. On the outgoing side, it takes the next packet from the queue, works out where it needs to send it next, does some housekeeping on the packet and copies it to the next router via the network. You have a router - albeit a small one - in your home. It's in the box that connects you to the Internet. The ones in the core sections of the Internet (sometimes referred to as The Backbone) are somewhat larger - bigger than a standard fridge - and capable of dealing with an enormous number of packets in a very short space of time.
That's what happens when everything is running smoothly. The problems start when, for some reason, there are more packets coming in than there is room for in the queue. There are many reasons why this might happen. A physical breakdown somewhere - perhaps a backhoe cuts a cable. A sudden rush of subscribers connecting because of an event like the result of the British referendum on leaving the EU. Maybe even another heavy-duty router crashing, so that all the traffic it was handling now has to be taken over by the rest of the routers (remember, the Internet promised to do its best...). The result is pretty brutal. If there's no room in the queue, the packet is dropped - it just vanishes from the net.
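Put into code, a router's life looks roughly like this. This is a toy - the queue limit, routing table and packets are all made up - but it shows the two halves of the job and what happens when the queue is full:

from collections import deque

QUEUE_LIMIT = 4       # real routers have far more buffer memory than this
queue = deque()
routing_table = {"198.51.100.0/24": "router-B", "203.0.113.0/24": "router-C"}

def receive(packet):
    # Incoming side: queue the packet, or drop it if there is no room.
    if len(queue) >= QUEUE_LIMIT:
        print(f"queue full - dropping packet for {packet['dest']}")
        return
    queue.append(packet)

def forward_one():
    # Outgoing side: take the next packet, pick the next hop, pass it on.
    if not queue:
        return
    packet = queue.popleft()
    next_hop = routing_table.get(packet["route"], "default-route")
    packet["ttl"] -= 1          # a bit of housekeeping
    print(f"forwarding packet for {packet['dest']} via {next_hop}")

# Flood the router with more packets than its queue can hold.
for i in range(8):
    receive({"dest": f"198.51.100.{i}", "route": "198.51.100.0/24", "ttl": 64})
forward_one()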
Eeep! My packets have gone down a black hole! TCP to the rescue - or perhaps not. Remember that when we were talking about TCP/IP earlier we said that TCP -requires- the destination to acknowledge receipt of each packet? Well, of course, the sender notices that no acknowledgement has arrived, so it sends a replacement, duplicate, packet.
Yea!
Err, cough, cough.
Remember that our router was overloaded and was dropping packets? So what do you think is likely to happen to our duplicate packet? Yes, spot on, the chances are that it will also get dropped, resulting in another duplicate being sent. Now multiply this by thousands of messages all trying to get through. After a while the system will note the existence of the black hole and try to route round it by using other routers, quite possibly overloading them as well. This is what is known as an Internet Storm. If you want to see what an internet weather report looks like, point your browser at this: https://www.silver-peak.com/sites/default/files/uploads/infographics/internet-weather-report-infographic.jpg.
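You can see the snowball effect with a few lines of arithmetic. Suppose, purely for illustration, a router can forward 100 packets per tick while 120 new ones arrive, and every dropped packet is retransmitted on the next tick:

CAPACITY = 100        # packets the router can forward per tick (invented number)
NEW_PER_TICK = 120    # fresh packets arriving per tick (invented number)
backlog = 0           # dropped packets waiting to be retransmitted

for tick in range(5):
    offered = NEW_PER_TICK + backlog        # new traffic plus retransmissions
    dropped = max(0, offered - CAPACITY)
    backlog = dropped                       # the drops come straight back
    print(f"tick {tick}: offered={offered}, dropped={dropped}")

# The offered load climbs every tick - the harder the senders try,
# the worse the congestion gets.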
Actually, it's not as bad as it sounds. TCP does have facilities for 'backing off' - for instance, increasing the time between retransmits. And the service providers, especially the ones running the backbone, have their own software to prevent this. It's called traffic management.
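The simplest form of backing off is to wait longer after each failed attempt - roughly doubling the delay each time. Real TCP is considerably cleverer than this little sketch, but the principle is the same:

def retransmit_delays(initial_delay=1.0, max_attempts=6):
    # Exponential backoff: each retry waits twice as long as the last one.
    delay = initial_delay
    delays = []
    for _ in range(max_attempts):
        delays.append(delay)
        delay *= 2
    return delays

print(retransmit_delays())    # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]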
Danger, Will Robinson! Traffic Management? Isn't that not treating all packets the same? Giving some packets preference over others? What happened to Net Neutrality?
Yes it is. And I'm afraid it's simply not possible to keep the traffic flowing smoothly without some level of management. The problem is that 'traffic management' could be used to surreptitiously ensure that certain packets always receive preferential treatment, while others are delayed. Unfortunately, I doubt anyone would believe that the really big ISPs won't try that if they think they can get away with it.
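To make 'traffic management' a little more concrete, here is one very simple (and entirely hypothetical) scheme: tag each packet with a priority class and always forward the most urgent one first. Real backbone equipment uses far more elaborate techniques, but this gives the flavour of it:

import heapq

queue = []    # a priority queue instead of a plain first-in, first-out queue

def enqueue(priority, packet):
    # Lower number = more urgent; heapq keeps the smallest at the front.
    heapq.heappush(queue, (priority, packet))

def forward_next():
    if queue:
        priority, packet = heapq.heappop(queue)
        print(f"forwarding (priority {priority}): {packet}")

enqueue(2, "bulk file transfer chunk")
enqueue(0, "voice call packet")        # latency-sensitive, jumps the queue
enqueue(1, "web page request")

for _ in range(3):
    forward_next()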
Although, theoretically, all packets are equal, in practice they aren't if you want to keep the Internet flowing smoothly - something which is rarely, if ever, mentioned in the discussion on net neutrality. Net Neutrality is about politics and commercial interests. Internet traffic management is about keeping the Internet running. If the law mandated that all packets be treated equally all the time, then, to put it bluntly, the Internet would probably seize up in a pretty short space of time.
So nothing is as simple as it seems - whether that's Net Neutrality, or the way the Internet works. Net Neutrality is important, but so is internet traffic management. Please keep that in mind when you read material on the subject.
Finally, a word of caution. My description of the way the Internet works is simplified - very much so - but, taking that into account, it is a description of how it works at its most basic level.
I hope you find it useful.
Acknowledgements
Please send suggestions for stories to alan@ibgames.com and include the words Winding Down in the subject line, unless you want your deathless prose gobbled up by my voracious Thunderbird spam filter...
Alan Lenton
alan@ibgames.com
26 June 2016
Alan Lenton is an on-line games designer, programmer and sociologist, the order of which depends on what he is currently working on! His web site is at http://www.ibgames.net/alan/index.html.
Past issues of Winding Down can be found at http://www.ibgames.net/alan/winding/index.html.