Network Neutrality: A Concept for Yesterday’s Internet

CTN Issue: January 2018

As an editor I am often (correctly) accused of over-hyping the CTN topics in the email introduction, sometimes even by the authors of the articles, to which my reply is usually, "hey, this is social media, people! It's all about the hype!" Well, I don't need to do it this time, because my work has already been done on network neutrality—a juicy intersection of politics and engineering that inflames passions in Washington and may make or break some companies that you might well be working for. OK, I just hyped it again...

So, this month we have some expert authors in this area, with Dr. Reed and Dr. Tripathi having served as consultants to the telecommunications industry on the topic of network neutrality. Jeff, Linda and Nishith have taken the very brave step of providing some technical clarification for this controversial topic. Your comments, and you know you have them, are most welcome.

Alan Gatherer, Editor-in-Chief

Network Neutrality: A Concept for Yesterday’s Internet

Jeffrey Reed

Professor Jeffrey Reed is the Willis G. Worcester Professor at Virginia Tech, the Founding Director of Wireless @ Virginia Tech, and a Founding Faculty Member of the Ted and Karyn Hume Center for National Security and Technology.

 
Linda Doyle

Professor Linda Doyle is Professor of Engineering & The Arts at Trinity College, University of Dublin, Ireland. She is the Director of the CONNECT Research Centre for future networks and communications.

 
Dr. Nishith D. Tripathi

Dr. Nishith D. Tripathi, Principal Consultant, Award Solutions

 

Introduction

When you talk about "network neutrality," it is necessary to define exactly what you mean by the term. Like many new concepts, network neutrality has a number of interpretations within the technical and non-technical communities. The common definition is that internet service providers (ISPs), including both cable and wireless service providers, should treat all internet traffic equally. In the U.S., the FCC tends to follow this common definition with a caveat, allowing for "reasonable" network management. However, some network neutrality advocates do not support allowing for network management at all. Many view the term from a business/consumer perspective: network neutrality means your ISP shouldn't be allowed to block or degrade access to certain websites or services, nor should it be allowed to set aside a "fast lane" that allows content favored by the ISP to load more quickly than the rest. We define network neutrality simply as treating all bits equally across all services, in both priority and cost.

The word neutral is highly politically charged. Used in the context of network neutrality, it tends to convey a sense of morality, of guaranteeing equal access and fairness. However, in this article we argue that network neutrality is in fact just one kind of business model: one in which all bits are treated equally, no content is privileged over any other, and the cost of carrying any one bit is the same as the cost of carrying any other bit. In addition, we show that when it comes to wireless and cellular networks, this business model and the idea that all bits are treated equally do not sit well in a world with ever-increasing demand for networked services of different kinds across all sectors (transport, agriculture, health, cities, etc.). Our focus here is on wireless, and on the technical reasons why it is important to be able to prioritize traffic and to charge differently for various types of services according to the varying resources needed to support them.

Wireless differs from wireline service in that a given market typically has several competing wireless providers, whereas broadband wireline choices are far fewer. A recent report (http://ei.com/wp-content/uploads/2017/06/SingerAssessingImpact6.17.pdf) claims that nearly half of the 118 million U.S. households lack any wired Internet choice at the FCC's broadband standard of 25 Mbps download speed. In contrast, there are four strong competing wireless service providers in the U.S. Furthermore, wireless systems have much less available bandwidth to support traffic than wireline systems, which means more aggressive traffic management is needed.

One of the great successes of the Internet is that it has created a sense of one homogeneous network serving the diverse needs of users all over the world. Furthermore, most of the general press coverage of network neutrality uses examples that tend to talk about one kind of content – for example, video by one provider being privileged over video by another provider, leading to the content we get to see and use being highly controlled. No one is in favor of the suppression of speech or the monopolistic practices that have been highlighted in some emotionally charged hypothetical situations described in the popular press. These types of hypothetical problems can arise in industries other than wireless or Internet services and are typically handled by a variety of legal and governmental institutions that enforce policies to ensure fairness.

The reality is much more diverse and complex. In the world of the Internet of Things, for example, we now have tiny simple sensors, large complex machines, living animals, moving vehicles, virtual entities, and more, all interconnected. More importantly, the bits that circulate on those networks represent data of such varied value, meaning, and import that we cannot reduce the discussion to simple examples or easily compare like with unlike.

While the goals of network neutrality may be honorable, the execution of policies to reach those goals can, if not implemented correctly, lead to other problems. This is the case with network neutrality: policies that enforce technical restrictions, such as restrictions on traffic prioritization and cost structures, while ignoring technical complexity can lead to unintended consequences.

Many proponents of network neutrality desire to have all bits be treated equally by the network.   While "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights.."  (2nd paragraph of the U.S. Declaration of Independence) is a great phrase, we should not treat bits like people!  Some bits are indeed more important than other bits and require special treatment!   Bits that support demanding services (such as services requiring very high data rates or low latency) will incur more costly processing within the network and such costs should be reflected in the price of the service. 

Even if the class of service is the same but the services are provided by different companies, one would expect to see different performance for those services. Where, when, and how the data enters the network, and the use of caching at various points within the overall network, can greatly impact the performance of a service. Metrics for determining whether one company's data is being preferred over another's by the network would amount to comparing apples to oranges. Enforcing technical rules to ensure fairness is infeasible because of the complexity of the network; as a policy matter, fair-trade practices are better monitored by examining business practices.

Historical Context

Pricing strategy has always been a struggle for the wireless industry, and it remains so today in the face of network neutrality. Even pricing for cellular voice calls has differed between Europe and the U.S. In the U.S., whether you make or receive a phone call on your cell phone, you pay for it; in Europe, you pay when you make a call to a cell phone. Pricing strategy has also changed with time. Consider the case of Bell Atlantic cellular digital packet data (CDPD), a data system that ran on top of the early analog cellular network called AMPS. In 1996, price plans for Bell Atlantic (now part of Verizon Wireless) were cut more than 70 percent from previously published rates, with new rates ranging from 4 to 15 cents per kilobyte (that's up to $150/MB of data, or $150,000/GB) (https://www.thefreelibrary.com/BELL+ATLANTIC+NYNEX+MOBILE+MAKES+WIRELESS+DATA+MORE+AFFORDABLE%3B...-a018357180). A typical 10 GB plan for $150/month today would have cost $1.5M per month at the reduced 1996 rate! Likewise, CDPD speeds in 1996 were 19.2 kbps, while today's average speeds for an LTE user are many tens of Mbps. Even today we see frequent pricing strategy changes within the wireless industry, as "all you can eat" data plans for consumers are added, removed, and added again by U.S. service providers.
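The per-kilobyte arithmetic above is easy to check. The short sketch below is our own illustration; the only inputs are the rates quoted in this article:

```python
# Checking the 1996 CDPD pricing arithmetic quoted above.
# The 15-cents-per-kilobyte figure is the upper end of the reduced 1996 rate.

cents_per_kb = 15
usd_per_mb = cents_per_kb * 1000 / 100      # 1000 KB per MB, 100 cents per dollar
usd_per_gb = usd_per_mb * 1000              # 1000 MB per GB
cost_of_10gb = 10 * usd_per_gb              # a typical modern monthly allowance

print(f"${usd_per_mb:,.0f}/MB")             # $150/MB
print(f"${usd_per_gb:,.0f}/GB")             # $150,000/GB
print(f"10 GB: ${cost_of_10gb:,.0f}")       # 10 GB: $1,500,000
```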

Why such a big price difference between 1996 and now? Certainly, the technology in 1996 was immature, but service providers and manufacturers have anticipated the market incorrectly many times. Pricing is all about understanding the market and the volume-cost trade-off, and if the past years have taught us one thing, it is that the wireless industry is always in a state of confusion about how to price services. As an example, one of the authors remembers visiting Bell Labs in the 1994 time frame and discussing a future business model for wireless: provide free wireless service as long as the provider also supplied your long-distance calling plan. So, what is the lesson to be learned from these past experiences? Maximum flexibility in pricing is needed; it is almost impossible to predict how something should be priced until the market arrives.

Prioritization has been a part of cellular standards since the earliest days of digital cellular data. Different types of data have different quality-of-service needs. For instance, voice requires a maximum of about 100 ms of latency, or people begin talking over each other; a 100 ms delay in delivering email, however, would not be noticed. Why not provide both services in the same time frame? First, there are limits to the capacity of the network that arise from limits on available spectrum. Networks are built to handle peak load, and smoothing out the load by delaying some traffic produces a more consistent, less spiky traffic profile; without such smoothing, much more extensive networks would be required to handle intermittent traffic loading. Power generation and distribution has a similar goal of reducing peak loading to reduce costs. Prioritizing traffic involves scheduling traffic and allocating the needed resources within the needed time frame. Matching the data needs of many users to the radio resources at hand (e.g., bandwidth, delivery time, error correction, modulation, antenna resources, and base station(s)) is a very complex task. There can be an overwhelming number of variables to optimize for traffic flow and overall quality of service, and this challenge has only increased as the sophistication of communications technology has increased. Scheduling is certainly an open-ended research issue and will be for the foreseeable future. Nevertheless, scheduling and its ability to enable prioritization is key to improving the capacity of networks so that all of us have the best possible experience of services. The need for prioritization through a good scheduler is greater than ever as the service needs of supported applications diverge.
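The core idea of latency-driven prioritization can be illustrated with a toy scheduler. This is our own minimal sketch, not any standardized cellular scheduler: when delay-sensitive and delay-tolerant packets are queued together, the tighter latency budget is served first.

```python
import heapq

def serve(queued):
    """Toy latency-driven scheduler: serve packets in order of latency budget.

    queued: list of (latency_budget_ms, name) tuples. A real cellular
    scheduler weighs many more variables (channel quality, fairness,
    modulation, antenna resources); this shows only the prioritization idea.
    """
    heap = list(queued)
    heapq.heapify(heap)            # min-heap keyed on latency budget
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

# Voice (~100 ms budget) and video go first; bulk traffic can wait.
traffic = [(5000, "email"), (100, "voice"), (150, "video"), (5000, "backup")]
print(serve(traffic))
```

Delaying the 5000 ms-budget traffic into the gaps between voice and video bursts is exactly the load smoothing described above: the peak the network must be built for is lower, even though every packet still meets its budget.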

Fast-emerging Future

So why do cellular service companies want to avoid elements of network neutrality? They need maximum flexibility in pricing and prioritization of data to support fast-emerging wireless technologies. Basically, they know that they don't know how to do the pricing and prioritization for these new services, and hence need maximum flexibility to change strategies as needed and as the market evolves. This is especially true of 5G, where a plethora of new services is being contemplated with data rate, latency, reliability, and coverage requirements that vary by orders of magnitude from one application to the next. The deployment costs and strategies for these new technologies are yet to be determined, but they will certainly change with time. Examples of these new services and requirements are shown in Figure 1.

The envisioned usage scenarios developed by the International Telecommunication Union (ITU) for IMT-2020 and beyond, featured in Recommendation ITU-R M.2083-0, provide a useful and often used means of understanding the range of services we now need to consider. The triangle captures the idea of network demands pulling in different directions. At one apex, we see usage scenarios driven by human consumption of large amounts of video, including augmented and virtual reality experiences; at another, a connected world with massive numbers of low-cost simple sensors; and at the final apex, what are termed mission-critical applications that require superfast response times because they very often have life-and-death implications.

Figure 1. 5G Services

(Copyrighted by Dr. Reed and Dr. Tripathi. Adapted from the ITU document ITU-R M.2083-0.)
http://www.itu.int/dms_pubrec/itu-r/rec/m/R-REC-M.2083-0-201509-I!!PDF-E.pdf
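The pull in different directions is easiest to see as numbers. The sketch below summarizes the three apexes with the widely cited IMT-2020 order-of-magnitude targets; these figures are our illustrative additions, not data from this article:

```python
# Widely cited IMT-2020 order-of-magnitude targets for the three apexes
# of the ITU triangle (illustrative summary only).
scenarios = {
    "eMBB (enhanced Mobile Broadband)":     {"driven by": "human video/AR/VR consumption",
                                             "peak data rate": "20 Gbps"},
    "mMTC (massive Machine-Type Comms)":    {"driven by": "massive low-cost sensors",
                                             "connection density": "1,000,000 devices/km^2"},
    "URLLC (Ultra-Reliable Low-Latency Comms)": {"driven by": "mission-critical applications",
                                             "latency": "1 ms"},
}

for name, profile in scenarios.items():
    print(name)
    for key, value in profile.items():
        print(f"  {key}: {value}")
```

A network-neutral "all bits equal" rule would have to serve the 1 ms packet and the sensor packet identically, which is precisely the trade-off the triangle argues against.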

In a utopian 5G world, the network would be flexible and able to accommodate the varying needs of different types of services. We are starting to see the emergence of new networking technologies, such as Software-Defined Networking (SDN) and Network Functions Virtualization (NFV), that give the network the flexibility to handle these divergent services, even in 4G systems. The true utility of these technologies will become apparent as 5G systems are deployed. Nevertheless, there is an adage that "a jack of all trades is a master of none," which may describe the pitfalls of an overly broad network. Engineering is a series of compromises, trade-offs that must be made, with the best trade-off depending on what you are trying to accomplish. Hence, how resources are divided among various applications will depend on the business goals of the service provider. The future of the Internet may be multiple competitive providers, each becoming known for providing superior quality for specific services. While this vision differs from today's view of the Internet as a universal and homogeneous bit pipe, it may in the end provide consumers more choices through specialization.

Network neutrality regulations that revolve around equal priority and pricing address issues that have not materialized and are not practical for the evolving Internet. Nevertheless, there are important policy issues that need to be addressed to make way for future communication systems and their applications: specifically, policies that address data ownership, storage, monetization, and privacy/security, as well as how that data is processed in conjunction with other data. Information obtained from ubiquitous wireless devices can reveal much about a person's life. Data is valuable, and who controls that value is perhaps the more important debate going forward.

Note: Dr. Reed and Dr. Tripathi have served as consultants to the telecommunications industry on the topic of network neutrality.


Statements and opinions given in a work published by the IEEE or the IEEE Communications Society are the expressions of the author(s). Responsibility for the content of published articles rests upon the author(s), not the IEEE nor the IEEE Communications Society.