By CTN Editor in Chief Steven Weber
In May of this year, the thirty-third annual IEEE International Conference on Computer Communications, better known as INFOCOM, was held in Toronto, Canada. One of the eleven workshops held at the conference was the Third Workshop on Smart Data Pricing (SDP). The term SDP refers to any dynamic context-dependent mechanism used by a service or content provider to set the price charged to an end user in exchange for handling a content (data) request. The context from which the price is computed may incorporate the request time, the user location, the application originating the request, the current data usage pattern on the network, the overall level of network congestion, the type of data being requested, or any other potentially relevant aspect of the content request.
The scope of the SDP Workshop included context-dependent dynamic pricing, WiFi offloading and caching, pricing and billing, content distribution schemes, and several other topics [SDP 2014]. The very presence of the SDP workshops reflects a growing optimism, in both academia and industry, that SDP may be an idea whose time has come. This optimism is partly rooted in the need for cellular and content providers to find a way to cope with the rapidly growing disparity between user demand for wireless data and the capacity of existing infrastructure to handle that demand. It also reflects an evolving understanding, among researchers working in this area, of the rich design space for SDP, which sits at the intersection of network economics, game theory, protocol design, and human psychology.
This month’s ComSoc Technology News (CTN) summarizes three exciting papers from the 2014 SDP Workshop. Although each of the papers from the workshop held significant merit, the key objective behind our selection was to identify three papers that represented the diverse scope of problems and proposed solutions within the SDP community.
Three papers from the 2014 Smart Data Pricing Workshop
Our first focus paper is by Yue Jin (Bell Labs, Alcatel-Lucent, Ireland) and Zhan Pang (Lancaster University, UK), entitled “Smart Data Pricing: To Share or Not to Share?”. The paper investigates the economics of shared data plans. Shared data plans are widespread in today’s market, including both plans shared across multiple devices owned by a single user and plans shared across multiple users with a shared account. The authors consider two pricing strategies by a monopolist telecom operator servicing a simplified market where each user subscribes to two mobile devices (e.g., a phone and a tablet): partitioned pricing (a separate data plan for each device) and bundled pricing (a shared data plan across the user’s two devices).
Our second focus paper is by John Tadrous, Atilla Eryilmaz, and Hesham El Gamal (The Ohio State University, USA), entitled “Joint Pricing and Proactive Caching for Data Services: Global and User-centric Approaches”. The key idea behind proactive caching is that our behavior as consumers is often highly predictable, e.g., when watching a television series over a streaming video service we will almost always watch the episodes in order, meaning a “smart” provider could proactively cache the next episode onto our device during a period of low congestion. The authors demonstrate that the benefits of proactive caching extend to both service providers (in the form of increased profits) and users (in the form of reduced expected payments).
Our third and final focus paper is by Di Niu (University of Alberta, Canada) and Baochun Li (University of Toronto, Canada), entitled “Congestion-Aware Internet Pricing for Media Streaming”. Inspired by proposed congestion-aware road pricing policies that charge a vehicle in proportion to its distance traveled, the authors propose that streaming media providers pay Internet service providers (ISPs) a rate per unit time and per link proportional to the product of the streaming rate times the packet transmission delay on the link. Such a rate is congestion-aware since congested links will produce longer packet transmission delays.
Additional resources on Smart Data Pricing
The first Smart Data Pricing Workshop [SDP 2012] was held at Princeton University in July, 2012, and was organized by Princeton Professor Mung Chiang’s EDGE Lab. This workshop featured a keynote presentation by Andrew Odlyzko (University of Minnesota, USA) covering his thought-provoking article on SDP [Odlyzko 2013].
Soumya Sen, Carlee Joe-Wong, Sangtae Ha, and Mung Chiang (Princeton University, USA) have written a survey on SDP [Sen et al. 2013] tracing the evolution of thought on pricing data in both the academic and industrial communities. The same authors have also edited a compilation of articles on SDP to be published this month by Wiley [Sen et al. 2014].
In addition to the more “academic” topics addressed by the SDP community (of which the three papers in this special issue are representative), there is also a groundswell of industrial activity focused on translating some of the theory that has been developed into practice. We mention two companies recently founded by members of the SDP community. First, DataMi [DataMi 2014], a company spun out of the EDGE Lab, has several applications for SDP-related tasks, including DataWiz (for monitoring data usage), TUBE (time-dependent pricing), IDS (intelligent demand shaping), and MAP (measurement, analytics, profiling). Second, inmobly [inmobly 2014], a company founded by Hesham El Gamal (The Ohio State University, USA), offers a mobile video content delivery system based on its TruEdge platform, which “pre-positions” (caches) user-targeted video.
In conclusion, SDP is an exciting initiative within the ComSoc community, motivated by practical challenges faced by content providers and service providers. Researchers and entrepreneurs have risen to this challenge, and the papers summarized in this special issue reflect this exciting intellectual landscape.
1. Smart Data Pricing: To Share or Not to Share?
Authors: Yue Jin (Bell Labs, Alcatel-Lucent, Ireland) and Zhan Pang (Lancaster University, UK)
Title: “Smart Data Pricing: To Share or Not to Share?”
Publication: 2014 IEEE INFOCOM Workshop on Smart Data Pricing
A shared data plan is one where the usage price and/or quota for a single account covers data consumed across multiple devices and/or multiple users. “Family plans” currently advertised by service providers are a representative example, and offer an appealing (to some) alternative to the conventional model of one data plan per device. It is natural to ask under what conditions a service provider finds profit in offering such plans to its customer base.
This paper compares “bundled” (shared) data plans with “partitioned” data plans in a simplified market environment with a single (monopolist) service provider, servicing an idealized population of independent users, each of which owns two wireless devices (say, a smart phone and a tablet).
The objective of the service provider is to maximize profits by selecting prices. In the case of partitioned data plans, the service provider selects two separate prices per time period, one for the smart phone and a second for the tablet. In the case of bundled data plans, the service provider selects a single price for the pair of devices.
The two types of devices (smart phones and tablets) are assumed to have different maximum data usages, and each user’s usage of each device is modeled as a random variable, uniformly distributed between zero and the maximum data usage. This randomness is included to capture the heterogeneity of user data consumption within the population, and the authors assume this usage equals the user’s utility for the device.
Once the ISP has advertised its price or prices, each user subscribes to a service if its net utility (usage minus price) is positive. Under a partitioned data plan, each user makes two separate subscription decisions, one per device, subscribing for a device only if the user’s usage on that device exceeds the advertised price. Under a bundled data plan, each user makes a single decision, subscribing for both devices only if the user’s sum usage over both devices exceeds the advertised price.
With this basic model in place, the natural questions are how to set the prices under each of the two plans so as to maximize the profit to the ISP, and what the corresponding maximum profits are. The authors give the optimal prices and profits for both data plans in terms of the model parameters, i.e., the maximum data usage on each device and the ISP’s operating cost per unit of data. With these in hand, the authors can then compare both the optimal prices and the profits across the two pricing schemes. Theorem 1 asserts that the maximum profit under bundling exceeds that under partitioning if and only if the ISP’s operating cost per unit of data is sufficiently small (less than one-half), and that the optimal bundled price is less than the sum of the two optimal partitioned prices.
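The flavor of this comparison can be illustrated numerically. The toy instance below is our own construction, not the paper’s exact model: both devices’ maximum usage is normalized to one, each plan charges a flat price, and the provider pays an operating cost c per unit of data actually consumed by subscribers; a crude grid search over prices then compares the two schemes.

```python
# Toy comparison of partitioned vs. bundled pricing for a monopolist.
# Assumptions (ours, not the paper's): each user's usage (= utility) on
# each of two devices is Uniform(0, 1); the provider pays an operating
# cost c per unit of data consumed by subscribers.

def partitioned_profit(p, c):
    """Expected profit per user from one device priced at p: a user
    subscribes iff usage u >= p, so P(subscribe) = 1 - p and
    E[u * 1{u >= p}] = (1 - p**2) / 2."""
    return p * (1.0 - p) - c * (1.0 - p * p) / 2.0

def bundled_profit(p, c):
    """Expected profit per user from a single price p for both devices:
    subscribe iff s = u1 + u2 >= p, where s has the triangular density
    on [0, 2]."""
    if p <= 1.0:
        prob = 1.0 - p * p / 2.0
        usage = (1.0 - p ** 3) / 3.0 + 2.0 / 3.0
    else:
        prob = (2.0 - p) ** 2 / 2.0
        usage = 4.0 / 3.0 - p * p + p ** 3 / 3.0
    return p * prob - c * usage

def grid_max(f, lo, hi, n=2000):
    """Crude grid search for the maximum of f on [lo, hi]."""
    return max(f(lo + (hi - lo) * i / n) for i in range(n + 1))

for c in (0.1, 0.8):
    part = 2.0 * grid_max(lambda p: partitioned_profit(p, c), 0.0, 1.0)
    bund = grid_max(lambda p: bundled_profit(p, c), 0.0, 2.0)
    print(f"c={c}: partitioned={part:.3f}, bundled={bund:.3f}")
```

In this instance, bundling is more profitable at the low operating cost (c = 0.1) and partitioning at the high one (c = 0.8), matching the qualitative statement of Theorem 1; the exact one-half threshold applies to the authors’ model, not necessarily to this simplified sketch.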
The authors present several additional results beyond Theorem 1 that further characterize the two data plans; we do not discuss them here, as a precise nonmathematical statement of those results is more difficult.
Overall, the paper presents an elegant abstraction of the question of data plan sharing from the perspective of a monopoly service provider. The (idealized) findings state that optimized bundling yields superior profit to optimized partitioning when the operating costs of the service provider are sufficiently low.
2. Joint Pricing and Proactive Caching for Data Services: Global and User-centric Approaches
Authors: John Tadrous, Atilla Eryilmaz, and Hesham El Gamal (The Ohio State University, USA)
Title: “Joint Pricing and Proactive Caching for Data Services: Global and User-centric Approaches”
Publication: 2014 IEEE INFOCOM Workshop on Smart Data Pricing
A vexing problem for the service provider is the (typically) high ratio of daily peak demand to average demand, especially in the context of streaming media. The service provider must provision sufficient resources to handle the peak demand (at potentially significant cost), but these resources then sit idle during off-peak times (with correspondingly reduced profits). An appealing way to lower this ratio is for the service provider to prefetch content during off-peak times in order to reduce the network load during peak times. For this prefetching to be effective, the provider must either have a very good characterization of user streaming preferences in each time period, or must be able to incentivize the user to view the cached material. These two options are by no means exclusive: the provider can prefetch content based on user preference profiles, as well as set prices for streaming to incentivize users to view cached content. This paper studies the optimal caching and pricing policies in this setting.
As a simplifying but inessential condition, the authors focus on the case of a single user and a single service provider, with a fixed collection of items (streaming content). The user’s “demand profile” is characterized by the probability that the user will select each item to watch in each time period; the user then selects at most one item to watch per period, in accordance with these probabilities. The probabilities are in turn influenced by the prices the service provider sets on each item in each time period.
In the proactive architecture, the provider selects items to prefetch for the user in advance, as well as a price on each item in the collection. This leads to a natural “joint pricing and proactive download profit maximization problem” (Equation 3) over the proactive downloads and prices. By leveraging the assumed cyclostationarity of the user demand profile (i.e., a user’s demand varies over the course of a day, but has the same daily variation across days), the authors are able to focus on a single “cycle” (day). Unfortunately, the associated optimization problem has a non-concave objective, making a general explicit solution difficult to obtain. Nonetheless, the optimization can be tackled iteratively: first fix the proactive downloads and optimize over prices, then fix the prices and optimize over the proactive downloads, and so on. In this manner, successively better solutions may be obtained.
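The iterative procedure just described is an instance of block-coordinate ascent. The sketch below is a generic illustration over a toy non-concave objective of our own choosing, not the paper’s profit function; the point is only that alternately optimizing one block of variables while holding the other fixed can never decrease the objective, so the method climbs to a (possibly local) optimum.

```python
import math

def coordinate_ascent(f, grid_x, grid_y, x, y, iters=20):
    """Block-coordinate ascent over finite grids: alternately optimize
    one variable while holding the other fixed. Since the current point
    is always a candidate, each step cannot decrease the objective."""
    history = [f(x, y)]
    for _ in range(iters):
        x = max(grid_x, key=lambda xx: f(xx, y))  # e.g., fix downloads, optimize prices
        y = max(grid_y, key=lambda yy: f(x, yy))  # e.g., fix prices, optimize downloads
        history.append(f(x, y))
    return x, y, history

# A toy non-concave objective (a stand-in, NOT the paper's profit function).
f = lambda x, y: math.sin(3.0 * x) * math.cos(2.0 * y) + 0.5 * x * y
grid = [i / 100.0 for i in range(101)]  # decision variables restricted to [0, 1]
x, y, hist = coordinate_ascent(f, grid, grid, x=0.0, y=0.0)
print(f"reached ({x:.2f}, {y:.2f}) with objective {hist[-1]:.3f}")
```

For a non-concave objective such schemes guarantee monotone improvement, but not convergence to the global optimum, which is consistent with the authors treating the result as an approximation.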
The authors next consider a slightly different problem formulation, where the user controls the proactive download decisions and the ISP sets the price on the media content. The user wishes to minimize the costs its requests incur at the ISP, and the ISP wishes to maximize profit. This scenario is an instance of a coordination game, and due to the convexity of the profit function, the (Nash) equilibrium of this game is obtained by the two players (the user and the ISP) iteratively optimizing their respective objectives, each holding the other player’s control fixed. Under this scenario, the authors establish (Theorems 3 and 4) that optimized proactive downloads, with suitably optimized prices, yield lower user payments and increased ISP profits compared with a system without proactive downloads.
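Best-response dynamics of this kind can be sketched with a deliberately simple stand-in game; the payoff functions below are our own illustrative assumptions and are not taken from the paper.

```python
def best_response_dynamics(iters=50, tol=1e-9):
    """Toy two-player game (our own payoffs, not the paper's): the ISP
    picks a price p maximizing profit p * (1 - p + 0.5*d), where demand
    grows with the user's proactive-download fraction d; the user picks
    d in [0, 1] minimizing payment p * (1 - d) + 0.2 * d**2 (price paid
    on non-prefetched traffic plus a quadratic storage cost). Each side
    best-responds in turn until neither changes its decision."""
    p, d = 0.0, 0.0
    for _ in range(iters):
        p_new = (1.0 + 0.5 * d) / 2.0            # ISP best response (concave profit)
        d_new = min(1.0, max(0.0, p_new / 0.4))  # user best response (convex cost)
        if abs(p_new - p) < tol and abs(d_new - d) < tol:
            return p_new, d_new
        p, d = p_new, d_new
    return p, d

p, d = best_response_dynamics()
print(p, d)  # this toy instance settles at p = 0.75, d = 1.0
```

The fixed point of the iteration is a Nash equilibrium of the toy game: at that point neither the price nor the download fraction can be improved unilaterally.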
In summary, the paper studies a natural scenario where users and service providers help each other in a distributed manner. Users select items for proactive download based on the advertised prices and their own viewing preferences, and the ISP sets prices on content in response to the cached state and the corresponding profit.
3. Congestion-Aware Internet Pricing for Media Streaming
Authors: Di Niu (University of Alberta, Canada) and Baochun Li (University of Toronto, Canada)
Title: “Congestion-Aware Internet Pricing for Media Streaming”
Publication: 2014 IEEE INFOCOM Workshop on Smart Data Pricing
Taking inspiration from proposed congestion-dependent road pricing policies, where a vehicle is charged in proportion to the distance it travels along the road, the authors propose a new metric for congestion-sensitive pricing of streaming media content over a network. In particular, the ISP computes the product of the (instantaneous) streaming media rate (the application throughput) times the (instantaneous) packet transmission delay on each link. The overall cost to the content consumer is computed by summing this quantity over all links between the content source and destination. A little thought shows that this metric is equivalent to computing the so-called bandwidth-delay product on each link: the bandwidth-delay product of a flow of packets over a link is the product of the rate at which packets are transmitted onto the link times the delay experienced by a packet on that link, and equals the amount of the flow’s data on the link at any point in time. The proposed pricing mechanism is congestion-dependent, since the transmission delay on a link increases with the link’s congestion level.
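The proposed charge is easy to state in code. The sketch below (with an assumed tariff parameter of our own choosing) computes a flow’s per-unit-time cost as the sum of per-link bandwidth-delay products.

```python
def path_cost(stream_rate_mbps, link_delays_s, price_per_mbit=1.0):
    """Congestion-aware cost per unit time for one streaming flow: the
    sum over links of (streaming rate x per-link packet delay), i.e. the
    sum of per-link bandwidth-delay products, which equals the amount of
    the flow's data in flight on each link. The tariff price_per_mbit is
    an assumed parameter, not a value from the paper."""
    return price_per_mbit * sum(stream_rate_mbps * d for d in link_delays_s)

# A 4 Mbit/s stream crossing three links; the congested middle link
# (30 ms) contributes half the charge.
print(path_cost(4.0, [0.010, 0.030, 0.020]))
```

Because delay on a congested link grows with load, the same stream becomes more expensive to carry along congested paths, which is precisely the incentive the pricing scheme is designed to create.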
The authors then consider some implications of such a pricing policy for media content providers. Content providers would wish to form minimum-cost overlay networks to connect with their clients. By minimizing the sum of bandwidth-delay products, a content provider by definition minimizes the amount of “waiting data” it places on the network. Focusing on the particular case of streaming video multicast, each multicast session wishes to find the collection of relays for which the resulting distribution tree has minimum cost. Applications such as Google Hangouts, FaceTime, and Skype need to form exactly these kinds of overlay networks.
To approximate this hard problem, the authors study a “network coordinate system” (in their words, a “delay space”), in which the distance between two nodes determines the delay between them, and in which idealized relay locations must eventually be mapped back to the actual underlying network. Specifically, given the spatial locations of the terminal nodes to be connected, and a limit on the allowed number of relay nodes, the goal is to place the relays in space so as to form a multicast tree of minimum cost.
Unfortunately, the resulting optimization problem is non-convex, and as such the authors investigate an appealing approximation based on the expectation-maximization (EM) paradigm. In this context, the algorithm iteratively solves for the optimal flow given a fixed set of relay positions (a linear program), then solves for the optimal relay positions given the flow (a convex program).
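The alternating structure can be illustrated with a simplified planar analogue: fix the relay positions and route each terminal through its cheapest relay (the “flow” step), then fix the routing and move each relay to the centroid of its assigned terminals (the “position” step). This Lloyd-style sketch, using squared Euclidean distance as a stand-in cost, mirrors the structure, though not the details, of the authors’ EM-based algorithm.

```python
def place_relays(terminals, relays, iters=20):
    """Alternating minimization in the plane (a Lloyd-style sketch, not
    the paper's algorithm): the E-like step assigns each terminal to its
    nearest relay, the M-like step moves each relay to the centroid of
    its assigned terminals. The summed squared distance never increases."""
    def cost(assign):
        return sum((tx - relays[k][0]) ** 2 + (ty - relays[k][1]) ** 2
                   for (tx, ty), k in zip(terminals, assign))
    history = []
    for _ in range(iters):
        # "flow" step: route each terminal through its cheapest relay
        assign = [min(range(len(relays)),
                      key=lambda k: (tx - relays[k][0]) ** 2
                                    + (ty - relays[k][1]) ** 2)
                  for tx, ty in terminals]
        # "position" step: re-place each relay amid its assigned terminals
        for k in range(len(relays)):
            mine = [t for t, a in zip(terminals, assign) if a == k]
            if mine:
                relays[k] = (sum(t[0] for t in mine) / len(mine),
                             sum(t[1] for t in mine) / len(mine))
        history.append(cost(assign))
    return relays, history

terminals = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
relays, hist = place_relays(terminals, [(2.0, 2.0), (7.0, 7.0)])
print(relays, hist[-1])
```

As with the authors’ EM-style scheme, each alternation can only improve the objective, so the iteration converges, though in general only to a local optimum of the non-convex placement problem.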
The simulation results demonstrate that this optimized approach to relay and flow selection can yield significant savings in overall cost for multicast sessions, relative to the naïve approach of sending the content directly.
In summary, this paper proposes a natural congestion-dependent pricing principle for streaming content, and uses this metric to illustrate a principled approach to forming multicast session trees.
[Odlyzko 2013] Andrew Odlyzko, “Will smart pricing finally take off?”, Available at http://www.dtc.umn.edu/~odlyzko, dated October 20, 2013.
[SDP 2014] The IEEE INFOCOM Workshop on Smart Data Pricing (SDP), Available at http://infocom2014.ieee-infocom.org/Workshops_SDP.html
[Sen et al. 2013] Soumya Sen, Carlee Joe-Wong, Sangtae Ha, and Mung Chiang, “A Survey of Smart Data Pricing: Past Proposals, Current Plans, and Future Trends”, ACM Computing Surveys, Vol. 46, No. 2, Article 15, November, 2013.
[Sen et al. 2014] Soumya Sen, Carlee Joe-Wong, Sangtae Ha, and Mung Chiang, Smart Data Pricing, Wiley Series on Information and Communication Technology, September, 2014.
[DataMi 2014] Available at http://www.datami.com
[inmobly 2014] Available at http://www.inmobly.com
[SDP 2012] Available at http://scenic.princeton.edu/SDP2012/