Written By: Ender Ayanoglu, University of California, Irvine
Published: 13 Nov 2019


Data centers are reported to account for between 1% and 3% of electricity consumption in the United States and worldwide. With the new era of the Internet-of-Things and the increased use of artificial intelligence, these numbers can be expected to grow substantially. This article investigates the problem and presents the current situation and potential future developments.

1. Introduction

It is not possible to think about computing and networking today without considering data centers. They are facilities where computing and networking equipment is concentrated for the purposes of collecting, storing, processing, distributing, or allowing access to large amounts of data. Their construction costs about $20B a year worldwide [1], and they cause approximately as much CO2 emission as the airline industry [2]. According to a report by Lawrence Berkeley National Laboratory, data centers in the United States consumed about 70 billion kilowatt hours of energy in 2014, corresponding to 1.8% of all the electricity consumed in the country that year [3]. The United States Department of Energy currently states that up to 3% of all electricity may be consumed by data centers today [4]. Globally, data centers consumed an estimated 198 terawatt hours of electricity in 2018, or almost 1% of worldwide electricity demand [5]. The electrical energy consumption of the information technology industry across the globe is estimated to be exceeded only by that of the United States and China, and to exceed that of the third country on the list [1]. Meanwhile, global Internet traffic tripled between 2015 and 2019 and is expected to double again by 2022 [6]. The increased use of artificial intelligence and the large number of sensors expected to be deployed in the Internet-of-Things era raise the question of how much further energy use in data centers will grow. The crucial questions are whether it will increase exponentially, as Internet data traffic is growing, and what this growth will mean for CO2 emissions worldwide. The latter question becomes important when one considers that 63.5% of electricity generation in the United States during 2018 was from fossil fuels [7].

2. Significance of the Problem

The Internet search engine Google disclosed in 2011 that its data centers continuously consume 260 megawatts, which it estimated to be sufficient to power 200,000 homes [8]. Google estimated that one Google search is responsible for emitting 0.2 grams of CO2 [9]. It also estimated its carbon footprint to be 1.5 million metric tons in 2011 [10]. Considering the increase in Internet traffic since then, that number may be even higher today. On the other hand, in 2019 the company announced that it had purchased renewable energy equivalent to 1,600 megawatts, increasing its share of renewable energy sources to 5,500 megawatts, or 40% of its total energy consumption.

Today, streaming video is the most significant part of global data traffic. The networking company Cisco predicts that streaming video will make up 82% of all Internet traffic by 2021, up from 73% in 2012. Currently, one third of Internet traffic in North America is already dedicated to streaming Netflix [1]. The global non-governmental environmental organization Greenpeace takes a strong interest in the energy consumption of Internet companies. To that end, Greenpeace began benchmarking the energy performance of the information technology sector in 2009, challenging the largest Internet companies to substantially increase their use of renewable energy. The advocacy group uses an annual report to rate big Internet and cloud companies on their use of renewable power. In its 2017 report, Greenpeace denounced Netflix for substantial energy inefficiency [11]. Netflix does not own data centers; instead it uses contractors such as Amazon Web Services (AWS). Greenpeace also denounced AWS for being completely non-transparent about the energy footprint of its massive operations. AWS has some of its largest operations in Northern Virginia, which receives less than 3% of its energy from renewable sources [1]. Decisions by Internet and cloud companies can be surprising: Northern Virginia, with such a poor record for renewable energy, has the largest concentration of data centers in the world [1]. The location where energy is generated is very important in terms of its carbon footprint. For example, generating 1 kilowatt hour of electricity releases 3 grams of CO2 in Norway, 100 grams in France, 600 grams in Virginia, and 800 grams in New Mexico [1]. Most of the energy in a data center is used to keep the processors cool, yet, very surprisingly, most of the world's largest data centers are located where the climate is hot [1]. However, through various efforts, Internet and cloud companies seem to be listening. Netflix declared in 2017 that it now purchases renewable energy certificates to match its non-renewable energy use and funds renewable energy production from sources such as wind and solar [12]. AWS has declared that it is committed to achieving 100% renewable energy use for its global infrastructure [13].
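The regional carbon-intensity figures just quoted lend themselves to a quick back-of-the-envelope comparison. The Python sketch below uses exactly those per-kilowatt-hour figures from the text; the 100-kilowatt server room running year-round is a hypothetical example, not a number from the article's sources.

```python
# Grams of CO2 emitted per kWh of electricity generated, as quoted in the text [1].
CO2_G_PER_KWH = {
    "Norway": 3,
    "France": 100,
    "Virginia": 600,
    "New Mexico": 800,
}

def annual_co2_kg(power_kw: float, region: str) -> float:
    """CO2 emitted (kg/year) by a load drawing `power_kw` continuously in `region`."""
    kwh_per_year = power_kw * 24 * 365
    return kwh_per_year * CO2_G_PER_KWH[region] / 1000.0

# A hypothetical 100 kW server room, operated around the clock:
for region in CO2_G_PER_KWH:
    print(f"{region:>11}: {annual_co2_kg(100, region):>10,.0f} kg CO2/year")
```

The same workload emits 200 times as much CO2 in Virginia as in Norway, which is why the siting of data centers matters as much as their efficiency.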

A measure used to gauge the energy efficiency of data centers is called the Power Usage Effectiveness (PUE). PUE is the ratio of the total amount of energy used by the facility to the amount used to run the processors. It is a number greater than or equal to 1, and it is desirable to make it as close to 1 as possible: a smaller PUE indicates that less energy is spent on operations other than running the processors. The Uptime Institute publishes average PUE figures for the industry. In 2009, the industry average was declared to be 2.5 [14]. It is encouraging that this number has been dropping very fast: in 2011, the Uptime Institute reported an industry average PUE of 1.8 [15]. A 2014 Uptime Institute study examined the PUE of cloud data centers using public disclosures by Google and Facebook plus internal AWS data, all of which show PUEs under 1.2 [13]. These numbers appear very good, since in 2008 the Uptime Institute stated that the typical data center had an average PUE of about 2.5, but that it could be reduced to about 1.6 by employing best practices [16].
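The PUE ratio just defined is simple enough to compute directly. The short Python sketch below is illustrative; the meter readings are invented numbers chosen to reproduce the industry averages quoted above, not figures from any cited source.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    PUE >= 1 by definition; the closer to 1, the less energy goes to
    cooling, power distribution, and lighting relative to computing.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly meter readings (invented numbers):
print(pue(1_250_000, 500_000))  # 2.5 -- the 2009 industry average
print(pue(600_000, 500_000))    # 1.2 -- typical of modern cloud data centers
```

A PUE of 2.5 means that for every kilowatt hour delivered to the processors, another 1.5 kilowatt hours go to overhead such as cooling; at 1.2, that overhead shrinks to 0.2 kilowatt hours.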

Figure 1. Underwater data center by Microsoft [17].

Some simple measures help improve the PUE of data centers. These are decommissioning or repurposing servers which are no longer in use, powering down servers when not in use, replacing inefficient servers, and virtualizing or consolidating servers. Technology helps as well, through intelligent power management, energy-monitoring software, and efficient cooling systems. A data center described in [4] operates continuously yet uses air conditioning only 33 hours per year, an achievement made possible by an intelligent cooling system.
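Several of the measures above, such as powering down servers when not in use, reduce to a simple utilization-threshold policy. The fragment below is an illustrative Python sketch, not any vendor's actual power-management tool; the 5% threshold and the server names and loads are assumptions made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    cpu_utilization: float  # 0.0 - 1.0, averaged over a monitoring window

IDLE_THRESHOLD = 0.05  # below 5% average utilization, treat the server as idle

def plan_power_down(servers: list[Server]) -> list[str]:
    """Return names of servers that are candidates for powering down."""
    return [s.name for s in servers if s.cpu_utilization < IDLE_THRESHOLD]

# A hypothetical fleet:
fleet = [Server("web-1", 0.42), Server("batch-7", 0.01),
         Server("db-2", 0.30), Server("legacy-3", 0.00)]
print(plan_power_down(fleet))  # ['batch-7', 'legacy-3']
```

In practice such a policy would also check for pending work and migration constraints before powering anything down, but the core decision is this threshold test.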

Some drastic measures have been taken by the information technology industry to combat the problem of energy consumption. Considering that cooling is a very important contributor to the overall PUE, the computing company Microsoft introduced an underwater data center, as depicted in Fig. 1 [18]. Of course, it is not clear that warming the world's rivers and oceans is a good practice. But at the end of the day, the demand for significantly more computation will inevitably be with us, and it is important to keep the energy efficiency of the Internet and the cloud at a high level. It is good to lower the PUE of a data center as close to 1 as possible, but it is actually necessary to go a step further and ensure that the computation itself consumes as little energy as possible.

Reference [19] lists 12 methods to reduce energy consumption in data centers. These can be grouped into changes in the information technology infrastructure, airflow management, and air conditioning management. In terms of information technology, virtualizing servers, decommissioning inactive servers, consolidating lightly used servers, removing redundant data, and investing in technologies that use energy more efficiently can provide substantial improvements. One of the airflow improvements is a "hot aisle/cold aisle" layout, in which the backs of servers face each other so that the mixing of hot and cold air is avoided. To further reduce such mixing, containing or enclosing servers is recommended, as are simple measures such as structured cabling that avoids restricting airflow. Finally, adjusting temperature and humidity, employing air conditioning with variable-speed fan drives, bringing in outside cooling air, and using the evaporative cooling capacity of a cooling tower to produce chilled water can yield significant savings.

Figure 2. Projected data center total electricity use. The solid line represents historical estimates from 2000-2014 and the dashed lines represent five projection scenarios through 2020, including Current Trends, Improved Management (IM), and Best Practices (BP) [3].
There are a number of technological achievements that improve data center energy efficiency. Reference [20] proposes a resource management system that consolidates virtual machines according to the current utilization of resources, the virtual network topologies established between virtual machines, and the thermal state of computing nodes. Virtualization is an important tool in data center energy efficiency; to that end, [21] provides a survey of existing virtualization techniques. Reference [22] introduces an optimization that schedules tasks according to their thermal potential with the goal of keeping temperatures low, and reports gains in temperature and cost reduction compared to other techniques. Traffic engineering is employed in [23] to assign virtual machines; based on experimental results, the paper reports that 50% energy savings were achieved. Reference [24] analyzes how increased ambient temperature affects each component in a data center and concludes that there is an optimum operating temperature that depends on each data center's individual characteristics. In terms of shutting down inactive servers, [25] introduces a technique that predicts the number of virtual machine requests, together with the CPU and memory resources these requests need, provides accurate estimates of the number of physical machines (PMs) that will be required, and reduces the energy consumption of cloud data centers by putting unneeded PMs to sleep. The paper shows that the technique achieves substantial savings in energy consumption.
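The consolidation theme running through [20] and [25] — packing virtual machines onto as few physical machines as possible so that the rest can be put to sleep — can be illustrated with a first-fit decreasing heuristic. This is a deliberately simplified Python sketch (a single CPU dimension, no migration costs or thermal constraints), not the algorithm of any of the cited papers.

```python
def consolidate(vm_loads: list[float], host_capacity: float = 1.0) -> list[list[float]]:
    """Pack VM CPU demands onto hosts using first-fit decreasing.

    Returns one list of VM loads per active host; hosts not in the
    result can be put to sleep to save energy.
    """
    hosts: list[list[float]] = []
    for load in sorted(vm_loads, reverse=True):  # largest VMs first
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)  # fits on an already-active host
                break
        else:
            hosts.append([load])  # open a new host only when necessary
    return hosts

vms = [0.6, 0.3, 0.5, 0.2, 0.4]  # hypothetical CPU demands as fractions of a host
packing = consolidate(vms)
print(len(packing), "hosts needed instead of", len(vms))  # 2 hosts needed instead of 5
```

Real systems must also predict future demand, as [25] does, so that sleeping machines can be woken before load arrives; the packing step itself, however, is essentially this bin-packing problem.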

3. Conclusion

The report by the Lawrence Berkeley National Laboratory [3] has an encouraging conclusion: improved energy efficiency is almost canceling out growing capacity. In 2014, data centers in the United States consumed 70 billion kilowatt hours. If energy efficiency had remained at its 2010 level, the energy consumption of data centers today would be 160 billion kilowatt hours. The surprising reality is that the estimate for 2020 is only 73 billion kilowatt hours [3]. Fig. 2 shows what total energy consumption would have been without the practices introduced around 2010 in the data center industry, what today's numbers are, and future predictions based on five different scenarios [3]. However, although short-term predictions appear to be good, there are still concerns for the longer term, such as a decade into the future [26].


  1. F. Pearce, "Energy hogs: Can the world's huge data centers be made more efficient?" Yale Environment 360 [online]. Available: made-more-efficient, Apr. 2018.
  2. A. Vaughan, “How viral cat videos are warming the planet,” The Guardian, Sep. 25, 2015.
  3. Lawrence Berkeley National Laboratory, "United States Data Center Energy Usage Report," LBNL-1005775 [online]. Available: http://eta-v2.pdf, Jun. 2016.
  4. United States Department of Energy, "Energy 101: Energy efficient data centers," [online]. Available: energy-efficient-data-centers.
  5. E. R. Masanet et al., Global Data Center Energy Use: Distribution, Composition, and Near-Term Outlook, Evanston, IL, 2018.
  6. International Energy Agency, “Data centres and data transmission networks: Tracking clean energy progress,” [online]. Available:, May 2019.
  7. U.S. Energy Information Administration, "What is U.S. electricity generation by energy source?" [online]. Available:, Mar. 2019.
  8. J. Glanz, “Google details, and defends, its use of electricity,” The New York Times, Sep. 8, 2011.
  9. U. Holzle, "Powering a Google search," [online]. Available:, Jan. 2009.
  10. D. Clark, “Google discloses carbon footprint for the first time,” The Guardian, Sep. 8, 2011.
  11. A. Rodriguez, "Greenpeace says binge-watching all those TV shows is bad for the environment," [online]. Available: says-that-binge-watching-netflix-nflx-and-amazon-prime-amzn-is-bad-for-the-environment/, Jan. 2017.
  12. N. Hunt, “Renewable energy at Netflix: An update,” [online]. Available: update, Jun. 2017.
  13. Amazon Web Services, "AWS & sustainability," [online]. Available:, 2019.
  14. M. Fontecchio and M. Rouse, "Power usage effectiveness (PUE)," [online]. Available: definition/power-usage-effectiveness-PUE, Apr. 2009.
  15. R. Miller, "Uptime Institute: The average PUE is 1.8," [online]. Available: 05/10/uptime-institute-the-average-pue-is-1-8, May 2011.
  16. M. Szalkus, "What is Power Usage Effectiveness?" [online]. Available:, Dec. 2008.
  17. CBCI News, “Microsoft’s underwater datacenter now has live video feeds for your viewing pleasure,” Aug. 2018.
  18. I. Paul, "Microsoft's audacious Project Natick wants to submerge your data in the oceans," [online]. Available: to-submerge-your-data-in-the-oceans.html, Feb. 2016.
  19. Energy Star, "12 ways to save energy in data centers and server rooms," [online]. Available: Top12-Brochure-Final.pdf.
  20. A. Beloglazov and R. Buyya, "Energy efficient resource management in virtualized cloud data centers," in 2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, May 2010, pp. 826–831.
  21. M. F. Bari, R. Boutaba, R. Esteves, L. Z. Granville, M. Podlesny, M. G. Rabbani, Q. Zhang, and M. F. Zhani, "Data center network virtualization: A survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 2, pp. 909–928, Second Quarter 2013.
  22. Q. Tang, S. K. S. Gupta, and G. Varsamopoulos, "Energy-efficient thermal-aware task scheduling for homogeneous high-performance computing data centers: A cyber-physical approach," IEEE Transactions on Parallel and Distributed Systems, vol. 19, no. 11, pp. 1458–1472, Nov. 2008.
  23. L. Wang, F. Zhang, J. A. Aroca, A. V. Vasilakos, K. Zheng, C. Hou, D. Li, and Z. Liu, “GreenDCN: A general framework for achieving energy efficiency in data center networks,” IEEE Journal on Selected Areas in Communications, vol. 32, no. 1, pp. 4–15, Jan. 2014.
  24. M. K. Patterson, “The effect of data center temperature on energy efficiency,” in 2008 11th Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems, May 2008, pp. 1167–1174.
  25. M. Dabbagh, B. Hamdaoui, M. Guizani, and A. Rayes, "Energy-efficient resource allocation and provisioning framework for cloud data centers," IEEE Transactions on Network and Service Management, vol. 12, no. 3, pp. 377–391, Sep. 2015.
  26. N. Jones, “How to stop data centres from gobbling up the world’s electricity,” Nature, vol. 561, pp. 163–166, Sep. 2018.

