Next Generation Data Center Networks for Cloud Computing

Free for members

Abstract:
Large-scale data centers are enabling the new era of Internet cloud computing. The computing platform in such data centers consists of low-cost commodity servers that, in large numbers and with software support, match the performance and reliability of yesterday's expensive enterprise-class servers at a fraction of the cost. The network interconnect within the data center, however, has not seen the same degree of commoditization or the same drop in price. Today's data centers use expensive enterprise-class networking equipment and associated best practices that were not designed for the requirements of Internet-scale data center services: they severely limit server-to-server network capacity, create fragmented pools of servers that do not allow any service to run on any server, and suffer from poor reliability and utilization. The commoditization and redesign of data center networks to meet cloud computing requirements is the next frontier of innovation in the data center.

Recent research in data center networks addresses many of these aspects, involving both scale and commoditization. By creating large, flat Layer 2 networks, data centers can present hosted services with the view of a single unfragmented pool of servers. By applying traffic engineering methods (based on both oblivious and adaptive routing techniques) over specialized network topologies, the data center network can handle arbitrary and rapidly changing communication patterns between servers. Making data centers modular, so that they can grow incrementally, reduces the up-front infrastructure investment and thus improves their economic feasibility. This is an exciting time to work in data center networking: the industry is on the cusp of big changes, driven by the need to run Internet-scale services, enabled by the availability of low-cost commodity switches and routers, and fostered by creative and novel architectural innovations.
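To make the oblivious-routing idea concrete, the following is a minimal Python sketch of Valiant Load Balancing, the randomized two-hop scheme used in spirit by designs such as VL2. The switch names and the helper function are illustrative assumptions, not taken from the tutorial or any particular system.

    import random

    # Valiant Load Balancing (VLB) sketch: every flow is first sent to a
    # randomly chosen intermediate (core) switch, then forwarded to its
    # destination. The random detour spreads arbitrary, rapidly changing
    # traffic patterns evenly over the core without measuring load.
    CORE_SWITCHES = ["core-1", "core-2", "core-3", "core-4"]  # hypothetical names

    def vlb_path(src_tor, dst_tor):
        """Return a two-hop path: source ToR -> random core switch -> destination ToR."""
        intermediate = random.choice(CORE_SWITCHES)  # oblivious: ignores current load
        return [src_tor, intermediate, dst_tor]

    # Successive flows between the same pair of racks may take different core
    # switches, so no single core link is overloaded by a hot src/dst pair.
    for _ in range(3):
        print(vlb_path("tor-a", "tor-b"))

An adaptive scheme (as opposed to this oblivious one) would instead pick the intermediate switch based on observed link utilization.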

In this tutorial, we will begin with an introduction to data centers for Internet/cloud services. We will survey several next-generation data center network designs that meet the criteria of allowing any service to run on any server in a flat, unfragmented pool of servers and of providing bandwidth guarantees for arbitrary communication patterns among servers (limited only by server line card rates). These span efforts from academia and industry research labs, including VL2, Portland, SEATTLE, Hedera, and BCube, as well as ongoing standardization activities such as IEEE Data Center Ethernet (DCE) and IETF TRILL. We will also cover other emerging aspects of data center networking, such as energy proportionality for greener data center networks.
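As a rough back-of-the-envelope illustration of how commodity switches can deliver bandwidth guarantees limited only by server line card rates, the Python sketch below computes the standard sizing of a k-ary fat-tree, the Clos-style topology underlying designs such as Portland and Hedera. The function name and the chosen values of k are illustrative assumptions.

    # A k-ary fat-tree built from identical k-port switches has k pods,
    # each with k/2 edge and k/2 aggregation switches, plus (k/2)^2 core
    # switches, and supports k^3/4 hosts at full bisection bandwidth.
    def fat_tree_sizes(k):
        """Host and switch counts for a k-ary fat-tree of k-port switches."""
        assert k % 2 == 0, "k must be even"
        pods = k
        edge_per_pod = k // 2
        agg_per_pod = k // 2
        core = (k // 2) ** 2
        hosts = (k ** 3) // 4        # (k/2) hosts per edge switch * (k/2) edge switches * k pods
        switches = pods * (edge_per_pod + agg_per_pod) + core
        return hosts, switches

    for k in (4, 24, 48):
        hosts, switches = fat_tree_sizes(k)
        print(f"k={k:2d}-port switches: {hosts:6d} hosts at full bisection, {switches} switches total")

For example, 48-port commodity switches suffice for 27,648 hosts at full bisection bandwidth, which is the scale argument behind commoditizing the data center interconnect.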

Bio:

Sudipta Sengupta is currently at Microsoft Research, where he works on data center systems and networking, peer-to-peer applications, wireless networking, flash memory for cloud/server applications, and data deduplication. Previously, he spent five years at Bell Laboratories, the R&D Division of Lucent Technologies, where he worked on Internet routing, optical switching, network security, wireless networks, and network coding. Before that, he was with Tellium, an optical networking pioneer that grew from an early-stage startup to a public company during his tenure there. Earlier, he was part of a foundational team at Oracle, working on the company's first mobile applications product offering.
Dr. Sengupta received a Ph.D. and an M.S. in EECS from the Massachusetts Institute of Technology (MIT), USA, and a B.Tech. in Computer Science from the Indian Institute of Technology (IIT), Kanpur, India. He was awarded the President of India Gold Medal at IIT-Kanpur for graduating at the top of his class. Dr. Sengupta has published 65+ research papers in some of the top conferences, journals, and technical magazines, and has authored 40+ patents (granted or pending) in the area of computer systems and networking. He has taught advanced courses and tutorials at many academic/research and industry conferences. His recent work on networking and storage has received widespread coverage in the media, press, and blogs.
Dr. Sengupta won the IEEE Communications Society William R. Bennett Prize for his work on a new oblivious routing scheme for handling highly variable Internet traffic, and the IEEE Communications Society Leonard G. Abraham Prize for his adaptation of the scheme to IP-over-Optical networks. At Bell Labs, he received the President's Teamwork Achievement Award for the technology transfer of research into Lucent products. His work on peer-to-peer distribution of real-time layered video received the IEEE ICME 2009 Best Paper Award. At Microsoft, he received the Gold Star Award, which recognizes excellence in leadership and contributions to Microsoft's long-term success. Dr. Sengupta is a Senior Member of the IEEE.

Type: Tutorial

Duration: 2 hours 49 minutes
