BIG DATA
An Internet 100 times as fast
A new network design that avoids the need to convert optical signals into electrical ones could boost capacity while reducing power consumption.
The heart of the Internet is a network of high-capacity optical fibers that spans continents. But while optical signals transmit information much more efficiently than electrical signals, they’re harder to control. The routers that direct traffic on the Internet typically convert optical signals to electrical ones for processing, then convert them back for transmission, a process that consumes time and energy.
In recent years, however, a group of MIT researchers led by Vincent Chan, the Joan and Irwin Jacobs Professor of Electrical Engineering and Computer Science, has demonstrated a new way of organizing optical networks that, in most cases, would eliminate this inefficient conversion process. As a result, it could make the Internet 100 or even 1,000 times faster while actually reducing the amount of energy it consumes.
One of the reasons that optical data transmission is so efficient is that different wavelengths of light loaded with different information can travel over the same fiber. But problems arise when optical signals coming from different directions reach a router at the same time. Converting them to electrical signals allows the router to store them in memory until it can get to them. The wait may be a matter of milliseconds, but there’s no cost-effective way to hold an optical signal still for even that short a time.
Chan’s approach, called “flow switching,” solves this problem in a different way. Between locations that exchange large volumes of data — say, Los Angeles and New York City — flow switching would establish a dedicated path across the network. For certain wavelengths of light, routers along that path would accept signals coming in from only one direction and send them off in only one direction. Since there’s no possibility of signals arriving from multiple directions, there’s never a need to store them in memory.
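The contrast with a conventional router can be sketched in a few lines of Python. The toy classes below are purely illustrative and are not drawn from Chan's papers; they only show why a node that accepts a given wavelength from a single fixed direction, and forwards it in a single fixed direction, never needs the electronic buffering described above.

```python
# Illustrative sketch only (not Chan's actual design). A conventional router
# converts optical signals to electrical form so that packets contending for
# the same output can wait in memory; a flow-switched node has exactly one
# permitted input and one permitted output for a wavelength, so nothing ever
# needs to be stored.

from collections import deque

class ConventionalRouter:
    """Queues packets electronically until the contested output port is free."""
    def __init__(self):
        self.buffer = deque()            # electronic memory: the costly step

    def receive(self, packet, output_port):
        self.buffer.append((packet, output_port))   # must hold until the port clears

class FlowSwitchedNode:
    """Forwards one wavelength from a single fixed input to a single fixed output."""
    def __init__(self, in_port, out_port):
        self.in_port, self.out_port = in_port, out_port

    def receive(self, packet, in_port):
        assert in_port == self.in_port   # only one direction is ever possible
        return self.out_port             # pass straight through, no buffering
```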
Reaction time
To some extent, something like this already happens in today’s Internet. A large Web company like Facebook or Google, for instance, might maintain huge banks of Web servers at a few different locations in the United States. The servers might exchange so much data that the company will simply lease a particular wavelength of light from one of the telecommunications companies that maintain the country’s fiber-optic networks. Along that designated pathway, no other Internet traffic can use that wavelength.
In this case, however, the allotment of bandwidth between the two endpoints is fixed. If for some reason the company’s servers aren’t exchanging much data, the bandwidth of the dedicated wavelength is being wasted. If the servers are exchanging a lot of data, they might exceed the capacity of the link.
In a flow-switching network, the allotment of bandwidth would change constantly. As traffic between New York and Los Angeles increased, new, dedicated wavelengths would be recruited to handle it; as the traffic tailed off, the wavelengths would be relinquished. Chan and his colleagues have developed network management protocols that can perform these reallocations in a matter of seconds.
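The article does not describe those protocols in detail, but the basic reallocation loop can be sketched as follows. Everything here is an illustrative assumption: the per-wavelength capacity, the thresholds, and the polling structure are invented for the example, not taken from the MIT researchers' protocols.

```python
# Hedged sketch of dynamic wavelength allocation between a pair of endpoints:
# recruit wavelengths as offered traffic grows, release them as it tails off.

WAVELENGTH_CAPACITY_GBPS = 10.0      # assumed per-wavelength capacity

class FlowManager:
    def __init__(self):
        self.active_wavelengths = 1  # always keep at least one lit

    def reallocate(self, offered_traffic_gbps):
        """Adjust the number of dedicated wavelengths to the current demand."""
        capacity = self.active_wavelengths * WAVELENGTH_CAPACITY_GBPS
        if offered_traffic_gbps > 0.8 * capacity:
            self.active_wavelengths += 1          # recruit another wavelength
        elif (offered_traffic_gbps < 0.4 * capacity
              and self.active_wavelengths > 1):
            self.active_wavelengths -= 1          # relinquish an idle wavelength
        return self.active_wavelengths

# Example: traffic between New York and Los Angeles ramps up and back down.
mgr = FlowManager()
for demand in [2, 9, 15, 25, 12, 3]:              # offered traffic in Gb/s
    print(demand, "Gb/s ->", mgr.reallocate(demand), "wavelength(s)")
```

Running the example shows wavelengths being recruited as demand between the two endpoints ramps up and relinquished again as it falls away, which is the behavior the researchers' management protocols achieve in a matter of seconds.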
In a series of papers published over a span of 20 years — the latest of which will be presented at the OptoElectronics and Communications Conference in Japan next month — they’ve also performed mathematical analyses of flow-switched networks’ capacity and reported the results of extensive computer simulations. They’ve even tried out their ideas on a small experimental optical network that runs along the Eastern Seaboard.
Their conclusion is that flow switching can easily increase the data rates of optical networks 100-fold, and possibly 1,000-fold with further improvements to the network-management scheme. Their recent work has focused on the power savings that flow switching offers: in most applications of information technology, power can be traded for speed and vice versa, and the researchers are trying to quantify that relationship. Among other things, they’ve shown that even with a 100-fold increase in data rates, flow switching could still reduce the Internet’s power consumption.
Growing appetite
Ori Gerstel, a principal engineer at Cisco Systems, the largest manufacturer of network routing equipment, says that several other techniques for increasing the data rate of optical networks, with names like burst switching and optical packet switching, have been proposed, but that flow switching is “much more practical.” The chief obstacle to its adoption, he says, isn’t technical but economic. Implementing Chan’s scheme would mean replacing existing Internet routers with new ones that don’t have to convert optical signals to electrical signals. But, Gerstel says, it’s not clear that there’s currently enough demand for a faster Internet to warrant that expense. “Flow switching works fairly well for fairly large demand — if you have users who need a lot of bandwidth and want low delay through the network,” Gerstel says. “But most customers are not in that niche today.”
But Chan points to the explosion in popularity of both Internet video and high-definition television in recent years. If those two trends converge — if people begin hungering for high-definition video feeds directly to their computers — flow switching may make financial sense. Chan points at the 30-inch computer monitor atop his desk in MIT’s Research Lab of Electronics. “High resolution at 120 frames per second,” he says: “That’s a lot of data.”
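A rough, back-of-envelope calculation suggests how much. The figures below are assumptions for illustration: a 30-inch panel is taken to be 2560 × 1600 pixels at 24 bits per pixel, uncompressed.

```python
# Illustrative estimate of the raw data rate behind Chan's remark.
pixels = 2560 * 1600                     # assumed 30-inch panel resolution
bits_per_frame = pixels * 24             # 24 bits per pixel, uncompressed
rate_bps = bits_per_frame * 120          # 120 frames per second
print(f"{rate_bps / 1e9:.1f} Gb/s")      # roughly 11.8 Gb/s before compression
```

Even for a single screen, that works out to nearly 12 gigabits per second before any compression is applied.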