OFC/NFOEC 2011 to Feature Advances in Wireless, Fiber, Multiplexing, Data Center Processing, Cloud Computing, Network Capacity
Date: Tue, 03/08/2011 - 18:40
The world's largest international conference on optical communications is taking place at the Los Angeles Convention Center. The Optical Fiber Communication Conference and Exposition/National Fiber Optic Engineers Conference (OFC/NFOEC) is the premier telecom meeting where experts from industry and academia share their results, experiences, and insights on the future of electronic and wireless communication and optical technologies. More than 10,000 attendees and an exhibit of 500 companies are expected.
o Plenary Session keynote speakers
o Special symposia
o Agenda of talks and abstracts
o Workshops & panels
The conference features a comprehensive technical program with talks covering the latest research related to all aspects of optical communication. Some of the highlights, outlined below, include:
o Boosting Undersea Cable Capacity without Increasing Optical Bandwidth
o The Face Behind Facebook
o Data Transportation on Light Trails
o Upgrading to Fiber with RFoG
o eScience on SURFnet
o Inspired by Rogue Light Waves
Boosting undersea cable capacity without increasing optical bandwidth
The Internet, of course, is a global phenomenon. Its most ubiquitous application is not called the World Wide Web for nothing. Global reach means Internet traffic often must cross the ocean in fiber optic cables. As these cables serve more users, each requiring more bandwidth, traffic demands invariably rise. TE SubCom researcher Dmitri Foursa says that meeting these demands might be possible in a way that doesn't require pumping up line rates, or bandwidth, the usual approach.
Line rates describe the amount of data that can be pushed through a fiber optic cable per second in each channel. Today the rate for many undersea cables is 10 Gb/s. Higher rates are on the way. Some cables that span continents now have 40 Gb/s rates, while industry R&D labs have demonstrated 100 Gb/s rates and higher.
Foursa studies spectral efficiency, another, more subtle measure of a fiber optic cable's capacity. Basically, the term describes how many information-carrying channels can be packed into a given bandwidth. The comparison is inexact, but you'd have something like improved spectral efficiency if you could tune in 30 stations as you hit "scan" on your car radio instead of the usual 15 stations. The reason you are usually stuck with 15 or so is simple: it is increasingly difficult to faithfully receive a signal as the distance between the transmitter and receiver increases. A similar problem plagues transmission of signals along undersea cables.
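In round numbers, spectral efficiency is just the per-channel line rate divided by the spacing between channels on the optical grid. The sketch below uses hypothetical, illustrative figures, not the numbers from Foursa's paper:

```python
# Illustrative sketch: spectral efficiency as channel line rate over
# channel spacing. The grid spacings below are hypothetical examples.

def spectral_efficiency(line_rate_gbps: float, channel_spacing_ghz: float) -> float:
    """Return spectral efficiency in bits per second per hertz."""
    return line_rate_gbps / channel_spacing_ghz

# A 40 Gb/s channel on a 50 GHz grid:
print(spectral_efficiency(40, 50))    # 0.8 b/s/Hz
# Squeezing the same channel into a 12.5 GHz slot quadruples the efficiency:
print(spectral_efficiency(40, 12.5))  # 3.2 b/s/Hz
```

The point of work like Foursa's is the second case: raising capacity by packing channels closer together rather than by raising the line rate itself.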
Foursa's accomplishment, described in a paper co-authored with several TE SubCom colleagues and to be presented at OFC/NFOEC, is to more than double the distance over which transmission of high spectral efficiency channels is possible at the highest commercially deployed submarine line rate, 40 Gb/s.
In terms of high spectral efficiency over transoceanic distances, previous demonstrations at 40 Gb/s fell short of those achieved at 100 Gb/s. So Foursa's work, in effect, fills in a gap in the technology roadmap for undersea fiber optic cables.
"The point is 40 Gb/s systems are being built and upgraded now, hence the need to optimize the use of bandwidth," says Seymour Shapiro, TE SubCom CTO and vice president of research and development. "Systems operating at 100 Gb/s are somewhat down the road, although new transoceanic builds will be capable of supporting 100 Gb/s when the terminal equipment becomes available."
Foursa notes that his work was mostly done to generate applied knowledge relevant to his industry and might never make it into cables strung across the ocean by companies like TE SubCom. But as is true of all new cable capacity, Foursa's innovation could give options either to meet future traffic needs or cut costs to serve existing traffic. Then there's the technical novelty of it. "This demonstration shows that we can pretty much go 18,000 kilometers at a pretty good spectral efficiency," he says, noting in his paper that such a distance is sufficient to cross the Pacific Ocean, the world's largest. "And that hasn't been achieved with any other system before."
The face behind Facebook
The revolutionary online social network Facebook now has more than 500 million active users, which is roughly double the count from just two years ago. With each new member comes more status updates, photo albums, and Web links. More than 30 billion pieces of content are shared each month on the site. How all that information is stored in warehouse-sized data centers is a constantly evolving process.
To keep up with the growing "human" network on Facebook, the company is scaling up its own network of computers and other hardware. "It's like painting the Golden Gate Bridge," says Donald Lee, a Facebook network engineer. "By the time you are done upgrading all parts of the network, you have to return to where you started and begin scaling everything again." In the data centers, more bandwidth is needed between servers, but Lee says that traditional network switching hardware has fallen behind, and novel switching technologies need to mature before they can be used in a real-world data center.
What may be needed are new rules for how network routing and switching is done in a large data center. In his presentation, Lee plans to discuss the current requirements of a large data center, while laying out the role he thinks optical fiber innovations can play in future data center scale-ups. He will also provide some concrete visuals of cloud computing, in which an application like those offered on the Facebook website is performed remotely on a set of shared servers.
Donald C. Lee, talk OWU1, "Scaling Networks in Large Data Centers," Wednesday, March 9, 3:30 p.m.
Data transportation on light trails
As optical networks begin accepting more traffic, they will need a more efficient way to move data from place to place. One option is to use a single dedicated optical bus, called a light-trail, which has some of the same advantages that a subway line has over multiple intersecting city streets. The "tracks" for a light-trail would be essentially permanent, so fewer resources would be needed for deciding how to route data. Recent simulations show that light-trail communication can outperform other network options for certain applications.
Many city-wide optical networks are complicated webs connecting multiple nodes together. Current data transport techniques navigate data from one node to another by setting up temporary light-paths that the optical signal can follow. These strategies have worked fine for data rates of 1 Gbit/s, but the speed limit is set to increase to 10 Gbit/s. The light-trail's approach, which was conceived in 2003 for intra-city data transport, involves setting up a one-way channel that continuously connects multiple nodes. When one of these nodes seeks to send data, it is allotted a specific time slot, as well as a small section of the available bandwidth over which to broadcast. A short burst of optical data is generated by the sending node and travels over the pre-determined light-trail to all the downstream nodes, including the destination node. One of the advantages of having a constantly open channel is that intermediate nodes can obtain a time slot and use the same light-trail without having to reconfigure the network, explains Arun Somani of Iowa State University.
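The slot-granting behavior described above can be sketched in a few lines. This is a hypothetical toy model for illustration, not Somani and Gumaste's actual protocol: nodes sit in order along the one-way bus, a sender is granted a time slot, its burst passes every downstream node, and only the destination keeps it. No reconfiguration of the trail is needed between transmissions.

```python
# Toy model of time-slot sharing on a one-way light-trail (illustrative only).
from collections import defaultdict

class LightTrail:
    def __init__(self, nodes):
        self.nodes = list(nodes)           # node order along the one-way bus
        self.next_slot = 0                 # next free time slot
        self.delivered = defaultdict(list) # bursts kept by each destination

    def send(self, src, dst, data):
        """Grant src a time slot and broadcast downstream; dst keeps the burst."""
        i, j = self.nodes.index(src), self.nodes.index(dst)
        if j <= i:
            raise ValueError("destination must be downstream of the sender")
        slot = self.next_slot
        self.next_slot += 1                # new slot, no network reconfiguration
        self.delivered[dst].append((slot, data))
        return slot

trail = LightTrail(["A", "B", "C", "D"])
trail.send("A", "C", "burst-1")
trail.send("B", "D", "burst-2")  # an intermediate node reuses the same trail
print(trail.delivered["C"])      # [(0, 'burst-1')]
```

The key property the model illustrates is in the second `send` call: node B, partway along the trail, transmits without any new path being set up, which is what spares the network the routing overhead of per-connection light-paths.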
Besides city networks, the light-trail approach may be appealing for data centers and cloud computing. To examine this broader potential, Somani and Ashwin Gumaste of the Indian Institute of Technology in Mumbai performed network simulations comparing light-trails to other management protocols. The researchers looked at efficiency, energy consumption and response time – all of which are important parameters to network operators. The results that will be presented show that light-trails performed better at high-traffic rates than all the alternatives.
Arun Somani, talk OtUR4, "Light-trails: Distributed Optical Grooming for Emerging Data-Center, Cloud Computing, and Enterprise Applications," Tuesday, March 8, 5:30 p.m.
Upgrading to fiber with RFoG
Cable TV won't be "cable" for much longer. The eventual transition to all-optical-fiber networks means there will no longer be a coaxial cable running to each customer's house. But getting the full potential from the optics will require replacing the signal-producing devices. Some cable operators want to continue sending radio frequency over glass, or RFoG, as a way to upgrade to fiber while postponing a complete overhaul.
Currently, a lot of homes get their cable TV and Internet access over a hybrid fiber-coaxial (HFC) network, in which the signal travels over optical fiber from the cable company to a neighborhood node and then switches to a coaxial cable for the last mile to the house. The light transmitted through the fiber is modulated by a radio frequency (RF) signal that carries both video and Internet data. In order to convert this optical signal into an electrical signal for the coaxial cable, the HFC nodes require active components that have high energy and maintenance costs. Therefore, the push is toward passive optical networks (PONs) that can reduce overhead by extending fiber all the way to the user.
A true PON system will boost data rates by encoding the signal in optical rather than radio frequencies. But this will require cable companies to invest in new hardware and new management practices in order to generate the optical data stream. The intermediate solution, RFoG, is to continue transmitting the same RF signal on a fully fiber network. It wouldn't give all the advantages of PON, but RFoG would offer a modest increase in data rates over current HFC hook-ups. "RFoG provides cable operators a way to break into fiber to the home without having to disrupt their operational procedures," says Jim Farmer from Enablence – an optical communication company based in Ottawa.
RFoG has been tested in several field trials, and wider deployment is likely now that a standard has recently been adopted. Farmer will report on these developments and how RFoG could make for an easier transition to PON in the coming years.
Jim Farmer, Talk NThF2, "RFoG - Foggy, or Real," Thursday, March 10, 4 p.m.
eScience on SURFnet
The Netherlands-based SURFnet is among the most advanced research and education networks in the world. The network is similar to the Internet2 network in the United States, which brings together select networks of universities and industrial research centers, and is a potential boon to anyone dealing with vast amounts of data and large computational problems. It's also something of a mystery to everyday users of the Internet, something that Cees de Laat aims to remedy in his OFC/NFOEC talk on eScience applications on SURFnet.
De Laat is a professor at the University of Amsterdam whose research focuses in part on SURFnet innovation. A distinguishing feature of the Dutch SURFnet network is its so-called hybrid nature, a quality shared with other national research networks like Internet2. On a hybrid network, users can access the underlying network architecture and circuitry. For most people interacting with the Internet via a browser, these underlying resources are static and occasionally swamped by traffic, a fact that's annoyingly obvious to anyone who has dealt with choppy Web videos or dropped Skype calls.
SURFnet's hybrid architecture is already helping astronomers and filmmakers, thanks to two applications that de Laat says might well be prototypes for handling the inevitable growth in data intensity and in demand for network services.
One application he will discuss is the Software Correlator Architecture Research and Implementation for the e-VLBI (SCARIe) project. SCARIe takes observational data from telescopes mostly around Europe pointed at a similar part of the sky. This data is transported via SURFnet to a central location, where it is stitched together, or correlated, to build a high-resolution radio map of the sky.
"In earlier days, radio astronomers would put their data on hard disks or tapes and ship it to each other; this meant correlation always took weeks or months," de Laat says. "Now they can do it in almost real time."
The second application is CineGrid, which aims to help those in the entertainment industry take full advantage of advances in parallel computing and photonic networking. These advances are especially important because the digital tools used to create films generate increasingly huge digital files that need to be shared around the world for tasks such as dubbing audio, adding computer generated animation, correcting color and so on.
Cees de Laat, Talk NThD5, "eScience Applications on the SURFnet RE Network," Thursday, March 10, 2:30 p.m.
Above, image credited to OFC/NFOEC