The United Nations Office for Disaster Risk Reduction reported that disasters affected around 2.9 billion people worldwide from 2000 to 2012, killing more than a million and causing an estimated 1.7 trillion US dollars in damage. Moreover, natural disasters and the damage they cause have been documented to occur with increasing intensity. Given these staggering numbers, effective disaster preparedness and relief response plans are essential, especially since natural disasters are usually unpredictable and damage cannot be entirely avoided.

Implementing a speedy and effectual outreach post-disaster is a nontrivial challenge "due to potential infrastructural changes such as destruction of road systems that make some highways impassable, and damage to the facilities and/or warehouses that serve as storage for relief supplies." A Singapore-based team of scientists from the Institute of High Performance Computing, A*STAR, and The Logistics Institute-Asia Pacific has presented a model that examines the logistics of disaster relief using open data together with tools and measures developed in the field of network science. The work was recently published in the International Journal of Modern Physics C.

Based on OpenStreetMap, a collaborative project that provides open geodata to the world, the team reported a procedure that automatically converts a road map into a network of nodes and edges. It then uses contemporary tools from complex network analysis to assess several dynamics on the system, particularly the flow of goods and other relief efforts, and to quantify the reachability of critical loci within a geographic area where a disaster has struck. The proposed model is highly flexible, allowing damage information, such as that coming from the Humanitarian OpenStreetMap Team, to be included in the analyses. The procedure also enables evaluation of the effects of a range of hypothetical infrastructure-destruction scenarios even before a disaster strikes a region; this was shown to be crucial in formulating contingency plans for the logistics of disaster response and relief operations.
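The core idea of the conversion step can be sketched in a few lines of pure Python. The road segments and "critical loci" below are hypothetical stand-ins (the paper's actual pipeline parses OpenStreetMap data); the sketch only illustrates turning segments into a node-and-edge network and checking reachability with breadth-first search.

```python
from collections import defaultdict, deque

# Hypothetical road segments: (intersection_a, intersection_b).
# In practice these would be parsed from OpenStreetMap "way" elements.
road_segments = [
    ("A", "B"), ("B", "C"), ("C", "D"),
    ("A", "E"), ("E", "D"), ("D", "F"),
]

def build_network(segments):
    """Convert road segments into an undirected adjacency list."""
    graph = defaultdict(set)
    for a, b in segments:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def reachable_from(graph, source):
    """Breadth-first search: all intersections reachable from `source`."""
    seen = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

network = build_network(road_segments)
depot, hospital = "A", "F"   # hypothetical critical loci
print(hospital in reachable_from(network, depot))  # True: a route exists
```

Damage reports then reduce to deleting the corresponding edges and re-running the same reachability query.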

To illustrate the utility of the methodology, the team considered the road map of the city of Tacloban in the central Philippines, which was hit by Typhoon Haiyan, a disaster that claimed over 6,000 lives and displaced around 4.1 million people. Among other results, the work quantifies the extent to which the inherent structure of the road network plays a role in facilitating, or hindering, landbound relief efforts, especially in the critical hours and days immediately following a disaster event. It also discusses the inaccuracy of assuming that road networks follow a structure similar to the more commonly studied scale-free, random, or grid (regular) network configurations.
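The effect of infrastructure damage on relief logistics can be illustrated with a toy scenario (the network and the damaged segment below are invented for illustration, not taken from the Tacloban data): removing an impassable road segment and comparing shortest routes before and after shows how a single destroyed link can lengthen, or sever, the path from a supply depot to a shelter.

```python
from collections import defaultdict, deque

def build_network(segments):
    """Undirected adjacency list from (a, b) road segments."""
    graph = defaultdict(set)
    for a, b in segments:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def shortest_hops(graph, source, target):
    """BFS shortest-path length in hops; None if target is unreachable."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return dist[node]
        for nb in graph[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return None

# Hypothetical network linking a depot ("A") to a shelter ("F").
roads = [("A", "B"), ("B", "F"), ("A", "C"), ("C", "D"), ("D", "F")]
intact = build_network(roads)

# Scenario: the typhoon renders the direct B-F segment impassable.
damaged = build_network([r for r in roads if set(r) != {"B", "F"}])

print(shortest_hops(intact, "A", "F"))   # 2: direct route via B
print(shortest_hops(damaged, "A", "F"))  # 3: forced detour via C and D
```

Running many such what-if removals over a real city network is what lets the method rank road segments by how critical they are to relief operations.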

To improve network efficiency and meet continued bandwidth demand, Verizon is deploying 100G technology on its U.S. metro network.

The company expects the same benefits in its metro network that it has gained from deploying 100G technology in its long-haul network: increased capacity, lower latency, and improved scalability.

Verizon is using the Fujitsu FLASHWAVE 9500 platform and the Tellabs 7100 in its metro deployment, which provides the "last-mile" connectivity from the network to the customer.

"Metro deployment of 100G technology is the natural progression of Verizon's aggressive deployment of 100G technology in its long-haul network," said Lee Hicks, vice president of Verizon Network Planning. "It's time to gain the same efficiencies in the metro network that we have in the long-haul network. By taking the long view, we're staying ahead of network needs and customer demands as well as preparing for next-generation services."

Driven by increased demand from online video consumption as well as cloud and Ethernet services, the company is implementing 100G technology in its metro and regional networks in the U.S. where traffic demand is highest and will expand deployment as growth occurs.

The benefits of 100G scalability are especially relevant for signal performance, which is improved by using a single 100G wavelength as opposed to aggregating ten 10G wavelengths.

Equally important, implementing 100G in the metro network means additional savings in space and power: compared with traditional 10G technology, fewer pieces of equipment are needed to carry the same amount of traffic, reducing both the footprint and the power requirements.
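The equipment-count argument is simple arithmetic. The sketch below uses entirely hypothetical per-transponder power figures (real draw varies by vendor and generation) just to show why consolidating ten 10G wavelengths into one 100G wavelength shrinks both the rack count and the power bill.

```python
# Back-of-the-envelope sketch with HYPOTHETICAL per-unit figures; actual
# transponder power draw varies widely by vendor and equipment generation.
demand_gbps = 800                      # traffic to carry on one metro route

units_10g = -(-demand_gbps // 10)      # ceiling division: 10G transponders
units_100g = -(-demand_gbps // 100)    # 100G transponders for same demand

watts_10g_unit, watts_100g_unit = 40, 150   # assumed power per transponder

print(units_10g, units_100g)               # 80 vs 8 units of equipment
print(units_10g * watts_10g_unit)          # 3200 W total at 10G
print(units_100g * watts_100g_unit)        # 1200 W total at 100G
```

Even with a 100G transponder assumed to draw several times the power of a 10G unit, the tenfold reduction in unit count dominates.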

"We see the value today of starting deployment of 100G in the metro, since 100G technology has performance improvements over 10G," said Hicks. "But the cost-per-bit of 100G in the metro currently isn't as cost-effective as it is in the long-haul network so 100G in the metro won't be the default technology for a while."

Verizon has been a leader in 100G technology. Beginning in November 2007, the company successfully completed the industry's first field trial of 100G optical traffic on a live system. Verizon currently has 39,000 miles of 100G technology deployed on its global IP network.

Bell Labs-developed G.W.A.T.T. app can identify energy, cost and carbon footprint savings available through network modernization

Alcatel-Lucent has unveiled a new application developed by its research arm, Bell Labs, that network operators and others can use to explore how the use of the latest technologies can dramatically reduce energy consumption, costs and the carbon footprint of their networks.

This easy-to-use app - G.W.A.T.T., the Global 'What if' Analyzer of NeTwork Energy ConsumpTion - forecasts trends in energy consumption and efficiency based on a wide variety of traffic growth scenarios and technology evolution choices.

G.W.A.T.T. provides a view of the entire network, showing how much power is consumed at each point in the network. This makes it possible for G.W.A.T.T. to quickly identify 'hot spots' in a network where the most energy is consumed, and also provides a way to identify how different technologies can make the network more efficient. The application was developed as part of Alcatel-Lucent's commitment to dramatically reducing the energy consumption and operational costs of information and communications technology (ICT) in the face of dramatic growth in data traffic and associated energy use impacts.

The rapid adoption of smartphones and tablets is driving up daily Internet traffic dramatically, and forecasts indicate that it will increase up to 85 times by 2017 compared to 2010. By 2017, more than 5 trillion gigabytes of data will pass through the network every year; this is the equivalent of everyone on the planet tweeting non-stop for more than 100 years. G.W.A.T.T. provides a roadmap for supporting this growth in a sustainable and economically viable way.
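To put the 85-fold figure in perspective, it can be converted into an equivalent compound annual growth rate over the seven years from 2010 to 2017, which works out to just under 90 percent per year:

```python
# An 85x increase between 2010 and 2017 spans 7 years of compounding.
growth_factor = 85 ** (1 / 7)                 # per-year multiplier
annual_growth_pct = (growth_factor - 1) * 100

print(round(annual_growth_pct, 1))  # roughly 88.6% traffic growth per year
```

Sustained near-doubling every year is what makes per-bit energy efficiency, rather than absolute consumption, the quantity a tool like G.W.A.T.T. has to track.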

"G.W.A.T.T. is more than just a modeling tool. It is intended to guide future product and architecture evolution by allowing network operators and ICT architects and engineers to have a complete view of the energy impact of the decisions they make regarding what new technologies to introduce into their networks and when. It also can clearly explain which technology investments can have the biggest impact on energy consumption," said Marcus Weldon, Corporate Chief Technology Officer (CTO) of Alcatel-Lucent and Bell Labs President.

G.W.A.T.T. addresses a variety of key questions that are relevant to the ICT industry, including:

    --  Where is most of the energy consumed in the end-to-end network today?
    --  How much does it cost to power the network?
    --  What is the carbon footprint of the network?
    --  How much energy is consumed by wireless networks? By data centers?
    --  What is the impact of traffic growth and new applications and services
        on the energy consumption of current networks?
    --  How will the network's energy consumption evolve based on technology
        evolution over the next four years?
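The cost and carbon questions above reduce to a simple conversion from power draw to annual energy, money, and emissions. The sketch below is not G.W.A.T.T.'s model; it uses hypothetical inputs (a 500 kW network, $0.10/kWh electricity, 0.5 kg CO2 per kWh, a grid intensity that varies widely by region) purely to show the shape of the calculation.

```python
# Illustrative cost-and-carbon calculation with HYPOTHETICAL inputs; this
# is not the G.W.A.T.T. model, only the basic unit conversions behind it.
power_kw = 500                 # assumed total network power footprint
hours_per_year = 24 * 365

energy_kwh = power_kw * hours_per_year      # annual energy consumption
cost_usd = energy_kwh * 0.10                # at an assumed $0.10 per kWh
carbon_tonnes = energy_kwh * 0.5 / 1000     # at an assumed 0.5 kg CO2/kWh

print(energy_kwh, cost_usd, carbon_tonnes)  # 4380000 438000.0 2190.0
```

A what-if analyzer layers scenario choices (traffic growth, technology swaps) on top of these conversions to see how each quantity evolves.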

Energy consumption models and scenarios used to build G.W.A.T.T. are based on forecasts and network modeling from Bell Labs, as well as Alcatel-Lucent's CTO organization and independent consortia including GreenTouch and the Global e-Sustainability Initiative (GeSI).

Over time, Alcatel-Lucent will continually expand the capabilities of G.W.A.T.T., refining its modeling capabilities, adding new scenarios and technologies, and including new technologies and architectures currently being investigated by Bell Labs and the GreenTouch consortium.


"My colleagues and I at Imperial College, Reading University and Harvard Business School have spent a lot of time in recent years analysing the potential for technology to enable or undermine carbon abatement and environmental pollution across the globe. I believe that G.W.A.T.T. offers a breakthrough in helping the ICT and telecoms sectors to understand how different technologies, alone and in combination, will impact the energy consumption of their networks. As demand for broadband services and applications skyrockets, the team at Alcatel-Lucent should feel proud that they have made a meaningful contribution to the tool box of those fighting to keep energy consumption in check," said Dr. Peter Thomond, Managing Director at Clever Together and author of the Global e-Sustainability Initiative's and Microsoft's report "Enabling Technology 2020: The Carbon Impact of Cloud Computing".

Two Technical Discussions at Ethernet Technology Summit 2014

Vitesse Semiconductor Corporation has announced that Martin Nuss, chief technology officer at Vitesse, will advocate new paradigms for system architecture and network security at Ethernet Technology Summit 2014 in Santa Clara, Calif., April 29 – May 1, 2014.
  • “Securing Ethernet Networks: Authentication, Data Integrity, and Confidentiality in Ethernet Networks,” Session A-102 on Wednesday, April 30 at 9:50 am local time
  • “New Systems Architectures for Storage over Ethernet,” Forum 2B on Thursday, May 1 at 9:50 am local time

The first panel discussion will examine security at the network protocol level, particularly focusing on IEEE 802.1AE MACsec. Advocating MACsec as a scalable security solution for end-to-end network encryption, Dr. Nuss joins experts from Data Confidential and InsideSecure.
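MACsec itself encrypts and authenticates each frame with AES-GCM as specified in IEEE 802.1AE, which is beyond a short sketch. The core integrity idea, though, a per-frame tag that a tampered frame fails to verify, can be illustrated with Python's standard-library HMAC as a simplified stand-in (HMAC covers only the authentication half; real MACsec also provides confidentiality, replay protection, and key agreement via MKA).

```python
import hashlib
import hmac

# Simplified stand-in for MACsec's per-frame integrity check value (ICV).
# Real MACsec (IEEE 802.1AE) uses AES-GCM and also encrypts the payload;
# HMAC-SHA256 here demonstrates only the authentication/integrity concept.
key = b"shared-secret-between-link-partners"   # hypothetical session key

def tag_frame(payload: bytes) -> bytes:
    """Append a 32-byte integrity tag to the frame payload."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def verify_frame(frame: bytes) -> bool:
    """Recompute the tag over the payload and compare in constant time."""
    payload, icv = frame[:-32], frame[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(icv, expected)

frame = tag_frame(b"ethernet payload")
print(verify_frame(frame))                    # True: frame is intact
print(verify_frame(b"tampered" + frame[8:]))  # False: modification detected
```

Scaling this per-hop check to every port at line rate is precisely why MACsec is implemented in switch and PHY hardware rather than software.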

In the second panel, Dr. Nuss will join experts from Seagate Technology, Dell’Oro Group, and Broadcom to discuss convergence on Ethernet for storage, supercomputing, and other functions, as well as standard networking applications. Dr. Nuss will introduce new architectural paradigms for Storage over Ethernet (SoE).

Dr. Nuss has over 20 years of technical and management experience and is a recognized industry expert in Ethernet technology including timing and synchronization for public and private communications networks. Dr. Nuss serves on the board of directors for the Alliance for Telecommunications Industry Solutions (ATIS) and is a fellow of the Optical Society of America and IEEE member. He holds a doctorate in applied physics from the Technical University in Munich, Germany.

New OneConnect OCm14000-OCP Adapters Maximize Application Performance with RDMA, Support Overlay Networks for the Cloud and Enable SDN with Open Programmable APIs

Emulex Corporation has released its next generation of high-performance Ethernet connectivity solutions to the open source community. Data centers are now able to utilize Emulex 10Gb Ethernet and 40Gb Ethernet (10GbE/40GbE) adapters on OCP-based hardware platforms that support global cloud platforms running OpenStack or CloudStack. The new Emulex OneConnect OCm14000-OCP 10/40GbE Network Adapters and 10GbE Converged Network Adapters (CNAs) enable higher virtual machine (VM) densities, support the cloud with Overlay Network offloads, leverage a new RDMA over Converged Ethernet (RoCE)-based low-latency architecture that delivers application acceleration, and integrate with next-generation software-defined networking (SDN) solutions.

The Emulex OCm14000-OCP adapters are also the first CNAs available for OCP-based platforms that feature fully offloaded Fibre Channel over Ethernet (FCoE) and iSCSI, providing performance and server efficiencies superior to software initiator-based adapters. This brings the power and flexibility of network convergence to the OCP community and allows data centers built on the OCP model to realize the full performance benefits and the cabling and power consumption reductions that come with it.

“We have been working with the OCP community to provide an open, flexible and scalable I/O platform that maximizes server efficiency and scales networking connectivity, while increasing application support in enterprise and hyperscale infrastructures,” said Shaun Walsh, senior vice president of marketing and corporate development, Emulex. “Emulex OCm14000-OCP adapters are ideal for hyperscale and enterprise data centers that require I/O optimization for delivering compute power, energy efficiency and scalability.”

The Emulex OCm14000-OCP adapters are optimized to meet the needs of enterprises and cloud providers who are building an open infrastructure while delivering a powerful set of features and capabilities, including:

  • Open Enablement of Software-defined Networking: The recently introduced Emulex SURF open API provides the tools needed to implement SDN technology that can be optimized for next generation applications and new industry standards, such as OpenStack, CloudStack and OpenFlow.
  • High-Performance Virtualization: OCm14000-OCP adapters use highly efficient and scalable hardware offload technology to move the overhead of virtual networking off the CPU, providing up to 50 percent better CPU utilization compared to standard NICs when used for VMware VirtualWire connections, thereby increasing the number of VMs that can be supported per server. In addition, the OCm14000-OCP adapters deliver a fundamental 4x increase in small-packet network performance, which is required to scale transaction-heavy and clustered applications.
  • Rapid, Secure and Scalable Cloud Connectivity: Emulex Virtual Network Exceleration (VNeX) offload technology provides up to 70 percent better performance vs. software-only implementations of emerging Overlay Network standards such as Network Virtualization using Generic Routing Encapsulation (NVGRE) used by Microsoft Hyper-V Network Virtualization and Virtual Extensible LAN (VXLAN) used in VMware’s NSX. These Overlay Network standards enable virtual and cloud environments to scale beyond the limitations of Layer 2 networks and support seamless migration from anywhere to anywhere.
  • Optimized Application Delivery with Advanced RoCE Architecture: The OCm14000-OCP adapters are based on a low-latency RoCE architecture that helps enterprise IT and cloud data centers optimize unstructured and file-based storage environments based on Windows Server SMB Direct and Linux NFS protocols.
  • Increased Block Protocol Performance: The OCm14000-OCP adapters increase total block protocol IOPS by 50 percent over previous generation CNAs and build upon the proven history of enterprise-class storage reliability at Emulex.
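The VXLAN encapsulation mentioned in the feature list above is what the adapter offloads in hardware. Its on-the-wire format is small enough to sketch: an 8-byte header (defined in RFC 7348) whose 24-bit VXLAN Network Identifier (VNI) names the tenant's virtual Layer 2 segment. The example below, with an arbitrary VNI chosen for illustration, packs and parses that header in Python.

```python
import struct

# Sketch of the 8-byte VXLAN header from RFC 7348. Flag bit 0x08 in the
# first byte marks the VNI field as valid; the 24-bit VNI identifies the
# tenant's virtual Layer 2 segment, lifting the 4096-VLAN limit.
def vxlan_header(vni: int) -> bytes:
    assert 0 <= vni < 2 ** 24, "VNI is a 24-bit field"
    return struct.pack(">II", 0x08 << 24, vni << 8)

def parse_vni(header: bytes) -> int:
    flags_word, vni_word = struct.unpack(">II", header)
    assert flags_word & (0x08 << 24), "VNI-valid flag not set"
    return vni_word >> 8

hdr = vxlan_header(5000)          # hypothetical tenant segment 5000
print(len(hdr), parse_vni(hdr))   # 8 5000
```

Encapsulating and de-encapsulating this header (plus the outer UDP/IP wrapper) for every packet in software is the per-packet cost that offloads like VNeX aim to remove from the host CPU.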

“As we move closer to making the disaggregated data center a reality, we welcome design contributions that enable us to design and build scalable, more efficient technologies,” said Cole Crawford, executive director of the Open Compute Project Foundation. “By open sourcing its OneConnect family of network adapters through the OCP community, Emulex provides customers with high-performing solutions that can be tailored to their needs.”
