Solving NFV's Data Plane Paradox


Solving this paradox in the data plane will be a key industry challenge for ensuring faster and more widespread adoption of NFV. Some of the solutions come in the form of gains in silicon technology: data plane functions that have historically driven the need for proprietary hardware can instead benefit from the ever-increasing number of cores on a single processor to increase processing power and platform densities.

Improvements in virtualization software are also helping. Enterprise-class virtualization technologies are typically not optimized for the real-time, high-throughput I/O required to support data plane functions, something known as the vSwitch issue. There are ways to overcome it, such as Single Root I/O Virtualization (SR-IOV) or optimizing the vSwitch itself, either of which can deliver data plane performance suitable for NFV. Overcoming the vSwitch issue means operators can efficiently scale virtual switch implementations in the network locations where the highest throughput and I/O densities are required, and can relocate data plane services within the network.

WHAT WOULD A PLATFORM ARCHITECTURE OPTIMISED FOR THE NFV DATA PLANE LOOK LIKE?

Beyond these technical solutions lies another route: developing a class of open commercial server product optimized for hosting data plane VNFs with the right blend of networking, switching and compute resources. First and foremost, the platform architecture would need to intelligently classify and load-balance tens of millions of data plane sessions across hundreds of virtualized functions. Doing this well would eliminate one of the key bottlenecks of service chaining for disparate packet processing functions in a Gi-LAN: that all packets must pass through all functions, each performing its own deep packet inspection to decide whether the flow is relevant to that function. Instead, an optimized NFV platform would apply wire-speed deep packet inspection at packet ingress within the switch, rapidly classify each flow and decide what virtualized processing it actually requires. An integrated load balancer would then distribute data plane flows only to the relevant and required virtualized functions, improving efficiency and resource utilization, as sketched below.

Optimized NFV platforms must also be designed to balance the objectives of centralized SDN orchestration with the realities of moving packets at terabit throughput. Conceptually, in the SDN model, each new flow identified by the IP forwarder would need to ask the SDN orchestration layer where the packet should go. Requesting a destination for every new flow and then waiting for orchestration to respond is far too slow for high-performance IP forwarding. Instead, allowing orchestration to create rules that the data plane platform uses to assign flows autonomously and locally, without always referring back to the full orchestration function, brings a critical improvement in latency; Radisys trials of these architectural changes found a dramatic improvement in latency.
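To make the ingress classification and steering idea concrete, here is a minimal, illustrative Python sketch. It is not Radisys code; the rules, names (FlowKey, classify, pick_instance) and VNF pools are invented for the example. It shows the shape of the approach: inspect and classify a flow once at ingress, then hash it onto an instance of only those functions that actually need to see it.

    # Illustrative sketch only: ingress classification plus per-VNF load
    # balancing. Rules, names and pools are hypothetical, not a Radisys API.
    from dataclasses import dataclass
    import hashlib

    @dataclass(frozen=True)
    class FlowKey:
        src_ip: str
        dst_ip: str
        proto: str
        dst_port: int

    # Decide which service-chain functions need to see a flow, so that every
    # packet no longer has to visit every function.
    def classify(flow: FlowKey) -> list:
        chain = []
        if flow.dst_port in (80, 443):
            chain.append("video-optimizer")
        if flow.proto == "udp":
            chain.append("ddos-scrubber")
        chain.append("charging")  # e.g. applied to all flows
        return chain

    # Hypothetical pool of VNF instances per function.
    INSTANCES = {
        "video-optimizer": ["vo-1", "vo-2", "vo-3"],
        "ddos-scrubber": ["ds-1", "ds-2"],
        "charging": ["ch-1", "ch-2", "ch-3", "ch-4"],
    }

    def pick_instance(flow: FlowKey, vnf: str) -> str:
        # Hash the flow key so packets of one flow always hit the same instance.
        digest = hashlib.sha256(f"{flow}:{vnf}".encode()).hexdigest()
        pool = INSTANCES[vnf]
        return pool[int(digest, 16) % len(pool)]

    if __name__ == "__main__":
        flow = FlowKey("10.0.0.7", "198.51.100.20", "tcp", 443)
        steering = {vnf: pick_instance(flow, vnf) for vnf in classify(flow)}
        print(steering)  # only the relevant functions, each mapped to one instance

A real platform would do this in switch silicon at wire speed rather than in software, but the division of labour is the same: classify once at ingress, then steer each flow only where it is needed.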
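The orchestration trade-off can be sketched in the same spirit: rules are installed ahead of time by the orchestration layer, and the forwarder then assigns each new flow locally, only punting to orchestration when nothing matches. Again, the class and method names below are hypothetical and do not represent any particular SDN controller API.

    # Illustrative sketch only: orchestration installs match/action rules once;
    # the local forwarder assigns new flows against them with no per-flow
    # round trip to the controller. All names are hypothetical.
    import fnmatch

    class LocalForwarder:
        def __init__(self):
            self.rules = []       # installed ahead of time by orchestration
            self.flow_table = {}  # per-flow decisions made locally

        def install_rule(self, dst_pattern: str, next_hop: str) -> None:
            """Called (rarely) by the orchestration layer."""
            self.rules.append((dst_pattern, next_hop))

        def forward(self, flow_id: str, dst_ip: str) -> str:
            # Fast path: the flow has already been assigned locally.
            if flow_id in self.flow_table:
                return self.flow_table[flow_id]
            # New flow: match against locally installed rules; no controller call.
            for pattern, next_hop in self.rules:
                if fnmatch.fnmatch(dst_ip, pattern):
                    self.flow_table[flow_id] = next_hop
                    return next_hop
            # Only unmatched flows would need the slow path to orchestration.
            raise LookupError("no local rule; punt to orchestration")

    if __name__ == "__main__":
        fwd = LocalForwarder()
        fwd.install_rule("10.1.*", "vnf-pool-a")
        fwd.install_rule("*", "default-gw")
        print(fwd.forward("flow-1", "10.1.2.3"))     # decided locally: vnf-pool-a
        print(fwd.forward("flow-2", "203.0.113.9"))  # decided locally: default-gw

The latency gain described above comes from exactly this separation: the controller round trip is paid when rules change, not on every new flow.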
An NFV platform that delivers these data plane processing capabilities could help network operators lower costs and/or increase revenues in a number of ways. First, integrating sophisticated load balancing into an intelligent switch removes the need to distribute load-balancing overhead across the VNFs; the capacity for VNF application processing on a given pool of resource blades goes up, and capex is reduced. Second, it simplifies the "back office" by upgrading the core network from numerous disparate purpose-built elements to a network of virtualized network functions running on a smaller number of optimized COTS platforms. And because these capabilities are delivered on COTS technology, ongoing improvements in silicon can be introduced into future NFV platform upgrades faster than the refresh rate of today's proprietary platforms. Finally, and most importantly, operators are better equipped to benefit from the revenue-enhancing capabilities of service chaining by delivering QoS to classes and flows of traffic prioritized according to policy enforcement.

CONCLUSION

We have outlined the benefits that NFV can bring to operational cost structures and to revenue-generating services within the network. But we have also seen that introducing NFV for data plane functions requires a focus on retaining carrier-grade density and capacity with minimized latency. To realize the benefits of NFV, these issues must be overcome. Integrated COTS platforms, with architectural features that optimize data plane VNF processing while maintaining telco-grade reliability and performance, offer an ideal migration path for operators as they evolve their network architecture.
