VECTOR PACKET PROCESSING

Vector Packet Processing, the engine inside TNSR™, extends packet processing performance and scale orders of magnitude beyond traditional technologies.


Zoom Past the Kernel

VPP reaches speeds previously thought impossible in software by bypassing the kernel and working in user space. Processing packets in user space, combined with efficient use of the CPU's instruction cache, dramatically expands your packet processing capacity.


Escape Vendor Lock-in with Open Source

VPP, made available through FD.io, provides direct access to its source code. This allows organizations to inspect and modify the code rather than be confined by vendor restrictions.


Slash Costs with Commodity Silicon

Thanks to VPP's advanced speeds, the need for specialized hardware such as ASICs and FPGAs, and the cost burden that comes with it, is a thing of the past.


Orchestration Managed

Drive secure network configuration changes quickly and easily with a RESTful API that tells VPP exactly what you want it to do with packet flows.
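As a sketch of what API-driven configuration looks like, the snippet below builds a REST-style request describing a packet-flow rule. The endpoint path and payload schema are hypothetical illustrations invented for this example; they are not TNSR's actual API.

```python
import json

# Hypothetical illustration only: the endpoint path and payload schema below
# are invented for this sketch and are not TNSR's actual API.
def build_acl_request(acl_name, src_prefix, action):
    """Build a REST-style request describing a packet-flow rule."""
    url = f"https://router.example.com/api/acl/{acl_name}"  # hypothetical endpoint
    payload = {
        "rule": {
            "source-prefix": src_prefix,
            "action": action,  # e.g. "permit" or "deny"
        }
    }
    return url, json.dumps(payload)

url, body = build_acl_request("edge-in", "203.0.113.0/24", "deny")
print(url)
print(body)
```

The point is the workflow, not the schema: an orchestrator composes a declarative description of the desired packet handling and pushes it over HTTPS, rather than an operator typing per-box CLI commands.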

What is VPP?

VPP, powered by the open source FD.io project, is the fastest, most efficient software packet processing engine - without exception. It takes advantage of CPU optimizations such as vector instructions and direct interaction between I/O and the CPU cache. The result is a minimal number of CPU core instructions and clock cycles spent per packet - enabling terabit performance. So what exactly is VPP, and how might it benefit your organization?

VPP is a data plane - a very efficient and flexible one. It consists of a set of forwarding nodes arranged in a directed graph and a supporting framework. The framework provides the basic data structures, timers, drivers (and interfaces to driver kits such as DPDK), a scheduler that allocates CPU time among the graph nodes, and performance and debugging tools such as counters and a built-in packet trace. The latter captures the path each packet takes through the graph with high timestamp granularity, giving full insight into processing on a per-packet level.
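A minimal sketch of this idea follows. The node names are borrowed from VPP's graph, but the code is simplified illustrative Python, not VPP's actual C internals: each node in a small directed graph processes packets and records a trace entry, mimicking the built-in packet trace.

```python
import time

class Node:
    """A forwarding node in a directed graph; records a trace entry per packet."""
    def __init__(self, name, fn, next_node=None):
        self.name = name
        self.fn = fn                # per-packet processing function
        self.next_node = next_node  # next node in the directed graph

    def process(self, packets, trace):
        for pkt in packets:
            self.fn(pkt)
            # Packet trace: record which node saw the packet, and when.
            trace.setdefault(pkt["id"], []).append((self.name, time.monotonic()))
        if self.next_node:
            self.next_node.process(packets, trace)

# Wire a tiny graph: ethernet-input -> ip4-lookup -> ip4-rewrite
rewrite = Node("ip4-rewrite", lambda p: p.update(out_if=1))
lookup  = Node("ip4-lookup",  lambda p: p.update(next_hop="10.0.0.1"), rewrite)
ingress = Node("ethernet-input", lambda p: None, lookup)

trace = {}
packets = [{"id": i} for i in range(4)]
ingress.process(packets, trace)
print([name for name, _ in trace[0]])
# → ['ethernet-input', 'ip4-lookup', 'ip4-rewrite']
```

The trace dictionary plays the role of VPP's built-in packet trace: for any packet you can recover the exact sequence of graph nodes it visited, with timestamps.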

One of the main ways VPP achieves such high performance is by processing a vector of packets instead of one packet at a time. Ingress or egress packets are buffered into a vector and processed all at once, instead of constantly interrupting the CPU and causing needless context switching. Processing a vector at a time per network function has several benefits:

  • Each node is independent and autonomous.
  • Performance is derived from optimizing the use of the CPU's instruction cache (i-cache). The first packet warms up the cache, and the rest of the packets in the frame (or vector) are processed "for free." VPP takes full advantage of the CPU's superscalar architecture, enabling packet memory loads and packet processing to be interleaved for a more efficient processing pipeline.
  • VPP takes advantage of the CPU's speculative execution. Performance benefits are gained from speculatively reusing forwarding objects (like adjacencies and IP lookup tables) between packets, as well as preemptively loading data into the CPU's local data cache (d-cache) for future use. This efficient use of the compute hardware allows VPP to exploit fine-grained parallelism.
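The batching idea above can be sketched in a few lines. This is simplified illustrative Python, not VPP's actual C implementation: each "node" runs once over a whole vector of packets, so per-node dispatch cost is amortized across the vector - the software analogue of the first packet warming the i-cache for the rest.

```python
# Simplified sketch of vector dispatch (not VPP's actual C implementation).
VECTOR_SIZE = 256  # VPP commonly processes up to 256 packets per frame

def make_node():
    calls = {"count": 0}
    def decrement_ttl(vector):
        calls["count"] += 1          # one dispatch per vector, not per packet
        for pkt in vector:
            pkt["ttl"] -= 1
    return decrement_ttl, calls

def process(packets, node):
    # Buffer ingress packets into fixed-size vectors, then run the node
    # once over each vector.
    for i in range(0, len(packets), VECTOR_SIZE):
        node(packets[i:i + VECTOR_SIZE])

node, calls = make_node()
packets = [{"ttl": 64} for _ in range(1000)]
process(packets, node)
print(calls["count"])   # → 4 dispatches instead of 1000
```

In real hardware the win is larger than this sketch suggests: keeping one node's instructions hot in the i-cache while it chews through hundreds of packets is what lets VPP spend so few cycles per packet.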

Kernel Diagram

Vector Packet Processing Deployment

VPP Deployment

So what does all of this mean to me, the service provider?

  • Speed: VPP is fast. It is 10X to 100X faster than anything else on the market today. Service provider customers with high throughput needs will experience performance never before seen.

  • Slashed costs: Because VPP runs on commercial off-the-shelf (COTS) processors, underlying hardware costs can be cratered. These savings can be passed along as a discount to customers, absorbed as increased margins, or both.

  • Flexibility and freedom: No longer is a service provider locked into a single vendor platform. VPP provides the flexibility and freedom to maneuver as business and market demands change.

  • Services Expansion: Service providers can integrate other open-source applications to deliver virtually any security or networking service.

Not only is FD.io's VPP powerful, it can be deployed in discrete appliances, in support of virtual machines, or in a fully virtualized manner in support of cloud-native environments, as shown in the VPP Deployment diagram above.

Deeper Look into VPP

VPP sounds great, but is it ready for prime-time deployment?


Speeds up to 100X faster

Flexibility to add new services

Simple orchestration

Tenth of the cost


Open source software is unlocking packet processing functions that were previously single-vendor, expensive, and slow - functions that underpin secure networking products including routers, firewalls, VPNs, IDS/IPS, and more. But to make VPP ready for enterprise and carrier-class deployments, integration development, testing, quality assurance, packaging, delivery, and support are still required.


It has arrived. Introducing TNSR from Netgate. TNSR leverages the power of VPP to create a highly scalable, orchestration-managed, secure networking software platform built on an open source stack.


FD.io's VPP project is now a commercial product - ready for the ongoing challenge to drive up secure networking performance and drive out cost.


The TNSR Platform Architecture

TNSR integrates an array of open source components into an enterprise-ready packet processing solution.




Want to dig deeper into TNSR?

Learn More

Ready to take TNSR for a test drive?

Discover how TNSR can help you transform your network. 

Contact Us
Icons made by Freepik from Flaticon are licensed under CC BY 3.0.