VPP reaches speeds previously thought impossible in software by bypassing the kernel and working in user space. Running in user space and making efficient use of the CPU's instruction cache dramatically increases packet processing throughput.
VPP, made available through FD.io, provides direct access to its source code. Organizations can inspect and modify the code rather than be confined by vendor restrictions.
Thanks to the speed of VPP, the need for specialized hardware like ASICs and FPGAs is a cost burden of the past.
Drive secure network configuration changes quickly and easily with a RESTful API that tells VPP exactly what you want it to do to packet flows.
What is VPP?
VPP, powered by the open source FD.io project, is the fastest, most efficient software packet processing engine available - without exception. It takes advantage of CPU optimizations such as vector instructions and direct interaction between I/O and the CPU cache. The result is a minimal number of CPU core instructions and clock cycles spent per packet - enabling terabit performance. So what exactly is VPP, and how might it benefit your organization?
VPP is a data plane - a very efficient and flexible one. It consists of a set of forwarding nodes arranged in a directed graph, plus a supporting framework. The framework provides the basic data structures, timers, drivers (and interfaces to driver kits like DPDK), a scheduler that allocates CPU time among the graph nodes, and performance and debugging tools such as counters and a built-in packet trace. The packet trace captures the path each packet takes through the graph with high timestamp granularity, giving full insight into processing on a per-packet level.
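To make the graph model concrete, the sketch below is a simplified, self-contained illustration in C - not the real VPP framework or API, and every name in it is hypothetical. It wires a few forwarding nodes into a directed graph and records, per packet, the path taken through the nodes, in the spirit of VPP's built-in packet trace:

```c
/* Simplified illustration of a VPP-style node graph.
 * This is NOT the real VPP API; names and structures are hypothetical. */
#include <stdio.h>

#define MAX_TRACE 8
#define NODE_DONE -1   /* sentinel: packet leaves the graph */

typedef struct {
    int id;
    const char *trace[MAX_TRACE]; /* names of the nodes this packet visited */
    int trace_len;
} packet_t;

typedef struct {
    const char *name;
    /* Each node processes the packet and returns the index of the next node,
     * or NODE_DONE when processing is complete. */
    int (*process)(packet_t *pkt);
} node_t;

/* Node handlers: in real VPP these do parsing, lookups, rewrites, etc. */
static int ethernet_input(packet_t *pkt)   { (void) pkt; return 1; } /* next: ip4-input  */
static int ip4_input(packet_t *pkt)        { (void) pkt; return 2; } /* next: ip4-lookup */
static int ip4_lookup(packet_t *pkt)       { (void) pkt; return 3; } /* next: output     */
static int interface_output(packet_t *pkt) { (void) pkt; return NODE_DONE; }

static node_t graph[] = {
    { "ethernet-input",   ethernet_input },
    { "ip4-input",        ip4_input },
    { "ip4-lookup",       ip4_lookup },
    { "interface-output", interface_output },
};

/* Walk the packet through the directed graph, recording each node visited
 * (a toy version of VPP's built-in packet trace). */
static void dispatch(packet_t *pkt, int start_node)
{
    int n = start_node;
    while (n != NODE_DONE) {
        if (pkt->trace_len < MAX_TRACE)
            pkt->trace[pkt->trace_len++] = graph[n].name;
        n = graph[n].process(pkt);
    }
}

int main(void)
{
    packet_t pkt = { .id = 1, .trace_len = 0 };
    dispatch(&pkt, 0);

    printf("packet %d path:", pkt.id);
    for (int i = 0; i < pkt.trace_len; i++)
        printf(" -> %s", pkt.trace[i]);
    printf("\n");
    return 0;
}
```

In real VPP, each node processes an entire vector of packets per invocation, and the trace also records per-node timestamps.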
One of the main ways VPP achieves such high performance is by processing a vector of packets rather than one packet at a time. Ingress or egress packets are buffered into a vector and processed all at once, instead of constantly interrupting the CPU and causing needless context switching. Processing a vector at a time per network function has several benefits (a simplified sketch follows the list below):
Speed: VPP is fast. It is 10X - 100X faster than anything else on the market today. Service provider customers with high throughput needs will experience performance never before seen.
Slashed costs: Because VPP runs on commercial off-the-shelf (COTS) processors, underlying hardware costs drop dramatically. These savings can be passed along as a discount to customers, absorbed as increased margins, or both.
Flexibility and freedom: No longer is a service provider locked into a single vendor platform. VPP provides the flexibility and freedom to maneuver as business and market demands change.
Services expansion: Service providers can integrate other open-source applications to deliver virtually any security or networking service.
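To illustrate why vector processing helps, here is a minimal sketch in plain C - again a hypothetical illustration rather than the actual VPP code - contrasting one packet per call with one burst per call. Handling the whole burst in a single node invocation keeps that node's instructions hot in the CPU's instruction cache and amortizes per-call overhead across every packet in the vector:

```c
/* Simplified illustration of scalar vs. vector packet processing.
 * Hypothetical names; not the real VPP API. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define VECTOR_SIZE 256   /* VPP frames hold up to 256 packets */

typedef struct {
    uint32_t len;
    uint8_t  ttl;
} packet_t;

/* Scalar model: one packet per function call. Every call pays the full
 * overhead (call setup, cold instruction cache, possible context switch). */
static void process_one(packet_t *pkt)
{
    pkt->ttl--;                 /* e.g. an IPv4 forwarding step */
}

/* Vector model: one call handles the whole burst. The node's code is
 * fetched into the instruction cache once and reused for every packet. */
static void process_vector(packet_t *pkts, size_t n)
{
    for (size_t i = 0; i < n; i++)
        pkts[i].ttl--;
}

int main(void)
{
    packet_t burst[VECTOR_SIZE];
    for (size_t i = 0; i < VECTOR_SIZE; i++)
        burst[i] = (packet_t){ .len = 64, .ttl = 64 };

    /* Per-packet: VECTOR_SIZE separate invocations of the node. */
    for (size_t i = 0; i < VECTOR_SIZE; i++)
        process_one(&burst[i]);

    /* Per-vector: a single invocation covers the entire burst. */
    process_vector(burst, VECTOR_SIZE);

    printf("ttl after two forwarding steps: %u\n", (unsigned) burst[0].ttl);
    return 0;
}
```

The "vector" in Vector Packet Processing refers to this vector of packets; VPP's per-node loops additionally use prefetching and SIMD instructions where they help.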
Not only is FD.io’s VPP powerful, it can be deployed in discrete appliances, in support of virtual machines, or fully virtualized in support of cloud-native environments - as shown in the accompanying diagram.
Up to 100X faster
Flexibility to add new services
Simple orchestration
A tenth of the cost
Open source software is unlocking packet processing that was previously single-vendor, expensive, and slow - processing that underpins secure networking functions including routers, firewalls, VPNs, IDS/IPS, and more. But making VPP ready for enterprise and carrier-class deployments still requires integration development, testing, quality assurance, packaging, delivery, and support.
That is where TNSR from Netgate comes in. TNSR leverages the power of VPP to create a highly scalable, orchestration-managed, secure networking software platform built on an open source stack.
With TNSR, FD.io's VPP is now available in a commercial product - ready for the ongoing challenge of driving up secure networking performance and driving out cost.
Want to dig deeper into TNSR?