
Elevating Network Security Performance: Suricata’s Integration with DPDK


Introduction

The demand for high-performance network security solutions is at an all-time high, as organizations constantly seek faster and more efficient ways to handle traffic, detect threats, and ensure real-time response capabilities. 

Suricata, an open-source, high-performance network security engine, has long been at the forefront of these efforts. Network security professionals appreciate Suricata for its capabilities to act as an IDS (Intrusion Detection System), an IPS (Intrusion Prevention System), and an NSM (Network Security Monitoring) system.

But it’s the integration of the Data Plane Development Kit (DPDK) into Suricata that has allowed it to reach unprecedented performance levels, providing a vital boost for packet processing at high speeds. 

This story explores the journey of Suricata’s DPDK integration, the technical challenges and solutions, and the ongoing impact on Suricata’s functionality and performance.

Origins of Suricata: A Security Solution with Community at Its Core

In 2008, a group of security-focused professionals came together with a vision to improve open-source network security. 

Victor Julien, who was working as a contractor in the network security field, joined forces with Matt Jonkman, who led an early threat intelligence project (known as Emerging Threats), and Will Metcalf, who was involved in developing an inline version of Snort—a popular intrusion detection and prevention system (IDS/IPS). 

Their collaborative work in network security sparked the idea to create something new that would address gaps in existing solutions.

The journey truly began when Victor experimented with code on his own in 2007, without expecting much traction. However, after meeting Matt and Will at a conference in the U.S. and sharing his prototype with them, the project gained momentum. 

By 2008, they secured initial seed funding from the Department of Homeland Security (DHS), allowing them to pursue their vision formally. This funding was instrumental in establishing the Open Information Security Foundation (OISF), a nonprofit entity designed to ensure that the project would remain community-oriented and free from corporate control.

From the start, they were committed to making Suricata an open-source, community-driven project. With OISF in place, they chose the GPLv2 license, reflecting their belief in open collaboration and safeguarding the project from being absorbed by larger corporations. DHS funding, while crucial, was temporary, so they developed a sustainable model that allowed vendors to join OISF as members, offering a more flexible licensing option.

This foundational approach set the stage for what has now been a 15-year journey of innovation and collaboration in the network security field.

“We wanted to establish an organization that would make Suricata safe from acquisition, which we’d seen happen to other open-source projects at the time.” 

– Victor Julien – Suricata IDS/IPS Lead Developer

Since then, Suricata has gained adoption from large enterprises, including AWS, which integrates Suricata in its network firewall services.

A Perfect Fit: The Role of DPDK in Suricata’s Development

With increasing demand for high-performance network security tools, Suricata’s team saw an opportunity to leverage DPDK. DPDK provides a set of libraries and drivers for fast packet processing, bypassing traditional kernel limitations. 

This high-performance potential caught the attention of users and developers alike, many of whom were eager to see DPDK integration in Suricata. Lukas Sismis, a contributor who led Suricata’s DPDK integration, explained that several teams had previously worked on integrating DPDK with Suricata. 

However, most of these efforts were specific to unique use cases and lacked general applicability, which is why they hadn’t been contributed back to the Suricata codebase.

Lukas initially engaged with Suricata’s architecture through a master’s thesis, where his primary goal was to expand Suricata’s packet capture capabilities using DPDK. He explains, “Suricata’s architecture, with its separate capture logic, made it easy to add a new capture method.” 

His work, later incorporated into Suricata’s main codebase, helped create a general-purpose DPDK integration, ensuring Suricata’s compatibility with multiple DPDK-supported network interface cards (NICs) and enabling seamless configuration.

“Suricata’s architecture, with its separate capture logic, made it easy to add a new capture method.”

– Lukas Sismis, Software Engineer at Suricata & CESNET

Suricata’s Architectural Evolution and DPDK Integration

Suricata’s multi-threaded, modular design made it an ideal candidate for integration with DPDK. Suricata supports packet-capturing methods through its modular “capture interface,” which allows users to swap out packet capture techniques. 

DPDK, as an input method, fits naturally within this design and supports Suricata’s scalability goal: Suricata aims to run effectively everywhere from small, low-power deployments to high-speed data centers.

Lukas’ integration efforts involved setting up DPDK within Suricata as an alternative capture method, making it possible to directly interface with high-speed NICs while bypassing kernel overhead. Some of the major steps in this integration included:

  • Creating a New Capture Method: Lukas established DPDK as a new capture method within Suricata’s architecture, mapping configuration options for different NICs.
  • Testing Different NICs: Through this process, Lukas tested various NICs supported by DPDK, noting disparities in how each handled DPDK configurations.
  • Traffic Distribution Strategies: To manage packet distribution effectively, Lukas leveraged DPDK to configure hash-based packet distribution, efficiently balancing traffic load across CPU cores.

While most initial optimizations focused on enabling basic packet capture, this work laid the foundation for further enhancements. Testing showed a notable 10-15% performance gain, an exciting outcome that validated the decision to integrate DPDK as a core feature of Suricata’s capture options.

Tackling Hardware Offloading for Enhanced Performance

Beyond standard packet capture, the Suricata team recognized a significant opportunity in DPDK’s hardware offloading capabilities. Suricata’s high-speed packet processing can greatly benefit from the offloading of repetitive tasks to hardware, potentially bypassing certain types of network traffic. 

Lukas and his team began exploring offload capabilities that would allow Suricata to selectively filter traffic in hardware.

The primary focus of Suricata’s hardware offloading research has been on:

  • Flow Bypass: Allowing Suricata to ignore certain flows after initial inspection, reducing the processing load on uninteresting traffic.
  • Packet Filtering: Discarding unwanted traffic at the earliest stage frees up resources for the traffic that matters most.
  • Decapsulation and Encapsulation Offloads: Offloading these operations can reduce overhead in packet analysis, freeing up CPU resources for other tasks.

Although full offload implementation is still underway, initial testing shows promising potential. DPDK’s RegEx accelerator API, supported by NVIDIA BlueField and Marvell NICs, is an example of hardware that could handle pattern-matching offloads. This ongoing work has been presented at Suricon 2024.

Since Suricata’s detection engine performs extensive pattern matching, a hardware-based solution could significantly reduce CPU load.

Challenges and Technical Hurdles in DPDK Integration

Lukas encountered several challenges while working with DPDK, primarily related to hardware compatibility and traffic distribution. While DPDK offers a standardized API, not all NICs perform identically, which led to variations in performance during testing. 

One challenge was to cover and unify the different configurations of the load balancing hash function (RSS) in the NICs. This required NIC-specific experimentation and testing with different configuration mechanisms.

Lukas also had to modify Suricata’s configuration parsing to map settings to DPDK-compatible options, ensuring a more user-friendly experience. 

This testing phase highlighted the need for adaptable configurations to support a wide range of DPDK-enabled hardware.

Despite these challenges, Lukas’ integration work has laid a strong foundation for Suricata’s use of DPDK, making Suricata more adaptable to high-performance environments.

Leveraging Community and Industry Feedback

Suricata’s community engagement plays a vital role in its development. Lukas worked closely with the CESNET team, a network research institution with deep experience in DPDK. 

This collaboration allowed him to troubleshoot issues in real time without relying solely on online forums. In addition, Victor and Lukas sought feedback from DPDK maintainers like Thomas Monjalon and David Marchand, whose insights were invaluable in refining Suricata’s integration.

Suricata’s developers also participate in community channels, including a Discourse forum, Redmine, and a Discord server. While direct communication with the DPDK team has been limited, Suricata’s community-driven model allows users to share feedback directly with developers, accelerating improvements and ensuring the tool meets evolving needs.

Real-World Impact: Enhanced Packet Processing for Modern Network Demands

DPDK’s integration has brought measurable performance gains to Suricata, providing faster packet processing for users. Major security vendors are already leveraging Suricata with the DPDK integration in their products, attesting to its reliability and scalability.

DPDK’s impact is particularly evident in high-speed environments where packet capture bottlenecks could otherwise lead to packet drops or latency. The integration allows Suricata to handle higher packet rates efficiently, extending its utility in demanding, real-time network security use cases.

Looking Forward: New Horizons with AI and Machine Learning

As artificial intelligence and machine learning applications expand across technology sectors, Suricata’s team remains open to exploring AI-driven enhancements. 

Victor explained that AI’s most promising role would likely be in post-processing. Suricata currently exports JSON-formatted data, which can be fed into AI models for insights beyond immediate packet inspection. 

Many current machine learning models operate at a macro level, analyzing data patterns over time rather than in real time, which aligns well with Suricata’s current functionality as a data generator for other analytics tools.

Real-time AI inference for packet processing, however, remains a challenge. Victor elaborated, “Most AI models require milliseconds for inference, which is too slow for packet-level detection in real-time.” Still, the team is ready to adopt AI models once hardware advances make real-time AI feasible.

“Most AI models require milliseconds for inference, which is too slow for packet-level detection in real-time.”

– Victor Julien – Suricata IDS/IPS Lead Developer

Future Development: Suricata as a Library for Broader Integration

A major long-term goal for Suricata is to establish a core API, effectively transforming Suricata’s detection engine into a library that other tools can leverage. 

This approach could enable seamless integration of Suricata’s capabilities with other applications, such as proxy servers, endpoint security products, and cloud-based services. 

While the foundational work for this API exists, achieving a fully developed API will take time. Victor noted that this goal, motivated by growing encryption in network traffic, could broaden Suricata’s utility in increasingly secure environments.

This library initiative would allow third-party developers to incorporate Suricata’s detection features in novel ways, creating a flexible, modular ecosystem where Suricata is part of larger, more complex security infrastructures.

Expanding Community Engagement Through Events

Suricata’s annual conference, Suricon, exemplifies the project’s community-centric approach. Suricon gathers developers, users, and industry professionals to share insights, discuss roadmap goals, and showcase new features. 

With a mix of training sessions and talks, Suricon provides a valuable opportunity for knowledge exchange and collaboration. DPDK community members have shown interest in attending future events, strengthening cross-community relationships, and fostering a shared development approach.

Suricata’s collaboration model has proven instrumental in its growth. This strong community foundation ensures that Suricata can keep pace with rapidly changing security demands.

Conclusion: Pushing Network Security Boundaries

Suricata’s integration with DPDK marks a significant milestone in its evolution, empowering it to achieve higher performance, greater adaptability, and better hardware compatibility. 

From initial testing to real-world deployments, DPDK’s impact has been transformative, enabling Suricata to meet the demands of today’s high-speed, security-focused networks. 

Through community feedback, industry collaboration, and a forward-looking approach to hardware offloading and AI, Suricata continues to redefine what’s possible in open-source network security.

As Suricata looks ahead, its development team remains committed to innovation and community-driven progress. With a roadmap that includes expanded hardware offloading, AI-driven enhancements, and new API integrations, Suricata is well-positioned to lead the next generation of network security solutions. 

This DPDK integration story exemplifies how open-source collaboration can drive meaningful advancements, pushing technology forward in response to real-world needs.

Learn more about contributing to DPDK here

Unleashing Network Performance with Microsoft Azure MANA and DPDK


Introduction

In the modern cloud computing era, network performance and efficiency are paramount. Microsoft Azure has been at the forefront of this revolution, introducing innovative solutions like the Microsoft Azure Network Adapter (MANA) and integrating the Data Plane Development Kit (DPDK) to enhance the network capabilities of Azure virtual machines.

In this user story we interview Brian Denton and Matt Reat, Senior Program Managers for Azure Core. Brian’s role has been pivotal, focusing on engaging with all network virtual appliance partners to ensure they are prepared and supported for the introduction of a new Network Interface Card (NIC) into Azure. 

Matt’s journey at Microsoft began primarily within the networking domain. His career commenced with network monitoring before transitioning, about four years ago, into what is referred to as the host networking space. This area encompasses the SDN software stack and hardware acceleration efforts aimed at enhancing customers’ ability to utilize an open virtual network (OVN) and improve their overall experience on Azure. 

A natural progression of his work has involved spearheading innovations in software and the development of hardware, which have recently been introduced to the public as Azure Boost. Additionally, his contributions include the development of the MANA NIC, a product developed in-house at Microsoft. 

The Genesis of Azure MANA

Azure MANA represents a leap in network interface technology, designed to provide higher throughput and reliability for Azure virtual machines. As the demand for faster and more reliable cloud services grows, Azure’s response with MANA smartNICs marks a significant milestone, aiming to match and surpass AWS Nitro-like functions in network and storage speed acceleration. 

Microsoft’s strategy encompasses a comprehensive approach, with a primary focus on hardware acceleration from top to bottom. This effort involves current work being conducted on the host and in the hypervisor (Hyper-V), aiming to advance hardware capabilities. Such initiatives are also being pursued by competitors, including AWS with its Nitro system and Google with a similar project, marking Microsoft’s contribution to this competitive field.

Behind the scenes, the team implemented several enhancements that remained undisclosed until the announcement of Azure Boost last July. This development compelled them to reveal their progress, especially with the introduction of the MANA NIC, which had been concealed from customer view until then.

The introduction of the new MANA NIC, boasting ratings of up to 200 Gbps in networking throughput, represents a significant enhancement of the current Azure offerings, in line with Microsoft’s competition. The reliance on off-the-shelf solutions proved to be cost-prohibitive, prompting a shift to a fully proprietary, in-house solution integrated with their Field-Programmable Gate Array (FPGA).

DPDK’s Role in Azure’s Network Evolution

DPDK offers a set of libraries and drivers that accelerate packet processing on a wide array of CPU architectures. Microsoft Azure’s integration of DPDK into its Linux Virtual Machines (VMs) is specifically designed to address the needs of applications that demand high throughput and low latency, making Azure a compelling choice for deploying network functions virtualization (NFV), real-time analytics, and other network-intensive workloads.

The technical essence of DPDK’s acceleration capabilities lies in its bypass of the traditional Linux kernel network stack. By operating in user space, DPDK enables direct access to network interface cards (NICs), allowing for faster data plane operations. This is achieved through techniques such as polling for packets instead of relying on interrupts, batch processing of packets, and extensive use of CPU cache to avoid unnecessary memory access. Additionally, DPDK supports a wide range of cryptographic algorithms and protocols for secure data processing, further enhancing its utility in cloud environments.

Azure enhances DPDK’s capabilities by offering support for a variety of NICs optimized for use within Azure’s infrastructure, including those that support SR-IOV (Single Root I/O Virtualization), providing direct VM access to physical NICs for even lower latency and higher throughput. Azure’s implementation also includes provisions for dynamically managing resources such as CPU cores and memory, ensuring optimal performance based on workload demands.

Microsoft’s commitment to DPDK within Azure Linux VMs underscores a broader strategy to empower developers and organizations with the tools and platforms necessary to build and deploy high-performance applications at scale. By leveraging DPDK’s packet processing acceleration in conjunction with Azure’s global infrastructure and services, users can achieve the highest possible performance on Azure. 

Enhancing Cloud Networking with Azure MANA and DPDK

Azure MANA and DPDK work in tandem to push the boundaries of cloud networking. MANA’s introduction into Azure’s ecosystem not only enhances VM throughput but also supports DPDK, enabling network-focused Azure partners and customers to access hardware-level functionalities. When introducing a new NIC, it is essential to have DPDK support. The primary concern is that Azure customers will begin to encounter MANA NICs across various Virtual Machine (VM) sizes, necessitating support for these devices. This situation highlights a notable challenge.

The scenario involves three NICs and two Mellanox drivers requiring support, indicating a significant transition. The introduction of this new NIC and its drivers is intended for long-term use. The goal is for the MANA driver to be forward-compatible, ensuring that the same driver remains functional many years from now, without the need to introduce new drivers for new NICs with future revisions, as previously experienced with ConnectX and Mellanox.

The objective is a long-term support driver that abstracts hardware changes in Azure and the cloud affecting guest VMs, offering a steadfast solution for network I/O. Although the future specifics remain somewhat to be determined, the overarching aim is to support the features available on Azure, focusing on those needs rather than the broader spectrum of Mellanox’s customer requirements. Some features necessary for Azure may not be provided by Mellanox, and vice versa. Thus, the ultimate goal is to support Azure customers with tailored features, ensuring compatibility and functionality for the long term.

Microsoft offers a wide array of networking appliances that are essential to their customers’ architectures in Azure. Therefore, part of their effort and emphasis on supporting DPDK is to ensure their customers receive the support they need to operate their tools effectively and achieve optimal performance.

“Supporting DPDK is essential to accommodate those toolsets. Indeed, maximizing the use of our hardware is also crucial. This is an important point because there’s potential for greater adoption of DPDK.”

Matt Reat, Senior Program Manager at Microsoft

Typically, Microsoft’s users, mainly those utilizing network virtual appliances, leverage DPDK, and Microsoft is observing increased adoption not only among its virtual appliance partners but also among customers who express intentions to use DPDK. It’s not limited to virtual appliance products alone. Microsoft also has large customers with significant performance requirements who seek to maximize their Azure performance. To achieve this, leveraging DPDK is absolutely essential.

The Technicals of MANA and DPDK

The MANA poll mode driver library (librte_net_mana) is a critical component in enabling high-performance network operations within Microsoft Azure environments. It provides specialized support for the Azure Network Adapter Virtual Function (VF) in a Single Root I/O Virtualization (SR-IOV) context. This integration facilitates direct and efficient access to network hardware, bypassing the traditional networking stack of the host operating system to minimize latency and maximize throughput.

By leveraging the DPDK (Data Plane Development Kit) framework, the MANA poll mode driver enhances packet processing capabilities, allowing applications to process network packets more efficiently. This efficiency is paramount in environments where high data rates and low latency are crucial, such as in cloud computing, high-performance computing, and real-time data processing applications.

The inclusion of SR-IOV support means that virtual functions of the Azure Network Adapter can be directly assigned to virtual machines or containers. This direct assignment provides each VM or container with its dedicated portion of the network adapter’s resources, ensuring isolated, near-native performance. It allows for scalable deployment of network-intensive applications without the overhead typically associated with virtualized networking.

Overall, the technical sophistication of the MANA poll mode driver library underscores Microsoft Azure’s commitment to providing advanced networking features that cater to the demanding requirements of modern applications. Through this library, Azure ensures that its cloud infrastructure can support a wide range of use cases, from web services to complex distributed systems, by optimizing network performance and resource utilization.

“The MANA poll mode driver library, coupled with DPDK’s efficient packet processing, allows us to optimize network traffic at a level we couldn’t before. It’s about enabling our customers to achieve more with their Azure-based applications.”

Matt Reat, Senior Program Manager at Microsoft

The setup procedure for MANA DPDK outlined in Microsoft’s documentation provides a practical foundation for these advancements, ensuring that users can leverage these enhancements with confidence. Furthermore, the support for Microsoft Azure Network Adapter VF in an SR-IOV context, as implemented in the MANA poll mode driver library, is a testament to the technical prowess underlying this integration.

Performance Evaluation and Use Cases

Evaluating the performance impact of MANA and DPDK on Linux VMs highlights significant improvements in networking performance. Azure’s documentation provides insights into setting up DPDK for Linux VMs, emphasizing the practical benefits and scenarios where the combination of MANA and DPDK can dramatically improve application responsiveness and data throughput. 

Microsoft effectively utilizes the Data Plane Development Kit (DPDK) on the host side to optimize network performance across its Azure services. This approach not only supports customer applications by enhancing the speed and efficiency of data processing on virtual machines but also strengthens Microsoft’s own infrastructure. 

By leveraging DPDK, Azure can handle higher data loads more effectively, which is crucial for performance-intensive applications. For a deeper understanding of how DPDK facilitates these improvements in cloud computing, view the latest webinar, “Hyperscaling in the Cloud,” which discusses the scale and scope of DPDK’s impact on Azure’s network architecture. 

“We’re aiming to push the boundaries of network performance within Azure, leveraging MANA alongside DPDK to achieve unprecedented throughput and reliability for our virtual machines.” 

Brian Denton, Senior Program Manager, Microsoft Azure Core

Significant emphasis is placed on the first 200 Gbps NIC, highlighting a substantial focus on achieving high throughput. A necessary corollary to this objective is support for a high packet rate. Extensive work goes into understanding and benchmarking throughput across various packet sizes, and DPDK serves as the primary method for testing the hardware in this regard.

Microsoft’s engineering counterparts focus on the overall testing methodology for developing a DPDK driver set, as well as testing the hardware itself and the VM performance on that hardware. This includes client-side involvement in testing. Currently, only Linux is officially supported for DPDK, although there have been attempts to use Windows and FreeBSD. Various host configurations also play a crucial role in qualifying their hardware.

Future Directions and Community Engagement

As Azure continues to evolve, the collaboration between Microsoft’s engineering teams and the open-source community remains vital. The development of MANA and its integration with DPDK reflects a broader commitment to open innovation and community-driven improvements in cloud networking.

Conclusion

As Microsoft Azure continues to evolve, the partnership between Microsoft’s engineering teams and the DPDK open-source community is poised to play a crucial role in shaping the future of cloud networking. The development of the Microsoft Azure Network Adapter (MANA) and its integration with the Data Plane Development Kit (DPDK) underscore a commitment to leveraging open innovation and fostering community-driven advancements.

The future role of Azure MANA, in conjunction with the DPDK community, is expected to focus on breaking new technical limits in cloud networking. This collaboration could lead to significant enhancements in network performance, including higher throughput, reduced latency, and greater efficiency in packet processing. By leveraging DPDK’s efficient packet processing capabilities alongside the hardware acceleration offered by MANA, Azure aims to provide an optimized networking stack that can meet the demanding requirements of modern applications and services.

Moreover, this is likely to drive the development of new features and capabilities that are specifically tailored to the needs of Azure’s diverse user base. This could include advancements in virtual network functions (VNFs), network function virtualization (NFV), and software-defined networking (SDN), which are essential components in a cloud-native networking landscape.

The open-source nature of DPDK also ensures that the broader community can contribute to and benefit from these developments, promoting a cycle of continuous improvement and innovation. This collaborative approach not only enhances the capabilities of Azure’s networking services but also contributes to the evolution of global cloud networking standards and practices.

Ultimately, the future of Microsoft Azure MANA and the DPDK open-source community is likely to be characterized by the breaking of current technical barriers, the introduction of groundbreaking networking solutions, and the establishment of Azure as a leading platform for high-performance, cloud-based networking services.

Check out the summary and additional use cases on Hyperscaling in the Cloud here.

Join the community on slack here

Marvell, DPDK and the Rise of Octeon: The Journey to a Systematic Ecosystem


In the rapidly evolving landscape of silicon and networking technologies, providing robust and standardized support for hardware has become a paramount aspect of success. Marvell, a leading provider of silicon solutions, embarked on a transformative journey to ensure seamless support for their Octeon system-on-chip (SoC) through the adoption of DPDK (Data Plane Development Kit). 

This open source framework has emerged as the primary vehicle for Marvell’s silicon support, enabling the integration of sophisticated high-bandwidth accelerators with various applications. This user story dives deep into Marvell’s experiences, showcasing their transition from a chaotic ecosystem to standardized silicon support, and the significant role DPDK played in this evolution.

For this user story we interviewed Prasun Kapoor (AVP of Software Engineering), an accomplished professional with a wealth of experience in software engineering and semiconductor technologies. With a strong background in leading-edge technologies, Prasun has played a pivotal role in shaping the landscape of silicon solutions and networking technologies. As a seasoned AVP of Software Engineering at Marvell, Prasun has demonstrated exceptional leadership and expertise in driving innovation and fostering collaboration within the industry. 

Chaos to Standardization: Overcoming Legacy Code Bases

When Marvell (at the time Cavium) launched its first packet-acceleration and security-focused multi-core SoC, there was no DPDK. Marvell implemented its own proprietary HAL library, which gave end users a programming interface very close to the hardware. 

Many customers implemented large applications built on top of this HAL library and many times forked and customized it to suit their purposes. 

However, transitioning between different silicon generations often disrupted customer applications due to minor changes in the hardware’s programming interface. This challenge was exacerbated by Cavium’s reluctance to make source code for this HAL layer available publicly or contribute it to any open-source project. This prevented Cavium from adopting DPDK from the very beginning.  

The turning point for them came about in 2012-13 when they decided to create a server product. This step forced them to realize the importance of conforming to standard specifications for both hardware and software. It quickly became clear that they would not attract customers without supporting the broader software ecosystem. The previous strategy of relying solely on homegrown solutions was no longer sustainable.

Recognizing the need for a standardized library, Marvell turned to DPDK, an open and collaborative specification for networking and packet processing. By adopting DPDK at the project’s inception, Marvell aimed to provide its customers with a stable and predictable programming interface, eliminating compatibility issues when transitioning between silicon generations. The decision to align with DPDK was a fundamental shift for Marvell, enabling them to provide seamless support for their silicon.

Embracing DPDK and Collaborative Contributions

This shift to open source wasn’t merely a preference but a hard requirement, particularly in the 5G domain. Vendors in the wireless space required every piece of software provided to them to be upstreamable, and preferably upstreamed. Tolerance for proprietary APIs had dropped sharply. Cavium’s first foray into open-source APIs started with the OpenDataPlane (ODP) project, but they adopted DPDK shortly thereafter given the much wider adoption of that framework.

While the journey to open source met some initial resistance, it proved beneficial from a business perspective. Moreover, the transition to the Data Plane Development Kit (DPDK), an open-source set of libraries and drivers for fast packet processing, was monumental.

This transition took Marvell from a somewhat chaotic collection of conflicting proprietary systems to a streamlined operation with enhanced inter-system compatibility and fluidity. The transition also had significant implications for return on investment.

“Open-source development is not just a trend; it’s a necessary strategy for technological growth and customer satisfaction. By embracing open-source, Marvell could navigate the complexities of the tech market and build a more sustainable business model.”

Prasun Kapoor, Assistant Vice President – Software Engineering at Marvell Semiconductor

Indeed, the push towards open-source has helped Marvell build a more robust relationship with its customers. The company now engages in regular discussions with its customers, ensuring that every piece of software supplied aligns with their needs and is upstreamable. This level of transparency and collaboration has been invaluable in nurturing customer trust and fostering long-term relationships.

Marvell’s adoption of DPDK went beyond conforming to the specifications. They actively participated in the DPDK community, collaborating with other vendors to propose RFCs and extend the specifications. This approach allowed Marvell to integrate their unique accelerators and technologies into the DPDK framework, ensuring that their hardware was well-supported and widely usable. It also gave end users a single application programming interface to program different classes of devices, such as ASICs, FPGAs, or software, for a given workload acceleration.

From the inception of the DPDK project, Marvell recognized the openness and receptiveness of the DPDK community to quality contributions. Initially, many of Marvell’s accelerators had no proper representation in the DPDK APIs. 

As a result, Marvell worked diligently to propose RFCs and establish common API infrastructures that catered to the needs of the entire ecosystem. This collaborative effort ensured that all vendors could leverage the benefits of the standardized APIs and maximize their hardware capabilities.

Marvell’s commitment to collaborative contributions, rather than relying on proprietary APIs, helped establish a level playing field within the DPDK community. They actively extended the specifications and submitted their advancements, ensuring a robust and comprehensive framework for all users. Over the years, Marvell’s contributions have made a vast array of accelerators, such as event accelerators, machine-learning accelerators, cryptographic accelerators, memory pool managers, and more, fully usable through standard applications.

The Benefits of DPDK Adoption 

Marvell’s wholehearted adoption of DPDK brought numerous benefits to both the company and its customers. Firstly, the transition between different silicon generations became seamless and predictable. Gone were the disruptions and compatibility issues that plagued the legacy code base approach. 

By adhering to the standardized DPDK APIs, Marvell reduced its support burden significantly, as compatibility was ensured through the collaborative efforts of the DPDK community.

Moreover, Marvell’s adoption of DPDK enabled them to tap into the collective work of other partners and vendors within the DPDK community. This collaboration created a win-win situation, where Marvell could leverage the advancements made by others, while their contributions also benefited the community at large. 

DPDK’s standardized library became the common language among Marvell’s customers, ensuring that requests for functionality tweaks adhered to DPDK compliance. This shift in customer mindset and adherence to the standard further enhanced the stability and scalability of Marvell’s solutions.

Furthermore, the adoption of DPDK opened up opportunities for Marvell to provide standard Red Hat support, which was previously challenging with their MIPS-based systems. Customers expressed a desire to run popular Linux distributions like Ubuntu on Marvell’s chips, prompting the company to embrace the open-source ecosystem fully. 

By submitting kernel code and embracing open-source practices, Marvell gained access to comprehensive support from established Linux distributions, further strengthening their position in the market.

The Role of the DPDK Community Lab

Marvell acknowledges the significance of the DPDK community lab in enhancing the robustness of the project. While they don’t explicitly depend on the community lab for testing and validation, its existence contributes to the overall quality of the DPDK project. 

The continuous validation and rigorous testing conducted in the community lab help identify and address bugs, ensuring that DPDK implementations are reliable and stable.

Marvell’s experience with DPDK has been positive in terms of stability and compatibility. The community lab’s rigorous testing and continuous integration and delivery (CI/CD) processes have played a crucial role in achieving this outcome. 

The lab’s comprehensive testing frameworks and collaborative efforts have resulted in a mature and dependable DPDK framework, which Marvell and other contributors benefit from.

Conclusion 

Marvell’s transition to DPDK illustrates the strength of open-source collaboration, standardization, and community engagement in streamlining support for their Octeon system-on-chip. By aligning with DPDK, Marvell overcame hardware compatibility issues, fostering a more versatile ecosystem. 

This open-source commitment resulted in seamless transitions across silicon generations, creating a predictable application programming interface for customers. 

The integration of Marvell’s accelerators into the DPDK community promoted innovation while preserving compatibility. The presence of the DPDK community lab improved the overall robustness of DPDK implementations, benefiting all contributors. 

Marvell’s DPDK experience underscores the transformative power of open-source collaboration and the benefit of standardized libraries, positioning it as a leading provider of seamless silicon solutions in diverse domains such as 5G, enterprise security, and networking.

Check out the latest videos from Marvell at the DPDK 2023 Summit here.

How Ericsson Leverages DPDK for Data Plane Acceleration in the Cloud

By User Stories

Introduction

In the fast-paced world of telecommunications, companies are constantly seeking solutions to address evolving challenges and meet the demands of their customers. Ericsson, a global leader in the industry, has been at the forefront of incorporating new technologies into its product portfolio. One such technology is the Data Plane Development Kit (DPDK), which has proven instrumental in revolutionizing packet processing for Ericsson’s network infrastructure. This user story delves into Ericsson’s utilization of DPDK, the benefits it has brought, and the challenges associated with transitioning to a cloud-native environment.

Ericsson’s Shifting Landscape 

Ericsson, a prominent vendor in the telecommunications domain, has a rich history of innovation and adaptability. With over 100,000 employees and a diverse range of products, Ericsson has witnessed a significant shift from traditional infrastructure to cloud-native solutions. As the industry embraces cloud-native architectures, Ericsson recognizes the importance of incorporating new technologies that align with this paradigm shift. DPDK, though not entirely new, has emerged as a critical component within Ericsson’s product portfolio, facilitating efficient packet processing and enabling the company to remain competitive in an evolving market.

Exploring DPDK’s Role

Niklas Widell, Standardization Manager at Ericsson AB, and Maria Lingemark, Senior Software Engineer at Ericsson, shed light on the company’s adoption of DPDK. Maria, who has been involved in evaluating and utilizing DPDK since 2016, emphasizes the need for high-speed packet processing and the ability to split packet flows into multiple parallel streams. DPDK’s eventdev implementation has been instrumental in achieving these goals, enabling Ericsson to process a large number of packets per second while maintaining the flexibility to distribute packet processing across multiple steps.

Transitioning from Specialized Hardware 

Before incorporating DPDK, Ericsson relied on proprietary ASIC hardware to handle packet processing. However, the company recognized the need to shift toward more readily deployable commercial off-the-shelf (COTS) hardware solutions. DPDK played a crucial role in enabling Ericsson to transition from specialized hardware to a more versatile and scalable environment, reducing the reliance on custom solutions and increasing the reach of their offerings to a broader customer base.

Flexibility and Cost Efficiency

DPDK offers Ericsson the flexibility to deploy their packet processing solutions across a range of hardware configurations, both on ASIC hardware and on common x86-based platforms. By leveraging DPDK’s capabilities, Ericsson can scale their applications and efficiently utilize the available CPU resources. Moreover, the compatibility of DPDK with multiple drivers allows Ericsson to leverage hardware-specific features where available, enhancing performance and optimizing resource utilization.

Challenges of Observability and Cloud-Native Adoption 

As Ericsson embraces cloud-native architectures, they encounter challenges related to observability, performance monitoring, and troubleshooting. Observing and comprehending the behavior of a complex system that processes packets in parallel across multiple CPUs and threads can be daunting. Balancing observability with performance optimization becomes crucial, requiring continuous refinement and adaptation. Additionally, the shift to cloud-based deployments necessitates rethinking observability strategies and ensuring seamless performance monitoring in these environments.

“We needed to shift from doing packet processing on special-purpose hardware, to doing it on cloud-based general compute hardware. DPDK enabled this – it created versatility and broadened external access. It significantly helped Ericsson meet our customers’ needs and demands as those changed in scale, and gave our team greater portability as well. And the ability to reuse it across different departments without having to rewrite code was, and is, a significant benefit.” – Maria Lingemark, Senior Software Engineer – Ericsson

To tackle the observability challenges, Ericsson leverages the eBPF (extended Berkeley Packet Filter) integration in DPDK. By deploying eBPF programs within the DPDK framework, they have achieved efficient packet processing, improved throughput, and enhanced network visibility. The flexibility offered by eBPF allows Ericsson to tailor their networking solutions to specific use cases, ensuring optimal performance and resource utilization. 

“Ericsson uses the included eBPF support in DPDK to simplify observability in complex cloud environments.” Anders Hansen, Cloud RAN System Developer – Ericsson

DPDK BBDev (Baseband Device)

DPDK BBDev (Baseband Device) plays a critical role in Ericsson’s ability to develop a portable and efficient Radio Access Network (RAN) implementation that seamlessly integrates with hardware acceleration from leading silicon vendors. This integration enables Ericsson to leverage the full potential of specialized hardware acceleration features offered by these vendors, enhancing the performance and efficiency of their RAN solutions.

By utilizing DPDK BBDev, Ericsson gains access to a standardized programming interface that abstracts the complexities of hardware-specific optimizations. This allows them to focus on developing high-performance RAN software while ensuring compatibility with various hardware platforms. The portability provided by DPDK BBDev enables Ericsson to deploy their RAN solutions across a wide range of hardware architectures, offering flexibility to their customers while cultivating a healthy O-RAN ecosystem in the industry.

“DPDK BBDev enables Ericsson to create a portable and efficient RAN implementation that is well integrated with HW acceleration from major silicon vendors” – Michael Lundkvist, Principal Developer, RAN Application Architect – Ericsson

The integration of HW acceleration from major silicon vendors further boosts Ericsson’s RAN implementation. These hardware accelerators are specifically designed to offload computationally intensive tasks, such as FEC processing, resulting in improved throughput, lower latency, and reduced power consumption. By effectively utilizing these acceleration capabilities through DPDK BBDev, Ericsson delivers efficient and high-performing RAN solutions to their customers.

For more in-depth information on how DPDK BBDev enables Ericsson’s portable and efficient RAN implementation, you can refer to the white paper provided here. This white paper will delve into the technical details and showcase the advantages of integrating DPDK BBDev with hardware acceleration from major silicon vendors, offering valuable insights into Ericsson’s innovative RAN solutions.

DPDK and the Open Source Linux Foundation Community


Ericsson derives substantial benefits from its active involvement in both the open-source DPDK (Data Plane Development Kit) community and the larger Linux Foundation. By being an integral part of these communities, Ericsson experiences several advantages that contribute to their success and technological advancements.

First and foremost, being part of the DPDK community grants Ericsson access to a thriving ecosystem of contributors and developers focused on advancing high-performance packet processing. This access enables Ericsson to stay at the forefront of technological developments, leverage new features, and benefit from ongoing enhancements to DPDK. The collaborative nature of the open-source community encourages continuous innovation, allowing Ericsson to deliver cutting-edge solutions to their customers.

Engaging in the DPDK community also fosters collaboration and knowledge sharing with industry peers and experts. Ericsson can exchange ideas, best practices, and insights, benefitting from the collective expertise of the community. This collaboration helps Ericsson overcome challenges, improve their solutions, and accelerate their development cycles, all while contributing to the growth and success of the DPDK project.

Furthermore, Ericsson experiences a faster time to market by utilizing DPDK and collaborating within the community. By leveraging the work done by the DPDK community, Ericsson can capitalize on existing libraries, APIs, and optimizations, saving valuable development effort and resources. This efficiency enables Ericsson to bring their solutions to market more rapidly, meeting customer demands, gaining a competitive edge, and seizing market opportunities promptly.

Interoperability and compatibility are additional advantages that Ericsson enjoys through their involvement in the DPDK community and the larger Linux Foundation. DPDK’s emphasis on interoperability and common standards allows Ericsson to seamlessly integrate their solutions with other systems and platforms. This compatibility fosters a broader ecosystem, enabling Ericsson to collaborate effectively with other vendors and organizations, further expanding their market reach.

Participating in these open-source communities also positions Ericsson as an influential player and thought leader in high-performance networking and packet processing. Their contributions to the DPDK project not only enhance the framework’s functionality but also demonstrate their technical expertise and commitment to open-source initiatives. Ericsson’s influence and leadership within the community allow them to shape the direction and evolution of DPDK, driving the adoption of industry standards and best practices.

Lastly, being part of the larger Linux Foundation ecosystem offers Ericsson access to a vast network of organizations, developers, and industry leaders. This ecosystem provides collaboration opportunities, potential partnerships, and access to a network of expertise. By leveraging these connections, Ericsson can foster innovation, explore joint development efforts, and stay at the forefront of technological advancements in networking and telecommunications.

Enhancing DDoS Mitigation with Gatekeeper & DPDK: A Practical Solution for Network Operators

By User Stories

Author: Michel Machado – michel@digirati.com.br

Overview 

Originally developed at Boston University, Gatekeeper is the brainchild of researchers who looked at the state of distributed denial-of-service (DDoS) attacks and realized that the community lacked an affordable, performant, and deployable solution for defending against such attacks. On one hand, cloud companies offer DDoS protection as a service, but this can be costly. On the other hand, many research proposals have been developed to allow Internet operators to protect their own networks, but none of these ideas have yielded production-quality software systems. Gatekeeper puts theory into practice by providing network operators with an instantly deployable and affordable solution for defending against DDoS attacks, and does so without sacrificing performance by leveraging DPDK as a key enabling technology.

The Challenge

Part of the challenge in defending against DDoS attacks is differentiating good traffic from bad traffic in seconds. To do so most effectively requires capturing the qualities of individual flows as they pass through the DDoS mitigation system — this allows the system to rate limit flows, apply policies based on the traffic features, and punish flows that misbehave by blocking them completely. Capturing these qualities for each packet that passes through the system requires an extreme amount of CPU and memory resources, especially during attacks that nowadays stretch beyond 1 Tbps. To withstand attacks of this magnitude, DDoS mitigation systems either need to be widely deployed in parallel (which is expensive), or need to be especially careful in how they process packets. The latter is where Gatekeeper utilizes DPDK to be able to work efficiently and affordably.

The Solution

To be able to process packets at this scale, kernel bypass is absolutely necessary. We chose DPDK as a kernel bypass solution because of its stability and support from industry, as well as the feature set that it supports. In fact, the feature set of DPDK is so rich that it significantly reduced our time to market since we did not have to write everything from scratch.

Gatekeeper heavily relies on three key features in DPDK: (1) NUMA-aware memory management, (2) burst packet I/O, and (3) eBPF. These features allow Gatekeeper to enforce policies as programs instead of firewall rules, and to do so efficiently. This gives operators a lot of flexibility in determining how flows are processed by Gatekeeper without having to sacrifice performance.

On occasion, we found some shortcomings in DPDK libraries. For example, the LPM6/FIB6/RIB6 libraries that perform longest prefix matching on IPv6 were not a good fit, and we had to implement our own. But for each issue we have come across, we’ve found huge success with other libraries as described below. Furthermore, the community is hard at work addressing production demands such as dynamically setting the maximum number of memory zones (see rte_memzone_max_set() for details), which previously required patching DPDK to change.

The Results

With DPDK, Gatekeeper achieves the following benefits:

  • NUMA-aware memory management allows Gatekeeper to reduce memory access latency by enabling CPU cores to access local memory instead of remote memory.
  • Burst packet I/O reduces the per-packet cost of accessing and updating queues, enabling Gatekeeper to keep up with volumetric attacks.
  • eBPF (integrated in DPDK) enables deployers to write policies that are impossible to express in other solutions such as requiring TCP friendliness, bandwidth per flow, and quotas for auxiliary packets (e.g. ICMP, TCP SYN, fragments) per flow. Thanks to the guarantee of termination of eBPF programs, Gatekeeper can gracefully continue processing packets even when an eBPF program is buggy.

Many other DPDK features, including prefetching, the kernel NIC interface, and packet filters, play key supporting roles. With DPDK’s help, a modest Gatekeeper server can track 2 billion flows while processing at the very least 10 Mpps through eBPF program policies that decide whether to allow, rate limit, or drop traffic.

“Gatekeeper puts DDoS defense back in the hands of network operators, administrators, and the general open-source community. What was until recently only available via opaque and expensive third-party services can now be deployed by anyone with the appropriate infrastructure, with levels of flexibility and control that simply did not formerly exist.” – Andre Nathan, Digirati

The Benefits

DDoS attacks cause great financial, political, and social damage, and are only increasing in magnitude, complexity, and frequency. With Gatekeeper, network operators have a production-quality, open source choice in the market to defend their infrastructure and services. With the aid of technologies like DPDK, Gatekeeper is able to flexibly and efficiently defend against attacks, lowering the cost of deployment and enabling many stakeholders to protect themselves. To date, Gatekeeper has been deployed by Internet service providers, data centers, and gaming companies, and hopes to reach new deployers to eventually eradicate DDoS attacks.

Check out their GitHub here.

White paper link 

Have a user story of your own that you would like to share across the DPDK and Linux foundation communities? Submit one here.

SmartShare Systems Leverages DPDK to Significantly Increase WAN Optimization

By User Stories

The Company / Product

SmartShare Systems is a small privately held company founded in 2006 by Morten Brørup in Denmark. SmartShare Systems develops innovative network appliances and related services, with R&D in Denmark and hardware manufacturing in Taiwan. Their solutions have quickly become popular and are currently used by schools, commercial businesses, apartment buildings, hotels, military bases, cruise ships, and internet service providers. The products are sold through value-added resellers with expertise in networks and system integration.

SmartShare’s main product line, the StraightShaper products, is focused on WAN Optimization. WAN Optimization is typically used to reduce the data consumption on a costly WAN link. However, the primary purpose of the StraightShaper appliance is to make the WAN link (typically an internet connection) run smoothly for every user. WAN Optimization is relevant when users don’t have access to unlimited WAN bandwidth, or when the network infrastructure doesn’t have unlimited bandwidth capacity, e.g.:

  • A drilling rig crew sharing a VSAT satellite internet connection.
  • Soldiers in a military camp in the middle of nowhere, sharing whatever internet connection is available.
  • Cruise ship guests using on-board Wi-Fi, sharing the ship’s LTE/5G antenna array or Starlink satellite internet connection while at sea.
  • Students taking their final exams online at the school gym, sharing the school’s fiber internet connection.
  • (But probably not for a family of four sharing a gigabit fiber internet connection at home.)

The key WAN Optimization technologies in the StraightShaper products are:

  • User Load Balancing: Distributing the available bandwidth to the active users ensures that everyone has bandwidth all the time. This can include configuration options to assign various priorities and bandwidths to individual subscribers.
  • Bufferbloat Prevention: All network products have buffers, where the packets can queue up and cause increased latency. This is known as “network lag” by online gamers and “bufferbloat” by network professionals. By managing the buffers with this in mind, bufferbloat can be prevented, so the users are not exposed to excessive delays when using the internet.
  • Dynamic Quality of Service: Automatically detecting and prioritizing voice packets over data packets ensures good sound quality for IP telephony.
  • Caching: Caching of DNS replies, for example, reduces the total time it takes to load a web page if someone else has recently visited the same web site.
  • Content Filtering: By optionally blocking certain internet services that use a lot of bandwidth, internet link capacity is freed up for other purposes.

“Development velocity is much higher with DPDK than it was with our Linux kernel-based product line. With DPDK, we only have to develop what we need, and it makes our application code cleaner than when trying to fit our code into some other framework.”
MORTEN BRØRUP, CTO, SMARTSHARE SYSTEMS

The Challenge – WAN Optimization and the Linux kernel:

Some years ago, as bandwidth demands increased, SmartShare’s initial StraightShaper product, based on the Linux kernel, started facing limitations, as Linux is neither designed for nor well suited to highly specialized packet processing. This presented two main challenges:

  • Performance: The Linux kernel’s “qdisc” shaping system does not scale to multiple cores per network interface, and rewriting the kernel would be a major effort; the product could not scale beyond a few gigabits per second, which customers were starting to look for.
  • Complexity: The Linux kernel’s IP stack is extremely advanced and feature-rich, which is great for many purposes. In the Linux kernel, each packet passes through a large number of predefined functions and hooks, and depending on various criteria, packets take different routes through these functions and hooks. SmartShare’s products use only a few of these features and don’t always fit perfectly into the predefined routes. The other features, however, would sometimes get in the way and create unwanted complexity for SmartShare’s developers.

It became clear that with customer demand for bandwidths beyond 1 Gbit/s on the rise, the Linux kernel-based StraightShaper was not scalable.

“We have customers today that we wouldn’t have if we didn’t use DPDK, leading to revenue that would not be generated otherwise.”
MORTEN BRØRUP, CTO, SMARTSHARE SYSTEMS

The Solution

Given the scalability issues of the Linux kernel in specialized packet processing, combined with the anticipated increase in customer demand for high bandwidth, SmartShare Systems decided to develop the next-generation StraightShaper solutions using DPDK instead of the Linux kernel. DPDK lets developers decide which functions packets pass through, and when: developers design their own packet flow (rather than adapting to pre-set routes through the system) and pick and choose from DPDK’s libraries and functions.

However, this meant writing a whole new architecture from scratch to support and scale to multiple cores for increased processing, analysis, and egress packet scheduling. Most publicly known DPDK projects are based on a “run-to-completion” design. The SmartShare StraightShaper CSP uses a lot of packet buffering, so SmartShare chose a “pipeline” design and developed its framework such that available CPU cores are assigned to one or more pipeline stages as appropriate.

The Results

When SmartShare started using DPDK, the ambition was more or less to create a version of the existing product, but based on DPDK for added performance (i.e. more than 1 Gbit/s). It quickly became clear that working with DPDK makes it much easier to develop these network appliances, and that DPDK’s well-documented library of functions is robust, mature, and reliable.

Performance Impact

When development of the DPDK-based StraightShaper CSP firmware began back in 2016 — with the goal of creating a version of the existing product but with added performance — it was internally named “the 10 Gbit/s project”, because that was the problem it was supposed to solve. However, when the new DPDK-based product was ready for testing, it quickly became apparent that it not only pushed 10 Gbit/s but easily pushed much more. Referring to it as “the 100 Gbit/s project” would be more appropriate, as the DPDK-based firmware easily handles that, and more.

“When we started development of our DPDK based StraightShaper CSP firmware back in 2016, we named it ‘the 10 Gbit/s project’ because that was the problem it was supposed to solve. Now, we know that ‘the 100 Gbit/s project’ would be more appropriate, as our DPDK based firmware can easily handle that, and more!”

MORTEN BRØRUP, CTO, SMARTSHARE SYSTEMS

The Benefits – Impact on Complexity

As mentioned previously, DPDK enables developers to pick and choose which functions packets pass through, and when. This greatly simplifies the entire process and generates results faster and more efficiently.

DPDK enables adding more advanced features to the product, such as specific bandwidth allocation, bufferbloat prevention, and bandwidth shaping within the network core (i.e. inside the SmartShare appliance vs. in low-cost switches at the edge of the network).

Because the SmartShare appliance manages bandwidth in the core of the network, where the bandwidth capacity is extremely high, bursts and microbursts can be easily absorbed and smoothed out, so they don’t reach the edge of the network and cause packet drops and/or latency issues.

Currently, SmartShare Systems maintains both the Linux kernel-based product line (“StraightShaper”) and the higher-end DPDK-based product line (“StraightShaper CSP”). The new StraightShaper CSP has been deployed in customers’ production networks since 2019 and is a fully mature product that continuously evolves with improvements and new features in each firmware release.

Looking ahead, SmartShare Systems plans to add all features of the initial Linux kernel-based product to the DPDK version. They are also exploring other new projects that leverage DPDK for additional use cases, still under development.