
DPDK Dispatch Q1 2026

Newsletter

DPDK Project  ·  Q1 2026 Newsletter

Fifteen years in,
and the work keeps getting better

Summit 2026 is six weeks away. Five new articles. One APAC roundup. Here’s what the community published this quarter.

March 2026
5 owned stories
Ecosystem digest
APAC roundup

DPDK turned fifteen last quarter. The community published five pieces that tell different parts of that story — where the project came from, how it governs itself, who runs it in production, and where it goes next. Alongside those, the ecosystem continued to move: enterprise Linux is catching up with modern networking stacks, observability tooling is evolving fast, and DPDK is turning up in places that would have surprised the original architects — from radio telescopes to DDoS mitigation. Summit 2026 is six weeks away. There is a lot to cover.

Featured  ·  Event

DPDK Summit 2026  ·  May 12–13  ·  Stockholm

What to Watch at DPDK Summit 2026: Routers, Offloads, Verification, and Real-World DPDK

The full schedule is live. Two days in Stockholm cover production routing with Grout and FRR, rte_flow offload limits, eBPF observability, AI-assisted formal verification, CI integration, and deployment stories from CERN, Ericsson, ByteDance, and more. Speakers include Stephen Hemminger, Bruce Richardson, Mattias Rönnblom, Robin Jarry, Aaron Conole, and others from across the ecosystem. A virtual attendance option is available.

“The strongest schedules do more than fill time slots. They show where a project is headed.”

The all-attendee reception on Tuesday and BoF sessions on Wednesday are where follow-up questions turn into real collaboration. Review the schedule before you register.

Read the session preview and register →

Community & Project

Community Spotlight

DPDK at 15: From Intel’s Internal Experiment to Open Source Foundation

Jim St. Leger was there when Venky Venkatesan and a small Intel team were running internal experiments on user-space packet processing with no plan to build a community around it. This piece traces the path from closed code to Linux Foundation project — the architectural bet on polling mode drivers, the decision to move control to dpdk.org in 2014, and the harder cultural shift that followed. It is also a tribute to Venky, whose design decisions still define DPDK fifteen years on.

“When you go into an open source community, you take your company hat off. You sit there with people that are your competitors and you go build something that’s better for all.”

Read the full story →

Community Spotlight

Fifteen Years of Getting Stronger: What DPDK’s Community Built Together

Rashid Khan led Red Hat’s DPDK efforts and chaired the Governing Board during a sustained growth phase. When he reflects on fifteen years, he reaches for a word most engineers would consider a compliment: boring. Khan discusses 100% member retention, mandatory testing through UNH-IOL, governance transparency, and why power efficiency is the community’s next hard problem.

“In the Linux environment ‘boring’ is good. DPDK strives to become boring. Meaning it is easy to deploy, and it just works out of the box.”

Read the full story →

Community Spotlight

Thomas Monjalon: Twelve Years of Building DPDK’s Open Source Community

Thomas Monjalon became one of the first DPDK maintainers from outside Intel in 2013. Twelve years on, he talks about what it actually took to earn trust from competing vendors, why the hardest bugs to fix are in governance, and what daily maintainership looks like across a community spanning Intel, Marvell, NVIDIA, NXP, and dozens of independent contributors. A candid account of the work that does not appear in commit logs.

“It took years to build trust through actions, not just words.”

Read the full story →

User Story

How CHIME’s Correlator Team Uses DPDK to Turn Raw Sky into Science

At the Canadian Hydrogen Intensity Mapping Experiment, the bottleneck was not telescope time — it was host memory bandwidth. FPGAs push UDP from thousands of digitizers at over 6.4 Tb/s. The GPU correlators need that data arranged in exact matrix tiles, in one pass, with no reorder step. Andre Renard’s team used DPDK poll-mode drivers and Intel DDIO to inspect headers in L3 cache and scatter payloads directly into GPU-ready DRAM offsets, halving memory operations per byte. The same Kotekan pipeline now runs across multiple telescope sites.

“DPDK let us look at the header while it was still in L3 and write the payload exactly where the GPU expects it.”

Read the full story →

Ecosystem reading list

Third-party coverage and technical writing that intersects with DPDK’s work this quarter — networking stacks, observability tooling, Kubernetes performance, 5G efficiency, and high-frequency trading infrastructure.

The State of Enterprise Linux for Networking →

How enterprise Linux distributions are evolving for modern networking stacks, and where they sit alongside purpose-built network operating systems. Useful context for teams evaluating deployment environments for DPDK-based applications.

Engineering Absolute Determinism: Advanced Low-Latency Optimization in High-Frequency Trading →

Ultra-low-latency tuning for trading systems — deterministic performance, jitter reduction, and infrastructure-level optimization. Reflects continued demand for the kind of predictable, high-throughput packet handling that DPDK was designed for.

Via Walletinvestor.com

OpenTelemetry Collector Contrib v0.145.0: 10 Features That Will Transform Your Observability Pipeline →

Ten notable additions in the contrib release, covering telemetry collection and pipeline flexibility. As observability tooling around high-performance datapaths matures, OpenTelemetry’s trajectory is worth tracking alongside DPDK’s own CI and verification work.

Optimizing Kubernetes Networking for High-Performance Cloud Applications →

A look at Kubernetes network tuning for cloud-native workloads — pod-to-pod performance, service routing overhead, and scaling under load. Complements the Summit’s sessions on Kubernetes-based routing and cloud-native DPDK deployment.

The Efficient 5G Core: Intel and Nokia on CoSP OPEX Reduction →

How Intel and Nokia are approaching 5G core efficiency through power and performance optimization — territory that overlaps directly with the DPDK community’s current thinking on power-aware packet processing and many-core scaling.

APAC Community Roundup (Mandarin-language article links)

Active research and deployment work from the APAC region this quarter, spanning performance optimization, emerging transport architectures, cloud DPU integration, and new application domains for DPDK-based tooling.

Huawei & Zhejiang University — fast-path forwarding improvements for smaller rule sets →

Forwarding optimizations that improve DPDK performance at smaller rule set sizes, contributing to core fast-path efficiency work.

Huawei  ·  Zhejiang University

DPDK-based latency jitter measurement for precision network validation →

A new testing approach focused on jitter measurement, showing DPDK being applied in precision validation contexts well beyond standard packet forwarding.

BURST soft-RDMA stack and 400G transport research →

DPDK positioned as part of emerging 400G transport and software-defined RDMA architecture research, pointing toward next-generation interconnect work.

Hunan University  ·  ByteDance

China Mobile Cloud — DPU-accelerated load balancer design →

A DPDK-based load balancer combining CPUs, DPUs, and smart NICs, reflecting DPDK’s growing role in cloud architectures that distribute work across heterogeneous silicon.

China Mobile Cloud

OVS-DPDK in telecom and finance deployments →

Continued production coverage of OVS-DPDK where low jitter, hardware offload, and high-throughput handling are hard requirements — not aspirations.

ARM-based financial monitoring with DPDK →

Strong performance gains reported for real-time infrastructure observability on ARM, extending DPDK’s value beyond traditional networking appliance deployments.

DPDK as high-speed ingestion for AI-driven traffic analysis →

Coverage connecting DPDK with AI-driven traffic analysis pipelines, reinforcing its role as a high-rate data ingestion layer in modern security infrastructure.

Oracle-linked coverage

DPDK as a high-rate offensive testing engine →

Security reporting framing DPDK’s packet-processing speed as a capability in cyber tooling and offensive testing contexts.

P4-DPDK in DDoS defense →

Continued interest in combining programmable data planes with user-space packet processing for DDoS mitigation.

Get involved with DPDK

DPDK runs on code review. Every patch that goes into a release was read by someone who asked a careful question or caught a subtle issue. You do not need commit access to contribute — just the ability to read code and give honest feedback. Summit 2026 registration is open now.

DPDK Project  ·  A Linux Foundation Project  ·  dpdk.org

Q1 2026  ·  Newsletter archive

DPDK at 15: From Intel’s Internal Experiment to Open Source Foundation

Community Spotlight

TL;DR Summary

DPDK at 15: From Intel’s Internal Experiment to Open Source Foundation. A conversation with Jim St. Leger on shepherding a closed project into one of networking’s most enduring open source communities.


The Architecture That Wouldn’t Fit

In the early 2010s, software-defined networking was gaining traction. Cloud providers and telecom operators wanted to move packet processing off dedicated appliances and onto general-purpose servers. The Linux kernel network stack could handle packets; it just couldn’t handle them fast enough.

“You had to run too many cores. You had to dedicate too much. It was just way too much hardware,” St. Leger says.

Every packet triggered interrupts and context switches through a stack built for general-purpose computing. For line-rate throughput, the overhead killed you.

Intel’s embedded and communications group supplied chips for dedicated networking appliances, single-purpose boxes: routers that routed, firewalls that filtered. “The challenge of that model is you have these one-off chips that only have a single function,” St. Leger explains. “You’re supplier-locked. You don’t have a lot of flexibility.”

Intel architects started running internal experiments. What if packet processing ran in user space? What if you used polling instead of interrupts? What if you stripped the stack down to pure packet movement?

“There were a lot of internal conversations, frankly internal experiments,” St. Leger recalls. “This is what our architects do.”

Venky Venkatesan and a small team developed the breakthrough approach: polling model, full speed on x86, purpose-built stack. Early implementations delivered 10x performance improvement over the kernel. Later releases pushed that to 40x.

The technical bet paid off. Then came the harder question: what to do with it.

The Tradeoffs You Accept

The polling model requires rethinking the entire computing model. You dedicate CPU cores to packet processing and nothing else. Those cores continuously poll the network interface, checking for new packets. No interrupts, no context switches, no kernel involvement.

This creates immediate problems. Cores dedicated to polling can’t be used for anything else. Running in user space means building your own memory management, your own buffer handling, and your own driver interface: reimplementing everything the kernel provides, specifically for packet processing, without the generality the kernel needs.

Venky Venkatesan and his team made those tradeoffs deliberately. They gave up flexibility and general-purpose efficiency to get raw throughput. The architecture assumed you had packets to process and needed to process them fast. Everything else was secondary.

What DPDK Actually Does

DPDK processes packets in user space at line rate. The polling model dedicates CPU cores to continuously checking network interfaces for packets, no interrupts, no kernel involvement. Packets arrive, get processed, and move on, all in user space with direct hardware access through poll mode drivers.

The framework provides memory management, buffer handling, and queue management specifically tuned for packet processing. It’s not general-purpose. It’s not trying to be. If you need to move millions of packets per second with minimal latency, DPDK gives you the tools to do that on x86 hardware.

The tradeoff is explicit: you dedicate cores to packet processing. Those cores don’t do anything else. If your workload doesn’t justify that dedication, DPDK might not make sense. If you’re running line-rate networking, cloud infrastructure, or processing massive data streams, the performance gain justifies the core dedication.
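The control flow described above can be sketched in a few lines. What follows is a simplified, self-contained C illustration of burst polling, not real DPDK code: `mock_ring`, `mock_rx_burst`, and `poll_until` are invented stand-ins, where a real application would call `rte_eth_rx_burst()` against a hardware queue in the same spin-until-work pattern.

```c
#include <stdint.h>

#define RING_SIZE 256
#define BURST 32

/* Hypothetical stand-in for a NIC descriptor ring: a fixed array of
 * "packets" plus monotonically increasing head/tail counters. */
struct mock_ring {
    uint32_t head;             /* next slot the "NIC" producer writes */
    uint32_t tail;             /* next slot the application reads     */
    int pkts[RING_SIZE];
};

/* Burst receive: copy up to nb packets out of the ring and return the
 * count actually received. Mirrors the shape of rte_eth_rx_burst(). */
static uint16_t mock_rx_burst(struct mock_ring *r, int *bufs, uint16_t nb)
{
    uint16_t got = 0;
    while (got < nb && r->tail != r->head) {
        bufs[got++] = r->pkts[r->tail % RING_SIZE];
        r->tail++;
    }
    return got;
}

/* The poll loop: spin on the ring until `total` packets have been drained.
 * No interrupts, no syscalls, no kernel involvement; the dedicated core
 * does nothing but check the ring and copy out bursts. */
static int poll_until(struct mock_ring *r, int total)
{
    int bufs[BURST];
    int processed = 0;

    while (processed < total) {
        uint16_t nb = mock_rx_burst(r, bufs, BURST);
        processed += nb;       /* a real datapath would parse/forward here */
    }
    return processed;
}
```

In real DPDK the equivalent loop runs forever on an isolated lcore, which is exactly the trade-off described above: the core is fully consumed, and in exchange every packet is handled without a single interrupt.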

From Internal Project to Something Else

Intel initially developed DPDK internally with help from 6WIND as a contractor. The code was always BSD-licensed, technically open, available for anyone to download and use. What it wasn’t was community-driven. Intel engineers built releases internally, validated and tested behind closed doors, then pushed completed releases out to the world.

“We weren’t initially looking to build a true open source community around it,” St. Leger says.

As DPDK gained traction, Intel started hosting developer-centric events in San Francisco, initially in conjunction with the Intel Developer Forum, then as stand-alone events in Dublin, Shanghai, and San Jose. The community was growing, but the governance model wasn’t.

In 2014 Intel moved the development and control of releases and contributions to dpdk.org. It was the first big step towards being a truly open effort. But customers didn’t want to depend on Intel silicon exclusively. Other hardware providers, especially ARM, wouldn’t participate in a project governed solely by Intel.

“They wanted to have a larger open community,” St. Leger explains. “If you’re a customer, you’re like, ‘Hey, I don’t really want this to be a lock-in for Intel hardware. I’d like this to be a much more neutral community where I get things that work across a variety of silicon.’”

The Big Transition

Moving from internal development to a genuine open source community requires more than changing a license file. It requires changing how engineers think about their work.

“You have to move from an in-house, proprietary-life development to open source community development where you put your code out there and you open yourself up and expose yourself for all the accolades and arrows that come with your code,” St. Leger says. “You need to welcome community comments ranging from ‘Hey, that’s a wonderful idea, keep doing that’, or ‘That’s terrible, why did you do it that way? You should have done it this way.’ That’s a very different mindset.”

At one developer summit, the Linux Foundation’s Chris Wright laid out what genuine open source governance would require. Not just open code, but open decision-making, accepting contributions, and sharing control.

“Chris just put it on the table,” St. Leger recalls. “He’s like, ‘Hey, the community needs to decide how you want to do this going forward.’”

6WIND transferred the dpdk.org domain to the community. Intel committed resources to support the transition. “Initially you might actually need more resources because of the transition costs and overhead in having to include community conversations versus one person making all the decisions.”

The Linux Foundation provided neutral hosting and governance. Companies including Intel, NXP, Marvell, Mellanox (now NVIDIA), and others committed funding and engineering resources. When it came to elected positions, the community looked at who was doing the majority of the contributions. Intel engineers like Bruce Richardson earned leadership roles through technical contributions, not corporate mandate. Thomas Monjalon, from 6WIND, became central to community building through the same merit-based process.

“That shows the community’s confidence in doing not only the right thing, but the fair thing.”

Beyond Networking

CERN uses DPDK at the Large Hadron Collider. Particle physics experiments generate colossal amounts of data that need rapid movement and filtering. When St. Leger toured CERN after the Higgs boson discovery, he met with the data center team. DPDK solved their data movement problem. “For them, it’s a straightforward solution, get the code from the repository, use what you need, check that problem off the list, and move forward.”

Radio telescopes process incoming signals with DPDK. Medical imaging systems use it for high-bandwidth data movement. AI inference pipelines leverage it for rapid data plane processing.

“I’d love to know if there’s a use of DPDK in the world of AI and LLMs,” St. Leger says. “Either training or inference, are they using DPDK in any function as part of their model implementation? I’d love to know the answer to that one.”

These applications share a common pattern: massive data movement with minimal latency requirements. The architecture works for any high-throughput data plane processing.

The Community That Formed

At the first DPDK Summit developer events, Siobhan Butler made cakes themed to whatever major release the community had just completed, and, by all accounts, delicious. She’d bring them in for the closing gathering: celebrations of another release, another year, another set of contributions merged.

“We’d gather as a community in celebration,” St. Leger says. “That was just a terrific experience that I hope continues.”

The willingness to have hard conversations defines the community’s health. “The fact that the community can have hard conversations with each other about things and be very open and transparent is what makes the strength of the community what it is.” Not all open source projects manage that balance. Some become toxic. Some fracture along company lines. DPDK maintained the ability to disagree technically while collaborating on solutions.

“Some of the highlights of my career are those kinds of community collaborations with people who, when we switch back to our business roles, are still competitors of my employer.”

What Venky Built

Venky Venkatesan died too young. St. Leger still stays in touch with his wife and one of his daughters. “She’s just a joy to talk to and to see her carrying on the legacy of her dad.”

At DPDK developer summits, Venky was the Pied Piper. After his talks, developers swarmed him with questions, following him into hallways, delaying the next speaker. “I was like, ‘I hope the schedule includes a break now, because people are going to keep asking Venky questions,’” St. Leger remembers.

Venky could answer any question about DPDK’s architecture because he made the original decisions. He also worked six to eighteen months ahead of current releases. “His talks usually included things he was working on that were maybe a third done, but he was working on them.” Some experiments would pan out. Others wouldn’t. Venky showed both.

“Bruce Richardson was one of those guys who often worked under Venky’s tutelage,” St. Leger notes. “Venky would give him an assignment: go off and play with this concept, try to implement it, see what you come up with.” That mentorship created the next generation of DPDK technical leaders.

“I think he would have kept working on it right at the end of his life.” The project carries his DNA forward to this day.

Fifteen Years Forward

Most open source projects would be in the sunset phase at fifteen years. DPDK keeps expanding into new domains. The architecture Venky Venkatesan and his team designed in the early 2010s remains relevant because they made the right fundamental tradeoffs: user space, polling, dedicated cores, purpose-built for throughput.

“One of the things I love about all the work I’ve done is when you go into an open source community, you take your company hat off,” St. Leger says. “You sit there with people that are your competitors and you go build something that’s better for all.”

The next fifteen years will bring use cases nobody’s imagined yet. The architecture is flexible enough to adapt. The community is strong enough to evolve it.

For St. Leger, looking back on fifteen years means recognizing the community he helped build.

“I have some DPDK DNA, maybe some of my DNA in the community itself.”

Contributing to DPDK

The project needs contributions across multiple areas. Code review is always valuable: the codebase is substantial, and thorough review catches issues before they hit releases. Testing on specific hardware platforms helps ensure DPDK works across the silicon diversity it’s meant to support. Documentation improvements lower barriers for new users.

Start small. Fix a bug. Improve documentation. Add test coverage. Work your way up to larger contributions. The community welcomes contributors who show up with working code and willingness to collaborate.

Get involved: Review your first patch


About the DPDK Project

The Data Plane Development Kit (DPDK) consists of libraries to accelerate packet processing workloads running on a wide variety of CPU architectures. By moving packet processing to the user space, DPDK allows for higher performance than is typically possible using the kernel’s network stack.

About the Linux Foundation

The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects, including Linux, Kubernetes, Model Context Protocol (MCP), OpenChain, OpenSearch, OpenSSF, OpenStack, PyTorch, Ray, RISC-V, SPDX and Zephyr, provide the foundation for global infrastructure. The Linux Foundation is focused on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

Last Updated: 03/26/2026

What to Watch at DPDK Summit 2026: Routers, Offloads, Verification, and Real-World DPDK

Blog

Estimated reading time: 5 minutes


DPDK Summit 2026 is where the community’s current work becomes visible all at once. This year’s schedule is not just a list of talks. It is a snapshot of where contributors, reviewers, and users are pushing next: production routing, cloud native deployments, hardware offload, verification, observability, and developer workflow.

With the full schedule now live, the summit is shaping up to be a strong two-day program for anyone building, deploying, or tuning DPDK-based systems. The event takes place May 12–13, 2026 in Stockholm, Sweden, with a virtual option for remote attendees. Times are displayed in Central European Summer Time, and the schedule is subject to change.


TL;DR

  • Two days of DPDK-focused talks on routing, offload, observability, verification, and developer tooling
  • The program highlights real deployment stories alongside core technical work, including Grout, FRR integration, Kubernetes-based routing, CERN data acquisition, rte_flow offload limits, and DPDK CI
  • Community speakers from across the ecosystem are on the agenda, including voices from Red Hat, Intel, Marvell, ByteDance, CERN, Ericsson, Huawei Ireland Research Center, UNH Interoperability Labs, DynaNIC Semiconductors, HPE, BISDN, and more
  • In-person registration includes talks, lightning talks, community sessions, the all-attendee reception, lunch, breaks, and an event T-shirt
  • Review the schedule, pick your priority sessions, and register now

Why this year’s schedule matters

One of the most useful things about a DPDK Summit schedule is what it reveals between the lines. You can see where people are spending time, where pain points are surfacing, and which ideas are moving from theory into production. This year’s program highlights work across the full stack: from kernel boundaries and packet pipelines to routers on Kubernetes, cryptographic offload, eBPF tooling, eventdev concurrency, and test automation.

That breadth matters. It shows contributors where interesting work is happening next. It shows operators which deployment patterns are maturing. And it gives newer participants a fast way to understand where technical energy is building across the project.

The strongest schedules do more than fill time slots. They show where a project is headed.

Four themes to watch at DPDK Summit 2026

1) Production routing is having a strong moment

Some of the most compelling sessions this year focus on what it takes to move from capability to deployable systems. Grout: Two Years in – Building a Production-Ready DPDK Router, Integrating FRR With Grout, and Running a High-Performance DPDK-Based Router on Kubernetes point directly at a theme many in the community care about: how DPDK-powered networking stacks behave in the real world, not only in isolated performance setups.

That makes this track especially relevant for people building network functions, edge platforms, or cloud native datapaths. It is also a strong signal for anyone who wants more talks grounded in operating experience, integration work, and production trade-offs.

2) Offload and datapath evolution remain front and center

DPDK has always been close to the metal, and this year’s program keeps that focus sharp. Sessions such as DPDK and 802.11, Bridging DPDK and NIC HQoS With Priority-Aware Backpressure, Accelerate RSS Hash in Software With GFNI, Using DPDK on Embedded RISC-V Cores of a NIC, and Cryptographic Offloading To DPU/XPU PCI Cards Using Virtio-Crypto and DPDK Ethdev as Transport show how broad the hardware and acceleration story has become.

There is also a welcome dose of realism in Beyond Throughput: Exploring the Ambiguities and Limits of rte_flow Offloading. That kind of session usually lands well with core community audiences because it goes beyond feature checklists and gets into the practical boundary conditions that matter in design and deployment.

3) Correctness, visibility, and confidence are climbing the agenda

Another clear pattern in this year’s lineup is the growing emphasis on understanding what the datapath is doing and proving that it behaves as intended. Talks including AI-Assisted Formal Verification of the DPDK eBPF Verifier, Packet Capture Tool Based on eBPF for DPDK, Packet Capture Challenges, Multithreading on Eventdev, Yelled at by LLMs: Putting a Megaphone To AI Models in DPDK CI, and Develop With Confidence: Integrating the DPDK Test Suite With Your Development Workflow speak to a community that cares not just about speed, but about developer confidence and operational clarity.

That is a healthy sign. High-performance software becomes more durable when observability, verification, and CI are treated as first-class engineering concerns rather than afterthoughts.

4) The speaker roster reflects a broad, active ecosystem

A good summit schedule is also a community map. This year’s program brings together familiar contributors, reviewers, independent experts, and engineers from across vendors, operators, and research environments. Names on the schedule include Stephen Hemminger, Bruce Richardson, Mattias Rönnblom, Robin Jarry, Aaron Conole, Akhil Goyal, Patrick Robb, Robert McMahon, Roland Sipos, Claudia Cauli, and many others.

That diversity is part of the value. It means attendees are not getting one narrow view of the ecosystem. They are seeing DPDK from the angles of performance engineers, integrators, operators, tooling builders, and deployers.

Why it matters

If you review patches, ship packets, tune queues, evaluate offload strategies, or maintain DPDK-based systems, this year’s summit offers more than updates. It offers context.

It is a chance to see:

  • where deployment patterns are getting more mature
  • where offload promises meet real constraints
  • how tooling and CI are evolving around the core datapath
  • which topics are attracting fresh energy across the community

For long-time contributors, that context helps frame the next set of technical conversations. For newer participants, it is one of the fastest ways to understand the project’s current center of gravity.

Who should attend

This schedule should be especially relevant for:

  • DPDK maintainers and reviewers tracking where new energy is showing up
  • application developers building or integrating DPDK-based networking software
  • platform teams working on cloud native or Kubernetes-based networking
  • NIC, offload, crypto, and accelerator engineers
  • operators running high-performance packet processing in production

How to plan your summit

The official schedule notes that times are displayed in Central European Summer Time and that the program is subject to change, so it is worth reviewing your agenda in your local timezone before the event.

A simple way to get value from the two days:

  1. Pick one core architecture session. Good options include talks on 802.11, kernel path decisions, rte_flow offload, or eventdev.
  2. Pick one real deployment session. Grout, FRR integration, Kubernetes routing, and CERN are strong candidates.
  3. Pick one tooling and confidence session. eBPF capture, formal verification, CI, and the DPDK Test Suite are likely to generate useful follow-up conversations.

If you are attending in person, stay for the all-attendee reception on Tuesday and the BoF sessions on Wednesday. Those are often where follow-up questions turn into real collaboration.

Register now

DPDK Summit 2026 runs May 12–13 in Stockholm, with both in-person and virtual attendance available. The schedule is live, the sessions are strong, and the speaker lineup reflects real depth across the ecosystem.

Now is the right time to review the agenda, decide which conversations matter most to you, and register.



Last Updated: 03/04/2026

AI Disclosure: This post used artificial intelligence tools for research, structural assistance, or grammatical refinement. The final content was reviewed, edited, and validated by human contributors to DPDK to ensure accuracy and alignment with our community standards. We remain committed to transparency in the use of generative technologies within the open source ecosystem.

Thomas Monjalon: Twelve Years of Building DPDK’s Open Source Community

Community Spotlight

TL;DR Summary

Thomas Monjalon shares insights from twelve years as a DPDK maintainer, exploring the technical necessity of user space networking and the intensive human work required to build vendor trust in a competitive ecosystem.

A conversation on technical trust, vendor politics, and why the hardest bugs to fix are in governance.

The industry needed DPDK because traditional kernel networking couldn’t keep pace. At line rate, interrupt handling and context switching consumed so much CPU overhead that you were spending more time managing packet arrival than actually processing packets.

Telecommunications operators couldn’t deliver 4G services. Data centers were struggling with exponentially growing traffic. Cloud providers were building massive-scale infrastructure. The hardware existed to handle these workloads. The software bottlenecks prevented systems from using it.
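The scale of that bottleneck is easy to sanity-check with back-of-envelope arithmetic (an illustration, not a figure from the interview). On Ethernet, every frame occupies its own bytes plus an 8-byte preamble and a 12-byte inter-frame gap on the wire, so minimum-size 64-byte frames at 10 Gb/s arrive at roughly 14.88 million packets per second, about 67 ns apiece:

```c
/* Line-rate packet arrival for Ethernet: each frame occupies its own
 * bytes plus an 8-byte preamble and a 12-byte inter-frame gap. */
static double line_rate_pps(double link_bps, double frame_bytes)
{
    return link_bps / ((frame_bytes + 8.0 + 12.0) * 8.0);
}

/* Per-packet processing budget in nanoseconds at that rate. */
static double ns_per_packet(double link_bps, double frame_bytes)
{
    return 1e9 / line_rate_pps(link_bps, frame_bytes);
}
/* 10 GbE with 64-byte frames: ~14.88 Mpps, ~67.2 ns per packet. */
```

At 3 GHz that budget is roughly 200 cycles per packet, while an interrupt plus context switch typically costs far more, which is why per-packet kernel wakeups cannot keep pace at line rate.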

“We needed it for telecommunications, without it, 5G and 6G would be very difficult to deliver. Even for 4G it was a real requirement,” Thomas Monjalon explains. “We needed it in data centers where we have to manage very high throughput for all the data.”

DPDK solved the performance problem. It let applications bypass the Linux kernel and process network packets directly in user space, achieving throughput that kernel networking simply cannot match. The architecture optimized for every CPU, used hardware features fully, and gave applications direct access to network devices. Early implementations delivered 10x performance over the kernel stack.

The technical bet paid off immediately. The governance bet nearly killed it.

Thomas Monjalon joined French networking company 6WIND in 2012 to work on Software Defined Networking and Network Functions Virtualization. DPDK was essential infrastructure for that work. He started contributing to the project, learning its architecture, becoming part of the nascent community.

Within a year, he became one of the first DPDK maintainers from outside Intel, joining other early community builders. Over the next twelve years, he would become one of the key architects of DPDK’s transformation, building trust through patient, daily work, establishing governance structures, guiding contributors, mediating technical debates between competing vendors, and ensuring every detail reinforced DPDK’s commitment to genuine openness.

What DPDK Actually Does

DPDK is a framework for building high-performance network and security applications. It provides direct access to network hardware, bypassing the kernel to achieve maximum throughput and minimum latency. The framework handles packet processing, memory management, and device drivers for CPUs and NICs from multiple vendors.

“DPDK lets us use the full power of the hardware,” Thomas says. “When you use DPDK on new hardware, you don’t need to over-provision.” Using hardware to its full potential means purchasing less of it, consuming less power, reducing waste.

The project started focused on networking but expanded as industry needs evolved. “In networking, you often need cryptography as well,” Thomas explains. DPDK now includes cryptography support and is building complete libraries for security protocols. The project has comprehensive IPsec support and is now supporting TLS, MACsec, and PDCP. WireGuard and QUIC are on the roadmap.

The Governance Problem

Intel released DPDK under an open source license, but licensing alone doesn’t create an open community. The project was called “Intel DPDK.” The roadmap reflected Intel’s priorities. Contribution mechanics were opaque. Other hardware vendors, Intel’s direct competitors, needed DPDK for their own products, but joining meant potentially subsidizing a rival’s platform.

“At the beginning when the project was internally managed, nobody wanted to join,” Thomas explains. Companies sent patches. They asked for features. They never received clear signals about whether their contributions fit the project’s direction.

The industry needed DPDK, but if it remained locked to one vendor’s interests, the ecosystem would fracture. For DPDK to succeed as infrastructure for the entire industry, it needed governance that everyone could trust. Someone had to build that trust patch by patch, decision by decision, conflict by conflict.

The Part Nobody Talks About Enough

Before the achievements: DPDK is genuinely hard to integrate, and Thomas will tell you so directly.

“It’s quite difficult to integrate DPDK because it’s mostly responsible for low-level layers,” he says. “When you build an application from scratch, you also need to write all the upper layers.”

DPDK applications bypass kernel protections and standard networking APIs. Developers must understand hardware details that kernel abstractions normally hide. The learning curve is steep. For workloads where throughput and latency are critical, DPDK makes the right tradeoffs.

Proving Openness Through Actions

Thomas became a DPDK maintainer in 2013, one of the first from outside Intel. His role evolved from writing code to building community infrastructure. The contribution process needed documentation. The governance model needed transparency. The technical board needed representation from competing vendors.

In 2014, control of releases and contributions moved to dpdk.org. But concerns remained. Other hardware providers, particularly ARM, wouldn’t participate in a project they saw as Intel-controlled. Intel faced a choice: maintain de facto control, or commit to genuinely shared governance. They chose shared governance, recognizing that genuine open source requires more investment, not less.

The Linux Foundation provided the neutral ground this transformation required, giving the project institutional independence from any single company. Companies including Intel, NXP, Marvell, Mellanox (now NVIDIA), and others committed funding and engineering resources.

“It took years to build trust through actions, not just words,” Thomas reflects.

Bridging Vendor Cultures

Hardware vendors approach problems differently. DPDK needed these vendors to collaborate on shared infrastructure while competing in the market.

“Some design decisions create winners and losers,” Thomas notes. “You have to manage those decisions carefully so everyone understands the real goal is a truly open source project.”

Over time, vendors learned they could influence DPDK more effectively through participation than through pressure. The community grew because participants saw their investments rewarded with actual influence over the project’s direction.

The Work of Building Community

Thomas’s days involve what he calls “improving every detail.” Maintainership means reviewing patches, guiding contributors, refining processes, and ensuring quality. It means paying attention to the small things that collectively determine whether a community functions well or poorly.

“The technical board is very important because that’s where we can step back and think about what the real priorities are, where we should go,” Thomas explains.

The Long View

Twelve years after Thomas and other early maintainers started working to build trust in DPDK’s governance, the work continues. The framework must support emerging security protocols. GPUs are being integrated for specialized processing workloads.

Thomas remains focused on continuous improvement. “When working on DPDK, I’m constantly improving details,” he says. The trust built through consistent, fair treatment of all contributors now enables companies and developers worldwide to build on DPDK confidently.

Getting Involved

“Start with simple modifications,” Thomas suggests. “It’s better if you begin with something small, it’s your first time training in the process. You’re learning from others.”

Code review doesn’t require being a maintainer or years of DPDK experience—just the ability to read patches carefully and provide thoughtful feedback. “When you start contributing and getting involved, try to be consistent in your investment,” Thomas emphasizes.

Get involved: Review your first patch


About the DPDK Project

The Data Plane Development Kit (DPDK) consists of libraries to accelerate packet processing workloads running on a wide variety of CPU architectures. By moving packet processing to the user space, DPDK allows for higher performance than is typically possible using the kernel’s network stack.


Last Updated: 03/24/2026

Fifteen Years of Getting Stronger: What DPDK’s Community Built Together

By Community Spotlight

TL;DR Summary

Rashid Khan reflects on 15 years of DPDK, discussing how reliable architecture, 100% member retention, and a philosophy of leading by influence have made the project critical infrastructure for 5G, CERN, and beyond.

Most open source projects don’t make it to fifteen years. The ones that do are often in maintenance mode, watching contributor counts decline. DPDK turned fifteen this year, and member retention stands at 100%. Companies aren’t just staying, they’re maintaining their sponsorship levels year after year.

Rashid Khan led Red Hat’s DPDK efforts and served as Chair of the Governing Board during a critical growth phase. When he reflects on what fifteen years of sustained community investment actually looks like, he doesn’t reach for the big numbers first. He reaches for a word most engineers would consider a compliment.

“In the Linux environment ‘boring’ is good,” Khan notes. “DPDK strives to become boring. Meaning it is easy to deploy, and it just works out of the box. DPDK has hit the sweet spot; it works reliably and is part of high-visibility mission critical projects like particle accelerators, extraterrestrial missions, and critical communication infrastructure. That’s when you know you’ve built something that lasts.”

“DPDK is mature now. It’s in a very healthy state, and it has a long runway ahead.”

Maturity Isn’t Bloat

At conferences, engineers occasionally raise the inevitable question: newer projects exist with cleaner codebases. Why stick with DPDK? Khan’s response addresses the reality of infrastructure software.

“It’s always easier to say this is shiny and new when it works with brand new pieces of hardware,” he explains. “DPDK did start at that point with the latest hardware of that time. However, it takes a lot of effort to develop a mature and stable project that enables new hardware, and new features without destabilizing existing functionality.”

Backwards compatibility isn’t technical debt, it’s a requirement. Companies depend on DPDK working reliably on hardware deployed five to ten years ago.

“DPDK has to provide longer-term support for older hardware and at the same time work with new hardware,” Khan says. “Newer releases of DPDK should not be breaking mature hardware out in the field. Backwards compatibility is extremely important. DPDK has done a good job maintaining that.”

Learning to Lead Sideways

Khan spent years in Red Hat’s networking team before taking on DPDK governance. His background was board support packages, kernel networking, upstream contributions, engineering where technical merit determines what gets merged. Leading DPDK’s Governing Board meant applying that same philosophy to governance.

“Leading in open source is mostly leading by influence,” Khan explains. “Most of the people who contribute in the upstream communities do not report to the person leading the upstream project. It’s like leading a team that does not report to you, and the only way to influence them is through the merits of the solution, the merits of the patches proposed, features introduced, and how they will serve the customers and the partners.”

This creates tension for anyone coming from corporate engineering. Product teams need features by specific dates. Upstream communities review patches when reviewers have time.

“The upstream community doesn’t necessarily adhere to the timelines which are needed for commercial products,” Khan notes. “A lot of times they just want to create the best open source solution and not worry about time or release pressures.”

Khan had to convince engineers and decision makers from Intel, Nvidia, NXP, Marvell, and cloud providers that technical directions made sense for everyone, not just Red Hat. His tenure as Chair saw 100% member retention with companies maintaining their financial commitments unchanged. That doesn’t happen unless leadership creates value that members recognize.

Reflecting on the Journey

When Khan reflects on his time leading DPDK’s governance, specific achievements stand out: not abstract improvements, but concrete changes that made the project stronger.

“We brought in automated testing with the UNH IOL labs,” he says. “We made that in the critical path for the patches and releases. I’m very proud of that effort.”

The University of New Hampshire’s InterOperability Laboratory provided something no single company could: a trustworthy, vendor-neutral testing environment with hardware diversity spanning multiple silicon vendors. Making automated testing mandatory through UNH-IOL meant patches got validated fairly across competing vendors before merging. It also reduced the burden on Thomas Monjalon, one of DPDK’s core maintainers, who had been doing much of that validation work manually.

Financial stability mattered too. “We were able to repeatedly bring back all of the members and their sponsorships. We were also able to keep the financial side of the project extremely healthy.” That financial health enabled investments in documentation and marketing—not glamorous line items, but clear documentation lowers barriers for new users, and marketing efforts help people discover that DPDK solves their problems.

“All in all, we were able to move the needle on many different fronts—on the testing, marketing, documentation, and financial stability,” Khan reflects.

The Moments That Mattered

“A lot of 5G networks started using DPDK in their critical path,” Khan says. “That was a proud moment for us.”

When telecom operators deploy 5G base stations running DPDK, that validates fifteen years of engineering. It means the architecture decisions made in the early 2010s—user space packet processing, polling mode drivers, dedicated CPU cores—solved real problems at scale.

“When we saw more and more DPDK going into production environments and becoming critical infrastructure, that is quite a proud moment.” When you make a phone call, stream a video, or check your bank balance, DPDK is moving those packets somewhere in the chain. Financial trading platforms executing microsecond-critical transactions run DPDK between their servers and network fabric. At CERN’s Large Hadron Collider, particle accelerators use it to move colossal data volumes from collision detectors.

Hands on Keyboards

“Red Hat has been supporting the project on multiple fronts,” Khan explains. “Membership and leadership support, but more importantly, the engineers. We provided people who take care of the maintenance branches, people who run the testing labs, hands on the keyboard doing the actual work.”

Engineers like Kevin Traynor, David Marchand, and Maxime Coquelin became core contributors. DPDK was their focus, not a side project. The commitment makes business sense: “We use DPDK for fast and predictable processing of the packets for virtual environments. It is in the critical path of Red Hat products.”

“Red Hat’s commitment to DPDK is not ending,” Khan emphasizes. “For the foreseeable future, we want to continue supporting financially and with technical expertise.”

The Breadth Nobody Sees

“The DPDK project is not a one or two-company project,” Khan says. “There are so many people who contributed from academia, and many organizations of different sizes. We need to remember the wide breadth of contributions from all.”

University researchers contributed algorithms. Small networking companies submitted drivers. Individual contributors fixed bugs and improved documentation. Cloud providers optimized for deployment patterns critical to their infrastructure. That diversity strengthened the project: no single company could dictate its direction, and technical merit determined what got accepted.

“We worked very hard in improving the governance of the project, and the fundamental thing that we brought in was transparency,” Khan explains. Meetings became predictable. Agendas arrived before meetings, giving people time to think. “We tried to change the meetings from discussions to decisions.”

“Earlier the meetings were a little bit ad hoc. Sometimes meetings were called at the last minute, sometimes the agendas or the slide decks were not sent beforehand. We worked hard to reduce the chaos, and bring governance and maturity to the project.” The 100% retention rate validates that governance predictability and clarity matter.

The Next Fifteen Years

“DPDK works very well with virtual machines and provides critical fast packet forwarding,” Khan says. “We have to evaluate the pros and cons of DPDK working with containers in the kernel context.”

Power efficiency creates new design challenges. “Customers want ultra-low latency, but they also do not want systems to be using 100% power all the time. If there is a way to have some creative thinking where we can provide low latency but not use the full CPU at full tilt, that will be fantastic!”

This becomes critical as AI workloads dominate data center power budgets. “The electricity and cooling demands on data centers are just going to continue increasing because of AI workloads.”

Many-core CPUs require rethinking scaling assumptions. “How many of those cores need to be dedicated to DPDK, and does DPDK scale linearly as you put more and more cores towards network processing?” There is a lot of good work going on to address this.

“As more AI workloads move to the edge, as the ecosystem diversifies, DPDK can play a crucial role,” Khan says. “The framework’s multi-vendor support and ultra-low latency data movement become increasingly valuable as organizations evaluate different architectures and protocols for their specific use cases.”

Welcome to Year Sixteen

The project needs help across multiple areas. Testing and validation remain priorities—if you’re deploying DPDK in production, contribute test cases for your hardware configuration. LTS maintenance needs ongoing attention: long-term support branches need backports, bug fixes, and security patches. Documentation improvements lower barriers for new users.

DPDK needs new reviewers because every patch benefits from fresh eyes. You don’t need commit access or prior experience, just the ability to read code and ask honest questions.

Get involved: Review your first patch



Last Updated: 03/19/2026

How CHIME’s Correlator Team uses DPDK to Turn Raw Sky into Science

By User Stories

TL;DR / Key Results

  • Throughput shaped in cache: CHIME’s GPU correlator path ingests UDP from FPGAs and, via DPDK poll mode + DDIO, parses headers in L3 and writes payloads non-temporally to exact DRAM offsets, pre-arranged for GPU math.
  • Memory ops halved: The design targets ~2 host-memory operations per byte delivered to GPUs (DRAM write, then GPU read), avoiding extra reorder passes.
  • Feeding GPUs at line-rate: Legacy CHIME nodes sustained ~25.6 Gb/s per CPU; current upgrades target ~100 Gb/s per NUMA with distributor cores.
  • Commodity CPUs, fewer cores: 6-core hosts handle capture/placement because the CPU mostly copies; DPDK minimizes per-packet cycles.
  • Portable framework: A single pipeline framework (“Kotekan”) abstracts DPDK boilerplate; different telescopes plug in stages and YAML pipelines.

“DPDK let us look at the header while it was still in L3 and write the payload exactly where the GPU expects it.” — Andre Renard

We needed the cluster to ingest over 6.4 Tb/s without major CPU resources.

Opening

“Instruments break not with loud bangs but with slow math: a firehose of packets the CPU can’t place, ring buffers that miss by a cacheline, correlators that stall because a matrix never quite arrived in order.”

When the Canadian Hydrogen Intensity Mapping Experiment (CHIME) started seeing the sky as streams of UDP from thousands of digitizers, Andre Renard had one job that mattered: get every packet where the GPU expects it, on time, without roasting host memory bandwidth.

CHIME’s design bet early on GPUs for correlation, cheap FLOPs, tensor cores on the horizon, rapid iteration. That created a different bottleneck: host memory. Traditional paths (kernel sockets, two-pass reorders, GPU-side reshuffles) burned cycles and DRAM bandwidth they didn’t have. Renard’s team started looking at DPDK.

The move was pragmatic. Poll-mode to avoid context switches; DDIO to inspect headers while the bytes are still in LLC; non-temporal writes to land payloads directly at precomputed strides. One pass across cache, one write to DRAM, one GPU read.

The Human Story

Andre Renard (University of Toronto / CHIME Collaboration) joined CHIME as project staff: a computer scientist embedded in a physicist-led experiment. “It’s definitely not a solo project,” he says. Multiple institutions, from UBC to Perimeter to McGill, share software development; 5-10 engineers contribute at any time across telescopes. Renard took the network path: FPGAs push UDP; GPUs correlate; the host makes it look easy.

“I’m proud we made the world’s largest radio correlator of its time actually work, bandwidth, antennas, the whole thing, and that our piece of the pipeline held up.”

Industry Consensus / Problem Identification

By the time CHIME began building, GPUs for radio astronomy had moved from a curiosity to a credible option. FPGAs and ASICs still dominated the front end, but matrix-heavy, low-bit-depth math made GPUs attractive and cost-effective. CHIME’s architecture took advantage of that:

  • F-engine (FPGA): Digitize and channelize. Split broadband into thousands of narrow frequency channels; perform the corner-turn so each downstream node sees all inputs for a subset of frequencies.
  • X-engine (GPU): Perform cross-correlation across all inputs (outer products → Hermitian matrices), then hand results to post-processing and imaging.

The catch was scale. The project moved UDP packets at over 6.4 Tb/s across point-to-point links from F-engines to X-engine GPU nodes. The canonical approaches in similar systems—split headers/payloads with verbs, land payloads, then do a second-pass reorder on CPU or GPU—double-touch DRAM and overuse cores.

“We hit host memory bandwidth early. That was our wall, more than PCIe or GPU FLOPs.”

The idea that “the kernel can take it” was a non-starter. Even older CHIME nodes ran ~25.6 Gb/s per CPU, and upgrades now target ~100 Gb/s per NUMA. That mandates kernel bypass and ruthless avoidance of extra passes.

Technical Challenge

Make a UDP firehose look like a tidy, GPU-ready matrix without:

  • Using kernel sockets or copy-heavy paths
  • Performing a reorder pass in CPU DRAM
  • Wasting GPU global memory to reorder there
  • Spinning too many cores on per-packet overhead

CHIME’s additional constraint: they maintain a RAM ring buffer of incoming baseband (raw) data. If an event (e.g., FRB) triggers, they pull the raw segment from RAM. SSDs can’t keep up (endurance and bandwidth), and spinning disks are out of the question at these rates. That rules out “NIC→GPU only” paths: the data must pass through host DRAM anyway.

“The dream of NIC DMA straight into GPU is nice, but our science needs a full-rate copy in host RAM.”

So the path had to both feed the GPU and preserve a DRAM copy, with minimal memory traffic.

The Unconventional Approach

The team leaned into three ideas:

  • Poll-mode everywhere (DPDK): Avoid context switches and per-packet kernel overhead; dedicate cores; treat the CPU as a very fast, very predictable copier.
  • DDIO locality: Receive into LLC; inspect headers while they’re still in L3; decide final destinations before touching DRAM.
  • Non-temporal scatter-writes: From L3, perform NT stores into multiple DRAM offsets per packet, arranged so the GPU sees exactly the matrix tiles it expects.

This flips the usual reorder pattern. Instead of landing payloads “somewhere,” sorting later, and writing again, the RX path places each packet once where it belongs in the final GPU-consumable layout. Then the GPU reads once, and math begins.

“We can even scatter/gather: same packet payload written into multiple precomputed strides so the final matrix shape is perfect for the kernel.”

That last part matters because correlation is built from outer products over many inputs and channels. Arranging memory in the right order translates directly into higher GPU occupancy and simpler kernels.

Cultural Translation

CHIME sits at the intersection of astronomy, HPC, and network systems. Each community brings different mental models:

  • Astronomers speak in beams, baselines, and FRBs. The requirement is scientific: don’t drop packets; preserve baseband; map the sky.
  • HPC/GPU folks want coalesced reads, tensor core throughput, and tile shapes.
  • Network engineers obsess over queues, NUMA locality, and cachelines.

CHIME’s software framework, Kotekan, bridges the gap. It hides DPDK boilerplate (NIC init, RX queue mapping, core pinning) behind base classes and YAML pipeline descriptions. Teams across instruments can implement a new “stage” without learning every DPDK nuance or pthread trick.

“One binary can run different telescopes by swapping the YAML pipeline. In some limited cases, you can build a new instrument mostly by writing a new config.”

What It Actually Does

At the packet path level:

  • F-engines send UDP frames containing channelized samples.
  • DPDK poll-mode RX cores dequeue packets while they’re still in L3 (DDIO).
  • The code parses a custom header (still hot) to compute target offsets.
  • It performs non-temporal stores to scatter the payload into DRAM addresses computed from header details.
  • A GPU stage then DMA-reads those regions and launches correlation kernels (outer products → Hermitian matrices).
  • In parallel, a baseband ring buffer in host RAM retains a rolling window of raw data for later retrieval if a trigger fires.

Scope & limits (explicit):

Scope: Host-side packet → DRAM placement optimized for GPU consumption; baseband retention in RAM; portable across several telescopes via a shared framework.
Limits: UDP ingress tolerates packet ordering gaps and reordering but assumes very low loss; the path remains host-DRAM mediated (no direct NIC→GPU placement) by design.

Addressing Concerns

“Isn’t verbs/RDMA the modern way?”
Renard’s team considered verbs-based split and reorder. The challenge: extra passes. Either a CPU second pass to reorder or a GPU reorder that burns global memory and adds complexity. Their constraint, full-rate baseband in RAM, means NIC→GPU doesn’t remove the DRAM trip. DPDK minimizes it to one write.

“Poll-mode wastes cores.”
They run on 6-core CPUs in many nodes, intentionally small, because the CPU’s job is mostly copy/placement with few cycles per packet. DPDK’s low overhead made that feasible. On newer 100 Gb/s per NUMA nodes, they add distributor cores; the model still holds.

“Kernel bypass is dated; smart NICs can fix this.”
Smart NICs or programmable NIC pipelines could help, but economics and programmability matter. Commodity NICs plus DPDK delivered, repeatedly, across multiple instruments. The hardware dream Andre sketches, programmable on-NIC address calculation from custom headers, remains compelling if it arrives as a commodity surface.

“The bet is simple: one pass across cache, one write to DRAM, one GPU read. Anything extra pays interest in bandwidth you don’t have.”

Real-World Impact

  • CHIME correlator: At build time, largest radio correlator by bandwidth × antennas. The DPDK-based path is a critical link in sustained operations.
  • Throughput milestones: Legacy nodes around 25.6 Gb/s per CPU; upgrades targeting 100 Gb/s per NUMA with distributor cores.
  • Multi-site operations: Software and framework used across ~6 sites and by external users who download and adapt stages.
  • Science enabled: Mapping 21-cm neutral hydrogen to probe baryon acoustic oscillations; pulsar timing; prolific fast radio burst (FRB) detection with outrigger stations for precise localization.
  • Maintainable deployments: Preference for Ubuntu-bundled DPDK eases adoption across collaborations without bespoke build hurdles.

Reproduce It (Engineering Notes)

Goal: Land UDP payloads into GPU-ready DRAM tiles in a single pass.

Environment (representative):

  • NIC: Commodity 10/25/100 GbE supporting DDIO on host platform
  • CPU: 1–2 sockets; ensure NUMA-local RX queues; 6 cores workable at ~25 Gb/s; add distributor cores at 100 Gb/s/NUMA
  • GPU: 4× per node typical; correlation kernels tuned for int4/int8/tensor cores
  • RAM: Large (e.g., ≥1.5 TB per node) to hold baseband ring buffer
  • DPDK: Use distro-packaged (Ubuntu) for reproducibility across sites; pin lcores via YAML/pipeline config in Kotekan

Build/Run sketch (framework-agnostic pseudocode):

// Pseudocode: single-pass placement
while (rx_dequeue(pkts, RX_BURST)) {
  for (pkt in pkts) {
    hdr = parse_header(pkt);              // still in LLC via DDIO
    // Compute one or more target offsets for scatter
    for (t in layout_targets(hdr)) {
      nt_store(t.dst, pkt->payload, t.len); // non-temporal write to DRAM
    }
  }
}
// GPU stage DMA-reads the arranged tiles and launches corr kernels.

Config checklist:

  • Map RX queues to NUMA-local cores and target DRAM on the same socket.
  • Disable interrupt moderation; poll-mode only.
  • Use hugepages for DPDK mbufs; align scatter destinations to GPU-friendly strides.
  • Validate LLC hit rates and memory ops with Intel PCM (or analogous counters).
  • At 100 Gb/s, add a distributor core fan-out to multiple placement workers per NUMA.

Sanity checks:

  • Zero-loss on long runs at target line rate (synthetic F-engine traffic OK).
  • PCM shows ~2 memory ops/byte path (DRAM write, then GPU read).
  • GPU kernels see expected tile shapes without an internal reorder step.

Trade-offs

  • Host RAM dependency is intentional (for baseband capture); NIC→GPU bypass would under-deliver CHIME’s needs.
  • Poll-mode demands dedicated cores; it buys predictability and low tail latency at the cost of idle power.
  • Scatter-write complexity shifts logic to RX; it simplifies GPU kernels and reduces total memory traffic.

Community Impact

The correlator work sits alongside and in conversation with broader radio astronomy efforts, teams exploring NIC→GPU placement, terabit class ingress, and tensor-core-tailored kernels. Renard calls out ASTRON work (e.g., John Romein) exploring DPDK for GPU memory regions and extreme bandwidth. While CHIME’s current path stays DRAM-centric by design, these lines of work are converging on the same question: How do we feed accelerators at scale without melting host resources?

“Long term, everyone faces the same problem: feeding GPUs without burning CPUs or DRAM.”

Future & Next Steps

  • CHIME X-engine upgrade: Modern GPUs, tensor-core kernels, updated Kotekan pipelines; sustained 100 Gb/s/NUMA paths.
  • CHORD (sister telescope): Dish-based array next to CHIME; newer FPGAs; similar DPDK path via a switch fabric.
  • HIRAX (South Africa): Sister project targeting similar 100 Gb/s/NUMA ingest with Kotekan stages.

Wishlist for NICs + APIs:

  • Bulk enqueue semantics akin to verbs: “Next N packets land at base + stride S”
  • Programmable address calculators on NICs: turn custom headers into DMA addresses (and scatter lists)
  • A commodity path for FPGA→RDMA encapsulation that’s feasible without massive RTL investments
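The first wishlist item is easy to state as code; what is missing is hardware that evaluates it. A sketch of the requested semantics (not any existing NIC API — the values are arbitrary):

```python
def bulk_destinations(base, stride, n):
    """The verbs-like semantic the wishlist asks for: the NIC places
    packet i of the next n at base + i * stride, with no per-packet
    descriptor work on the host."""
    return [base + i * stride for i in range(n)]


# e.g. the next four packets land 8 KiB apart:
# bulk_destinations(0x10000, 8192, 4)
# -> [0x10000, 0x12000, 0x14000, 0x16000]
```

The second wishlist item generalizes this: instead of a fixed stride, the NIC would evaluate a small uploaded formula over header fields to compute each destination address.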

How to Contribute

  • DPDK stages in Kotekan: New packet processors for alternative F-engine formats; distributor-core strategies for 100G.
  • Performance tooling: Portable PCM-like sampling, NUMA heatmaps, and cache residency metrics integrated into pipelines.
  • GPU kernels: int4/int8 correlation kernels tuned for new tensor cores; memory-layout co-design with host scatter logic.
  • Reliability: Long-haul, zero-loss regression harnesses; packet gap simulation; time-sync checks across multi-site deployments.

Onboarding path:

  1. Start with docs/tests in Kotekan: run a synthetic F-engine generator → verify placement maps and GPU tiles.
  2. Implement a toy stage: parse a minimal header, scatter to two destinations; validate with a tile checker.
  3. Add metrics hooks (per-queue drops, L3 hit rate proxies, DRAM BW, GPU DMA time).
  4. Join the mailing lists; discuss NUMA layouts and YAML pipelines before touching hot paths.
  5. Only then propose core changes to shared DPDK abstractions.


Closing

Ask Renard what he’d change in DPDK and the first answer is a shrug: “It does what we need.” Then the engineer resurfaces: bulk enqueue semantics, on-NIC programmable address transforms, a commodity way for FPGAs to produce RDMA-placeable streams without heroic RTL. None of that contradicts CHIME’s DRAM-first reality. It simply opens options for the next instruments.

“I’d love a commodity NIC where I upload a tiny program: here’s my header, here’s the formula, put the packet exactly there.”

If you’re a developer who enjoys cachelines, NUMA maps, and the satisfaction of shaving one more pass off a hot path, CHIME’s approach shows the shape of the work: make placement decisions earlier; touch memory fewer times. Bring that energy to DPDK, to Kotekan, and to the telescopes that still need to be made real.

Get Involved: Review your first DPDK patch


About CHIME

The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is a fixed, wide-field radio telescope located at the Dominion Radio Astrophysical Observatory near Penticton, British Columbia. It uses four stationary, 100-meter-long cylindrical reflectors in a drift-scan configuration: as Earth rotates, CHIME continuously maps a narrow north–south strip of the sky. Its science focuses on three pillars: 21-cm cosmology (tracing large-scale structure via neutral hydrogen and baryon acoustic oscillations), pulsar timing (including gravitational-wave–related studies), and fast radio bursts (FRBs), with outrigger stations added for high-precision FRB localization.

On the compute side, CHIME pairs FPGA “F-engine” front ends (digitization and channelization with a corner-turn) with GPU “X-engine” correlators that perform massive outer-product math to form visibilities and images. The collaboration spans multiple institutions, including the Dominion Radio Astrophysical Observatory and McGill University, with shared software frameworks that enable related instruments (e.g., sister arrays in Canada and South Africa) to reuse pipeline components and configurations.

About the Linux Foundation

The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects, including Linux, Kubernetes, Model Context Protocol (MCP), OpenChain, OpenSearch, OpenSSF, OpenStack, PyTorch, Ray, RISC-V, SPDX and Zephyr, provide the foundation for global infrastructure. The Linux Foundation is focused on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

Last Updated: 03/19/2026

Beyond Classification: Deep Packet Inspection, DPDK, and the Future of Encrypted Traffic Intelligence

By User Stories

Tobias Roeder, Application Engineer at ipoque – a Rohde & Schwarz company, has spent years working at the intersection of deep packet inspection (DPI), open source packet processing, and telecom infrastructure. In a recent interview and follow-up to his DPDK Summit presentation, Tobias offered a candid view into how ipoque’s DPI engine integrates with DPDK’s rte_table API, and how their customer base, spanning startups to large telcos, leverages DPDK features to build intelligent, efficient, and secure networks.

DPI at Scale: A Practical Overview

ipoque provides a commercial DPI SDK that classifies traffic in real time without requiring decryption. It is used to classify a wide variety of consumer and enterprise applications (e.g., WhatsApp, Netflix, MS Teams, or VPN services) as well as industrial IoT protocols (e.g., MQTT, Modbus, OPC UA). The ultimate goal is to identify encrypted flows and support decisions in firewalls, gateways, load balancers, UPFs, and other network functions.

While DPI isn’t new, its complexity has risen dramatically with the growth of encrypted and obfuscated protocols. As Tobias explains, “It used to be simple: most traffic was unencrypted or used more verbose TLS handshakes. Now, TLS 1.3 ESNI and QUIC obfuscation make traditional methods ineffective. Our DPI uses supervised machine learning to differentiate things like video-streaming versus video-downloading.”

“It used to be simple, most traffic was unencrypted or used basic TLS. Now, TLS 1.3 and QUIC make traditional methods ineffective. Our DPI uses supervised machine learning to differentiate things like video-streaming vs video-downloading.”
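To see why the streaming-vs-download distinction survives encryption, note that packet timing and volume stay visible: a player fetches media in periodic bursts with idle gaps while it plays from its buffer, whereas a bulk download keeps the link busy. A deliberately crude sketch of that single feature follows — this is not ipoque's method, which applies supervised ML over many flow features:

```python
def looks_like_streaming(bytes_per_interval, idle_threshold=0.3):
    """Crude single-feature heuristic: the fraction of observation
    intervals with no traffic. Streaming buffers in bursts (many idle
    gaps between chunk fetches); a download saturates the link."""
    idle = sum(1 for b in bytes_per_interval if b == 0)
    return idle / len(bytes_per_interval) >= idle_threshold


# bursty, gap-heavy pattern vs. a saturating one:
# looks_like_streaming([0, 0, 900_000, 0, 0, 0, 850_000, 0, 0, 0]) -> True
# looks_like_streaming([600_000] * 10) -> False
```

A real classifier combines dozens of such features (packet sizes, directions, handshake fingerprints, burst periodicity) and learns the decision boundary from labeled traffic instead of a hand-tuned threshold.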

Why DPDK?

For ipoque, DPDK serves as the abstraction layer that simplifies NIC access and enables rapid deployment across diverse environments, from embedded NXP devices to BlueField DPUs. “DPDK creates a very well-maintained base layer that abstracts away network card complexity,” Tobias notes. “It is our customers’ first choice of open-source packet processing frameworks.”

“DPDK creates a very well-maintained base layer that abstracts away network card complexity,” Tobias notes. “It is our customers’ first choice of open-source packet processing frameworks.”

Many of ipoque’s customers already have DPDK integrated into their stacks. Others migrate with Tobias’s team’s help. DPDK’s LTS stability and availability of tooling like testpmd are cited as core strengths in onboarding new users.

Feature Focus: Flow Offload and State Tracking

Two DPDK features stand out for ipoque’s use cases:

  • rte_flow offload: After DPI classifies a flow (especially long-running “elephant flows” like video or file transfers), it can be offloaded to the NIC for efficient hardware processing.
  • rte_table / rte_cuckoo_hash: These libraries enable robust flow tracking, which is critical for stateful inspection. See details of performance comparisons in Tobias’s DPDK Summit presentation.

These libraries simplify otherwise complex aspects of connection tracking, which would need to be built and maintained independently.
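To illustrate why cuckoo hashing suits connection tracking, here is a toy two-table version — a conceptual stand-in, nothing like DPDK's optimized rte_hash C implementation. The property that matters for a hot path: every key has exactly two candidate slots, so a lookup touches at most two fixed locations regardless of table load.

```python
class TinyCuckoo:
    """Toy two-table cuckoo hash mapping flow 5-tuples to state."""

    def __init__(self, slots=64, max_kicks=32):
        self.tables = [[None] * slots, [None] * slots]
        self.slots = slots
        self.max_kicks = max_kicks

    def _slot(self, key, i):
        return hash((i, key)) % self.slots  # two independent-ish hash slots

    def insert(self, key, value):
        entry = (key, value)
        for _ in range(self.max_kicks):
            for i in (0, 1):
                s = self._slot(entry[0], i)
                cur = self.tables[i][s]
                if cur is None or cur[0] == entry[0]:
                    self.tables[i][s] = entry
                    return True
            # both candidate slots taken: evict from table 0, re-place victim
            s = self._slot(entry[0], 0)
            entry, self.tables[0][s] = self.tables[0][s], entry
        return False  # a real implementation would resize/rehash here

    def lookup(self, key):
        for i in (0, 1):
            e = self.tables[i][self._slot(key, i)]
            if e is not None and e[0] == key:
                return e[1]
        return None


# flows = TinyCuckoo()
# flows.insert(("10.0.0.1", "10.0.0.2", 51812, 443, "TCP"), "ESTABLISHED")
# flows.lookup(("10.0.0.1", "10.0.0.2", 51812, 443, "TCP")) -> "ESTABLISHED"
```

The bounded worst-case lookup is what makes the structure attractive for per-packet state lookups; insertion cost is amortized into the occasional chain of evictions.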

AI and ML in DPI

While AI in packet processing is often discussed at the infrastructure level (CI systems, test automation, or inference at the edge), ipoque integrates supervised machine learning and deep learning algorithms directly into its SDK. This helps identify traffic types even when protocol handshakes offer no visibility.

For instance, distinguishing a user video-streaming from video-downloading (both encrypted) is no longer feasible through traditional methods. “Enhancing DPDK QoS with DPI allows for video optimization in 4G/5G packet cores,” Tobias explains.

“Enhancing DPDK QoS with DPI allows for video optimization in 4G/5G packet cores.”

Technical Challenges and DPI Resilience

Tobias acknowledges the performance burden posed by encrypted traffic and evolving transport protocols like QUIC. Features like multiplexed streams over UDP present new challenges to fair scheduling in mobile networks. For this reason, DPI-enabled user plane functions (UPFs) benefit from accurate traffic classification within DPDK-forwarding paths.

Recent Developments at ipoque and Rohde & Schwarz

Tobias’s insights also align with a wave of recent updates from the company:

  • Encrypted Traffic Intelligence (ETI): Unveiled at MWC 2025, ETI enhances the classification of TLS 1.3, QUIC, and ESNI traffic using advanced AI without needing decryption. It’s embedded in ipoque’s core DPI engines.
  • Open RAN DPI Analytics Report: A 2025 study showed that 74% of RAN vendors view DPI as critical to telemetry, real-time traffic analytics, and slicing logic in Open RAN deployments.
  • Partnership with ElastiFlow: By combining IPFIX flow records with DPI insights, the collaboration brings observability into fine-grained, application-aware dimensions — especially valuable for CSPs managing encrypted or obfuscated traffic.
  • Expanded 5G Solutions: At MWC, Rohde & Schwarz showcased next-gen monitoring and QoE/QoS tools that integrate ipoque’s DPI stack for real-time 5G visibility.

DPI and the Future of DPDK

When asked what excites him most about the future, Tobias points to cross-project collaboration and expanding DPDK’s reach beyond traditional telecom and networking. “It’s been amazing to see projects like radio telescopes and sensor analytics using DPDK. We’re eager to support use cases that aren’t just about routers, 4G/5G cores and firewalls.”

“It’s been amazing to see projects like radio telescopes and sensor analytics using DPDK. We’re eager to support use cases that aren’t just about routers, 4G/5G cores and firewalls.”

As protocols evolve and encryption deepens, DPI’s role becomes more nuanced. The toolkit must be accurate, passive, and fast, and the packet processing framework underneath must be efficient, adaptable, and stable.

For ipoque, DPDK is the first choice for user-space packet processing. And for the wider ecosystem, Tobias’s work highlights how DPI isn’t just surviving encryption; it’s evolving with it.

[Image: ipoque DPI solution guide, p. 5 – https://www.ipoque.com/media/brochures/Solution_guide_en_DPI_3608-7309-62_v0201_144dpi.pdf]

Learn More:

Building Kubernetes-Native SDN with DPDK: The dpservice Story

By User Stories

Kubernetes revolutionized application orchestration, but infrastructure management? Still a mess of REST APIs and shell scripts that barely integrate with the ecosystem. Guvenc Gulce and the IronCore team saw datacenter operators wrestling with networking solutions that treated Kubernetes as an afterthought, bolted on rather than built in.

The vision was clean: pure IPv6 underlay networks, software-defined overlays, SmartNIC offloading, all controlled through native Kubernetes APIs. No NAT boxes. No firewall appliances. Just L3 routing with SDN on top. One of the important challenges of this endeavor? Building a dataplane fast enough to hit line rate while maintaining the flexibility to integrate deeply with Kubernetes.

That’s where dpservice comes in – and where DPDK became essential.

The Gap in Infrastructure Management

“We thought that there was a need for a good solution for Kubernetes based infrastructure management for real and virtualized resources in a datacenter environment with software components designed from the beginning to integrate nicely with the Kubernetes ecosystem,” Güvenç explains.

That motivation drove the creation of dpservice as a key component of the SDN layer for IronCore – an open source, EU-funded project under The Linux Foundation and NeoNephos Foundation.

The problems in the open source infrastructure space were clear. “A lot of the infrastructure resource management projects were using REST APIs and/or script based solutions lacking operational logic and they would integrate half-heartedly with Kubernetes and they were not really designed with Kubernetes in mind,” Güvenç notes. These solutions treated Kubernetes as just another API endpoint rather than embracing it as the foundation for infrastructure orchestration.

The architectural vision went further.

“We also think that datacenter underlay traffic can be simple and only using IPv6 is enough. This would ease network operations and reduce the amount of used appliances in the datacenter. (NAT / Firewall boxes),” says Güvenç.

The idea: dpservice sitting on top of a simple IPv6 underlay network would offer software defined networking functionality by making use of SmartNICs, while IPv4 and IPv6 could still be offered in the customer virtual network.

This wasn’t about recreating existing virtual switches. It was about rethinking datacenter networking from first principles with Kubernetes as the control plane.

Why DPDK Was Non-Negotiable

When you’re building high-performance SDN, your options narrow quickly.

“If you need fast / low latency / high throughput software defined networking in datacenters, you don’t have that many options. EBPF and DPDK are the first two dominant technologies that come to your mind,” Güvenç explains.

The team chose DPDK for specific reasons: “it offers a rich ecosystem of libraries to develop the dataplane/packet processing logic and offers a nice software abstraction to offload the traffic completely to the hardware.”

The performance target wasn’t ambitious – it was absolute.

“By using DPDK, we can reach line-rate in the software defined network functions we use which is actually the highest performance you can get.”

Line-rate means the theoretical maximum throughput of the hardware. There’s no performance left on the table.
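“Line rate” has a concrete per-packet consequence: on Ethernet, every frame also pays 20 bytes of preamble and inter-frame gap, so for minimum-size 64-byte frames a 100 Gb/s link delivers roughly 148.8 million packets per second, a budget of about 6.7 ns per packet:

```python
def max_packets_per_sec(link_bps, frame_bytes, overhead_bytes=20):
    """Theoretical packet rate: each frame occupies frame_bytes on the wire
    plus 20 bytes of Ethernet preamble + inter-frame gap."""
    return link_bps / ((frame_bytes + overhead_bytes) * 8)


# max_packets_per_sec(100e9, 64) -> ~148.8 million packets/s
# max_packets_per_sec(10e9, 64)  -> ~14.88 million packets/s
```

Those few nanoseconds per packet are why kernel-bypass frameworks like DPDK, and hardware offload behind them, are the practical path to line rate.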

This matters because dpservice isn’t handling toy workloads. It’s the SDN layer for production infrastructure supporting virtualized and bare metal resources. Packet forwarding, routing, NAT, firewalling – all happening in software at wire speed. DPDK’s library ecosystem made this achievable without writing everything from scratch.

The hardware offload abstraction proved equally critical. SmartNICs can take over packet processing tasks entirely, but only if the software can communicate with them effectively. DPDK provides that layer, letting dpservice treat hardware acceleration as a configuration choice rather than a complete architectural rewrite.

What dpservice Actually Is

At its core, dpservice is a DPDK-based dataplane designed for a specific architectural vision. Unlike OVS-DPDK or VPP, it’s built around assumptions that simplify datacenter operations.

“OVS-DPDK and VPP are two prominent examples when it comes to DPDK based virtual switches and routers but they both also have their pros and cons and would not fit to our use-case 100%,” Güvenç explains. “OVS is very L2 oriented for example but our solution aims for simplifying the underlay network where we keep the L2 networks very small (2 members) and run the communication purely L3 based.”

VPP presented different challenges.

“VPP’s code is also mostly not based on DPDK libraries and has a steep learning curve, if you want to adapt it to your needs. DPDK is doing a much better job here and the VPP’s graph based dataplane approach can be used also in the DPDK ecosystem where dpservice is also doing it.”

The distinctive features emerge from these design choices. “The unique features of dpservice are being very L3 oriented, supporting IPv6 from the beginning, supporting SR-IOV and hardware flow offloading from the beginning,” says Güvenç. The project also ships with “a kubernetes controller and API which can be used to create Virtual Networks, Virtual Interfaces and Network Functions.”

That Kubernetes integration isn’t an afterthought. The metalnet controller (https://github.com/ironcore-dev/metalnet) bridges dpservice into the broader IronCore ecosystem, making network resources manageable through standard Kubernetes patterns.

Güvenç’s own summary captures it well:

“dpservice project delivers a high-performance DPDK based dataplane for SR-IOV virtual functions, seamlessly integrating into Kubernetes environments through its metalnet controller to provide scalable software defined networking services.”

Why Kubernetes Integration Matters

The push for Kubernetes-native infrastructure management isn’t about following trends. “Seamless Kubernetes integration is important as we think that Software Defined Networking should have all the positive effects of a Kubernetes based infrastructure management, like self-healing of managed systems and easier Day-2 operations with a better central insight to the managed systems underneath,” Güvenç explains.

For operators, the abstraction changes daily work. “For an operator, the managed virtual machines, metal machines and virtual networks are like abstract resources and he doesn’t need to deal with specific machines and customer networks in the infrastructure. These are declared as kubernetes specifications and they get materialized with Kubernetes controllers in place. Operator’s job can be simplified and automated.”

The benefits extend to AI-driven operations. “It would be even easier to inject AI based decisions into an IaaS system which uses Kubernetes as there are mature AI based decision helpers which nicely integrate with Kubernetes,” notes Güvenç.

Developers gain leverage too. “A developer can also rely on the battle-tested Kubernetes libraries / testing frameworks when he/she develops his/her resource management logic and this would make possible to concentrate on the real value delivered (like in our case an SDN layer) as the rest is already a mature technology which can be leveraged.”

The observability story integrates naturally. “We also use Prometheus and Grafana from CNCF project suite to give a better observability to dpservice internals. Prometheus exporters can nicely integrate with DPDK’s telemetry interface.” The entire cloud-native ecosystem becomes available once you’re Kubernetes-native.

The IronCore and European Sovereign Cloud Context

dpservice doesn’t exist in isolation. It’s the SDN layer for IronCore, which tackles infrastructure-as-a-service challenges in the NeoNephos Foundation context. “IronCore is the project in Neonephos context which concentrates on infrastructure management. It is a typical IaaS project/offering and it is one of the important building blocks to provide the high level services in the sovereign cloud context, like platform mesh and it integrates nicely with other Neonephos projects like Gardener and Garden Linux.”

The European sovereign cloud effort addresses real concerns about infrastructure independence and data sovereignty. dpservice provides the high-performance networking layer that makes this vision technically feasible. “dpservice is providing the SDN layer of the IronCore IaaS and making it an important piece in the overall context.”

The project is young in the open source world. “The project is not so widely known yet as it was donated to The Linux Foundation only 3 months ago by SAP,” Güvenç notes. The contributor base reflects this early stage: “We have on our github page 14 contributors at the moment. I am the single maintainer and technical lead of the dpservice project at the moment but we have 3 more key people contributing to dpservice. The initiator of the project is Malte Janduda and the other two key contributors are Jaromír Smrček and Tao Li.”

The organizational backing is visible. “The organisations which are involved are also the same organisations which are members of the Neonephos Foundation. This can be seen publicly on the Neonephos page: https://neonephos.org/members”

Engaging with the DPDK Community

The dpservice team is actively seeking connections with the broader DPDK community.

“We would be happy to get feedback about the dpservice project from the DPDK community,” says Güvenç.

“I am already in close contact with the maintainers and technical committee of the DPDK and presented dpservice to them. We also explore possibilities of what we can upstream from dpservice to the DPDK ecosystem. There are the first ideas emerging like re-usable DPDK Graph nodes which can be contributed to the DPDK community.”

This upstream engagement could benefit both projects. DPDK gains real-world validation of its graph-based dataplane approach and potentially reusable components. dpservice gains visibility and community feedback that can strengthen the project.

What’s Coming Next

The roadmap is public and actively maintained: https://github.com/orgs/ironcore-dev/projects/13

Two major features dominate the near-term plan. “The most important two things we plan to do in the near future is to give the ability to dpservice encrypt the traffic leaving from it to the wire and decrypt the traffic it receives from the wire,” Güvenç explains. Wire-level encryption adds another layer of security for sovereign cloud deployments where data protection is paramount.

“The second important thing on the roadmap is to integrate High Availability to dpservice so that dpservice can run with two instances and there is the possibility of seamless failover from one instance to the other one.” Production infrastructure demands resilience, and HA support moves dpservice from interesting technology to production-grade component.

Getting Started and Contributing

You don’t need a datacenter to experiment with dpservice. The team built ironcore-in-a-box specifically to lower the barrier to entry. “If someone wants to try dpservice or IronCore, you don’t need a complex infrastructure for it. We have the ironcore-in-a-box project which uses the Kind cluster to demonstrate the usage of the IronCore project. TAP device based dpservice is included. Installation is very easy.” (https://github.com/ironcore-dev/ironcore-in-a-box)

For developers looking to contribute, Güvenç provides clear starting points. “For the potential contributors, I would recommend to start with the developer documentation of dpservice https://github.com/ironcore-dev/dpservice/tree/main/docs/development and for the overall understanding of IronCore, I would recommend to start with the IronCore documentation https://ironcore.dev/iaas/getting-started.html and especially networking part of it.”

A technical deep dive is available for those who want more detail: https://guvenc.github.io/software%20engineering/2024/10/18/dpservice.html

The project welcomes engagement. “We also welcome contributions / comments and more stars for the GitHub page of the dpservice.” (https://github.com/ironcore-dev/dpservice)

The Reward

Building infrastructure software can be thankless work – months of effort invisible to end users. But Güvenç finds motivation in real-world impact. “I think the most exciting and rewarding moment is to see other people use dpservice / IronCore and they can get an added value out of it.”

The development experience itself offered early wins. “During the build phase it was very exciting to make fast progress to implement the first features of dpservice as the DPDK has nice examples and a wide range of libraries which make the first success moments quickly possible.”

That’s DPDK’s strength showing through – not just raw performance, but an ecosystem that accelerates development. When your networking dataplane needs to hit line-rate while integrating with Kubernetes, talk to hardware SmartNICs, and support production workloads, you need a foundation that handles the complexity. DPDK provides that foundation.

dpservice shows what becomes possible when you build on it.


Try dpservice:

Connect with the team on Linkedin:

Malte Janduda

Guvenc Gulce

Jaromír Smrček

Tao Li

DPDK Dispatch Q4 – The Quarterly Newsletter

By Newsletter

Welcome to the Q4 DPDK Dispatch, your quarterly update on the latest developments, insights, and highlights from the open source community driving the evolution of high-performance network software and applications.

Main Announcements

  • The DPDK Summit 2026 is confirmed and we’re looking at several European cities for the first week of June with more info to come!
  • Heads up: the DPDK major release 25.11 LTS arrives next month with many new drivers.

Blogs, User Stories and Developer Spotlights

Check out a new hybrid project user story with Guvenc Gulce, Malte Janduda, Jaromír Smrček, and Tao Li, who designed dpservice. Learn how dpservice combines DPDK, Kubernetes, and SmartNICs to deliver wire-speed, cloud-native networking.

Read it here →

See how Tobias Roeder at ipoque – a Rohde & Schwarz company uses DPDK to power 5G cores, UPFs, and secure edge infrastructure, in our latest user story.

Read it here →

Planning your next DPDK upgrade? Start here. New updates in 22.11.10, 23.11.5, and 24.11.3 strengthen LTS branches for production across telecom, enterprise, and cloud—backed by targeted fixes and broad community support.

Read the update here →


Learn how to quickly clone the DPDK repository and run its end-to-end test suite (DTS) in just minutes. Understand the structure of a DTS test suite—covering setup and teardown, naming conventions, documentation standards, and practical examples using the DPDK TestPMD application.

Whether you’re a new contributor or a seasoned developer looking to understand DPDK’s testing framework, these videos provide a clear, hands-on introduction to DTS.

Watch the videos →


DPDK & Technologies in the news:


Performance Reports & Meeting Minutes


This newsletter is sent out to thousands of DPDK developers, and it’s a collaborative effort. If you have a project release, pull request, community event, or relevant article you would like considered as a highlight for next month, please reply to marketing@dpdk.org or DM Benjamin Thomas.

Want to support the community? Like and share this post!

Thank you for your continued support and enthusiasm.