
June 5, 2024

DPDK Dispatch June

By Monthly Newsletter

1. Main Announcements

3. User Stories, Dev Spotlights

  • Submit a blog here
  • Submit a developer spotlight here

4. DPDK & Technologies in the news:

5. Performance Reports & Meeting Minutes

This newsletter is sent to thousands of DPDK developers and is a collaborative effort. If you have a project release, pull request, community event, or relevant article you would like to be considered as a highlight for next month, please email marketing@dpdk.org.

Thank you for your continued support and enthusiasm.

DPDK Team.

Microsoft Azure MANA DPDK Q&A

By Blog

In today’s rapidly evolving digital landscape, the demand for high-speed, reliable, and scalable network solutions is greater than ever. Enterprises are constantly seeking ways to optimize their network performance to handle increasingly complex workloads. The integration of the Data Plane Development Kit (DPDK) with Microsoft Azure’s Network Adapter (MANA) is a groundbreaking development in this domain.

Building on our recent user story, “Unleashing Network Performance with Microsoft Azure MANA and DPDK,” this blog post delves deeper into how this integration is revolutionizing network performance for virtual machines on Azure. DPDK’s high-performance packet processing capabilities, combined with MANA’s advanced hardware offloading and acceleration features, enable users to achieve unprecedented levels of throughput and reliability.

In this technical Q&A, Brian Denton, Senior Program Manager at Microsoft Azure Core, further illuminates the technical intricacies of DPDK and MANA, including the specific optimizations implemented to ensure seamless compatibility and high performance. He also elaborates on the tools and processes provided by Microsoft to help developers leverage this powerful integration, simplifying the deployment of network functions virtualization (NFV) and other network-centric applications.

1. How does Microsoft’s MANA integrate with DPDK to enhance the packet processing capabilities of virtual machines on Azure, and what specific optimizations are implemented to ensure compatibility and high performance?

[Brian]: MANA is a critical part of our hardware offloading and acceleration effort. The end goal is to maximize workloads in hardware and minimize the host resources needed to service virtual machines. Network Virtual Appliance (NVA) partner products and large customers leverage DPDK to achieve the highest possible network performance in Azure. We are working closely with these partners and customers to ensure their products and services take advantage of DPDK on our new hardware platforms.

2. In what ways does the integration of DPDK with Microsoft’s Azure services improve the scalability and efficiency of network-intensive applications, and what are the measurable impacts on latency and throughput?

[Brian]: Network Virtual Appliances are choke points in customers’ networks and are often chained together to protect, deliver, and scale applications. Every application in the network path adds processing and latency between the communicating endpoints. Therefore, NVA products are heavily focused on speeds and feeds and designed to be as close to wire-speed as possible. DPDK is the primary tool used by firewalls, WAFs, routers, Application Delivery Controllers (ADCs), and other networking applications to reduce the impact of their products on network latency. In a virtualized environment, this becomes even more critical.

3. What tools and processes has Microsoft provided for developers to leverage DPDK within the Azure ecosystem, and how does this integration simplify the deployment of network functions virtualization (NFV) and other network-centric applications? 

[Brian]: We provide documentation on running testpmd in Azure: https://aka.ms/manadpdk. Most NVA products are on older LTS Linux kernels and require backporting kernel drivers, so having a working starting point is crucial for integrating DPDK applications with new Azure hardware.

4. How does DPDK integrate with the MANA hardware and software, especially considering the need for stable forward-compatible device drivers in Windows and Linux?

[Brian]: The push for hardware acceleration in a virtualized environment comes with the drawback that I/O devices are exposed to the virtual machine guests through SR-IOV. Introducing the next generation of network card often requires the adoption of new network drivers in the guest. For DPDK, this depends on the Linux kernel which may not have drivers available for new hardware, especially in older long-term support versions of Linux distros. Our goal with the MANA driver is to have a common, long-lived driver interface that will be compatible with future networking hardware in Azure. This means that DPDK applications will be forward-compatible and long-lived in Azure.

5. What steps were taken to ensure DPDK’s compatibility with both Mellanox and MANA NICs in Azure environments?

[Brian]: We introduced SR-IOV through Accelerated Networking in early 2018 with the Mellanox ConnectX-3 card. Since then, we’ve added ConnectX-4 Lx, ConnectX-5, and now the Microsoft Azure Network Adapter (MANA). All these network cards still exist in the Azure fleet, and we will continue to support DPDK products leveraging Azure hardware. The introduction of new hardware does not impact the functionality of prior generations of hardware, so it’s a matter of ensuring new hardware and drivers are supported and tested prior to release.

6. How does DPDK contribute to the optimization of TCP/IP performance and VM network throughput in Azure?

[Brian]: See answer to #2. DPDK is necessary to maximize network performance for applications in Azure, especially for latency sensitive applications and heavy network processing.

7. How does DPDK interact with different operating systems supported by Azure MANA, particularly with the requirement of updating kernels in Linux distros for RDMA/InfiniBand support?

[Brian]: DPDK applications require a combination of supported kernel and user space drivers including both Ethernet and RDMA/InfiniBand. Therefore, the underlying Linux kernel must include MANA drivers to support DPDK. The latest versions of Red Hat and Ubuntu support both the Ethernet and InfiniBand Linux kernel drivers required for DPDK.

8. Can you provide some examples or case studies of real-world deployments where DPDK has been used effectively with Azure MANA?

[Brian]: DPDK applications in Azure are primarily firewall, network security, routing, and ADC products provided by our third-party Network Virtual Appliance (NVA) partners through the Marketplace.  With our most recent Azure Boost preview running on MANA, we’ve seen additional interest by some of our large customers in integrating DPDK into their own proprietary services.

9. How do users typically manage the balance between using the hypervisor’s virtual switch and DPDK for network connectivity in scenarios where the operating system doesn’t support MANA?

[Brian]: In the case where the guest does not have the appropriate network drivers for the VF, the netvsc driver will automatically forward traffic to the software vmbus. The DPDK application developer needs to ensure that they support the netvsc PMD to make this work.

10. What future enhancements or features are being considered for DPDK in the context of Azure MANA, especially with ongoing updates and improvements in Azure’s cloud networking technology?

[Brian]: The supported feature list is published in the DPDK documentation under “Overview of Networking Drivers” (dpdk.org). We will release with the current set of features and get feedback from partners and customers on demand for any new features.

11. How does Microsoft plan to address the evolving needs of network performance and scalability in Azure with the continued development of DPDK and MANA?

[Brian]: We are focused on hardware acceleration to drive the future performance and scalability in Azure. DPDK is critical for the most demanding networking customers and we will continue to ensure that it’s supported on the next generations of hardware in Azure.

12. How does Microsoft support the community and provide documentation regarding the use of DPDK with Azure MANA, especially for new users or those transitioning from other systems?

[Brian]: Feature documentation is generated from the codebase. Documentation for MANA DPDK, including running testpmd, can be found here: https://aka.ms/manadpdk

13. Are there specific resources or training modules that focus on the effective use of DPDK in Azure MANA environments?

[Brian]: We do not have specific training resources for customers to use DPDK in Azure, but that’s a good idea. Typically, DPDK is used by key partners and large customers that work directly with our development teams.

14. Will MANA provide functionality for starting and stopping queues?

[Brian]: TBD. What’s the use case and have you seen a need for this? Customers will be able to change the number of queues, but I will have to find out whether they can be stopped/started individually.

15. Is live configuration of Receive Side Scaling (RSS) possible with MANA?

[Brian]: Yes. RSS is supported by MANA.

16. Does MANA support jumbo frames?

[Brian]: Jumbo frames and MTU size tuning are available as of DPDK 24.03 and rdma-core v49.1.

17. Will Large Receive Offload (LRO) and TCP Segmentation Offload (TSO) be enabled with MANA?

[Brian]: LRO in hardware (also referred to as Receive Segment Coalescing) is not supported; software LRO should work fine.

18. Are there specific flow offloads that MANA will implement? If so, which ones?

[Brian]: MANA does not initially support DPDK flows. We will evaluate the need as customers request it.

19. How is low migration downtime achieved with DPDK?

[Brian]: This is a matter of reducing the amount of downtime during servicing events and supporting hotplugging. Applications will need to implement the netvsc PMD to service traffic while the VF is revoked and fall back to the synthetic vmbus.

20. How will you ensure feature parity with mlx4/mlx5, which support a broader range of features?

[Brian]: Mellanox creates network cards for a broad customer base that includes all the major public cloud platforms as well as retail.  Microsoft does not sell the MANA NIC to retail customers and does not have to support features that are not relevant to Azure. One of the primary benefits of MANA is we can keep functionality specific to the needs of Azure and iterate quickly.

21. Is it possible to select which NIC is used in the VM (MANA or mlx), and for how long will mlx support be available?

[Brian]: No, you will never see both MANA and Mellanox NICs on the same VM instance. Additionally, when a VM is allocated (started), it will select a node from a pool of hardware configurations available for that VM size. Depending on the VM size, you could get allocated on ConnectX-3, ConnectX-4 Lx, ConnectX-5, or eventually MANA. VMs will need to support the mlx4, mlx5, and mana drivers until that hardware is retired from the fleet, to ensure they are compatible with Accelerated Networking.

22. Will there be support for Windows and FreeBSD with DPDK for MANA?

[Brian]: There are currently no plans to support DPDK on Windows or FreeBSD. However, there is interest within Microsoft to run DPDK on Windows.

23. What applications are running on the SoC?

[Brian]: The SoC is used for hardware offloading of host agents that were formerly run in software on the host and hypervisor. This ultimately frees up memory and CPU resources on the host that can be utilized for VMs, and reduces the impact of noisy neighbors, jitter, and blackout times during servicing events.

24. What applications are running on the FPGA?

[Brian]: This is initially restricted to I/O hardware acceleration such as RDMA, the MANA NIC, as well as host-side security features.

Read the full user story ‘Unleashing Network Performance with Microsoft Azure MANA and DPDK’

Cache Awareness in DPDK Mempool

By Blog

Author: Kamalakshitha Aligeri – Senior Software Engineer at Arm

The objective of DPDK is to accelerate packet processing by transferring packets from the NIC directly to the application, bypassing the kernel. The performance of DPDK relies on various factors such as memory access latency, I/O throughput, CPU performance, etc.

Efficient packet processing relies on ensuring that packets are readily accessible in the hardware  cache. Additionally, since the memory access latency of the cache is small, the packet processing  performance increases if more packets can fit into the hardware cache. Therefore, it is important  to know how the packet buffers are allocated in hardware cache and how it can be utilized to get  the maximum performance. 

With the default buffer size in DPDK, hardware cache is utilized to its full capacity, but it is not  clear if this is being done intentionally. Therefore, this blog helps in understanding how the  buffer size can have an impact on the performance and things to remember when changing the  default buffer size in DPDK in future. 

In this blog, I will describe, 

1. Problem with contiguous buffers 

2. Allocation of buffers with cache awareness 

3. Cache awareness in DPDK mempool 

4. l3fwd performance results with and without cache awareness 

Problem with contiguous buffers 

The mempool in DPDK is created from a large chunk of contiguous memory. Packets from the network are stored in packet buffers of fixed size (objects in the mempool). The problem with contiguous buffers arises when the CPU accesses only a portion of each buffer, as in DPDK’s L3 forwarding application, where only the metadata and packet headers are accessed. The rest of the buffer is never brought into the cache, which results in inefficient cache utilization. To gain a better understanding of this problem, it’s essential to understand how buffers are allocated in the hardware cache.

How are buffers mapped in Hardware Cache? 

Consider a 1KB, 4-way set-associative cache with 64 bytes cache line size. The total number of  cache lines would be 1KB/64B = 16. For a 4-way cache, each set will have 4 cache lines. Therefore, there will be a total of 16/4 = 4 sets. 

As shown in Figure 1, each memory address is divided into three parts: tag, set and offset.

• The offset bits specify the position of a byte within a cache line (Since each cache line is  64 bytes, 6 bits are needed to select a byte in a single cache line). 

  • The set bits determine which set the cache line belongs to (2 bits are needed to identify one of the 4 sets).

  • The tag bits uniquely identify the memory block. Once the set is identified with the set bits, the tag bits of the 4 ways in that set are compared against the tag bits of the memory address, to check if the address is already present in the cache.

Figure 1 Memory Address 

In Figure 2, each square represents a cache line of 64 bytes. Each row represents a set. Since it’s a  4-way cache, each set contains 4 cache lines in it – C0 to C3. 

Figure 2 Hardware Cache 

Let’s consider a memory area that can be used to create a pool of buffers. Each buffer is 128 bytes and hence occupies 2 cache lines. Assuming the first buffer’s address starts at 0x0, the addresses of the buffers are as shown below.

Figure 3 Contiguous buffers in memory

In the above figure, the offset bits are highlighted in orange, the set bits in green and the tag bits in blue. Consider buffer 1’s address, whose set bits “00” mean the buffer maps to set 0. Assuming all the sets are initially empty, buffer 1 occupies the first cache line of 2 contiguous sets.

Since buffer 1 address is 0x0 and the cache line size is 64 bytes, the first 64 bytes of the buffer  occupy the cache line in set0. For the next 64 bytes, the address becomes 0x40 (0b01000000) indicating set1 because the set bits are “01”. As a result, the last 64 bytes of the buffer occupy the  cache line in set1. Thus, the buffer is mapped into cache lines (S0, C0) and (S1, C0). 

Figure 4 Hardware cache with buffer 1 

Similarly, buffer 2 will occupy the first cache line of next two sets (S2, C0) and (S3, C0).

Figure 5 Hardware cache with 2 buffers 

The set bits “00” in buffer 3’s address show that buffer 3 maps to set 0 again. Since the first cache line of set 0 and set 1 is occupied, buffer 3 occupies the second cache line of sets 0 and 1: (S0, C1) and (S1, C1).

Figure 6 Hardware cache with 3 buffers 

Similarly buffer 4 occupies the second cache-line of sets 2 and 3 and so on. Each buffer is  represented with a different color and a total of 8 buffers can occupy the hardware cache without  any evictions.  

Figure 7 Allocation of buffers in hardware cache 

Although the buffer size is 128 bytes, the CPU might not access all the bytes. For example, for 64-byte packets, only the first 64 bytes of the buffer are consumed by the CPU (i.e. one cache line worth of data).

Since the buffers are two cache lines long and contiguous, and only the first 64 bytes of each buffer are accessed, only sets 0 and 2 are populated with data. Sets 1 and 3 go unused (unused sets are shown with a pattern in Figure 8).

Figure 8 Unused sets in hardware cache 

When buffer 9 needs to be cached, it maps to set 0 since its set bits are “00”. Assuming an LRU replacement policy, the least recently used cache line of the 4 ways in set 0 (buffer 1, 3, 5 or 7) will be evicted to accommodate buffer 9, even though sets 1 and 3 are empty.

This is highly inefficient, as we are not utilizing the full cache capacity.

Solution – Allocation of buffers with Cache awareness 

In the above example, if the unused cache sets could be utilized to allocate the subsequent buffers (buffers 9 – 16), we would utilize the cache more efficiently.

To accomplish this, the memory addresses of the buffers can be manipulated during the creation of the mempool. This is achieved by inserting one cache line of padding after every 8 buffers, effectively aligning the buffer addresses in a way that utilizes the cache more efficiently. Let’s take the above example of contiguous buffer addresses and compare it with the same buffers but with cache line padding.

Figure 9 Without cache line padding
Figure 10 With cache line padding

From figures 9 and 10, we can see that buffer 9’s address has changed from 0x400 to 0x440. With address 0x440, buffer 9 maps to set 1, so there is no need to evict any cache line from set 0, and we are utilizing the previously unused set 1.

Similarly, buffer 10 maps to set 3 instead of set 2, and so on. This way, buffers 9 to 16 can occupy sets 1 and 3, which are unused by buffers 1 to 8.

Figure 11 Hardware cache with cache awareness 

This approach distributes the buffers so as to better utilize the hardware cache. Since for 64-byte packets only the first cache line of each buffer contains useful data, we are now accommodating useful packet data from 16 buffers instead of 8. This doubles the cache utilization, enhancing the overall performance of the system.

Padding of cache lines is necessary primarily when the cache size is exactly divisible by the buffer size (which is the case when the buffer size is a power of 2). When the buffer size does not divide evenly into the cache size, the residual portion of each buffer effectively introduces an offset like the one achieved through padding.

Cache Awareness in DPDK Mempool 

In the DPDK mempool, each buffer typically has a size of 2368 bytes and consists of several distinct fields: header, object and trailer. Let’s look at each of them.

Figure 13 Mempool buffer fields 

Header: This portion of the buffer contains the metadata and control information DPDK needs to manage the buffer efficiently. It includes information such as the buffer length and the buffer state or type, and it helps to iterate over mempool objects. The size of the object header is 64 bytes.

Object: This section contains the actual payload or data. Within the object section there are additional fields: mbuf, headroom and packet data. The mbuf of 128 bytes contains metadata such as the message type, the offset to the start of the packet data and a pointer to additional mbuf structures. Then there is a headroom of 128 bytes. The packet data area is 2048 bytes and contains the packet headers and payload.

Trailer: The object trailer is 0 bytes, but an 8-byte cookie is added in debug mode. This cookie acts as a marker to prevent corruption.

With a buffer size of 2368 bytes (not a power of 2), the buffers are inherently aligned with cache  awareness without the need for cache line padding. In other words, the buffer size is such that it  optimizes cache utilization without the need for additional padding. 

The buffer size of 2368 bytes does not include the padding added to distribute buffers across  memory channels. 

To show how performance can vary with a buffer size that is a power of 2, I ran an experiment with a 2048-byte buffer size and compared it against the default mempool buffer size in DPDK. In the experiment, 8192 buffers are allocated in the mempool and a histogram of cache sets for all the buffers was plotted. The histogram illustrates the number of buffers allocated in each cache set.

Figure 14 Histogram of buffers – 2048 bytes 

With a buffer size of 2048 bytes, the same sets in the hardware cache are hit repeatedly, whereas other sets are not utilized (we can see that from the gaps in the histogram).

Figure 15 Histogram of buffers – 2368 bytes

With a buffer size of 2368 bytes, each set is being accessed only around 400 times. There are no  gaps in the above histogram, indicating that the cache is being utilized efficiently. 

DPDK l3fwd Performance 

The improved cache utilization observed in the histogram, attributed to cache awareness, is further  corroborated by the throughput numbers of the l3fwd application. The application is run on a  system with 64KB 4-way set associative cache. 

The chart below shows the throughput in MPPS for a single-core l3fwd test with 2048-byte and 2368-byte buffer sizes.

Figure 16 l3fwd throughput comparison

There is a 17% performance increase with the 2368-byte buffer size.

Conclusion 

Contiguous buffer allocation with cache awareness enhances performance by minimizing cache evictions and maximizing hardware cache utilization. In scenarios where the cache size is exactly divisible by the buffer size (e.g., 2048 bytes), padding cache lines creates an offset in the memory addresses and a better distribution of buffers in the cache. This led to a 17% increase in performance for the DPDK l3fwd application.

However, with buffer sizes that do not divide evenly into the cache size, as is the default in DPDK, the offset in the buffer addresses already provides the effect of cache line padding, resulting in improved performance.

For more information, visit the programmer’s guide.

Tracing Ciara Power’s Path: A Leap from Mathematics to DPDK Expertise at Intel

By Community Spotlight

Welcome to the latest installment of our DPDK Developer Spotlight series, where we share the unique journeys and insights of those who contribute to the DPDK community. This edition highlights Ciara Power, a former Technical Lead and Network Software Engineer at Intel. We explore her path into open source development from a math enthusiast at school to a software developer shaping the future of DPDK.

Early Life and Education

A Mathematical Foundation

Ciara’s pathway into the world of computer science and programming was not straightforward. Initially grounded in mathematics, her educational journey began in an environment where technical subjects were rarely emphasized, particularly at an all-girls school in Ireland that did not prioritize technological advancements. Despite this, Ciara’s inherent love for math led her to pursue it at the university level.

Discovering Programming

While pursuing her studies at the University of Limerick, Ciara encountered a pivotal moment: a chance to explore programming through an introductory taster course. This opportunity resonated with a piece of advice she had received from her mother since childhood: she was destined to be a programmer.

Transitioning to Computer Science 

A Turning Point

This insight from her mother proved to be more than mere encouragement; it was a recognition of Ciara’s innate abilities and potential for finding joy and fulfillment in a realm she had yet to explore. Indeed, it was a testament to the foresight and intuition that mothers often have about their children’s hidden talents. As they say, ‘Mother knows best’.

After finishing the programming course, Ciara reached a turning point. The practical aspects of problem solving appealed to her more than theoretical mathematics. Driven by this preference, and after several challenging weeks, she decided to exit the mathematics course. That September, she took a notable step by starting a computer science course at the Waterford Institute of Technology.

The first year of her computer science studies confirmed her decision; she thrived in this environment, where she could apply logical thinking to tangible problems. The satisfaction of crafting solutions and the joy of creative exploration grounded her. 

Balancing Hobbies and Career

A Blend of Technical and Artistic Talents

Ciara’s enthusiasm for her studies crossed over into other areas of her life, enriching her creative pursuits. From painting and drawing to woodworking and knitting, she embraced a wide array of hobbies, each providing a different outlet for her creative expression. This blend of technical skill and artistic talent became a defining feature of her approach to both work and leisure. 

Ciara’s engagement with her various hobbies provides a crucial balance and unique perspective that enhances her programming work: the ability to visualize the broader picture before delving into details. Just as a painter steps back to view the whole canvas, Ciara applies a similar approach in her coding practices. This allows her to assess a project from various angles. 

Her method of drawing diagrams on a whiteboard is emblematic of her systematic approach to problem-solving, juxtaposed with her ability to incubate ideas and contemplate them from different perspectives. 

This blend of logic and creativity marks her programming style, making her adept at tackling complex problems with innovative solutions. Her ability to think outside the box and not get overly absorbed in minutiae gives her an edge, making her work both methodical and inspired.

Moreover, these pursuits offer Ciara a form of catharsis, a way to decompress and process information subconsciously, which in turn feeds into her professional work. 

Her dual approach—systematic yet open to creative leaps—illustrates how her hobbies not only complement but actively enhance her capabilities as a programmer. This synergy between her personal interests and professional skills exemplifies how diverse experiences can contribute to professional excellence in technology and programming.

Professional Development at Intel

Internship and Real-World Experience

Ciara’s transition from academia to the practical, fast-paced world of software development provided her with an invaluable perspective that she would carry throughout her career. Her internship with the DPDK team at Intel in Shannon, Ireland, was not just about gaining professional experience; it was a deep dive into the collaborative and iterative processes of real-world technology development.

Challenges and Adaptation

During her eight-month placement, Ciara engaged directly with complex projects that were far more advanced than her college assignments. This experience was crucial for her; it wasn’t just about coding but also about understanding how large-scale software development projects function, how teams interact, and how products evolve from a concept to a market-ready entity.

One significant challenge was her initial foray into the open source community through DPDK. Coming from an academic background where open source wasn’t a focus, the learning curve was steep. 

She had to quickly adapt to the open source ethos of sharing, collaborative open development, and the transparent critique of code. Learning to navigate and contribute to discussions on mailing lists, where she interacted with developers of varying seniority from around the world, was initially daunting.

As a newcomer, she was initially anxious about how she might be received, given the prevalent challenges women often face in tech environments. However, her experience was overwhelmingly positive. From the onset, she was treated with the same respect and consideration as any seasoned developer. This egalitarian approach was not only affirming but also empowering.

To integrate herself into the DPDK community, Ciara adopted a humble approach to learning and contributing. She began by actively listening and understanding the community dynamics before making her contributions.

Reviewing others’ code and providing constructive feedback became a routine that not only helped her understand the nuances of professional coding but also built her reputation as a thoughtful and capable developer. This proactive engagement helped her transition from an intern at Intel to a respected member of the community.

Projects and Technical Accomplishments

Ciara’s technical journey with DPDK deepened significantly, largely due to the interactions and guidance from long-time maintainers Bruce Richardson (Network Software Engineer at Intel Corporation) and Akhil Goyal (Principal Engineer at Marvell Semiconductor).

Her first major project was contributing to the development of the Telemetry Library V1, a library for retrieving information and statistics about various other DPDK libraries through socket client connections. This not only honed her technical skills but also gave her a solid understanding of handling community feedback for large patchsets, with plenty of discussion around how to implement the library.

In terms of her main contributions, Ciara refactored the unit test framework, adding support for nested testsuites. This included reworking the cryptodev autotests to make use of nested testsuites and ensuring all testcases are counted individually in test summaries. This, in turn, improved the testing experience for the user, making it easier to see which testcases are passing or failing [0].

She was also involved in various improvements to the Intel IPsec-mb SW PMDs, including combining PMDs to use common shared code [1], adding multiprocess support [2], and adding Scatter-Gather List support [3] [3.1].

Ciara also worked on removing the Make build system from DPDK. Meson had been introduced a few releases prior, so it was time to completely remove the old build system, with help from many others. A huge task, it touched nearly every document, library and driver, and involved significant collaboration in the community, with plenty of reviews and testing by other developers and maintainers. [3]

She added an API and command-line argument to set the max SIMD bitwidth for EAL. Previously, a number of components in DPDK had optional AVX-512 or other vector paths that could be selected at runtime by each component using its own decision mechanism. This work added a single setting to control which code paths are used. It can be used to enable some non-default code paths (e.g. ones using AVX-512), but also to limit the code paths to certain vector widths, or to scalar code only, which is useful for testing. [4]

Additionally, Ciara improved the cryptodev library’s asymmetric session usage by hiding the structure in an internal header and using a single mempool rather than pointers to private data elsewhere [4]. She also enabled numerous QAT devices and algorithms, including, most recently, new GEN3 and GEN5 devices [5].

Bug Fixing

Ciara’s proactive engagement led her to work on fixing various bugs. Using bug-detection tools such as AddressSanitizer and Coverity, she debugged and resolved a wide range of issues. This process was not just about resolving immediate problems; it also helped her build a deeper understanding of better programming practices that she could apply in future feature development.

By contributing significant patches and actively participating in community discussions, Ciara received encouragement instead of the skepticism or condescension often found in other communities. This supportive atmosphere helped her quickly find her footing and gain confidence in her abilities. Her contributions were evaluated solely on their merit, reflecting the DPDK community’s commitment to contributor diversity.

Community Engagement and Recognition

Active participation and support 

Throughout her journey, the open source community, particularly her interactions on the DPDK forums and mailing lists, played a crucial role. Under the guidance of Bruce Richardson, Pablo de Lara Guarch and Akhil Goyal, Ciara not only contributed significantly but also gained insights that helped shape her technical and strategic acumen. 

This exposure allowed her to understand diverse perspectives and collaborative methods essential for open development and open governance across technical communities.

Major Accomplishments

Reflecting on her significant milestones with DPDK, Ciara highlights two major accomplishments. During her internship at Intel, she contributed to the development of the Telemetry Library V1, a library for retrieving information and statistics about various other DPDK libraries through socket client connections. 

Upon returning as a graduate, she was entrusted with the complete rewrite of this library, leading to the development of Telemetry V2. This task demonstrated her progression as a developer, showcasing her ability to significantly improve and build upon her earlier work within a relatively short span of time. 

Her involvement in developing this library was a significant learning journey, filled with complex challenges and intensive problem-solving that required her to engage deeply with the technology and the DPDK community. 

The Telemetry library project stood out not only for its technical demands but also for the collaborative effort it required. Ciara navigated through numerous technical discussions, debates, and feedback loops, integrating community insights to implement and enhance the robustness of the code. 

Another notable highlight was her handling of large patch sets. These weren’t single monumental features, but they were substantial in scope and impact, involving critical enhancements and fixes that improved DPDK’s functionality and reliability.

Valued advice and the Importance of Code Reviews

One of the most impactful pieces of advice Ciara received from the DPDK community centered on the importance of code reviews. Embracing this practice not only honed her technical skills but also cultivated a mindset geared towards continuous improvement and collaboration. 

This advice underscored the necessity of meticulously reviewing her own code as well as that of others, which facilitated a deeper understanding of various coding approaches and strategies.

Ciara learned that taking a step back to scrutinize every detail of her work from a broader design perspective was crucial. This approach allowed her to explore alternative solutions and methodologies that might not be immediately apparent. 

Engaging in thorough reviews helped her identify potential issues before they escalated, enhancing the overall quality and reliability of her contributions.

Personal Achievement and Awards

Ciara has been recognized multiple times for her contributions at Intel, underscoring her influence and impact within the tech giant. One of her notable accolades includes the Intel Women’s Achievement Award 2021, a testament to her substantial and measurable impact on Intel’s business, profitability, and reputation. 

This award is particularly significant as it celebrates individuals who not only excel in their roles but also drive meaningful change across the organization.

In addition to this, Ciara has received multiple Intel Recognition Awards. These commendations highlight her exceptional development work and her proactive approach to risk management, which has helped prevent bottlenecks in community projects. 

Her efforts around major patch sets during this period were instrumental in her winning these awards. They were not just routine contributions but were pivotal in enhancing Intel’s technological frameworks. 

DPDK Events and the Importance of In-Person Collaboration

Ciara’s experiences at DPDK events illustrate her integration into, and active participation in, the community. After completing her internship at Intel, Ciara attended the DPDK Summit as a participant, not as a speaker.

This event was particularly significant as it occurred shortly after she returned to college in September, marking her first engagement with the community outside of a professional capacity.

During the summit, Ciara experienced the surreal yet affirming moment of connecting faces to the names of those with whom she had interacted only via the mailing list: individuals who had reviewed her work and those whose code she had studied.

The recognition she received from other community members, often unexpectedly knowing who she was, played a crucial role in her sense of belonging and validation within the technical community. This recognition, while surprising to her, underscored the impact of her contributions and her growing reputation within the community.

Life Beyond Work 

Balancing life with Nature and Adventure

Ciara’s life outside her technical career is focused on enhancing her well-being and providing a counterbalance to her intensive work in tech. 

A dedicated hiker, she has taken part in significant events like a charity hike for Cystic Fibrosis Ireland with colleague Pablo de Lara Guarch, in which a group of hikers scaled Mt. Kilimanjaro in Tanzania (5,895 meters) to watch Siobhan Brady set a new world record by performing on her Celtic harp at the summit!

This particular hike, dubbed the “highest harp concert,” is one of the life highlights she fondly recalls. You can watch the incredible performance here.

Ciara finds a unique kind of solace in nature, living just minutes from the coast in the south of Ireland. Her daily walks on the beach and, in summer, her swims in the ocean are more than routine; they are a fundamental part of her life, crucial for her mental and physical well-being.

These moments by the sea allow her to unwind, reflect, and regain balance, proving essential for maintaining her productivity and creativity in her professional life.

As she prepares to transition from Intel, with plans to move to Sydney, Australia, Ciara looks forward to exploring new professional landscapes and personal adventures. This move not only signifies a change in her career but also underscores her willingness to embrace new experiences and challenges, whether in tech or in her personal pursuits. 

The future holds unknowns, but Ciara approaches it with enthusiasm and excitement about the possibilities that lie ahead in both her professional and personal life.

To learn more about the benefits of contributing to DPDK, read on here.