BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//DPDK - ECPv6.15.17.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:DPDK
X-ORIGINAL-URL:https://www.dpdk.org
X-WR-CALDESC:Events for DPDK
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20180311T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20181104T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20190310T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20191103T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20200308T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20201101T090000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20191112
DTEND;VALUE=DATE:20191114
DTSTAMP:20260407T011007Z
CREATED:20190802T200810Z
LAST-MODIFIED:20191127T182442Z
UID:1543-1573516800-1573689599@www.dpdk.org
SUMMARY:DPDK Summit North America\, Mountain View CA\, November 12-13
DESCRIPTION:SESSION SUMMARIES
 \n\nOpening Remarks
 \n\nDPDK and PCIe Gen4 Benchmarking\nAmir Ancel\, Mellanox & Keesang Song\, AMD\nThis collaborative presentation with AMD will introduce PCIe fundamentals for networking engineers\, including the new features in PCIe 4.0. We will then show DPDK performance when running a 200Gb/s Mellanox device using PCIe Gen4 and an AMD 2nd Generation EPYC (Rome) CPU. The presentation will also depict peak performance as well as the key advantages of the new architecture\, which optimizes local and remote NUMA node performance.
 \n\nDPDK PMD for NTB\nJingjing Wu & Omkar Maslekar\, Intel\nNTB (Non-Transparent Bridge) provides a non-transparent bridge between two separate systems so that they can communicate with each other. Many use cases can benefit from this technique\, such as fault tolerance and visual acceleration. In this presentation\, we will share our recent work on enabling a DPDK polling mode driver for NTB. First\, we will briefly introduce the NTB raw device driver skeleton. Then we will present implementation details on how to use memory windows\, doorbell\, and scratchpad registers to perform a handshake between two systems. Finally\, an efficient ring design on mapped memory will be introduced. Based on this ring layout\, typical DPDK applications can seamlessly transmit packets through an NTB device.
 \n\nDPDK Acceleration with GPU\nElena Agostini\, Nvidia\, Cliff Burdick\, ViaSat & Shahaf Shuler\, Mellanox\nWe demonstrate the applicability of GPUs as packet processing accelerators\, especially for compute-intensive tasks. The following techniques and challenges will be discussed:
 \n– Allowing GPUDirect RDMA Rx and Tx\, in which packets are exchanged directly between the NIC and the GPU.
 \n– For zero-copy\, mbuf data needs to be located in memory usable by both devices\, so the external buffer feature of mbuf is used\, with the external buffer located in GPU on-chip memory or GPU-addressable CPU memory.
 \n– The Rx queue can optionally be configured to split incoming packets between CPU and GPU memory\, which allows CPU processing of packet headers and direct GPU access to packet payloads.
 \nVarious applications demonstrate these techniques\, including:
 \n– An L2 forwarding application using a CUDA kernel.
 \n– An application matching flows to process on the GPU\, using the CPU/GPU header/data split.
 \n– A modified version of testpmd using GPU memory.
 \n\nDPDK Unikernel with Unikraft\nSharan Santhanam\, NEC Laboratories Europe GmbH\nUnikernels have shown immense performance potential (e.g.\, throughput in the range of 10-40 Gb/s\, boot times of only a few ms\, image sizes of only hundreds of KBs). However\, most of these have been built manually and have used rather obscure or research-prototype software (e.g.\, the Click modular router) to handle packets.
 \nIn this talk we will present how we tackle these two issues at once. First\, we will describe Unikraft\, a Linux Foundation project that severely reduces the time to develop new unikernels. Second\, we will show our port of DPDK to it\, the result of which is\, to the best of our knowledge\, the first unikernel fully specialized to run DPDK-only workloads. Finally\, we will show performance numbers from running this unikernel and discuss future work.
 \n\nRunning Multi-process DPDK App on Kubernetes with Operator SDK\nYasufumi Ogawa\, NTT\nWe will talk about an approach to running a multi-process DPDK app on Kubernetes using the Operator SDK. We have developed a DPDK application called Soft Patch Panel (SPP) for Service Function Chaining in NFV environments. It connects DPDK apps running on the host\, on virtual machines\, and in containers. Multus can be used to run DPDK apps on Kubernetes\, but the supported types of network interface are still restricted. SPP has several types of PMD\, for example physical\, vhost\, and ring. We have realized zero-copy packet forwarding between Kubernetes DPDK container apps by using the Operator SDK\, a toolkit for managing Kubernetes-native applications. Operators make it possible to manage complex stateful applications on top of Kubernetes and are well suited to managing multi-process apps. For SPP\, we defined a custom resource manager by which users can organize processes via the Kubernetes CLI. In terms of implementation\, the Operator SDK provides scaffolding and code generation to bootstrap a new project quickly\, so that you can deploy your application rapidly.
 \n\nDPDK & Containers: Challenges + Solutions\nWang Yong\, ZTE\nWhen DPDK is applied to containerized scenarios\, it brings problems and challenges that are not encountered in the normal case. This presentation focuses on several typical problems and challenges and gives the corresponding solutions or suggestions.
 \n\nTransparent Container Solution for DPDK Applications\nTanya Brokhman\, SW Architect & Shahar Belkar\, Toga Networks\nDuring the presentation\, we will present an innovative plug-in\, developed by our team in TRC\, which enables DPDK applications to run inside a container with virtually no bandwidth or latency penalty compared with the same application running directly on the host. Our solution extends Docker CNM capabilities by enabling users to run DPDK applications inside a Docker container using DPDK for networking\, and delivers the best performance on the market for applications running DPDK over containers. We welcome you to join our trip on the DPDK traffic highway!
 \n\nOVS DPDK Pitfalls in Openstack and Kubernetes\nYi Yang\, Inspur\nOur customers require high-performance networking\, so we are working to switch from OVS to OVS DPDK\, but we have encountered many issues that seem insoluble unless we change our infrastructure. One example is very poor tap interface performance\, even though Openstack floating IP\, router\, and SNAT all rely on tap interfaces. I will show all the issues we found in this presentation and share them with the community so that its developers can help fix them.
 \n\nOffloading Context Aware Flows\, OVS-DPDK Connection Tracking Use Case\nRoni Bar Yanai\, Mellanox
 \n\nFlow Offloads for DPDK Applications: The Partial\, The Full\, and The Graceful\nMesut Ali Ergin\, Intel\nDPDK offers libraries to accelerate packet processing workloads running on a wide variety of CPU architectures. Some of these libraries rely on offloading tasks to hardware entities other than CPU cores in order to accelerate the functionality they provide. There are also libraries designed to facilitate applications' offload requests to the relevant hardware. Among those\, the rte_flow API provides a generic means to offload the matching of specific ingress or egress traffic\, as well as taking actions on the matched packets. In this presentation\, we will demonstrate the benefits of using rte_flow offload capabilities in an OVS DPDK case study and discuss practical implications as to when\, where\, and how much one can offload. We will also discuss potential algorithms and improvements to DPDK to efficiently and gracefully partition and utilize the packet processing resources in the platform.
 \n\nStabilizing the DPDK ABI and What it Means for You\nStephen Hemminger\, Microsoft\nDPDK has its roots as a toolkit for developing packet processing appliances\, where packet processing performance is traditionally the highest priority. Since then it has grown into the new usage models of Network Function Virtualization and cloud\, where there are now competing demands to continue the pace of innovation while also providing ABI stability\, seamless upgrades\, long-term support\, and OS packaging as a primary means of distribution.
 \nABI stability will bring the numerous benefits listed above and possibly more. However\, it will mean changes to the often permissive culture that has existed around ABI changes in the past. This presentation will dig into what these changes will mean for end consumers of DPDK (network operators and telecom equipment manufacturers) and how it will ultimately be a positive change for the DPDK user experience.
 \n\nA Comparison Between HTM and Lock-Free Algorithms\nDharmik Thakkar\, Arm\nAs the number of CPU cores packed into a single SoC increases\, the scalability of algorithms becomes important. In this presentation\, I will talk about Hardware Transactional Memory (HTM) and lock-free mechanisms in terms of basic operation\, requirements\, and challenges. Both of these mechanisms improve scalability and thereby speed up the execution of multi-threaded software. DPDK is in a unique position in that the rte_hash library implements both an HTM-optimized algorithm and a lock-free algorithm. This presentation will further cover the performance comparison of HTM and lock-free in the rte_hash library.
 \n\nRte_flow Optimization in i40e Driver\nChenmin Sun\, Intel\nRte_flow is widely used to accelerate packet processing in cloud services\, so the flow refresh rate is vitally important. Currently\, flow insertion and deletion operations are slow in the original driver\, which limits the ability of typical cloud switching applications such as OVS-DPDK/VPP to respond in a timely manner to a rapidly changing cloud network.
 \nThis presentation introduces the rte_flow optimization for the i40e driver. In the refactored code\, we introduced rte_bitmap and a software pipeline to manage hardware resources and avoid synchronously waiting for the hardware. Meanwhile\, the consumed cycles are further reduced by optimizing the dynamic memory allocation code. The performance of the revised code is 20\,000 times better than the original code.
 \nFinally\, this presentation will demonstrate that the rte_flow optimization can yield a huge performance improvement in the OVS-DPDK hardware offload scenario.
 \n\nLightning Talk – DPDK Perf Plug-ins for Containers Ver0
 \n\nCustom Meta Data in PMDs\nHonnappa Nagarahalli\, ARM\nThere are packet processing applications\, created before DPDK came into existence\, both in open source and in private development. Some open-source examples are VPP and OVS. These applications define their own packet meta data\, which their protocol stacks use extensively. They have integrated DPDK to make use of its rich set of PMDs. However\, they cannot use the meta data from rte_mbuf directly in their protocol stacks\, as that would require a protocol stack rewrite. Hence they end up converting from rte_mbuf to their application-specific meta data format. This results in a performance penalty of ~20% to ~30% and is forcing these applications to write their own native PMDs\, resulting in duplicated code and effort across DPDK and these projects.
 \nIt is possible to create an abstraction layer in PMDs such that the descriptor-to-rte_mbuf conversion code can be user defined. This would allow applications to avoid the conversion from rte_mbuf to their application-specific meta data format\, saving the performance penalty.
 \nThis presentation talks about the need for the abstraction layer\, how such an abstraction can be created\, and its benefits. Please note that this is still work in progress. There is no guarantee that it will succeed\, in which case this presentation will cover what was attempted and the issues faced. Maybe the community can suggest solutions.
 \n\nHairpin – Offloading Load Balancer and Gateway Applications\nOri Kam\, Mellanox\nThis presentation details the hairpin feature\, which is used to offload the forwarding of traffic from the wire back to the wire while modifying the packet header. The feature is managed via ethdev and is proposed in 19.11. Hairpin is a good fit for QoS features. The presentation will show the use cases and the improvements that can be achieved using this feature\, as well as the future roadmap\, including hairpin between ports and devices.
 \n\nHW Offloaded Regex/DPI Appliance\nShahaf Shuler\, Mellanox\nA previous talk at the Bordeaux summit focused on the new Regex subsystem in DPDK\, where the Regex device acts as a look-aside accelerator. This talk is a follow-up with a wider scope\, looking into all the components a Regex/DPI appliance needs. We will overview the common SW pipeline of a Regex/DPI appliance and describe the DPDK components that help an application orchestrate the data movement\, for example a connection awareness library\, IPSEC/TLS termination\, flow classification\, and more. Specifically\, we will describe how the different pipeline stages can be offloaded to HW using existing or newly introduced APIs.
 \n\nThe Design and Implementation of a New User-level DPDK TCP Stack in Rust\nLilith Stephenson\, Microsoft Research\nModern datacenter applications require low-latency\, high-throughput access to the network. Using DPDK\, applications can achieve significantly better performance by bypassing the OS kernel. However\, they still need support for traditional networking protocols like TCP. Existing user-level TCP libraries simply re-purpose existing kernel stacks or optimize only for high throughput\, not low latency. We found that these libraries are too slow to meet the latency requirements of datacenter applications\, with new 100Gb datacenter networks offering 5 microsecond RTTs. To meet our requirements\, we built a new TCP stack from the ground up for DPDK applications using Rust. Rust provides both memory and concurrency safety while remaining appropriate for low-latency environments. In this talk\, I discuss our experience building a new low-latency TCP stack in Rust. I will present preliminary performance experiments and welcome input and contributions from the DPDK community in the continuing development of this stack.
 \n\nTLDKv2: the TCP/IP Stack for Elastic and Ephemeral Serverless Apps\nJianfeng Tan\, Ant Financial & Konstantin Ananyev\, Intel\nTLDK is a "DPDK-native" userspace TCP/IP stack targeting extreme performance\, but it also inherits some shortcomings of DPDK (for example\, a heavy and nearly static memory footprint). In cloud-native environments\, we need a stack that is performant but also (or more importantly) easy to use\, lightweight\, scalable\, robust\, and secure.
 \nIn this talk\, we will present our work to enhance TLDK to meet these requirements. To ease integration with existing applications\, a socket layer (POSIX semantics\, I/O event notification facility) is added. To reduce the initial memory footprint while keeping the performance\, a dynamic memory model is adopted at different levels (memseg\, mempool\, and stream management). An instance can start with several MBs and scale to a large number of open connections. Finally\, we will talk about the test frameworks for functional testing\, performance regression\, and fuzzing.
 \n\nValidating DPDK Application Portability in Multi-cloud/Hybrid-cloud Environments\nSubarna Kar\, Microsoft\nAs DPDK gains new and complex features with each release\, there is increased divergence in feature support among NIC vendors. Developers want their DPDK-based SDN applications to work on a large number of underlying platforms\, especially in a multi-cloud or hybrid-cloud environment. There may be performance differences between platforms depending on the feature set supported by the underlying adapter\, but the actual functionality should not break.
 \nThis talk will discuss some of the DPDK usage patterns typically encountered in our SDN environment\, focusing especially on challenges we have encountered in using the rte_flow APIs for network packet filtering. Rte_flow supports a wide range of patterns and actions that are often not supported by the various drivers that offer DPDK support. Currently the best known method to find out whether a flow can be offloaded to a NIC is to code it using rte_flow and then verify it manually. This verification approach is cumbersome because it relies on accurately coding the target feature set and requires expert knowledge of the physical hardware.
 \nWe propose a more efficient approach based on a test suite that creates flows for common use cases and runs them against all drivers. This will give developers an overview of the features supported by each driver.
 \n\n4G/5G Granular RSS Challenge\nRoni Bar Yanai\, Mellanox\nLately we see a massive trend of 4G/5G toward virtualization: vRAN\, vEPC\, MEC\, etc. As demand continues to grow rapidly\, vendors are seeking offload solutions. We will present a short introduction to the 4G/5G world and virtualization trends\, then present the required support for RSS granularity. 4G/5G requires new RSS modes per traffic type\, for example RSS on the inner source IP (over a GTP tunnel) and RSS on the destination IP for non-tunneled traffic (a termination point). For some use cases RSS must be symmetric\, while RSS is done on different fields according to the traffic direction. All options should work in harmony and with flexibility\, while still supporting all existing modes. We will show a demo done recently for one of the vendors and discuss the requirements and API.
 \n\nUsing DPDK APIs as the I/F between UPF-C and UPF-U\nBrian Klaff & Barak Perlman\, Ethernity\nUPF (User Plane Function) is the main data path element in the 3GPP architecture for 5G. Several carriers have announced plans to place UPF in edge locations as part of their 5G deployments. Carriers are looking for HW acceleration for UPF\, as compute resources at edge locations are limited. There is a need to define a standard interface between the UPF application (UPF-C) and the SmartNICs (UPF-U).
 \nWe suggest using DPDK APIs as the interface between UPF-C and UPF-U. The presentation will also list the missing APIs we need to add to DPDK to fully offload UPF functionality.
URL:https://www.dpdk.org/event/dpdk-summit-na-mountain-view/
LOCATION:Computer History Museum\, 1401 N Shoreline Blvd.\, Mountain View\, CA\, 94043
END:VEVENT
END:VCALENDAR