BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//DPDK - ECPv6.15.17.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:DPDK
X-ORIGINAL-URL:https://www.dpdk.org
X-WR-CALDESC:Events for DPDK
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20160313T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20161106T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20170312T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20171105T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20180311T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20181104T090000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20171114
DTEND;VALUE=DATE:20171116
DTSTAMP:20260405T034243Z
CREATED:20180612T153639Z
LAST-MODIFIED:20180914T150110Z
UID:637-1510617600-1510790399@www.dpdk.org
SUMMARY:DPDK Summit\, San Jose
DESCRIPTION:Opening Remarks & Governing Board\nJim St. Leger (Intel) \nIntroduction to the event\, including a review of the agenda\, logistics and expectations. An update from the Governing Board on who the Governing Board are\, what their responsibilities are\, progress to date\, and future priorities/challenges for the project. \nVideo | Opening Remarks & Governing Board \n\nCommunity Survey Feedback\nJohn McNamara (Intel) \nWe conducted a survey of the DPDK community\, soliciting input on a variety of topics including DPDK usage\, roadmap\, performance\, patch submission process\, documentation and tools. This session will present the results of the survey\, which will help to guide the future direction of the project. \nVideo | Community Survey Feedback \n\nReducing Barriers to Adoption – Making DPDK Easier to Integrate into Your Application\nBruce Richardson (Intel) \nWhile DPDK is a widely adopted software package for high-performance networking applications\, there are a number of ways in which it is harder to use than it otherwise needs to be. This is especially true when it comes to integrating DPDK with an existing legacy codebase. This presentation will look at some of the issues and provide an update on current development and prototyping work to simplify DPDK integration with existing code. \nVideo | Slide \n\nNew Command Line Interface for DPDK\nKeith Wiles (Intel) \nThe current command line interface for DPDK\, called cmdline\, has a number of limitations and a complex user design. The next command line for DPDK\, called CLI\, is more dynamic\, with a simple directory-style design. The directory-style design allows commands to be placed in a hierarchy for easy integration\, while supporting a simple argc/argv function interface. Using these features reduced the LOC in the test-pmd cmdline file from 12K to ~4K. The presentation includes example usage. 
\nVideo | Slide \n\nEvent Adapters – Connecting Devices to Eventdev\nNikhil Rao (Intel) \nRecently\, DPDK has enabled applications to use dynamically load-balanced pipelines with the introduction of libeventdev. In addition to using eventdev for CPU-to-CPU pipelines\, devices such as ethdev\, cryptodev and timers need to be able to inject events into eventdev. Currently\, we are in the process of upstreaming extensions to eventdev\, called eventdev adapters\, one for each of these devices\, that allow applications to configure event input from these devices to the event device. We will discuss each of the adapter APIs and show example code that allows event-based applications to be written in a platform-independent manner. \nVideo | Slide \n\nGRO/GSO Libraries: Bring Significant Performance Gains to DPDK-based Applications\nJiayu Hu (Intel) \nA major part of packet processing has to be done on a per-packet basis\, such as switching and TCP/IP header processing. The overhead of these per-packet routines\, however\, exerts a significant impact on the performance of network processing. Generic Receive Offload (GRO) and Generic Segmentation Offload (GSO) are two effective techniques for mitigating the per-packet processing overhead by reducing the number of packets to be processed. Specifically\, GRO merges received packets of the same flow in RX\, while GSO delays packet segmentation in TX. \nVideo | Slide \n\nPower Aware Packet Processing\nChris MacNamara (Intel) \nA drive to deliver OPEX savings and performance where and when it’s needed. Enter a new era of power-optimized packet processing. This talk reviews new & existing DPDK extensions for policy-based power control proposed in August\, and the associated performance benefits. 
\nVideo | Slide \n\nEnhanced Memory Management\nLaszlo Vadkerti (Ericsson)\, Jiangtao Zhang (Ericsson) \nIn this presentation we will be reviewing Enhanced Memory Management techniques and multi-process enhancements as a possible way to seamlessly solve burning issues like slow initialization\, memory protection\, memory hotplug\, dynamic scale up/down\, physically vs. virtually contiguous memory\, inter-VM shared memory\, etc. \nVideo | Slide \n\nMaking networking apps scream on Windows with DPDK\nJason Messer (Microsoft)\, Manasi Deval (Intel) \nNetwork bandwidth is precious and milliseconds matter for many user-mode applications and virtual appliances running on both Linux and Windows. In order to get the best network throughput to process and forward packets\, developers need direct access to the NIC without going through the host networking stack. Until now\, only developers on Linux and FreeBSD platforms were able to use DPDK to obtain these performance benefits\, but we are happy to announce that we have an implementation of DPDK for the Windows platform! \nVideo | Slide \n\nMediated Devices: Better Userland IO\nFrançois-Frédéric Ozog (Linaro) \nUnbinding Linux kernel drivers to allow userland IO through VFIO has a number of disadvantages\, such as another large\, touchy code base to deal with the hardware\, the loss of standard Linux tools (ifconfig\, ethtool\, tcpdump\, SNMPd…)\, and the impossibility of accelerating container networking. Mediated devices\, introduced in Linux kernel 4.10 for GPUs with provisions for additional devices\, hold the promise of collaboration between kernel drivers and userland applications in need of direct datapath steering. \nVideo | Slide \n\nMellanox bifurcated driver model\nRony Efraim (Mellanox) \nThe Mellanox PMD uses verbs instead of taking full control over the device (PCI). That allows the kernel (netdev) and more than a single PMD to run on a single PCI function. 
If the DPDK app does not steer traffic via rte_flow\, all packets are processed by the kernel net device. \nVideo | Slide \n\nDPDK with KNI – Pushing the Performance of an SDWAN Gateway to Highway Limits!\nSabyasachi Sengupta (Nuage Networks) \nAn SDWAN gateway is usually built with x86 commercial off-the-shelf (COTS) hardware that often runs a variant of the Linux operating system and requires high throughput for connecting a corporation’s branch network with its data centers. However\, owing to the inherent limitations of standard 4K-sized pages without dedicated resource allocations in a general-purpose Linux kernel\, it has been seen that even high-end SDWAN gateway hardware cannot forward traffic to its full potential. \nVideo | Slide \n\nDPDK as microservices in ZTE PaaS\nYong Wang (ZTE)\, Songming Yan (ZTE) \nTo provide high performance for the ICT (Information and Communications Technology) area\, we use DPDK as a microservice in container networking. We used primary/secondary mode\, rte_ring\, shared memory and so on to improve datapath performance. We achieved bidirectional zero copy between containers\, in contrast to the dequeue-only zero copy in vhost-user/virtio-user. \nVideo | Slide \n\nAccelerate Clear Container Network performance\nJun Xiao (CloudNetEngine) \nClear Container is a great technology to secure a container with a fast and lightweight hypervisor\, and there might be very different types of workloads running inside Clear Containers\, e.g. some workloads require a high packet processing rate (PPS) and some workloads require massive data transfer (BPS). Given Clear Containers’ much higher density than virtual machines\, a high-performance virtual switch is critical\, but currently available virtual switches still fall far short of those demands. 
\nVideo | Slide \n\nThe Path to Data Plane Microservices\nRay Kinsella (Intel) \nDPDK revolutionized software packet processing\, initially for discrete appliances and then for Virtual Network Functions. Containers and µServices technology are extensively used as a means to scale up and out in the Cloud. These technologies now include Comms Service Providers among their advocates\, and embracing these technologies\, with their scaling model and resiliency\, is the new frontier in software packet processing. \nVideo | Slide \n\nContainer Panel Discussion\nA panel discussion with Yong Wang\, Songming Yan\, Jun Xiao and Ray Kinsella to discuss DPDK enablement of containers and micro-services. \nVideo \n\nAccelerate storage service via SPDK\nJim Harris (Intel) \nSPDK (Storage Performance Development Kit\, http://spdk.io) is an open source library used to accelerate storage services (e.g.\, file\, block)\, especially for PCIe SSDs (e.g.\, 3D XPoint SSDs). The foundation of SPDK is its user-space\, asynchronous\, polled-mode drivers (e.g.\, IOAT and NVMe)\, the idea of which is similar to DPDK. \nVideo | Slide \n\nAccelerating P4-based Dataplane with DPDK\nPeilong Li (University of Massachusetts Lowell) \nThe high-level P4 programming language promises protocol- and hardware-agnostic design of network functions. As the low-level functional implementation\, the P4 Behavior Model (BMv2) provides the necessary building blocks (parser\, deparser\, lookup tables\, action primitives\, etc.) into which any P4 dataplane program can be compiled. \nVideo | Slide \n\nImplementation and Testing of Soft Patch Panel\nTetsuro Nakamura (NTT)\, Yasufumi Ogawa (NTT) \nSPP is a framework to easily interconnect DPDK applications on the host and guest virtual machines\, and to assign resources dynamically to these applications. As a carrier service provider\, we expect that SPP improves performance and usability of inter-VM communication in large-scale NFV environments. 
\nVideo | Slide \n\nReflections on Mirroring With DPDK\nE. Scott Daniels (AT&T Labs) \nDebugging network problems is often hard\, and further complicated when a guest O/S is provided with an SR-IOV VF bound to a DPDK driver\, because tools running on the physical host (e.g. tcpdump) lose visibility into the interface. Hardware mirroring of traffic to another VF provides the ability to regain visibility and helps facilitate the troubleshooting process. \nVideo | Slide \n\nA network application API on top of device APIs\nFrançois-Frédéric Ozog (Linaro) \nThe NFV promise is to be able to instantiate or even live migrate VMs on different platforms and have applications benefit from whatever acceleration is available. As a result\, the application developer should not make compilation decisions or define the application architecture based on what they expect from the runtime environment. ODP and DPDK have in common the concept of “device” APIs (Ethernet\, crypto\, events\, IPsec\, compression…)\, with distinct approaches. \nVideo | Slide \n\nSafetyOrange – a tiny server-class multi-purpose box with DPDK\nAndras Kovacs (Ericsson)\, Laszlo Vadkerti (Ericsson) \nSafetyOrange is a portable (4.3 liter) and silent Xeon computer. Well\, it is larger than ‘DPDK in a box’\, but it supports two NICs (as of now sporting 2 XL710 cards)\, has 32G of memory and 14 cores. We have been using it for testing both native and virtualized DPDK appliances as well as whole virtual routers\, and it has served as a traffic generator for performance tests (DPDK pktgen). It is a brilliant development environment\, too. And at the end of the day it still fits into a regular backpack. \nVideo | Slide \n\nTechnical Roadmap\nTechnical Board \nAn update from the Technical Board covering the future roadmap and technical challenges for the project. 
\nVideo | Slide \n\nrte_raw_device: implementing programmable accelerators using generic offload\nHemant Agrawal (NXP)\, Shreyansh Jain (NXP) \nThere are various kinds of HW accelerators available with SoCs. Each of the accelerators may support different capabilities and interfaces. Many of these accelerators are programmable devices. In this talk we will discuss rte_raw_device and the implementation of a sample driver with it for the NXP AIOP generic programmable accelerator. \nVideo | Slide \n\nDPDK support for new hardware offloads\nAlejandro Lucero (Netronome) \nFully programmable SmartNICs allow new offloads like OVS\, eBPF\, P4 or vRouter\, and the Linux kernel is changing to support them. Having these same offloads when using DPDK is a possibility\, although the implications are not yet clear. We present Netronome’s perspective on adding such support to DPDK\, mainly for OVS and eBPF. \nVideo | Slide \n\nFlexible and Extensible support for new protocol processing with DPDK using Dynamic Device Personalization\nAndrey Chilikin (Intel)\, Brian Johnson (Intel) \nDynamic Device Personalization allows a DPDK application to enable identification of new protocols\, for example GTP\, PPPoE and QUIC\, without changing the hardware. The demo showcases a DPDK application identifying and spreading traffic on GTP and QUIC. Dynamic Device Personalization can be used on any OS supported by DPDK; for example\, we showcase a QUIC protocol classification demo on Windows. \nVideo | Slide \n\nServerless DPDK – How SmartNIC-resident DPDK Accelerates Packet Processing\nNishant Lodha (Cavium) \nCloud architectures and business models are driving the need to ensure that all server compute resources have a revenue tie-in\, heralding the march towards the serverless dataplane. This session presents a unique way to harness the power of DPDK to accelerate packet processing by pushing the data plane into a SmartNIC. 
We will discuss the motivation\, benefits and challenges of implementing a DPDK-based data plane running on the compute resources embedded in a SmartNIC. \nVideo | Slide \n\nEnabling hardware acceleration in DPDK data plane applications\nDeclan Doherty (Intel) \nThis presentation will look at the challenges faced in leveraging hardware acceleration in DPDK-enabled applications\, addressing some of the problems posed in creating consistent\, hardware-agnostic APIs to support multiple accelerators with non-aligned features\, and the knock-on implications this can have for application designs. \nVideo | Slide \n\nrte_security: enhancing IPsec offload\nHemant Agrawal (NXP)\, Declan Doherty (Intel)\, Boris Pismenny (Mellanox) \nIn this talk we present joint work by NXP\, Intel and Mellanox on offloading security protocol processing to hardware\, providing better utilization of the host CPU for packet processing. This talk provides an overview of new enhancements in the rte_security APIs to support various features of IPsec offload\, as either inline or lookaside offload. \nVideo | Slide \n\nMellanox FPGA\nBoris Pismenny (Mellanox) \nThe FPGA allows a wide variety of features to be supported in DPDK. We observe that programmable HW is useful for packet-processing pipelines. For example\, consider a pipeline of multiple match-action operations\, in which actions may also specify generic packet modifications that are carried out by accelerators. In this case\, the CPU is only involved at the beginning (transmission) or end (reception) of the pipeline\, while the accelerator invocations are initiated by NIC matching operations. \nVideo | Slide \n\nSmartNIC\, FPGA\, IPsec Panel discussion\nA panel discussion with Hemant Agrawal\, Alejandro Lucero\, Andrey Chilikin\, Brian Johnson\, Nishant Lodha\, Declan Doherty and Boris Pismenny to discuss DPDK enablement for SmartNICs\, FPGAs and IPsec. 
\nVideo \n\nVPP Host Stack\nFlorin Coras (Cisco) \nAlthough packet forwarding with VPP and DPDK can now scale to tens of millions of packets per second per core\, the lack of alternatives to kernel-based sockets means that containers and host applications cannot take full advantage of this speed. To fill this gap\, VPP recently gained functionality specifically designed to allow containerized or host applications to communicate via shared memory if co-located\, or via a high-performance TCP stack inter-host. \nVideo | Slide \n\nDPDK’s best-kept secret – Micro-benchmark performance tests\nMuthurajan Jayakumar (Intel) \nTo make apples-to-apples comparisons\, developers need a common ground of base-level metrics. That common ground is the ability to identify the basic DPDK building block of importance (and relevance to the workload)\, e.g. producer/consumer rings\, and to measure the cycle cost associated with basic operations like enqueuing/dequeuing – bulk versus single. \nVideo | Slide \n\nDPDK on Microsoft Azure\nDaniel Firestone (Microsoft)\, Madhan Sivakumar (Microsoft) \nSDN is at the foundation of all large-scale networks in the public cloud\, such as Microsoft Azure. But how do we make a software network scale to an era of 40/50+ gigabit networks and provide great performance for network applications and NFV in VMs? In this presentation\, Daniel Firestone and Madhan Sivakumar will detail Azure Accelerated Networking for Linux with DPDK\, using Azure’s FPGA-based SmartNICs to accelerate Linux workloads using SR-IOV. \nVideo | Slide \n\nOpenNetVM: A high-performance NFV platform to meet future communication challenges\nK. K. Ramakrishnan (Univ. of California\, Riverside) \nTo truly achieve the vision of a high-performance software-based network that is flexible\, lower-cost\, and agile\, a fast and carefully designed NFV platform along with a comprehensive SDN control plane is needed. 
Our high-performance NFV platform\, OpenNetVM\, exploits DPDK and enables high-bandwidth network functions to operate at near line speed\, while taking advantage of the flexibility and customization of low-cost commodity servers. \nVideo | Slide \n\nMake DPDK’s software traffic manager a deployable solution for vBNG\nCsaba Keszei (Ericsson) \nAchieving network function parity across purpose-built ASIC implementations and virtual implementations is not straightforward\, irrespective of differences in performance capability between purpose-built and virtual environments. Functional disparity represents a significant obstacle to operators’ adoption of virtualization\, as it implies a dependency on access/aggregation network topology and configuration. \nVideo | Slide \n\nOpen vSwitch hardware offload over DPDK\nRony Efraim (Mellanox) \nTelcos and cloud providers are looking for higher performance and scalability when building nextgen datacenters for NFV & SDN deployments. While running OVS over DPDK reduces the CPU overhead of interrupt-driven packet processing\, CPU cores are still not completely freed from polling packet queues. \nVideo | Slide \n\nAccelerating NFV with VMware’s Enhanced Network Stack (ENS) and Intel’s Poll Mode Drivers (PMD)\nJin Heo (VMware)\, Rahul Shah (Intel) \nNetwork Functions Virtualization (NFV) deployments are happening at a rapid pace. This is driving the need to more efficiently consolidate compute\, storage and communication workloads. NFV enables Communications Service Providers to migrate their fixed-function networking elements to a general-purpose server; however\, there is a need to preserve the existing performance and latency. To support such workloads\, a vSwitch that enables both high throughput and low latency is a must. 
\nVideo | Slide \n\nDPDK Membership Library\nSameh Gobriel (Intel) \nIn this talk we will present the new DPDK Membership Library\, which is used to create what we call a “set-summary”\, a new data structure used to summarize a large set of elements. It is a generalization and extension of traditional filter structures\, e.g. Bloom filters and cuckoo filters\, to efficiently test whether a key belongs to a large set. \nVideo | Slide \n\nIntegrating and using DPDK with Open vSwitch\nAaron Conole (Red Hat)\, Kevin Traynor (Red Hat) \nSome applications are written from the ground up with DPDK in mind\, but Open vSwitch is not one of them. This talk will look at how Open vSwitch integrates and uses DPDK. It will look at various aspects such as DPDK initialization\, threading\, and the usage of DPDK PMDs and libraries. It will also cover DPDK usability aspects such as LTS and API/ABI stability and the effect they have on Open vSwitch with DPDK. \nVideo | Slide \n\nLagopus Router\nTomoya Hibi (NTT)\, Hirokazu Takahashi (NTT) \nIn this talk\, we introduce a new open source router implementation called Lagopus Router. It is an extensible microservice-architecture router that consists of a DPDK router dataplane\, router agents\, and a pub/sub-based centralized configuration manager. These modules are written in Go and C and are loosely coupled to each other via gRPC. \nVideo | Slide \n\nvSwitch Panel Discussion\nA panel discussion with Rony Efraim\, Jin Heo\, Rahul Shah\, Sameh Gobriel\, Charlie Tai\, Aaron Conole\, Kevin Traynor\, Tomoya Hibi and Hirokazu Takahashi to discuss DPDK acceleration of vSwitches. \nVideo \n\nClosing Remarks\nJim St. Leger (Intel) \nVideo
URL:https://www.dpdk.org/event/dpdk-summit-usa-2017/
LOCATION:Club Auto Sport\, 521 Charcot Ave\, San Jose\, CA\, 95131\, United States
CATEGORIES:DPDK Summit
ATTACH;FMTTYPE=image/jpeg:https://www.dpdk.org/wp-content/uploads/sites/23/2018/06/summit-thumb-usa-2017.jpg
GEO:37.384043;-121.9144208
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=Club Auto Sport 521 Charcot Ave San Jose CA 95131 United States;X-APPLE-RADIUS=500;X-TITLE=521 Charcot Ave:geo:37.384043,-121.9144208
END:VEVENT
END:VCALENDAR