BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//DPDK - ECPv6.15.17.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:DPDK
X-ORIGINAL-URL:https://www.dpdk.org
X-WR-CALDESC:Events for DPDK
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20130310T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20131103T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20140309T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20141102T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20150308T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20151101T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20160313T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20161106T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20170312T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20171105T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20180311T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20181104T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20190310T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20191103T090000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20140908
DTEND;VALUE=DATE:20140909
DTSTAMP:20260410T045006
CREATED:20140908T200202Z
LAST-MODIFIED:20180914T150227Z
UID:727-1410134400-1410220799@www.dpdk.org
SUMMARY:DPDK Summit\, San Francisco
DESCRIPTION:DPDK Summit San Francisco 2014 – September 8\, 2014\nThe first DPDK Summit brought the DPDK open source community together at the San Francisco Marriott Marquis Hotel with the vision to share DPDK usage and implementation; to hear from DPDK developers\, contributors\, and users; and to build the DPDK community. \n\n\nDPDK Summit Kick-off\nJim St. Leger (Software Product Line Manager\, Intel) \nJim starts off the summit by discussing the objectives\, DPDK history\, the community\, and its contributors. \nSlides \n\nIs It Time to Revisit the IP Stack in the Linux Kernel and KVM?\nJun Xu (Principal Engineer\, Futurewei Technologies\, Inc.) \nWe might take too many things for granted\, like the Linux kernel providing a TCP/IP stack since its inception\, whereas UNIX did not. Fast forward to today’s world with virtualization\, where most hypervisors derived from modern OSs also supply an IP stack. Should we take out the IP stack to let the OS and hypervisor focus on their main tasks\, including process scheduling\, resource management\, and virtualization\, or let them be the monolithic piece for all these elements? \nVideo | Slides\n \n\nMulti-Socket Ferrari for NFV\nLászló Vadkerti (Lead Software Developer\, Ericsson)\, András Kovács (Lead Software Developer\, Ericsson) \nThis presentation describes an approach to support latency-sensitive applications by diving into best practices in augmenting DPDK to deliver lower jitter and high availability in addition to higher packet throughputs. We will review Enhanced Memory Management techniques and multi-process enhancements to the DPDK library foundation. We will also describe our experience and optimizations in using DPDK with the Xen Hypervisor\, including the addition of NUMA awareness. 
\nVideo | Slides\n \n\nLightning Fast I/O for Windows Server v.Next with PacketDirect\nGabriel Silva (Program Manager Windows Server Networking\, Microsoft) \nMicrosoft operates some of the world’s largest data centers\, such as Bing\, Office365\, Xbox Live\, and Azure\, to name a few. One of the key fundamentals enabling Microsoft to operate efficiently at such hyper scale is NIC performance. This talk addresses how to drive up packet processing performance for the network functions running in their data centers. \nVideo | Slides\n \n\nA High-Performance vSwitch of the User\, by the User\, for the User\n\nYoshihiro Nakajima (Researcher\, NTT Network Innovation Laboratories) \n\n\n\nA high-performance software switch is a key component for next-generation telecom infrastructure\, especially for NFV and SDN. NTT Laboratories developed a high-performance and highly-scalable SDN/OpenFlow software switch\, called Lagopus\, that leverages state-of-the-art server and software technologies\, including Intel® processors\, Intel® Ethernet Controllers\, and the DPDK. \n\n\nVideo | Slides\n \n\nApplication Performance Tuning and Future Optimizations in DPDK\nVenky Venkatesan (DPDK Architect\, Intel) \nIn this session\, one of the original authors of the DPDK library will share insight into how to best use the available tools and library hooks when looking to optimize system packet performance. The session will also provide insight into concepts under consideration to facilitate discussion and prioritization feedback into the future planning process. \nVideo | Slides\n \n\nDPDK in a Virtual World\nBhavesh Davda (Senior Staff Engineer\, VMware)\, Rashmin Patel (DPDK Virtualization Engineer\, Intel) \nThe usage of virtualized DPDK applications has increased tremendously. 
This session will review how the DPDK APIs take advantage of platform technologies like SR-IOV\, direct device assignment (VT-d)\, and para-virtual as well as emulated devices offered by the underlying platform to achieve higher packet throughput at predictable latency. The session will primarily focus on the virtualization options offered by DPDK for the VMware ESXi Hypervisor environment. The session will conclude by sharing the future vision and commitment to enhance the API even further to enable community developers and end users to get the most out of the underlying Intel Architecture and Hypervisor target. \nVideo | Slides\n \n\nHigh-Performance Networking Leveraging the DPDK and the Growing Community\nThomas Monjalon (Packet Processing Engineer and DPDK.org Maintainer\, 6WIND) \nHigh-performance networking stacks can be designed using the DPDK and packet processing software. This presentation covers the development of high-performance applications with examples for IPsec\, TCP\, virtual switching\, and virtual networking functions for NFV. \nVideo | Slides\n \n\nClosing Remarks\nTim O’Driscoll (Software Engineering Manager for DPDK\, Intel) \nTim brings the summit to a close by reviewing the DPDK open source journey\, soliciting feedback on the summit\, and discussing increased involvement in the DPDK community. \nVideo | Slides
URL:https://www.dpdk.org/event/dpdk-summit-san-francisco-2014/
LOCATION:San Francisco Marriott Marquis Hotel\, San Francisco\, CA\, United States
CATEGORIES:DPDK Summit
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20150421
DTEND;VALUE=DATE:20150422
DTSTAMP:20260410T045006
CREATED:20150421T200004Z
LAST-MODIFIED:20180914T150222Z
UID:724-1429574400-1429660799@www.dpdk.org
SUMMARY:DPDK Summit\, Beijing
DESCRIPTION:DPDK Summit China 2015 – April 21\, 2015\nThe DPDK community met at the JW Marriott Hotel in Beijing to discuss the application of DPDK to a variety of industry segments including telecom\, cloud\, enterprise\, security\, and financial services. The event enabled the DPDK open source community to share DPDK usage and implementation; to hear from DPDK developers\, contributors\, and users; and to build the DPDK community. \n\n\n\n 
URL:https://www.dpdk.org/event/dpdk-summit-beijing-2015/
LOCATION:JW Marriott Beijing\, Beijing\, China
CATEGORIES:DPDK Summit
GEO:39.9041999;116.4073963
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20150817
DTEND;VALUE=DATE:20150818
DTSTAMP:20260410T045006
CREATED:20150817T195723Z
LAST-MODIFIED:20180914T150214Z
UID:721-1439769600-1439855999@www.dpdk.org
SUMMARY:DPDK Summit\, San Francisco
DESCRIPTION:DPDK Summit San Francisco 2015 – August 17\, 2015\nThe DPDK community met at the Westin St. Francis Hotel in San Francisco to discuss the application of DPDK to a variety of industry segments including telecom\, cloud\, enterprise\, security\, and financial services. The event enabled the DPDK open source community to share DPDK usage and implementation; to hear from DPDK developers\, contributors\, and users; and to build the DPDK community. \n\nOpening Remarks and Kickoff to DPDK Summit\nTim O’Driscoll (Software Engineering Manager for DPDK\, Intel) \nOn August 17\, 2015\, Tim O’Driscoll\, an Engineering Manager from Intel\, provided the opening remarks to kick off the DPDK Summit 2015. \nSlide \n\nLeveraging DPDK to Scale-Out Network Functions Without Sacrificing Networking Performance\nTim Mortsolf (CTO and Co-Founder\, RIFT.io)\, Scott Myelle (VP Solutions Architecture\, RIFT.io) \nNFV application workloads are deployed in ecosystems with varying network attachment conditions that determine the availability of specific DPDK technologies. DPDK technology has rapidly evolved to support multiple I/O models ranging from dedicated access with PCI pass-through\, shared access with SRIOV\, and vhost-user offload using a DPDK enabled Open vSwitch. This presentation demonstrates how to write a flexible network function that can utilize DPDK to its full potential while retaining the ability to run in a non-DPDK environment. \nVideo | Slide \n\nAspera’s FASP Protocol Uses Standard Hardware and DPDK to Achieve 80Gbps Data Transfer\nCharles Shiflett (Senior Software Engineer\, IBM Aspera Solutions) \nCharles Shiflett will review the technologies and design approach to send data at a rate in excess of 1 TB every two minutes. Aspera Fast\, Adaptive\, and Secure Protocol (FASP®) is a breakthrough transfer protocol that leverages existing WAN infrastructure and commodity hardware. 
Code samples showing the use of DPDK\, AES-NI\, FASP Sockets\, and direct I/O to create a zero-copy transfer technology will be discussed. \nVideo | Slide \n\nFuture Enhancements to the DPDK Framework\nKeith Wiles (Staff Architect\, Intel) \nThis session will provide insight and gather community input on forward-looking activity in advancing DPDK to include connectivity to hardware accelerators and SOC support. Keith will review the need for additional devices and functionality within the DPDK framework\, including supporting non-PCI configuration\, an external memory manager\, and the event programming model utilized by many SOCs. The session will drill down on the use of a crypto device within the DPDK framework by reviewing an early proof of concept of a software and hardware implementation of the device. \nVideo | Slide \n\nIt’s Kind of Fun to Do the Impossible with DPDK\nYoshihiro Nakajima (Researcher\, NTT Network Innovation Laboratories) \n\n\n\nNTT Network Innovation Laboratories will present lessons learned from a one-year experiment on SDN/OpenFlow Lagopus Switch development and trials on ShowNet SDN-IX from Interop Tokyo 2015. A co-design of FPGA NIC\, DPDK library extension\, and software data plane is indispensable to improve packet lookup/processing performance and to reduce CPU resources for 100Gbps packet processing performance. Additionally\, NTT will share a carrier use case activity on hybrid SDN with autonomous network control and network policy control by their Lagopus switch and OpenFlow technologies. \n\n\n\n\nVideo | Slide \n\nDesign Considerations for a High-Performing Virtualized LTE Core Infrastructure\n\nArun Rajagopal (Technology Architect\, NFV and Wireless Core\, Sprint)\, Sameh Gobriel (Senior Research Scientist\, Intel Labs) \n\nSprint’s expectation is to achieve similar performance in moving from purpose-built ASIC-based platforms to virtualized network solutions running on high volume servers. 
This session will discuss the technical challenges in achieving a scalable solution that addresses the required transaction rates and throughput of a carrier network. Learn how DPDK\, VM to VM communication optimizations\, and cluster scaling technologies work together to create a scalable LTE core infrastructure built on high volume servers. \nVideo | Slide \n\nEvaluation and Characterization of NFV Infrastructure Solutions on Hewlett-Packard Server Platforms\nAl Sanders (Lead Developer\, Hewlett-Packard) \nThe HP Servers NFV Infrastructure Lab was formed to evaluate DPDK based environments to ensure that HP Server Platforms can provide the best possible performance for hosting NFV Solutions. The lab has partnered with a number of NFV providers\, including Intel’s Open Network Platform and 6WIND. Our testing methodology will be presented with a focus on packet processing throughput and latency in a variety of DPDK enabled configurations\, including bare metal\, SR-IOV\, and accelerated virtual switches. Examples of results using Intel’s ONP and 6WIND technologies will be presented. \nVideo | Slide \n\nOpen Discussion Panel (Q&A with Speakers)\nJim St. Leger (Software Product Line Manager\, Intel Network Platforms Group) \nJim St. Leger\, Intel’s Software Product Line Manager\, led a discussion with DPDK experts. During this time\, DPDK Summit attendees had the opportunity to ask detailed questions of the day’s presenters. \nVideo
URL:https://www.dpdk.org/event/dpdk-summit-san-francisco-2015/
LOCATION:Westin St. Francis\, San Francisco\, CA\, United States
CATEGORIES:DPDK Summit
GEO:37.7749295;-122.4194155
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20160518
DTEND;VALUE=DATE:20160519
DTSTAMP:20260410T045006
CREATED:20160518T185721Z
LAST-MODIFIED:20180914T150158Z
UID:716-1463529600-1463615999@www.dpdk.org
SUMMARY:DPDK Summit China
DESCRIPTION:DPDK Summit China/Asia Pacific 2016 – May 18\, 2016\nThe DPDK community met at the Renaissance® Shanghai Yangtze Hotel to discuss the application of DPDK to a variety of industry segments including telecom\, cloud\, enterprise\, security\, and financial services. The event enabled the DPDK open source community to share DPDK usage and implementation; to hear from DPDK developers\, contributors\, and users; and to build the DPDK community. \n\n  \n\n\nDPDK Community Update and an Introduction to the Fast Data\, FD.io\, Project\nJim St. Leger (Software Product Line Manager\, Intel) \nThe DPDK Community and open source software project has been growing steadily for the past four years. This session will look at the growth of the community in terms of contributors and commitments\, discuss who is involved and contributing to the community today\, and provide guidance on how everyone and anyone can start contributing to DPDK.org today. The Fast Data or FD.io Project launched in February 2016.  This session will provide the background on the open source project creation including the gap it fills in the NFV/SDN data plane\, packet processing capability stack.  It will also talk about the rapid initial growth of the project including membership and future direction. \n\nVideo | Video (In China) | Slide \n\n\nAccelerate virtio/vhost Using DPDK in NFV/Cloud Environment\n\nHuawei Xie / 谢华伟 (Software Engineer\, Intel)\, Jianfeng Tan / 谈鉴锋 (Software Engineer\, Intel) \n\nAs the standard para-virtualization interface\, the performance and stability of virtio and vhost are the key to the success of NFV. In this presentation\, we would like to summarize our many years of pioneering work around DPDK virtio/vhost. We introduce the performance optimization techniques around virtio ring layout and vhost TSO to accelerate TCP/IP stack based applications in the guest VM. 
To enhance the robustness of vhost\, we created ‘vhost reconnect’ to support vhost restarting and reconnecting to QEMU in the event of a crash. We also leverage VMFUNC to provide protected and fast inter-VM channels. To provide a high-throughput\, low-latency interface in the container environment\, we address the device simulation and translation gap and successfully created a virtio interface in the container. \n\nVideo | Video (In China) | Slide \n\n\nNext Gen Virtual Switch\nJun Xiao / 肖骏 (Founder & CTO\, CloudNetEngine) \nWith the increasing prevalence of cloud computing\, there is a proliferation of east-west traffic in clouds\, and growing demands for virtualizing network I/O intensive workloads at telcos with big data. All of those place huge challenges on existing virtual switches. In this presentation\, we will share with the DPDK community what we learned while building the CloudNetEngine virtual switch\, which is based on great open source projects like DPDK\, OVS\, NPF\, etc. We’ll have deep dive discussions on how to boost performance\, improve CPU efficiency\, implement a rich feature set needed by public and private clouds — overlay\, security group\, QoS\, HW/SW offload\, load balance\, monitoring and ease of integration with cloud ecosystem. \n\nVideo | Video (In China) | Slide \n\n\nmTCP: A High-Speed User-Level TCP Stack on DPDK\nDr. KyoungSoo Park (Associate Professor of Electrical Engineering\, KAIST) \nScaling the performance of short TCP connections on multicore systems is fundamentally challenging. Despite many proposals that have attempted to address various shortcomings\, inefficiency of the kernel implementation still persists. For example\, even state-of-the-art designs spend 70% to 80% of CPU cycles handling TCP connections in the kernel\, leaving very little room for innovation in the user-level program. In this talk\, I will present mTCP\, a high-performance user-level TCP stack for multicore systems. 
mTCP addresses the inefficiencies from the ground up — from packet I/O and TCP connection management to the application interface. In addition to adopting well-known techniques\, our design (1) translates multiple expensive system calls into a single shared memory reference\, (2) allows efficient flow-level event aggregation\, and (3) performs batched packet I/O for high I/O efficiency. Our evaluations on an 8-core machine showed that mTCP improves the performance of small message transactions by a factor of 25 compared to the latest Linux TCP stack and a factor of 3 compared to the MegaPipe system. It also improves the performance of various popular applications by 33% to 320% compared to those on the Linux stack. \n\nVideo | Video (In China) | Slide \n\n\nDPDK: A Journey of Migration to Linux Kernel\nLou Yang / 娄扬 (Software Architect\, TOPSEC) \nIn some special cases\, we cannot use DPDK directly\, for instance\, if our software runs in Linux kernel mode or we use a special software architecture. But we can migrate DPDK’s key elements into our own products. The key elements include efficient memory management\, the polling-mode NIC driver\, an undisturbed data plane\, etc. This talk will introduce the practice of migrating DPDK into a network firewall product. \n\nVideo | Video (In China) | Slide \n\n\nBuilding High-Performance Networked Systems with Innovative Software and Hardware\nDr. Kai Zhang / 张凯 (Ph.D.\, University of Science and Technology of China) \nAs network I/O speed has been unleashed by new software techniques such as DPDK\, network processing is no longer the bottleneck of networked systems. Consequently\, networked systems need redesign to meet the increasing demand for high speed network processing. In this talk\, we make a strong case for GPUs to serve as special-purpose devices to greatly accelerate the operations of in-memory key-value stores. 
Specifically\, we present the design and implementation of Mega-KV\, a GPU-based in-memory key-value store system that achieves high performance and high throughput. Effectively utilizing the high memory bandwidth and latency hiding capability of GPUs\, Mega-KV provides fast data accesses and significantly boosts overall performance. Running on a commodity PC installed with two CPUs and two GPUs\, Mega-KV can process up to 160+ million key-value operations per second\, which is 1.4 – 2.8 times as fast as the state-of-the-art key-value store system on a conventional CPU-based platform. \n\nVideo | Video (In China) | Slide \n\n\nVortex from UCloud\nXu Liang / 徐亮 (Director\, UCloud.cn) \nAs an IaaS company we developed many NFV applications via DPDK\, but just recently we released Vortex\, a Layer-4 load balancer. We will discuss the challenges in developing a large-scale\, multi-tenant and Overlay Network NFV application and how we handled these challenges with DPDK. We will also discuss our experience and lessons learned. \n\nVideo | Video (In China) | Slide \n\n\nA Deep Dive Into Memory Access\nWang Zhihong / 王志宏 (Software Engineer\, Intel) \nMemory efficiency is critical to VNF performance. It is challenging to design and implement memory friendly networking software\, especially in a multiple core/processor environment. Understanding of the microarchitecture and underlying memory hierarchy helps software architects and developers analyze and optimize software performance.\nThis session uses a DPDK based NFV example to illustrate the actual memory behavior behind software abstraction & C code and common optimizing techniques. It also uses CPU cycles to explain the overhead of each type of memory operation to give a sense of what should be avoided and what’s the right thing to do in practice. \n\nVideo | Video (In China) | Slide \n\n\nLight and NOS\nDr. 
Dan Li (Associate Professor in Computer Science\, Tsinghua University) \nWe designed and implemented a user-level network stack based on DPDK\, named Light. The benefit of Light is that it does not require any modification to the application\, and the protocol stack does not affect the performance of the application. \n\nVideo | Video (In China) | Slide \n\n\nWhen Ceph Meets DPDK\nHaomai Wang / 王豪迈 (CTO\, XSKY) \nIn this presentation we will discuss the integration of DPDK and SPDK with Ceph. Ceph is a popular open-source storage system which includes block\, file\, and object interfaces. We implement a new DPDK network stack in Ceph which contains a userspace TCP/IP stack. SPDK is another Intel open-source technology\, which implements the NVMe protocol in userspace. Using DPDK’s mbuf\, we build the whole data packet without copying before storing it to the NVMe device. The whole userspace stack is NUMA-friendly\, zero-copy\, and nearly lock-free. \n\nVideo | Video (In China) | Slide
URL:https://www.dpdk.org/event/dpdk-summit-china-may-18-2016/
LOCATION:Renaissance® Shanghai Yangtze Hotel\, Shanghai\, China
CATEGORIES:DPDK Summit
ATTACH;FMTTYPE=image/jpeg:https://www.dpdk.org/wp-content/uploads/sites/23/2018/06/summit-thumb-asia-2016.jpg
GEO:31.2303904;121.4737021
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20160810
DTEND;VALUE=DATE:20160812
DTSTAMP:20260410T045006
CREATED:20160810T101958Z
LAST-MODIFIED:20180914T150143Z
UID:711-1470787200-1470959999@www.dpdk.org
SUMMARY:DPDK Summit\, San Jose
DESCRIPTION:The DPDK community met at The Tech Museum of Innovation in San Jose at a two-day event to discuss the application of DPDK to a variety of industry segments including telecom\, cloud\, enterprise\, security\, and financial services. The event enabled the DPDK open source community to share DPDK usage and implementation; to hear from DPDK developers\, contributors\, and users; and to build the DPDK community. \n\n\n\n\nIntroduction\nJim St. Leger (Software Product Line Manager\, Intel) \nThis presentation will outline the roadmap for future DPDK releases including 16.11 and 17.02. \n\n\n\nVideo | Slide\n \n\n\nRoadmap for Future Releases\n\nTim O’Driscoll (Software Engineering Manager for DPDK\, Intel) \n\nThis presentation will outline the roadmap for future DPDK releases including 16.11 and 17.02. \n\nVideo | Slide\n \n\nDPDK on Embedded Networking SoCs – Experience & Needs\n\n\nHemant Agrawal (NXP)\, Shreyansh Jain (NXP) \n\nThis presentation will focus on NXP experiences in contributing to DPDK\, and areas of DPDK that need to be enhanced to improve support for ARM-based processors.\n\nVideo | Slide\n \n\nExtending DPDK to Add an Event Driven Programming Model\n\nJerin Jacob (Cavium) \n\nCavium will provide an overview of event driven programming and the RFC API proposal for extending DPDK to adapt such a model. The presentation will cover introduction to event driven programming model concepts\, characteristics of hardware-based event manager devices\, RFC API proposal\, example use case\, and benefits of using the event driven programming model. \nVideo | Slide\n \n\nHigh Performance Framework for Symmetric Crypto Packet Processing in DPDK\n\nDeepak Jain (Network Platform Group\, Intel) \n\nThis presentation will provide an overview of cryptodev framework in DPDK. 
It will show how both software and hardware crypto accelerators can be used transparently from the application\, providing an overview of the framework\, its API\, a performance analysis and comparisons of software and hardware solutions\, and finally an example NFV use case.\nVideo | Slide\n \n\nUser Perspectives on Trying to Use DPDK for Accelerating Networking in End-system Applications\n\nSowmini Varadhan (Oracle Mainline Linux Kernel Group) \n\nThis presentation will describe Oracle’s experiences in using DPDK to accelerate I/O for typical end-systems applications. The talk will attempt to generate some discussion about areas where API constructs to provide access to key DPDK features would be valuable to enable an easy transition for typical real-world socket applications.\nVideo | Slide\n \n\nFlow Classification Optimizations in DPDK\n\n\nSameh Gobriel (Intel)\, Charlie Tai (Intel) \n\nThis presentation will cover flow table design using DPDK\, as well as new algorithmic and hardware optimizations for the RTE hash library that improve lookup and flow update/insert performance. Furthermore\, a new research proof of concept (POC) to optimize the OVS flow lookup using a two-layer lookup technology based on DPDK libraries will be highlighted with some preliminary research results.\n\nVideo | Slide\n \n\nIntel® 40G Ethernet Controller Architecture\, Application and Performance\n\nMuthurajan Jayakumar (M Jay) (Intel)\, Helin Zhang (Intel) \n\nThis presentation will describe how to achieve maximum I/O performance. It will include an architectural view focusing on the key elements in fast path Rx/Tx\, typical application usage scenarios\, and methods for optimizing performance. 
\nVideo | Slide\n \n\nAccelerating SSL and OVS at 100G by Leveraging DPDK\n\nEyal Cohen (Silicom) \n\nThis presentation will describe how to use DPDK together with Intel® Quick Assist Technology and the Intel® Ethernet Multi-host Controller FM10000 Family (FM10K) to achieve 100G throughput and OVS offload. \nVideo | Slide \n\n\nVirtualization of Network Packet Monitoring Systems Using DPDK\n\n\nDharmraj Jhatakia (GM and Head of DCT\, Happiest Minds)\, Jessel Mathews (Technical Lead\, Happiest Minds) \n\n\nIn order to better plan and utilize their networks\, Network Administrators need solutions which give them visibility into the network. Happiest Minds enabled the transformation of a leading Network Packet Monitoring company to co-create the Network Virtual Packet Monitoring system leveraging key technology innovations like DPDK. \n\nVideo | Slide\n \n\n NFV Use-case Enablement on DPDK and FD.io \n\n\nCristian Dumitrescu (Software Architect\, Intel) \n\nThis presentation will describe the development of NFV use cases such as a virtualized provider edge router (vPE) using the DPDK and FD.io projects. \n\nVideo | Slide\n \n\n\nTechnical Panel\n\nVenky Venkatesan (Intel)\, Stephen Hemminger (Microsoft)\, Jerin Jacob (Cavium)\, Hemant Agrawal (NXP)\, Sowmini Varadhan (Oracle) \n\nThe panel will be composed of some of the technical experts from the DPDK community. It will involve an interactive Q&A with the audience. \n\nVideo \n \n\n\nDPDK in a Box\n\nDave Hunt (Intel) \n\nDPDK in a Box is a small\, low-cost DPDK platform running on a Minnowboard. It’s not intended for volume production\, but may be useful for universities and independent developers who want to work on DPDK but have a limited budget. \n\nVideo | Slide\n \n\n\nDPDK Vhost/Virtio Status\n\nYuanhan Liu (Intel) \n\nA lot of development effort has gone into DPDK vhost-user/virtio recently\, including improving the performance\, enhancing the stability\, and adding more functionality. 
This presentation will describe some recent enhancements including vhost-user multiple-queue and vhost-user reconnect. \n\n Slide \n\nScalable High-Performance User Space Networking for Containers\n\n\nCunming “Steve” Liang (Intel)\, Jianfeng Tan (Intel) \n\n\nContainer-based networking is becoming more and more popular because of the short provisioning time\, low overhead\, good scalability and reusability. This paper describes virtio for container technology\, providing a scalable\, high-performance\, user space virtual network interface for L2/L3 VNFs. \nSlide \n\n\nUnderstanding the Performance of DPDK as a Computer Architect\n\nDr. Peilong Li (University of Massachusetts\, Lowell) \n\nIn our experiments\, OVS-DPDK can achieve a maximum of 8x throughput increase compared with vanilla OVS. To understand the performance difference\, we leverage advanced profiling tools such as Intel VTune Amplifier and Linux perf to investigate in detail what system architecture parameters are affected by OVS-DPDK for achieving the speedups. \n\nVideo | Slide\n \n\n\nDPDK\, VPP/FD.io and Accelerated Open vSwitch\n\nTom Herbert (SDN Group\, Red Hat) \n\nIn this talk\, Mr. Herbert compares VPP with Open vSwitch. Although both VPP and OVS utilize DPDK for data plane acceleration\, they are very different in internal architecture and implementation. Mr. Herbert will discuss these differences in the context of various use cases\, how performance can vary\, and how in different uses one may shine while the other may falter. \n\nVideo  | Slide\n \n\n\nPISCES: A Programmable\, Protocol-Independent Software Switch\n\nSean Choi (Stanford University) \n\nSoftware switches are typically based on a large body of code\, and changing the switch is a formidable undertaking. Instead\, it should be possible to specify how packets are processed and forwarded in a high-level domain-specific language (DSL) such as P4\, and compiled to run on a software switch. 
We present PISCES\, a software switch derived from the DPDK-based implementation of Open vSwitch (OVS)\, a hard-wired hypervisor switch\, whose behavior is customized using P4. \n\nVideo |  Slide\n \n\n\nBerkeley Extensible Soft Switch (BESS)\n\n\nSangjin Han (UC Berkeley)\, Christian Maciocco (Intel) \n\n\nThis presentation will describe the Berkeley Extensible Soft Switch (BESS). \n\nVideo  | Slide\n \n\n\n\n\nDecibel: Dense Disaggregated Disks for the Datacenter\n\nMihir Nanavati (University of British Columbia) \n\nIn this talk\, we take the position that volumes today should represent a core building block of datacenter storage\, analogous to virtual machines. In addition to providing a logical block interface\, volumes must also provide additional data plane services necessary in multi-tenant environments\, such as performance-isolated resource sharing and access control\, at the line-speed of expensive non-volatile memories. \n\n\n\n\n\nChange Before You Have to Be Claimed\n[from Change Before You Have To. (Jack Welch)]\n\n\nTomoya Hibi (NTT Network Innovation Labs)\, Yoshihiro Nakajima (NTT Network Innovation Labs)\, Hirokazu Takahashi (NTT Network Innovation Labs) \n\n\nIn this talk\, we share the latest experiment and performance tuning knowledge of a scale-out NFV environment in ShowNet of Interop Tokyo 2016. We deployed a set of DPDK-enabled routing VNFs on a DPDK-enabled hypervisor vSwitch called Lagopus vSwitch with DPDK vhost-user PMD to examine their performance scalability. \n\nVideo  | Slide\n \n\n\nTransport Layer Development Kit (TLDK)\n\nKonstantin Ananyev (Intel) \n\nThis presentation describes the Transport Layer Development Kit (TLDK)\, which is an FD.io project that provides a set of libraries to accelerate L4-L7 protocol processing for DPDK-based applications. The initial implementation supports UDP\, with work on TCP in progress. 
The project scope also includes integration of TLDK libraries into Vector Packet Processing (VPP) graph nodes to provide a host stack. \n\nVideo  | Slide\n \n\n\nDPDK in Overlay Networks and How it Affects NFV Performance\n\n\nRaja Sivaramakrishnan (Distinguished Engineer\, Juniper Networks)\, Aniket Daptari (Sr. Product Manager\, Juniper Networks) \n\n\nOne approach to network virtualization is via end-system IP/VPN based overlays. To implement these end-system IP/VPNs\, often a kernel based software module is leveraged. However\, when the module resides in the host kernel\, it incurs a performance penalty. To alleviate these performance penalties\, the OpenContrail implementation leveraged DPDK and ported the kernel based distributed forwarding module to the user space. \n\nVideo | Slide\n \n\n\nInnovative NFV Service-Slicing Solution Powered by DPDK\n\nHayato Momma (Principal Engineer\, NEC Communication Systems\, Ltd.) \n\nIn this presentation\, the speaker will talk about: \n\nIntroduction of ‘Service-Slicing-Gateway’ that realizes the IoT-service-slicing\nWhy DPDK is necessary\nHow we overcome the issues faced in the past NFV development\n\n\nVideo | Slide\n \n\n\nWhy are Open and Programmable Data Planes Critical to the Future of Networking?\n\nPrem Jonnalagadda (Barefoot Networks) \n\nThis talk will present the need for programmability and openness of the data plane and the benefits to the networking industry as a whole. Specifically\, the talk will include details on P4\, a high-level\, networking domain-specific and open programming language\, and the ecosystem that is burgeoning around it. \n\nVideo | Slide\n \n\n\nGetting Your Code Upstream Into DPDK\n\nJohn McNamara (Intel) \n\nWhether you have a simple one-line patch or a full-blown Poll Mode Driver\, this talk will explain how to get that code upstream into DPDK. 
It will discuss the DPDK community\, the mailing list\, the patch process\, the contributors guides\, the ABI policy\, code reviews\, documentation and other aspects that make up the DPDK ecosystem. \n\nVideo | Slide\n \n\n\nPutting DPDK in Production\n\n\nFranck Baudin (Principal Product Manager\, OpenStack NF)\, Anita Tragler (Senior Product Manager\, Red Hat Enterprise Linux NFV) \n\n\nThis presentation will cover: \n\nUser perspective on DPDK\, including consumability and packaging considerations\nThe importance of ABI stability and Long Term Support\nSecurity for DPDK guest and host (DPDK vswitch)\n\n\nVideo | Slide\n \n\n\nCommunity Survey Feedback\n\nMike Glynn (Program Manager\, Intel) \n\nWe conducted a survey of the DPDK community\, soliciting input on a variety of topics including DPDK usage\, roadmap\, performance\, patch submission process\, documentation and tools. This session will present the results of the survey\, which will help to guide the future direction of the project. \n\nVideo | Slide
URL:https://www.dpdk.org/event/dpdk-summit-usa-2016/
CATEGORIES:DPDK Summit
ATTACH;FMTTYPE=image/jpeg:https://www.dpdk.org/wp-content/uploads/sites/23/2018/06/summit-thumb-usa-2016.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20170425
DTEND;VALUE=DATE:20170427
DTSTAMP:20260410T045006
CREATED:20170425T173106Z
LAST-MODIFIED:20180914T150130Z
UID:663-1493078400-1493251199@www.dpdk.org
SUMMARY:DPDK Summit\, Bangalore
DESCRIPTION:Introductions\, Welcome and Agenda for the Day\nSujata Tibrewala (Intel) \nVideo | Slide \n\nDPDK Architecture and Roadmap\nKannan Babu Ramia (Intel)\, Deepak K Jain (Intel) \nThis talk will explore the motivation behind the existence of DPDK\, why and how it evolved into what it is today\, and how the future roadmap addresses the needs of the industry. \nVideo | Slide \n\nSupporting SoC devices in DPDK – Status Update\nShreyansh Jain (NXP) \nThis talk is an extension of a talk presented in DPDK Summit Userspace 2016 in Dublin\, where NXP presented a case for expanding DPDK towards non-standard (SoC) devices. That required a large number of fundamental changes in the DPDK framework to untangle it from PCI specific code/functionality. In this talk we delve into the current upstream design of 1) the bus ‘driver’\, 2) the mempool ‘driver’\, 3) the device driver\, and how these layers tie together to provide the device model in the DPDK framework. \nVideo | Slide \n\nDPDK on an Intelligent NIC\nVamsi Attunuru (Cavium) \nThis presentation is about using DPDK as firmware on an Intelligent NIC (OCTEON TX). It will cover the firmware architecture and how DPDK fits in that architecture. It will discuss the hurdles faced and solutions used as part of this exercise. \nVideo | Slide \n\nMigrating from 10G to 25G\nJingjing Wu (Intel)\, Helin Zhang (Intel) \nThe Ethernet speed upgrade path was clearly defined as 10G->40G->100G. However\, new developments in the data center indicate the latest path for server connections will be 10G->25G->100G with potential for 10G->25G->50G->100G. This is because 25G provides a more efficient use of hardware and a more logical upgrade path to 100G. \nVideo | Slide \n\nDPDK Cook Book\nMuthurajan Jaya Kumar (Intel) \nThis short talk is a quick tour of the book\, with a show and tell of each chapter. Rather than going through the contents in depth\, it gives developers an overview of what each chapter contains. 
\nVideo | Slide \n\nImplementation of Flow-Based QoS Mechanism with OVS and DPDK\nKaruppusamy M (Wipro) \nThe project objective is to implement ‘Flow based QoS’ for SDN-NFV platform using OVS and DPDK on Intel architecture. We will apply this QoS mechanism on Wipro vCPE platform and demonstrate performance improvement of real time traffic. \nVideo | Slide \n\nFast Path Programming\nRamachandran Subramoniam (Happiest Minds)\, Vnpraveen Desu (Happiest Minds) \nThis session is a primer on the prominence of P4 as a high-level\, domain-specific language for data path applications. While there are a few ASIC vendors like Barefoot Networks who are coming up with compilers for their platforms\, we are looking at expanding the reach of P4 for virtual infrastructure / software based data path by showcasing how P4 can become a choice for writing DPDK applications and thus enhanced portability. \nVideo | Slide \n\nDataplane for Subscriber Gateways\nNatarajan Venkataraman (Ericsson) \nSubscriber gateways\, such as BNG nodes\, have unique requirements and challenges as compared to traditional routers. They need to be feature rich while also supporting high scale and throughput. This talk will provide an overview of a typical dataplane for subscriber gateways and highlight some of the design challenges in realizing the goals and the trade-offs to be considered. \nVideo | Slide \n\nSample VNF in OPNFV\nRamia Kannan Babu (Intel) \nThe topic begins with an introduction for developing data plane feature rich Virtual Network Function (VNF) using optimized DPDK libraries including ip-pipeline packet framework and taking advantage of basic x86 architecture. It covers concept of developing data plane applications for running with RTC (Run To completion) mode or Pipeline mode with just configuration change. It also covers the generic Best Known Methods for developing optimized data plane application on x86 architecture with specific code examples from samplevnf project from OPNFV. 
Finally\, it concludes with a call to action for the community to contribute to the samplevnf project in OPNFV for application development. \nVideo | Slide \n\nFast Data IO / Vector Packet Processor: Architecture overview\nShwetha Bhandari (Cisco) \nFD.io (Fast Data) is architected as a collection of sub-projects and provides a modular\, extensible user space IO services framework that supports rapid development of high-throughput\, low-latency and resource-efficient IO services. At the heart of fd.io is Vector Packet Processing (VPP) technology. This session will give an overview of VPP\, its architecture and how it pushes packet processing to extreme limits of performance and scale. \nVideo | Slide \n\nTransport Layer Development Kit (TLDK)\nMohammad Abdul Awal (Intel) \nThis presentation provides an overview of the Transport Layer Development Kit (TLDK) project in FD.io. \nVideo | Slide \n\nSFC with OVS-DPDK and FD.io-DPDK\nPrasad Gorja (NXP) \nDPDK has become the ubiquitous user space framework on which prominent open source switching software\, Open vSwitch and FD.io\, run\, and is widely integrated in OPNFV. This session discusses OpenDaylight (ODL) based SFC on both OVS-DPDK and FD.io with DPDK\, and provides a comparative study on architecture\, performance and latency of the SFC use case on ARM SoCs. \nVideo | Slide \n\nDPDK Automation in Red Hat OpenStack Platform\nSaravanan KR (Red Hat) \nIn this talk\, we would like to take you through Red Hat’s effort to provision the OpenStack cluster with an OVS-DPDK/SR-IOV datapath with the needed EPA parameters. We will describe the deployment steps\, and the need for composable roles to handle today’s VNF deployment scenarios. \nVideo | Slide \n\nPacket Steering for Multicore Virtual Network Applications over DPDK\nPriyanka Naik (IIT Mumbai)\, Mitali Yadav (IIT Mumbai) \nThis presentation addresses the question of how packets must be steered from the kernel bypass mechanism to the user space applications. 
We investigate the following two questions: (i) Should packets be distributed to cores in hardware or in software? (ii) What information in the packet should be used to partition packets to cores? \nVideo | Slide \n\nCryptodev API\nDeepak K Jain (Intel) \nThis presentation describes the cryptodev API\, a framework for processing crypto workloads in DPDK. The cryptodev framework provides crypto poll mode drivers as well as a standard API that supports all these PMDs and can be used to perform various cipher\, authentication\, and AEAD symmetric crypto operations in DPDK. The library also provides the ability for effortless migration between hardware and software crypto accelerators. \nVideo | Slide
URL:https://www.dpdk.org/event/dpdk-summit-bangalore-april-25-26-2017/
LOCATION:Vivanta by Taj Hotel\, Bangalore\, India
CATEGORIES:DPDK Summit
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20170627
DTEND;VALUE=DATE:20170628
DTSTAMP:20260410T045006
CREATED:20180612T172929Z
LAST-MODIFIED:20180914T150123Z
UID:658-1498521600-1498607999@www.dpdk.org
SUMMARY:DPDK Summit\, Shanghai
DESCRIPTION:Opening\nHeqing Zhu (Intel) \nThis presentation discussed the DPDK community and ecosystem status in China and worldwide\, key direction\, and event agenda. \nSlide \n\nDPDK in Container: Status Quo and Future Directions\nJianfeng Tan (Intel) \nThis presentation discussed how DPDK can accelerate container networking\, problems in both data and control planes\, progress and plans. \nSlide \n\nF-Stack\, a Full User Space Network Service on DPDK\nHailong Wang (Tencent) \nThis presentation discussed the F-Stack\, its design principle\, architecture\, main components\, performance\, and development history in Tencent. \nSlide \n\nA Better Virtio towards NFV Cloud\nCunming Liang (Intel)\, Xiao Wang (Intel) \nThis presentation discussed the vHost data path acceleration technology to pave the way for Network Function Cloudification\, including the roadmap to intercept DPDK\, and the QEMU community. \nSlide \n\nAccelerate VM I/O via SPDK and Crypto for Generic vHost\nChangpeng Liu (Intel)\, Xin Zeng (Intel) \nThis presentation discussed using the DPDK generic vhost-user library to build storage (vHost-SCSI) and crypto (vhost-crypto) applications. \nSlide \n\nOVS-DPDK Practices in Meituan Cloud\nHuai Huang (Meituan) \nThis presentation discussed the OVS-DPDK trial in Meituan\, its progress and challenges for broad adoption\, as well as the gaps and solutions. \nSlide \n\nNetwork Performance Tuning\, Lesson Learned\nFangliang Lou (ZTE) \nThis presentation discussed performance optimization methods\, key lessons\, and a success story of using Intel and DPDK technology to achieve a significant performance boost for wireless workloads. \nSlide \n\nOPDL: on the Path to Packet Processing Nirvana\nLiang Ma (Intel) \nThis presentation discussed an optimized packet distributor for core-to-core distribution. OPDL decentralizes the distributor; all packets are maintained in order and handled atomically. It addresses well the high-volume distribution needs for small packets. 
\nSlide \n\nIntel® 25GbE Ethernet Adapter Advanced Features for NFV\nHelin Zhang (Intel)\, Jingjing Wu (Intel) \nThis presentation discussed the new 25GbE Ethernet features in DPDK\, how to transition from 10GbE to 25GbE using Intel Ethernet\, device personalization\, and NFV use cases such as VF Daemon and the Adaptive VF Guest Interface. \nSlide \n\nAccelerate VPP Workload with DPDK Cryptodev Framework\nFan Zhang (Intel) \nThis presentation discussed Cryptodev in the DPDK framework and how to use it in a VPP/IPsec scenario\, as well as performance metrics when Intel QAT is applied. \nSlide \n\nData Center Security Use Case with DPDK\nHaohao Zhang (Tencent) \nThis presentation discussed Tencent cloud data center’s security needs\, why it moved from dedicated chips to x86/DPDK paths\, and how to use the multiple process model to design the security service\, which led to adoption on thousands of servers. \nSlide \n\nTowards Low Latency Interrupt Mode PMD\nYunhong Jiang (Intel)\, Wei Wang (Intel) \nThis presentation discussed the interrupt/poll switching challenge\, a cost analysis of the interrupt PMD on bare metal and in virtualization\, as well as a tuning proposal for latency reduction. \nSlide \n\nTelco Data Plane Status\, Challenges and Solutions\nHao Lin (T1Networks) \nThis presentation discussed the evolved path of developing the network appliance over multiple generations\, from kernel to user space\, from MIPS to x86\, from an integrated to a distributed model. In addition\, this presentation discussed how to construct an NFV system on a dual-socket server. \nSlide \n\nSupport Millions of Users in vBRAS\nZhaohui Sun (Panabit) \nThis presentation discussed vBRAS on the x86 platform and how to support millions of users. \nSlide \n\nA High Speed DPDK PMD Approach in LXC\nJie Zheng (United Stack) \nThis presentation discussed a new PMD for container network optimization to connect Linux and DPDK\, in addition to a new design based on the vectorized ring buffer. 
\nSlide \n\nCloud Data Center\, Network Security Practices\nKai Wang (Yunshan) \nThis presentation discussed traffic monitoring and analysis\, network visualization framework on DPDK\, and how to construct an efficient security cloud. \nSlide
URL:https://www.dpdk.org/event/dpdk-summit-shanghai-2017/
LOCATION:Shanghai Marriott Hotel Hongqiao\, 2270 Hong Qiao Road\, Shanghai\, 200336\, China
CATEGORIES:DPDK Summit
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20171114
DTEND;VALUE=DATE:20171116
DTSTAMP:20260410T045006
CREATED:20180612T153639Z
LAST-MODIFIED:20180914T150110Z
UID:637-1510617600-1510790399@www.dpdk.org
SUMMARY:DPDK Summit\, San Jose
DESCRIPTION:Opening Remarks & Governing Board\nJim St. Leger (Intel) \nIntroduction to the event\, including a review of the agenda\, logistics and expectations. An update from the Governing Board on who the Governing Board are\, what their responsibilities are\, progress to date\, and future priorities/challenges for the project. \nVideo | Opening Remarks & Governing Board  \n\nCommunity Survey Feedback\nJohn McNamara (Intel) \nWe conducted a survey of the DPDK community\, soliciting input on a variety of topics including DPDK usage\, roadmap\, performance\, patch submission process\, documentation and tools. This session will present the results of the survey\, which will help to guide the future direction of the project. \nVideo | Community Survey Feedback \n\nReducing Barriers to Adoption – Making DPDK Easier to Integrate into Your Application\nBruce Richardson (Intel) \nWhile DPDK is a widely-adopted software package for high-performance networking applications\, there are a number of ways in which it is harder to use than it otherwise needs to be. This is especially true when it comes to integrating DPDK with an existing legacy codebase. This presentation will look at some of the issues and provide an update on current development and prototyping work to simplify DPDK integration with existing code. \nVideo | Slide \n\nNew Command Line Interface for DPDK\nKeith Wiles (Intel) \nThe current command line interface for DPDK\, called cmdline\, has a number of limitations and a complex user design. The next command line for DPDK\, called CLI\, is more dynamic with a simple directory style design. The directory style design allows commands to be placed in a hierarchy for easy integration\, plus it supports a simple argc/argv function interface. Using these features reduced the LOC in the test-pmd cmdline file from 12K to ~4K. The presentation includes an example usage. 
\nVideo | Slide \n\nEvent Adapters – Connecting Devices to Eventdev\nNikhil Rao (Intel) \nRecently\, the DPDK has enabled applications to use dynamically load balanced pipelines with the introduction of libeventdev. In addition to using eventdev for CPU to CPU pipelines\, devices such as ethdev\, cryptodev and timers need to be able to inject events into eventdev. Currently\, we are in the process of upstreaming extensions to eventdev called eventdev adapters for each of these devices that would allow applications to configure event input from these devices to the event device. We will discuss each of the adapter APIs and show example code that allow event based applications to be written in a platform independent manner. \nVideo | Slide \n\nGRO/GSO Libraries: Bring Significant Performance Gains to DPDK-based Applications\nJiayu Hu (Intel) \nA major part of packet processing has to be done on a per-packet basis\, such as switching and TCP/IP header processing. The overhead of the per-packet routines\, however\, exerts a significant impact on the performance of network processing. Generic Receive Offload (GRO) and Generic Segmentation Offload (GSO) are two effective techniques for mitigating the per-packet processing overhead by reducing the number of packets to be processed. Specifically\, GRO merges the receiving packets of the same flow in RX\, while GSO delays packet segmentation in TX. \nVideo | Slide \n\nPower Aware Packet Processing\nChris MacNamara (Intel) \nA drive to deliver OPEX saving and performance where and when it’s needed. Enter a new era of power optimized packet processing. This talk reviews new & existing DPDK extensions for policy based power control proposed in August and the associated performance benefits. 
\nVideo | Slide \n\nEnhanced Memory Management\nLaszlo Vadkerti (Ericsson)\, Jiangtao Zhang (Ericsson) \nIn this presentation we will be reviewing Enhanced Memory Management techniques and multi-process enhancements as a possible way to seamlessly solve burning issues like slow initialization\, memory protection\, memory hotplug\, dynamic scale up/down\, physically vs virtually contiguous\, inter-vm shared memory etc. \nVideo | Slide \n\nMaking networking apps scream on Windows with DPDK\nJason Messer (Microsoft)\, Manasi Deval (Intel) \nNetwork bandwidth is precious and milliseconds matter for many user-mode applications and virtual appliances running on both Linux and Windows. In order to get the best network throughput to process and forward packets\, developers need direct access to the NIC without going through the host networking stack. Until now\, only developers on Linux and FreeBSD platforms were able to use DPDK to obtain these performance benefits but\, we are happy to announce that we have an implementation of DPDK for the Windows platform! \nVideo | Slide \n\nMediated Devices: Better Userland IO\nFrançois-Frédéric Ozog (Linaro) \nUnbinding Linux kernel drivers to allow userland IO through VFIO has a number of disadvantages such as another large touchy code base to deal with the hardware\, loss of standard Linux tools (ifconfig\, ethtool\, tcpdump\, SNMPd…) and impossibility to accelerate container networking. Mediated device introduced in Linux kernel 4.10 for GPUs and provisions for additional devices hold the promise of collaboration between kernel drivers and userland application in need of direct datapath steering. \nVideo | Slide \n\nMellanox bifurcated driver model\nRony Efraim (Mellanox) \nMellanox PMD uses verbs instead of taking full control over the device (PCI). That allows the kernel (netdev) and more than a single PMD to run on a single PCI function. 
If the DPDK application does not steer traffic via rte_flow\, all packets are processed by the kernel net device. \nVideo | Slide \n\nDPDK with KNI – Pushing the Performance of an SDWAN Gateway to Highway Limits!\nSabyasachi Sengupta (Nuage Networks) \nAn SDWAN gateway is usually built with x86 commercial off-the-shelf (COTS) hardware that often runs a variant of the Linux operating system and requires high throughput for connecting a corporate’s branch network with its Data Centers. However\, owing to the inherent limitations of standard 4K sized pages without dedicated resource allocations in a general-purpose Linux kernel\, it has been seen that even high-end SDWAN gateway hardware cannot forward traffic to its full potential. \nVideo | Slide \n\nDPDK as microservices in ZTE Paas\nYong Wang (ZTE)\, Songming Yan (ZTE) \nTo provide high performance in the ICT (Information Communications Technology) area\, we use DPDK as a micro service in container networking. We used primary/secondary mode\, rte_ring\, shared memory and so on to improve the performance of the datapath. We achieved bidirectional zero-copy between containers\, in contrast to only dequeue zero-copy in vhost-user/virtio-user. \nVideo | Slide \n\nAccelerate Clear Container Network performance\nJun Xiao (CloudNetEngine) \nClear Container is a great technology to secure a container with a fast and lightweight hypervisor\, and there might be very different types of workloads running inside Clear Containers\, e.g. some workloads require a high packet processing rate (PPS) and some workloads require massive data transfer (BPS). Given Clear Container’s much higher density than Virtual Machines\, a high performance virtual switch is very critical and in high demand\, but currently available virtual switches are still far behind those demands. 
\nVideo | Slide \n\nThe Path to Data Plane Microservices\nRay Kinsella (Intel) \nDPDK revolutionized software packet processing initially for discrete appliances and then for Virtual Network Functions. Containers and µServices technology are extensively used as a means to scale up and out in the Cloud. These technologies now include Comms Service Providers among their advocates\, and embracing these technologies with their scaling model and resiliency is the new frontier in software packet processing. \nVideo | Slide \n\nContainer Panel Discussion\nA panel discussion with Yong Wang\, Songming Yan\, Jun Xiao and Ray Kinsella to discuss DPDK enablement of containers and micro-services. \nVideo \n\nAccelerate storage service via SPDK\nJim Harris (Intel) \nSPDK (storage performance development kit\, http://spdk.io) is an open source library used to accelerate the storage service (e.g.\, file\, block) especially for PCIe SSDs (e.g.\, 3D Xpoint SSDs). The foundation of SPDK is the user space\, asynchronous and polled mode drivers (e.g.\, IOAT and NVMe)\, and the idea of which is similar to DPDK. \nVideo | Slide \n\nAccelerating P4-based Dataplane with DPDK\nPeilong Li (University of Massachusetts Lowell) \nThe high-level P4 programming language promises protocol and hardware-agnostic design of network functions. As the low-level functional implementation\, the P4 Behavior Model (BMv2) provides the necessary constructional blocks (parser\, deparser\, lookup tables\, and action primitives\, etc.) into which any P4 dataplane programs can be compiled. \nVideo | Slide \n\nImplementation and Testing of Soft Patch Panel\nTetsuro Nakamura (NTT)\, Yasufumi Ogawa (NTT) \nSPP is a framework to easily interconnect DPDK applications on host and guest virtual machines together\, and assign resources dynamically to these applications. As a carrier service provider\, we expect that SPP improves performance and usability for inter-VM communication for large scale NFV environment. 
\nVideo | Slide \n\nReflections on Mirroring With DPDK\nE. Scott Daniels (AT&T Labs) \nDebugging network problems is often hard\, and further complicated when a guest O/S is provided with an SR-IOV VF bound to a DPDK driver because tools running on the physical host (e.g. tcpdump) lose visibility to the interface. Hardware mirroring of traffic to another VF provides the ability to regain visibility and to help facilitate the troubleshooting process. \nVideo | Slide \n\nA network application API on top of device APIs\nFrançois-Frédéric Ozog (Linaro) \nThe NFV promise is to be able to instantiate or even live migrate VMs on different platforms and have applications benefit from whatever acceleration is available. As a result\, the application developer should not make compilation choices or define the application architecture based on what he/she expects from the runtime environment. ODP and DPDK have in common the concept of “device” APIs (Ethernet\, crypto\, events\, IPsec\, compression…) with distinct approaches. \nVideo | Slide \n\nSafetyOrange – a tiny server class multi-purpose box with DPDK\nAndras Kovacs (Ericsson)\, Laszlo Vadkerti (Ericsson) \nSafetyOrange is a portable (4.3 liter) and silent Xeon computer. Well\, it is larger than ‘DPDK in a box’ but it supports two NICs (as of now sporting 2 XL710 cards)\, has 32G of memory and 14 cores. We have been using it for testing both native and virtualized DPDK appliances as well as whole virtual routers\, and it has served as a traffic generator for performance tests (DPDK pktgen)\, too. It is also a brilliant development environment. And at the end of the day it still fits into a regular backpack. \nVideo | Slide \n\nTechnical Roadmap\nTechnical Board \nAn update from the Technical Board covering the future roadmap and technical challenges for the project. 
\nVideo | Slide \n\nrte_raw_device: implementing programmable accelerators using generic offload\nHemant Agrawal (NXP)\, Shreyansh Jain (NXP) \nThere are various kinds of HW accelerators available with SoCs. Each of the accelerators may support different capabilities and interfaces. Many of these accelerators are programmable devices. In this talk we will discuss the rte_raw_device and implementing a sample driver with it for NXP AIOP generic programmable accelerator. \nVideo | Slide \n\nDPDK support for new hardware offloads\nAlejandro Lucero (Netronome) \nFully programmable SmartNICs allow new offloads like OVS\, eBPF\, P4 or vRouter\, and the Linux kernel is changing for supporting them. Having these same offloads when using DPDK is a possibility although the implications are not clear yet. We present Netronome’s perspective for adding such a support to DPDK mainly for OVS and eBPF. \nVideo | Slide \n\nFlexible and Extensible support for new protocol processing with DPDK using Dynamic Device Personalization\nAndrey Chilikin (Intel)\, Brian Johnson (Intel) \nDynamic Device Personalization allows a DPDK application to enable identification of new protocols\, for example\, GTP\, PPPoE\, QUIC\, without changing the hardware. The demo showcases a DPDK application identifying and spreading traffic on GTP and QUIC. Dynamic Device Personalization can be used on any OS supported by DPDK\, for example we showcase a QUIC protocol classification demo on Windows OS. \nVideo | Slide \n\nServerless DPDK – How SmartNIC resident DPDK Accelerates Packet Processing\nNishant Lodha (Cavium) \nCloud architectures and business models are driving the need to ensure that all server compute resources have a revenue tie-in\, heralding the march towards the serverless dataplane. This session presents a unique way to harness the power of DPDK to accelerate packet processing by pushing the data plane into a SmartNIC. 
We will discuss the motivation\, benefits and challenges of implementing a DPDK based data plane running on the compute resources embedded in a SmartNIC. \nVideo | Slide \n\nEnabling hardware acceleration in DPDK data plane applications\nDeclan Doherty (Intel) \nThis presentation will look at the challenges faced in leveraging hardware acceleration in DPDK enabled applications\, addressing some of the problems posed in creating consistent hardware agnostic APIs to support multiple accelerators with non-aligned features\, and the knock-on implications this can have for application designs. \nVideo | Slide \n\nrte_security: enhancing IPSEC offload\nHemant Agrawal (NXP)\, Declan Doherty (Intel)\, Boris Pismenny (Mellanox) \nIn this talk we present joint work by NXP\, Intel and Mellanox on offloading security protocol processing to hardware\, providing better utilization of the host CPU for packet processing. This talk provides an overview of new enhancements in the rte_security APIs to support various features of IPSEC offloads\, as inline or lookaside offload. \nVideo | Slide \n\nMellanox FPGA\nBoris Pismenny (Mellanox) \nThe FPGA allows a wide variety of features to be supported in DPDK. We observe that programmable HW is useful for packet-processing pipelines. For example\, consider a pipeline of multiple match-action operations\, in which actions may also specify generic packet modifications that are carried out by accelerators. In this case\, the CPU is only involved at the beginning (transmission) or end (reception) of the pipeline\, while the accelerator invocations are initiated by NIC matching operations. \nVideo | Slide \n\nSMARTNIC\, FPGA\, IPSEC Panel discussion\nA panel discussion with Hemant Agrawal\, Alejandro Lucero\, Andrey Chilikin\, Brian Johnson\, Nishant Lodha\, Declan Doherty and Boris Pismenny to discuss DPDK enablement for smart NICs\, FPGA and IPsec. 
\nVideo \n\nVPP Host Stack\nFlorin Coras (Cisco) \nAlthough packet forwarding with VPP and DPDK can now scale to tens of millions of packets per second per core\, the lack of alternatives to kernel-based sockets means that containers and host applications cannot take full advantage of this speed. To fill this gap\, functionality was recently added to VPP that is specifically designed to allow containerized or host applications to communicate via shared memory if co-located\, or inter-host via a high-performance TCP stack. \nVideo | Slide \n\nDPDK’s best kept secret – Micro-benchmark performance tests\nMuthurajan Jayakumar (Intel) \nTo have apples-to-apples comparisons\, developers need a common ground of base level metrics. That common ground is the ability to identify the basic DPDK building blocks of importance (as well as relevance to the workload)\, e.g.\, producer/consumer rings\, and measure the cycle cost associated with basic operations like enqueuing/dequeuing – bulk versus single. \nVideo | Slide \n\nDPDK on Microsoft Azure\nDaniel Firestone (Microsoft)\, Madhan Sivakumar (Microsoft) \nSDN is at the foundation of all large scale networks in the public cloud\, such as Microsoft Azure. But how do we make a software network scale to an era of 40/50+ gigabit networks and provide great performance for network applications and NFV in VMs? In this presentation\, Daniel Firestone and Madhan Sivakumar will detail Azure Accelerated Networking for Linux with DPDK\, using Azure’s FPGA-based SmartNICs to accelerate Linux workloads using SR-IOV. \nVideo | Slide \n\nOpenNetVM: A high-performance NFV platform to meet future communication challenges\nK. K. Ramakrishnan (Univ. of California\, Riverside) \nTo truly achieve the vision of a high-performance software-based network that is flexible\, lower-cost\, and agile\, a fast and carefully designed NFV platform along with a comprehensive SDN control plane is needed. 
Our high-performance NFV platform\, OpenNetVM\, exploits DPDK and enables high bandwidth network functions to operate at near line speed\, while taking advantage of the flexibility and customization of low cost commodity servers. \nVideo | Slide \n\nMake DPDK’s software traffic manager a deployable solution for vBNG\nCsaba Keszei (Ericsson) \nAchieving network function parity across purpose-built ASIC implementations and virtual implementations is not straightforward. Irrespective of differences in performance capability between purpose-built and virtual environments\, functional disfiguration represents a significant obstacle in operators’ adoption of virtualization\, as it implies a dependency on access/aggregation network topology and configuration. \nVideo | Slide \n\nOpen vSwitch hardware offload over DPDK\nRony Efraim (Mellanox) \nTelcos and Cloud providers are looking for higher performance and scalability when building nextgen datacenters for NFV & SDN deployments. While running OVS over DPDK reduces the CPU overhead of interrupt driven packet processing\, CPU cores are still not completely freed up from polling of packet queues. \nVideo | Slide \n\nAccelerating NFV with VMware’s Enhanced Network Stack (ENS) and Intel’s Poll Mode Drivers (PMD)\nJin Heo (VMware)\, Rahul Shah (Intel) \nNetwork Functions Virtualization (NFV) deployments are happening at a rapid pace. This is driving the need to more efficiently consolidate compute\, storage and communication workloads. NFV enables Communications Service Providers to migrate their fixed function networking elements to a general purpose server; however\, there is the need to preserve the existing performance and latency. To support such workloads\, a vSwitch that enables both high throughput and low latency is a must. 
\nVideo | Slide \n\nDPDK Membership Library\nSameh Gobriel (Intel) \nIn this talk we will present the new DPDK Membership Library. This library is used to create what we call a “set-summary”\, a new data structure used to summarize a large set of elements. It is a generalization and extension of traditional filter structures (e.g. Bloom filters\, cuckoo filters) that efficiently tests whether a key belongs to a large set. \nVideo | Slide \n\nIntegrating and using DPDK with Open vSwitch\nAaron Conole (Red Hat)\, Kevin Traynor (Red Hat) \nSome applications are written from the ground up with DPDK in mind\, but Open vSwitch is not one of them. This talk will look at how Open vSwitch integrates and uses DPDK. It will look at various aspects such as DPDK initialization\, threading\, and the usage of DPDK PMDs and libraries. It will also talk about DPDK usability aspects such as LTS and API/ABI stability and the effect they have on Open vSwitch with DPDK. \nVideo | Slide \n\nLagopus Router\nTomoya Hibi (NTT)\, Hirokazu Takahashi (NTT) \nIn this talk\, we introduce a new open source router implementation called Lagopus Router. It is an extensible microservice-architecture router that consists of a DPDK router dataplane\, router agents\, and a pub/sub-based centralized configuration manager. These modules are written in Go and C and are loosely coupled to each other via gRPC. \nVideo | Slide \n\nvSwitch Panel Discussion\nA panel discussion with Rony Efraim\, Jin Heo\, Rahul Shah\, Sameh Gobriel\, Charlie Tai\, Aaron Conole\, Kevin Traynor\, Tomoya Hibi and Hirokazu Takahashi to discuss DPDK acceleration of vSwitches. \nVideo \n\nClosing Remarks\nJim St. Leger (Intel) \nVideo
URL:https://www.dpdk.org/event/dpdk-summit-usa-2017/
LOCATION:Club Auto Sport\, 521 Charcot Ave\, San Jose\, CA\, 95131\, United States
CATEGORIES:DPDK Summit
ATTACH;FMTTYPE=image/jpeg:https://www.dpdk.org/wp-content/uploads/sites/23/2018/06/summit-thumb-usa-2017.jpg
GEO:37.384043;-121.9144208
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=Club Auto Sport 521 Charcot Ave San Jose CA 95131 United States;X-APPLE-RADIUS=500;X-TITLE=521 Charcot Ave:geo:-121.9144208,37.384043
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20180309
DTEND;VALUE=DATE:20180310
DTSTAMP:20260410T045006
CREATED:20180309T154301Z
LAST-MODIFIED:20180914T150103Z
UID:633-1520553600-1520639999@www.dpdk.org
SUMMARY:DPDK Summit Bangalore
DESCRIPTION:Welcome\, Introduction\, Roadmap from Tech Board\nHemant Agrawal (Intel)\, Jerin Jacob (Cavium)\, Kannan Babu Ramia (Intel)\, Sujata Tibrewala (Networking Evangelist\, Intel) \nVideo | Slide\n \n\nIdeas for Adding FPGA Accelerators to DPDK\nZhihong Wang\n \nWith Partial Reconfiguration (PR) of parts of the bitstream\, a Field Programmable Gate Array (FPGA) can provide not just one kind of accelerator but many types of accelerators at the same time. But the lack of a standard software framework and APIs to integrate various FPGA devices is hindering FPGA’s integration with standard frameworks such as DPDK and its mass deployment. In this presentation\, we will introduce FPGA-BUS\, which provides an FPGA management software framework without dealing with hardware differences among various FPGA devices. \nVideo | Slide \n\nRte_Security: A New Crypto Offload Framework in DPDK\nHemant Agrawal (Software Architect\, NXP)\, Akhil Goyal (Software Engineer\, NXP Semiconductors) \nIn this talk we will present a security framework for offloading cryptographic operations and specific protocol processing like IPsec to hardware. This helps reduce the CPU cycles spent on packet processing. We provide a brief overview of the rte_security APIs and their implementation for inline and lookaside offload hardware. \nVideo | Slide \n\nAsymmetric Crypto and Compression in DPDK\nShally Verma (Manager\, Project Management\, Cavium) \nThis talk covers DPDK/ODP specification definition and development for the compression and crypto modules of Cavium’s networking and storage family of devices. \nVideo | Slide \n\nVirtio/Vhost Status Quo and Near-Term Plan\nZhihong Wang \nThis talk will help developers improve virtual switches by better understanding the recent and upcoming improvements in DPDK virtio/vhost\, covering both features and performance. Some best practices are also shared for both development and operations. \nVideo | Slide \n\nOpen vSwitch Hardware Offload Over DPDK\nAshrut Ambastha (Sr. 
Staff Engineer\, Mellanox Technologies)\n \nTelcos and cloud providers are looking for higher performance and scalability when building next-generation datacenters for NFV and SDN deployments. While running OVS over DPDK reduces the CPU overhead of interrupt-driven packet processing\, CPU cores are still not completely freed up from polling packet queues. \nTo solve this challenge\, OVS-DPDK is further accelerated through HW offloads.\nWe introduce a classification methodology that enables a split data plane between OVS-DPDK and the NIC hardware. A flow tag that represents the matched rule in the hardware is passed to OVS\, which saves the CPU cycles consumed by flow lookups. We present the open source work being done in the DPDK\, OVS and Linux kernel communities and the significant performance gains achieved. We also present how this work can be extended to VXLAN traffic. \nSlide \n\nMemzone Monitor\nVipin Varghese (System Application Engineer\, Intel)\n \nDebugging memory corruption in DPDK applications can be difficult – particularly if multiple processes are accessing huge pages simultaneously. Given a machine with stripped binaries and no GDB instance – where do you start debugging? \nVideo | Slide \n\nDPDK Data Plane Corruption\nAmol Patel (Sys App Eng Manager\, Intel)\n \nDPDK worker threads are Linux threads\, and thread-level MMU protection does not exist. All the worker threads of the primary process have access to the stack and heap memory of all the other worker threads\, so one thread can corrupt another thread’s stack and heap-allocated memory. Prevention and detection of worker-thread stack corruption can be achieved by provisioning the stack memory from a memzone; dynamically allocated objects can likewise be allocated from a memzone instead of the heap. 
This allows protecting the thread’s stack and allows checking for corruption by dumping the memzones at runtime. Any accidental data-plane table corruption can be prevented by using some general ‘C’ programming features and centralizing the data-plane object updates. Data-plane tables can be safeguarded by placing each of them in an individual memzone surrounded by memory guard bands. This allows the user to dump the complete table in one go and easily identify table corruption. The memory guard bands allow the user to identify any out-of-bound access to the tables. \nSlide \n\nDPDK – A Must Have Best Practices Checklist for NFV Platform Performance Optimizations\nM. Jayakumar (Software Application Engineer\, Intel)\n \nWHAT IS THE PROBLEM STATEMENT: Vendor-agnostic DPDK runs with multiple open software components. Ecosystem and open software developers\, when picking and choosing different s/w modules\, need a checklist that is applicable across all of their platforms to ensure their product is tuned for best performance.\nHOW DOES THIS PRESENTATION ADDRESS THIS?: The presentation explains each potential bottleneck in the system along with the tools to identify those issues. In addition\, for each performance deterrent\, it gives vendor-neutral tuning steps to achieve optimal performance. Since the steps are vendor-neutral\, the solutions are scalable to multiple platforms – in terms of both development and deployment.\nCAN YOU GIVE SOME OUTLINE AND SAMPLE FLOW OF THE PRESENTATION?:\nAlthough DPDK is a user-space process\, it still co-exists with the kernel\, the OS scheduler\, kernel drivers and kernel applications\, and each can potentially impact performance. Take the OS scheduler as an example: it can take a DPDK core away from its network polling task and “steal” it to schedule other tasks. The tuning checklist gives steps to isolate the core from such disturbance. 
Similarly\, an optimization for mice flows may have to differ from one for elephant flows. The checklist gives balanced and optimal guidelines. \nVideo | Slide \n\nOptimal VM Dimensioning for DPDK Enabled VNFs in Core/Edge Telco Cloud\nShashi Kant Singh (System Architect\, Altiostar Networks India Pvt Ltd)\n \nFor optimal VM performance in cloud networks\, the dimensioning of the VM plays an important role. Specifically\, the CPU and RAM assignment affects not just the workload performance but also the operational aspects. VMs handling line-rate traffic need a DPDK-enabled framework and a sufficient number of cores for the workload processing\, but this makes the VMs bulky from an operations perspective; handling live migrations and failures is difficult in such cases. Reducing the CPUs cannot be done beyond a certain level\, as it would lead to sub-optimal performance from DPDK’s standpoint. Similarly\, edge networks have a different set of challenges for VM dimensioning. Edge cloudlets consist of a mix of bare-metal servers\, dual-socket servers\, single controller/compute nodes or full-fledged chassis. Each of these has different constraints and needs to be handled separately for optimal VM dimensioning. This presentation shall bring out the factors that need to be considered for optimal VM dimensioning from an overall performance perspective. \nVideo | Slide \n\nAF_XDP\nMagnus Karlsson (Intel)\, Nikhil Rao (Software Engineer\, Intel)\, Bjorn Topel (Intel)\n \nDeep Packet Inspection (DPI) and other specialized packet processing workloads are often run in user space due to their complexity and/or specialization. With the increase of Ethernet speeds from 40 and 100 to 200 Gbit/s\, the need for high-speed raw Ethernet frame delivery to Linux user space is ever increasing. 
In this talk\, we present AF_XDP (formerly known as AF_PACKET V4)\, designed to scale to these high networking speeds through the use of true zero-copy\, lock-less data structures\, elimination of syscalls and other techniques\, while still abiding by the isolation and security rules of Linux. AF_XDP is currently an RFC on the Linux netdev mailing list\, with the goal of getting it accepted upstream. \nIn our evaluation\, AF_XDP provides a performance increase of up to 40x for some microbenchmarks and 20x for tcpdump compared to previous AF_PACKET (raw socket) versions in Linux. To illustrate the approach\, we have implemented support for Intel I40E NICs and veth\, but it should hopefully be easy to port to other NICs and virtual devices as well. AF_XDP is designed as an extension to the existing XDP support in Linux\, so that XDP-enabled devices will be able to use it. We also show how SW networking libraries and SDKs such as DPDK can benefit from AF_XDP to achieve increased robustness\, ease of use and HW independence. \nVideo | Slide \n\nSkydive – Analyzing Topology and Flows in OVS-DPDK and OVN OVS-DPDK Environments\nMasco Kaliyamoorthy (Software Engineer\, Red Hat)\, Venkata Anil Kumar (Red Hat)\, Numan Siddique (Red Hat)\, Yogananth Subramanian (Red Hat) \nSkydive is a real-time network topology\, flow and protocol analyzer. Skydive can be used with OVS deployments – both kernel and DPDK datapaths – to do on-demand port\, payload and statistical analysis\, which helps in monitoring and troubleshooting complex OpenStack/NFV/SDN environments. This talk covers using it in OVS-DPDK deployments. The talk also covers OVN\, which provides virtual network abstractions (L2 and L3) on top of Open vSwitch\, and using it with Skydive in OVS-DPDK environments. 
\nVideo | Slide \n\nIntegrating DPDK with Storage Application\nVishnu Itta (Senior Architect\, MayaData)\, Mayank Patel (MayaData) \nThe Storage Performance Development Kit (SPDK) provides a set of tools and libraries for writing high-performance\, scalable\, user-mode storage applications. New applications requiring fast access to storage can be built on top of SPDK; however\, they need to adhere to principles which are a fundamental part of the SPDK/DPDK framework. To name a few of these: there is exactly one thread running on a CPU core\, which never blocks and constantly polls for new events\, executing the corresponding handlers in a loop. Blocking locks in the poller’s loop are not acceptable\, since those could delay execution of other handlers on the reactor. Memory is allocated from pinned huge pages\, which makes DMA transfers to and from the device possible while avoiding copying data between buffers. Those are fundamental design changes compared to how applications were built 5\, 10 or more years ago (legacy applications). Legacy applications usually have many threads which use classic synchronization primitives (mutexes\, readers/writers locks\, etc.) that don’t scale with the number of CPUs\, and rely on the kernel to synchronize and schedule the threads\, with well-known ill effects. Trying to redesign legacy applications is often not feasible\, especially if they are complex and have matured for many years. On the other hand\, trying to reimplement them from scratch while preserving their stability and quality can take years. Instead\, what we suggest is a compromise solution in which the legacy application runs with minimal changes while\, to some degree\, leveraging the performance that SPDK has to offer for doing I/O. The legacy application runs in a separate process from SPDK\, and VirtIO with the vhost-user protocol is used for passing data between them. 
VirtIO is a well-proven technology which allows moving data between two processes using shared memory without a need to copy it\, without heavy-weight synchronization and without involving the kernel. The vhost-user client-server protocol allows establishing a VirtIO data channel between two processes over a Unix domain socket. The SPDK framework already comes with a vhost-user server implementation. The missing part is a vhost-user client implementation\, coming in the form of a simple library which could be easily embedded into a legacy application. A library for allocating I/O buffers from huge pages is necessary too\, although DPDK’s rte_eal library seems sufficient for a proof of concept. A legacy application could completely bypass the kernel for doing disk I/O\, assuming SPDK runs a disk driver in userland\, and although the application cannot unleash the full potential of SPDK due to its legacy design\, it is expected to perform faster. The most important question\, which we are trying to find an answer to\, is how much speed can be gained using this compromise solution compared to the traditional way of reading/writing data blocks through a block device file. While comparing IOPS from the fio benchmark tool is surely interesting\, we also focus on and compare the performance of a real-world storage application – ZFS\, one of the most advanced local file systems of the present day. It is perhaps less well known that it is possible to run the ZFS file system in userspace\, a capability initially introduced only for testing. 
It is exciting to observe how much the performance of the file system as a whole can be further improved by using SPDK as a storage backend.\nIn this talk\, Mayank and Vishnu\, who are active contributors to the open source project OpenEBS and worked on ZFS to make it SPDK-enabled\, will share their learnings about:\n– Integration of DPDK/SPDK libraries with the project\n– Use of the mempool library for memory allocation of frequently used objects\n– Use of the ring library for message passing between threads\n– Experiences with developing a vhost-user client library \nVideo | Slide \n\nAccelerating NVMe-oF target service via SPDK/DPDK\nZiye Yang (Senior Software Engineer\, Intel) \nNVMe over Fabrics (NVMe-oF) extends the NVMe protocol from PCIe to fabrics and aims at providing high performance when accessing remote NVMe devices. In this talk\, an accelerated NVMe-oF target built with SPDK (Storage Performance Development Kit) technology is introduced. SPDK provides a set of tools and libraries for writing high-performance\, scalable\, user-mode storage applications. It achieves high performance by moving all of the necessary drivers into user space and operating in a polled mode (similar to the idea in DPDK) instead of relying on interrupts\, which avoids kernel context switches and eliminates interrupt handling overhead. The accelerated NVMe-oF target relies on SPDK’s framework\, user-space NVMe driver\, environment library encapsulating DPDK’s EAL library (e.g.\, thread and memory management) and standard fabrics libraries (e.g.\, ibverbs) to provide a high-performance block service. Compared with the Linux kernel’s NVMe-oF target\, our solution is much more efficient\, with a 10X improvement per CPU core. \nVideo | Slide \n\nEmpower Diverse Open Transport Layer Protocols in Cloud Networking\nGeorge Zhao (Director OSS & Ecosystem\, Huawei) \nWith the development of cloud networks\, the networking stack needs to be re-invented. 
Although user applications have more options to construct high-performance solutions with varied stacks\, there are a lot of challenges:\n* Legacy TCP is best-effort based and provides no performance guarantee.\n* A one-size-fits-all protocol or algorithm is less feasible.\n* Network environments are complicated and heterogeneous.\n* There is growing concern about network security.\nWe want to share Huawei’s practices: an open source protocol kit\, DMM (Dual-domain\, Multi-protocol\, Multi-instance)\, that provides an extendable transport protocol framework and runtime. It enables application-transparent and dynamic engagement of new protocols. New protocols can be added on demand and protocols can be managed dynamically. DMM is a new project in FD.io and has achieved great success on packet forwarding\, providing flexible interfaces for user applications and protocols. \nVideo | Slide \n\nVote of Thanks and Speaker Recognition\nSujata Tibrewala (Networking Evangelist\, Intel) \n\nSchedule
URL:https://www.dpdk.org/event/dpdk-summit-bangalore-2018/
LOCATION:Leela Palace\, India
CATEGORIES:DPDK Summit
GEO:20.593684;78.96288
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20180628
DTEND;VALUE=DATE:20180629
DTSTAMP:20260410T045006
CREATED:20180507T144907Z
LAST-MODIFIED:20180921T121655Z
UID:67-1530144000-1530230399@www.dpdk.org
SUMMARY:DPDK Summit China
DESCRIPTION:[vc_row type=”in_container” full_screen_row_position=”middle” equal_height=”yes” content_placement=”middle” scene_position=”center” text_color=”dark” text_align=”left” overlay_strength=”0.3″ shape_divider_position=”bottom” shape_type=””][vc_column column_padding=”no-extra-padding” column_padding_position=”all” background_color_opacity=”1″ background_hover_color_opacity=”1″ column_shadow=”none” column_border_radius=”none” width=”1/2″ tablet_text_alignment=”default” phone_text_alignment=”default” column_border_width=”none” column_border_style=”solid”][vc_column_text]\n\nDPDK Summit China will cover the latest developments to the DPDK and other related projects such as FD.io\, Lagopus\, OVS\, DPVS\, Tungsten Fabric and SPDK\, including plans for future releases\, and will provide an opportunity to hear from DPDK users who have used it in their applications.\n[/vc_column_text][nectar_btn size=”small” button_style=”regular” button_color_2=”Accent-Color” icon_family=”none” url=”https://www.youtube.com/playlist?list=PLo97Rhbj4ceIcdoZ6RDUeChDCPQwe-EoH&disable_polymer=true” text=”View Videos on YouTube” margin_top=”25″][/vc_column][vc_column column_padding=”no-extra-padding” column_padding_position=”all” background_color_opacity=”1″ background_hover_color_opacity=”1″ column_shadow=”none” column_border_radius=”none” width=”1/2″ tablet_text_alignment=”default” phone_text_alignment=”default” column_border_width=”none” column_border_style=”solid”][/vc_column][/vc_row][vc_row type=”in_container” full_screen_row_position=”middle” scene_position=”center” text_color=”dark” text_align=”left” overlay_strength=”0.3″ shape_divider_position=”bottom”][vc_column centered_text=”true” column_padding=”no-extra-padding” column_padding_position=”all” background_color_opacity=”1″ background_hover_color_opacity=”1″ column_shadow=”none” column_border_radius=”none” width=”1/1″ tablet_text_alignment=”default” phone_text_alignment=”default” column_border_width=”none” 
column_border_style=”solid”][vc_column_text]\nSESSION SUMMARY\n[/vc_column_text][vc_column_text]To access the summary\, slides\, and video links for a specific session\, click on each of the tabs below.[/vc_column_text][/vc_column][/vc_row][vc_row type=”in_container” full_screen_row_position=”middle” scene_position=”center” text_color=”dark” text_align=”left” overlay_strength=”0.3″ shape_divider_position=”bottom”][vc_column centered_text=”true” column_padding=”no-extra-padding” column_padding_position=”all” background_color_opacity=”1″ background_hover_color_opacity=”1″ column_shadow=”none” column_border_radius=”none” width=”1/1″ tablet_text_alignment=”default” phone_text_alignment=”default” column_border_width=”none” column_border_style=”solid”][toggles style=”minimal”][toggle color=”Default” title=”More Flexible\, More Scalable\, and More Applicable NFV Use Cases: Refactor of DPDK Packet Framework/IP Pipeline”][vc_column_text]Tech Board Presentation & Panel Discussion\nZhang Fan (Intel)\nIn 2014\, the first Packet Framework library/application generator was born. With the combination of simple configuration items and CLI commands\, various network functions such as firewall\, flow metering and edge routing can be built easily with impressive performance. The Packet Framework became the second most used component within DPDK. However\, its evolution did not stop there. In this presentation\, we will introduce our brand-new Packet Framework 2.0. The new packet framework takes a more flexible and more scalable approach: it separates the dependency of tables and action profiles from the pipeline instance\, enabling arbitrary mapping of actions to pipeline ports and tables at will\, and allows various configurations from external controllers such as OpenBras. It is expected that the new packet framework will maintain the performance benefit and can be used to build a wider range of applications. 
\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »[/vc_column_text][vc_column_text][/vc_column_text][/toggle][toggle color=”Default” title=”Multiple vDPI Functions Using DPDK and Hyperscan on OVS-DPDK Platform”][vc_column_text]Multiple vDPI Functions Using DPDK and Hyperscan on OVS-DPDK Platform\nCheng-Chien Su\, Lionic Corp.\nWe implement IPS\, application identification\, web content filter\, and antivirus using DPDK and Hyperscan. These vDPI functions are integrated into the OVS-DPDK-based vCPE platform. The carrier can control the functions for consumer requirements via the OpenFlow protocol. The DPI function reports an identified session and adds a flow entry to OVS-DPDK to skip packet inspection for that session. This feature reduces unnecessary packet inspections and increases network performance. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle][toggle color=”Default” title=”Hardware-Level Performance Analysis of Platform I/O”][vc_column_text]Hardware-Level Performance Analysis of Platform I/O\nRoman Sudarikov\, Intel\nPerformance analysis and software optimization have become increasingly challenging due to overall computer system complexity. Rapidly rising technological advancement across all layers of execution makes application performance tuning a very complicated task. A common existing technique to solve performance-related issues is based on the utilization of on-chip Performance Monitoring Units (PMUs). Traditionally\, performance analysis has mostly focused on the performance counters in CPU cores. 
However\, when configuring a platform with I/O devices or when selecting a platform for I/O usage models\, it is equally important to have performance data from the Uncore (the rest of the processor besides the cores)\, I/O and socket-interconnect counters. Cumulatively\, there are more than one thousand performance monitoring events that can help understand microarchitecture activity while running an application. In this session\, we introduce an Uncore-based performance analysis of I/O-intensive applications as a complement to the traditional CPU-core-centric approach. The presentation covers platform components that are critical for I/O flows and their performance monitoring capabilities. We discuss Intel® Data Direct I/O Technology and why it is extremely critical for applications dealing with concurrent I/O traffic. Finally\, we describe the latest changes in the cache hierarchy and how they affect I/O transaction flows\, and end with an overview of Intel tools that can provide such platform-wide observability. To accompany the presentation\, we will demonstrate various tools using the provider edge router sample application (ip_pipeline) from DPDK to illustrate I/O bandwidth\, MMIO read/write access\, DDIO hit/miss statistics\, memory bandwidth and much more. 
\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle][toggle color=”Default” title=”Link-Level Network Slicing with DPDK”][vc_column_text]Link-Level Network Slicing with DPDK\nJie Zheng (Network Virtualization Engineer\, VMware)\nAs NFV intrinsically demands\, the virtual network must feature higher bandwidth and lower latency even on top of COTS hardware. To improve network efficiency while maintaining high availability and scalability\, we do layer 2 network virtualization with dedicated network nodes alongside the infrastructure network; to coordinate them in a link-level fabric view\, a controller cluster which employs smart layer 3 techniques is introduced. Working together\, they provide the ability to slice the quantified network resource. In this session we will focus on how DPDK fuels the data path. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][/toggle][toggle color=”Default” title=”What’s New in Virtio 1.1″][vc_column_text]What’s New in Virtio 1.1\nJason Wang\, Red Hat\nAs a de-facto standard for virtual I/O devices\, virtio has become more popular in both software and hardware implementations. The talk will discuss several improvements in the upcoming 1.1 version for achieving better performance. The talk will first give a brief introduction to virtio and its history. Three major features will be presented: the first is the new packed ring layout\, which aims to mitigate cache stress and reduce the number of PCI transactions for hardware backends. The second is the in-order feature\, which allows a device to reduce the number of writes when adding used buffers. 
The third is the notification data feature\, which will be useful for hardware implementations fetching descriptors or for debugging purposes. In the end\, performance numbers\, community status\, and future work will be discussed. The target audience is anyone interested in networking and NFV\, DPDK and virtualization. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle][toggle color=”Default” title=”DPDK Support for Vhost Acceleration”][vc_column_text]DPDK Support for Vhost Acceleration\nXiao Wang\, Intel\nVhost Data Path Acceleration (vDPA) enables offload of the Vhost vring data path to HW devices in a para-virtualized way without direct pass-through to the guest. In addition to the SW Vhost lib\, vDPA allows device-specific configuration and management. As a result\, it achieves SR-IOV-like performance with cloud-friendly compatibility and supports live migration\, which makes it possible to transparently upgrade a stock VM with virtio to a new HW-accelerated platform. This session will give an introduction on how to leverage the DPDK vDPA lib to support different kinds of accelerators\, and an update on the latest upstream status. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle][toggle color=”Default” title=”Zero-Copy Improvement and Best Practice”][vc_column_text]Zero-Copy Improvement and Best Practice\nLiu Yong\, Intel\nThe vhost dequeue zero-copy feature has been in DPDK since 17.02\, and theoretically the VM2VM and VM2NIC performance of large packets is improved significantly. But there are still some stumbling blocks in the usage of this feature. 
For example\, it won’t work with certain QEMU versions\, and downgraded performance has even been seen in OVS deployments. This session will dig into the details of those obstacles and the best practices to remove them. With all these actions\, we can make vhost dequeue achieve its expected performance in deployment environments like OVS. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle][toggle color=”Default” title=”Practices to Achieve Ultimate Performance in Cloud Networking”][vc_column_text]Practices to Achieve Ultimate Performance in Cloud Networking\n曹水 (Senior Researcher\, Huawei)\nIn current cloud networking\, various network applications emerge endlessly. As the foundation of cloud networking\, the vSwitch has evolved quickly to fulfill unceasing performance requirements: from kernel-based to userspace\, from software to NIC offload. Today\, we want to share our learnings on achieving ultimate performance within the vSwitch. These practices are already applied in Huawei’s new-generation network infrastructure in Huawei Cloud. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »[/vc_column_text][vc_column_text][/vc_column_text][/toggle][toggle color=”Default” title=”Accelerate Virtual Switch with Intelligent Adapter”][vc_column_text]Accelerate Virtual Switch with Intelligent Adapter\nZhihui Chen\, Mellanox\nA virtual switch (vSwitch) is widely deployed in Cloud/NFV environments for transparent switching of traffic between Virtual Machines (VMs) and with the outside world. It is normally deployed as software on a server and is challenged with poor performance and high CPU overhead. The emerging intelligent adapter provides flow-based switching capability among virtual NICs (vNICs) through its programmable embedded switch (eSwitch). 
Based on an intelligent adapter\, a software vSwitch can offload a large portion of packet-processing operations into hardware\, especially computing-intensive operations including VXLAN encapsulation/decapsulation\, packet classification based on a set of header fields defined by OpenFlow\, modification of packet headers\, QoS and access control (ACL). There are two methods to optimize a vSwitch over intelligent adapters: Flex and Direct. In Flex mode\, the data path still exists in software while some key packet-processing operations are offloaded to hardware\, saving CPU and improving the efficiency of packet classification in software. This mode keeps compatibility with the current vSwitch design and interface to the VM. In Direct mode\, the data path is offloaded to hardware and the eSwitch is configured to enable traffic switching among vNICs and handle all packet-processing operations. The software vSwitch is used only for the control path\, to offload flow rules to the eSwitch and to process traffic which cannot be offloaded. With this mode\, traffic bypasses the hypervisor and is delivered to the VM directly through an SR-IOV interface. It can fully release CPU resources from network processing and provide the best performance. Our test of Open vSwitch (OVS) with this mode over Mellanox ConnectX-5 shows 66 Mpps with zero CPU%. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle][toggle color=”Default” title=”DPDK Multiple Sized Packet Buffer Pool”][vc_column_text]DPDK Multiple Sized Packet Buffer Pool\nGavin Hu\, ARM\nCurrently\, DPDK uses single-sized 2KB buffers to accommodate incoming packets\, without discerning their sizes. This wastes a lot of memory for small packets\, and chaining buffers for jumbo frames costs extra DMA transactions and extra CPU cycles. 
In this talk\, we will discuss how to improve this situation by using multiple sized buffers. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle][toggle color=”Default” title=”FPGA Acceleration and Virtualization Technology in DPDK”][vc_column_text]FPGA Acceleration and Virtualization Technology in DPDK\nRosen Xu\, Intel\, Tianfei Zhang\, Intel\nMany Chinese e-commerce companies use cloud computing infrastructure to accelerate their business. The cloud aims to cut costs and helps users focus on their core business instead of being impeded by IT obstacles. SDN and NFV are increasingly deployed in internet companies. But how can a software network scale to an era of 40/50+ Gigabit networks and provide great performance for network applications in cloud computing\, such as the Alibaba Double 11 shopping spree? In this presentation\, Tianfei and Rosen will introduce a new FPGA software framework in DPDK that uses Intel Xeon+A10 FPGA to accelerate Linux workloads using SRIOV and virtualization technology. We will introduce OPAE (Open Programmable Acceleration Engine)\, the open source software framework for FPGA devices\, and its integration with DPDK for network function acceleration. With OPAE userspace drivers and APIs\, we were able to create an open and consistent API for DPDK to integrate FPGA accelerated network functions without dealing with hardware differences among various FPGA devices. This significantly simplifies DPDK’s integration with FPGA accelerator devices. We have developed SmartNIC software that uses OPAE and virtualization technology to accelerate the business of e-commerce companies in China. In the end\, we will discuss the status of integrating this FPGA software framework into the DPDK community.
\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle][toggle color=”Default” title=”A Case for Queue APIs”][vc_column_text]A Case for Queue APIs\nHonnappa Nagarahalli\, ARM\nDPDK supports the run-to-completion and pipeline models of packet processing. The pipeline model uses queues (rte_ring functions) to exchange packets between the cores running different stages of the pipeline. Many networking SoCs provide acceleration capability for the queues. Since there are no queue APIs for inter-core communication\, the networking SoCs are forced to use software-based rte_ring functions for inter-core communication to support the pipeline model. Creating queue APIs also allows for introducing different types of queues (e.g. non-blocking queues) without having to create separate rte_ring functions for every type. This talk presents possible queue APIs and their advantages. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »[/vc_column_text][vc_column_text][/vc_column_text][/toggle][toggle color=”Default” title=”DPDK based Load Balancer to Support Alibaba Dual 11 Festival”][vc_column_text]DPDK based Load Balancer to Support Alibaba Dual 11 Festival\nLiang Jun\, Alibaba Cloud\nA network load balancer is a service that improves the distribution of network workloads across multiple computing resources. It extends an application’s service capability through traffic distribution and\, in the meantime\, eliminates single points of failure to improve the availability of the system. Therefore\, the load balancer has been widely deployed and has become an important component of many Alibaba services. The new generation of Alibaba’s load balancer is based on DPDK.
The high performance and high availability support the high-speed development of Alibaba’s business. It has also been successfully tested by the huge burst of traffic in the 2017 Alibaba Dual 11 festival. \nThis presentation will introduce Alibaba’s new generation of load balancers from three aspects. First\, it will introduce the architecture of the high-performance load balancer based on DPDK. Then\, the horizontally scalable\, redundant physical network architecture will be discussed\, which raises the performance of the load balancer to a new level. Finally\, it introduces the concurrent session synchronization mechanism of the load balancer. This mechanism keeps the load balancers’ service always online in disaster recovery and upgrade scenarios\, transparently to tenants. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle][toggle color=”Default” title=”DPDK Accelerated Load Balancer”][vc_column_text]DPDK Accelerated Load Balancer\nLei Chen\, iQiyi.com\nDPVS (DPDK+LVS) is an open source L4 load balancer (LB) based on DPDK.\n* why LVS/Kernel is not fast enough.\n* how to accelerate the LB with DPDK and other techniques.\n* DPVS architecture and design details.\n* DPVS performance vs. LVS.\n* Key issues we addressed during development.\n* Use cases and deployment examples.\n* DPVS roadmap.
\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle][/toggles][/vc_column][/vc_row][vc_row type=”in_container” full_screen_row_position=”middle” scene_position=”center” text_color=”dark” text_align=”left” disable_element=”yes” overlay_strength=”0.3″ shape_divider_position=”bottom” shape_type=””][vc_column column_padding=”no-extra-padding” column_padding_position=”all” background_color_opacity=”1″ background_hover_color_opacity=”1″ column_shadow=”none” column_border_radius=”none” width=”1/1″ tablet_text_alignment=”default” phone_text_alignment=”default” column_border_width=”none” column_border_style=”solid”][vc_column_text]The summit will take place at the China National Convention Center\, Beijing on June 28th. \nThe agenda will cover the latest developments in DPDK and other related projects such as FD.io\, Lagopus\, OVS\, DPVS\, Tungsten Fabric and SPDK\, including plans for future releases\, and will provide an opportunity to hear from DPDK users who have used it in their applications. \nHear and learn from DPDK and industry experts who will be sharing information about the projects\, use cases\, capabilities\, and integrations with DPDK. This is a great opportunity for LinuxCon\, ContainerCon and CloudOpen attendees to share their thought leadership and innovations at one of the industry’s premier events.
\n[button open_new_tab=”true” color=”accent-color” hover_text_color_override=”#fff” size=”large” url=”http://linux.31huiyi.com/” text=”Register LinuxCon + DPDK” color_override=”” image=”fa-calendar-o”]\n[button open_new_tab=”true” color=”accent-color” hover_text_color_override=”#fff” size=”large” url=”https://www.regonline.com/registration/checkin.aspx?EventId=2205569″ text=”Register DPDK Only” color_override=”” image=”fa-calendar-o”]\n[button open_new_tab=”true” color=”accent-color” hover_text_color_override=”#fff” size=”large” url=”https://dpdkprcsummit2018.sched.com/” text=”Schedule” color_override=”” image=”fa-list-alt”]\nSponsors:\n[/vc_column_text][/vc_column][/vc_row]
URL:https://www.dpdk.org/event/dpdk-summit-china-2018/
LOCATION:China National Convention Center\, Beijing\, China
CATEGORIES:DPDK Summit
GEO:39.9041999;116.4073963
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20181203T080000
DTEND;TZID=America/Los_Angeles:20181204T170000
DTSTAMP:20260410T045006
CREATED:20180827T234238Z
LAST-MODIFIED:20181230T124804Z
UID:1178-1543824000-1543942800@www.dpdk.org
SUMMARY:DPDK Summit North America 2018
DESCRIPTION:[vc_row type=”in_container” full_screen_row_position=”middle” scene_position=”center” text_color=”dark” text_align=”left” overlay_strength=”0.3″ shape_divider_position=”bottom”][vc_column column_padding=”no-extra-padding” column_padding_position=”all” background_color_opacity=”1″ background_hover_color_opacity=”1″ column_shadow=”none” column_border_radius=”none” width=”1/1″ tablet_text_alignment=”default” phone_text_alignment=”default” column_border_width=”none” column_border_style=”solid”][vc_column_text]DPDK is a set of libraries and drivers for fast packet processing. It is designed to run on any processor. The first supported CPU was Intel x86 and it is now extended to IBM POWER and ARM. It runs mostly in Linux userland. A FreeBSD port is available for a subset of DPDK features. DPDK is an Open Source BSD licensed project. The most recent patches and enhancements\, provided by the community\, are available in the master branch. The agenda for DPDK Summit North America 2018 will cover the latest developments in the DPDK framework and other related projects such as FD.io\, including plans for future releases\, and will provide an opportunity to hear from DPDK users who have used the framework in their applications. 
Let’s discuss the present and future\, including DPDK roadmap suggestions\, container networking\, P4\, hardware accelerators and any other networking innovation.[/vc_column_text][/vc_column][/vc_row]\n[vc_row type=”in_container” full_screen_row_position=”middle” scene_position=”center” text_color=”dark” text_align=”left” overlay_strength=”0.3″ shape_divider_position=”bottom”][nectar_btn size=”small” button_style=”regular” button_color_2=”Accent-Color” icon_family=”none” url=”https://www.youtube.com/playlist?list=PLo97Rhbj4ceISWDa6OxsbEx2jBPaymJWL” text=”View Videos on YouTube”]\n[vc_row type=”in_container” full_screen_row_position=”middle” scene_position=”center” text_color=”dark” text_align=”left” overlay_strength=”0.3″ shape_divider_position=”bottom”][vc_column centered_text=”true” column_padding=”no-extra-padding” column_padding_position=”all” background_color_opacity=”1″ background_hover_color_opacity=”1″ column_shadow=”none” column_border_radius=”none” width=”1/1″ tablet_text_alignment=”default” phone_text_alignment=”default” column_border_width=”none” column_border_style=”solid”][vc_column_text]\nSESSION SUMMARY\n[/vc_column_text][vc_column_text]To access the summary\, slides\, and video links for a specific session\, click on each of the tabs below.[/vc_column_text][/vc_column][/vc_row][vc_row type=”in_container” full_screen_row_position=”middle” scene_position=”center” text_color=”dark” text_align=”left” overlay_strength=”0.3″ shape_divider_position=”bottom” shape_type=””][vc_column centered_text=”true” column_padding=”no-extra-padding” column_padding_position=”all” background_color_opacity=”1″ background_hover_color_opacity=”1″ column_shadow=”none” column_border_radius=”none” width=”1/1″ tablet_text_alignment=”default” phone_text_alignment=”default” column_border_width=”none” column_border_style=”solid”][toggles style=”minimal”]\n[toggle color=”Default” title=”Opening Remarks”][vc_column_text]Opening Remarks \n[icon color=”Accent-Color” size=”tiny” 
icon_size=”” image=”fa-youtube”] Watch Video »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”SW Assisted vDPA for Live Migration”][vc_column_text]SW Assisted vDPA for Live Migration\nXiao Wang\, Intel \nVirtio is the de facto standard para-virtualization interface in cloud networking. vDPA (vhost data path acceleration) is designed to provide a HW acceleration framework for virtio. This framework provides both pass-thru-like performance and virtio flexibility. One of the main advantages of vDPA is live migration support: HW can do dirty page logging and report ring status just like SW vhost does. \nTo further reduce the HW requirements to support vDPA\, we can use a SW-assisted solution to help the device with the live migration related work. We add this helper into the vhost lib so that any vDPA device driver can leverage the helpers to perform SW-assisted live migration. This SW-assisted solution provides a new option for vDPA HW design and can reduce HW design complexity. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”Using nDPI over DPDK to Classify and Block Unwanted Network Traffic”][vc_column_text]Using nDPI over DPDK to Classify and Block Unwanted Network Traffic\nLuca Deri\, ntop \nnDPI is an open source library that uses DPI (deep packet inspection) techniques to classify network traffic. It can be used in monitoring tools to characterise network traffic\, or inline to enforce network traffic policies. nDPI currently supports over 250 protocols including Skype\, BitTorrent\, and Tor\, and it is part of many open source applications and Linux distributions. 
This talk will cover the design of nDPI and explain how to use it on top of DPDK to efficiently monitor and block selected communication flows. Various real-world examples are demonstrated\, ranging from parental control enforcement to IoT device protection. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”Reclaiming Memory – Efficient and Lock Free – rte_tqs”][vc_column_text]Reclaiming Memory – Efficient and Lock Free – rte_tqs\nHonnappa Nagarahalli\, Arm \nAt the Dublin summit\, Arm introduced a lock-less rte_hash algorithm. Lock-less data structures require memory reclamation\, and the Thread Quiescent State library was mentioned as a solution. In this presentation\, I would like to talk about further details of the library\, namely its APIs\, its design and the various use cases it enables in DPDK. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”A Hierarchical SW Load Balancing Solution for Cloud Deployment”][vc_column_text]A Hierarchical SW Load Balancing Solution for Cloud Deployment\nHongjun Ni\, Intel \nFor the deployment of Cloud Native applications\, high throughput\, minimal latency and high availability are critical. Traditionally\, load balancers leverage dedicated hardware\, which leads to high cost and low flexibility. \nThis presentation will introduce a hierarchical software load balancing solution based on DPDK and VPP\, which shows high performance and keeps flexibility in a large cloud deployment. 
\nIt contains the following key elements:\n1) Implement a software router on DPDK\, VPP and a legacy routing daemon\, with ECMP enabled.\n2) Implement a software load balancer enabling DSR (Direct Server Return)\, supporting tunnel or routing modes.\n3) Implement a host-based service proxy\, including host load balancing\, DNAT and SNAT.\n4) Integrate the SW router\, load balancer and host-based service proxy to build flexible load balancing solutions. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”DPDK Based L4 Load Balancer”][vc_column_text]DPDK Based L4 Load Balancer\nM Jayakumar\, Intel \nDPVS is a DPDK-based open source high-performance Layer-4 load balancer. To highlight DPDK optimizations\, the kernel-based load balancer LVS will be touched upon to make points on hashing and other algorithms. The presentation will illustrate three variants of load balancer topologies – (a) NAT\, (b) IP Tunnel and (c) Direct Server Return. The session will wrap up with a discussion of a configuration nuance: how to have the load balancer receive the client requests while replies are sent directly from the servers to the client. The performance improvement is pronounced when the replies carry heavy data compared to the queries. 
DPVS https://github.com/iqiyi/dpvs \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”Accelerating Telco NFV Deployments with DPDK and Smart NIC”][vc_column_text]Accelerating Telco NFV Deployments with DPDK and Smart NIC\nKalimani Venkatesan Govindarajan\, Aricent & Barak Perlman\, Ethernity Network\nTelco NFV deployments with white boxes and x86 compute are becoming more concrete. SD-WAN uCPEs and Telco cloud VNFs like vEPC\, vBNG and vRouter have unique requirements\, which need to be met by DPDK based VNFs. Specifically for the Telco VNFs\, the approach of hardware acceleration using Smart NICs is emerging as an economic model\, but the simplicity of disaggregation requires clean interfaces for multiple 3rd party VNFs to leverage the hardware acceleration offered by the Smart NICs. This talk proposes to share our experiments with a DPDK based interface for Smart NICs for multi-party VNF co-existence. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”NFF-Go: Bringing DPDK to the Cloud”][vc_column_text]NFF-Go: Bringing DPDK to the Cloud\nAreg Melik-Adamyan\, Intel\nNFF-Go provides a novel approach to network function development. The transmission speed and the amount of data in networks are increasing exponentially\, which makes middle-boxes less efficient due to cost\, deployment\, inflexibility\, scalability and other issues. Network function virtualization technology\, on the other hand\, was proposed to solve this problem by moving hardware functionality into software deployed on commodity hardware. 
However\, this approach brought several new problems: slow development of network functions\, lower performance compared to the middle-boxes\, and virtual machine scaling and deployment issues. Our approach presents a framework with a new high-level programming model for the rapid development of performant\, scalable virtualized network functions\, based on DPDK as a performant I/O engine. It significantly lowers the entry bar for newcomers to the packet processing world\, eases the development of custom packet processing applications by an order of magnitude\, and drastically improves deployment to the cloud via API control and cloud-native scheduler support. NFF-Go is already part of the DPDK umbrella project and can be found in the apps repository. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”Enabling P4 in DPDK”][vc_column_text]Enabling P4 in DPDK\nCristian Dumitrescu\, Intel & Antonin Bas\, Barefoot Networks\nThis presentation provides a technical overview\, for companies and developers interested in describing their data plane pipelines in the P4 language\, of how to generate performance-optimized DPDK code from a P4 program and the associated P4 Runtime API. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”Accelerating DPDK via P4-programmable FPGA-based Smart NICs”][vc_column_text]Accelerating DPDK via P4-programmable FPGA-based Smart NICs\nPetr Kastovsky\, Netcope Technologies\nDPDK is an open source standard for developing the data plane of modern virtual network functions running on CPUs. There are various benefits to accelerating selected workloads in order to achieve better performance per watt and the low latency that is becoming critical for edge applications. 
FPGAs are well positioned to be the right acceleration technology. On the other hand\, DPDK\, being a software library\, is making fast progress\, introducing a wide set of new features with every release. Keeping up with such an innovation pace is not possible with the standard FPGA development workflow. Netcope provides P4 programmability for various FPGA-based smart NICs to remove that obstacle. A key component of successful adoption of P4-programmable FPGA-based smart NICs is a standardized API for the users. DPDK is the best positioned development kit to address this challenge\, and there are various extensions of DPDK that could be used already\, namely DPDK RTE Flow and/or RTE Pipeline. During this presentation we will look into the pros and cons of these extensions from a P4 perspective. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle] \n[toggle color=”Default” title=”DPDK Tunnel Offloading”][vc_column_text]DPDK Tunnel Offloading\nYongseok Koh & Rony Efraim\, Mellanox \nContemporary data centers use overlay networks to support multi-tenancy and virtualization features such as VM migration\, and to boost operational agility. Overlay networks mean tunnel protocols (VXLAN\, GRE\, GENEVE and more). \nHandling tunneled packets at a high rate is a challenging task for a virtual switch. Standard RSS will not perform well\, checksums will need to be validated on the inner part\, and the tunnel header will need to be added/removed for each incoming/outgoing packet.\nRecent work in DPDK exposed APIs to offload much of the tunnel packet overhead onto the device and thus save precious CPU cycles for the application. \nThe talk will overview the new offloads and demonstrate their use to achieve better and more scalable vSwitch solutions. 
\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”DPDK on F5 BIG-IP Virtual ADCs”][vc_column_text]DPDK on F5 BIG-IP Virtual ADCs\nBrent Blood\, F5 Networks\nF5’s app services are built on a high-performance\, scalable architecture\, the BIG-IP Traffic Management Microkernel (TMM)\, and have been used by the largest enterprises and service providers for over twenty years to ensure the availability\, performance\, and security of their applications. As BIG-IP has transitioned from purpose-built hardware to virtualized appliances (VMs) on COTS\, how can we continue to scale cost-efficiently with the advent of 25/40/100G NICs on host servers? In this presentation\, we will discuss F5’s strategy of using DPDK to support multiple NIC vendors and enable high performance workloads and services\, and lessons learned from integrating the custom TMM\, with its own TCP stack and memory manager\, with DPDK. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”Arm’s Efforts for DPDK and Optimization Plan”][vc_column_text]Arm’s Efforts for DPDK and Optimization Plan\nGavin Hu & Honnappa Nagarahalli\, Arm\nIn this presentation\, we will talk about what Arm has done and is doing for DPDK\, including feature enablement\, build system/toolchain/documentation enhancements\, DTS test case adaptation\, and bug fixing and performance tuning (rte_ring\, hash\, KNI\, …). We will also talk about our future optimization plan\, including NEON implementations and relaxed memory ordering tuning for other components\, like PMDs\, examples and virtio. 
\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”DPDK Flow Classification and Traffic Profiling & Measurement”][vc_column_text]DPDK Flow Classification and Traffic Profiling & Measurement\nRen Wang & Yipeng Wang\, Intel Labs \nIn this talk\, we will present new technologies to extend the current membership library to provide efficient traffic profiling and measurement capabilities\, such as heavy hitter detection and cardinality estimation. \nWe will first provide an overview of the different classification libraries (e.g. the hash library\, EFD library and membership library) and highlight the set of usages for which each library is the best fit\, including the extendible bucket table design we recently added to the rte_hash library in DPDK v18.11 to support 100% guaranteed insertion of keys. Next\, we provide details on the usages and design of the new extension we are adding to the membership library for traffic profiling and measurement\, which is becoming increasingly important in both Telco and data center networks. We propose a memory-efficient and general-purpose “sketch” based data structure in DPDK\, targeting a wide range of traffic profiling usages. Specifically\, our sketch designs provide library support to: 1) efficiently profile flow sizes to report heavy hitters for congestion and DoS attack detection; 2) estimate the total number of active flows (cardinality estimation) for QoS and traffic management purposes; 3) perform anomaly detection by profiling flows that suddenly undergo heavy changes; and many more potential usages. The inline profiling process is both memory and computation efficient with high accuracy. 
\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”Projects using DPDK”][vc_column_text]Projects using DPDK\nStephen Hemminger\, Microsoft\nMany open source (and proprietary) networking projects are using DPDK\, but not all projects use all of its features. This is a survey talk that discusses how these projects are integrating DPDK. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”DPDK Open Lab Performance Continuous Integration”][vc_column_text]DPDK Open Lab Performance Continuous Integration\nJeremy Plsek\, University of New Hampshire InterOperability Laboratory\nThe DPDK Open Lab is a performance-based continuous integration system\, supported by the DPDK project. When a patch is submitted\, it is automatically sent to our CI to be applied and built. Once the patch is compiled and installed\, it is run against each of the bare-metal environments hosted in the lab\, to check for performance degradations or speed-ups within DPDK on various hardware platforms. This talk will explore how this system supports the development community\, such as accepting patches based on performance and tracking how performance has changed in DPDK over time. We will go over how to navigate and use the Dashboard. We will show how performance has changed in DPDK over the past six months\, looking at relative numbers and graphs for various platforms. Finally\, we will also talk about the future of the Open Lab\, such as running more test cases\, running unit tests for DPDK\, adding capabilities to the dashboard\, and making the systems more accessible to the development community. 
\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”Community Q & A”][vc_column_text]Community Q & A \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”Fast Prototyping DPDK Apps in Containernet”][vc_column_text]Fast Prototyping DPDK Apps in Containernet\nAndrew Wang\, Comcast \nWhen we first set out to develop network functions to provide new functionality needed in our infrastructure\, we knew we wanted to try DPDK for fast packet processing and to build our apps as containers to make packing\, shipping and deploying them easier. In our initial prototyping phase\, our main focus was verifying that the applications we wrote performed as expected. Our first challenge was the correct setup that would allow us to successfully build a DPDK app. Then we were faced with deciding where to run our app. Creating a virtual network out of multiple VMs on a single server soon exhausted its resources as we added more nodes. Our infrastructure team was (understandably) cautious about allowing us to run the functions in their production networks\, and changing the network topology or dynamically scaling to add or remove nodes in a lab environment proved time-consuming. \nContainernet is a fork of the mininet project\, which supports using Docker containers as hosts in emulated networks. Once we were able to configure DPDK’s Environment Abstraction Layer (EAL) correctly\, we could create a virtual network in seconds\, easily scale to more nodes as needed\, and have access to all the hosts in the network to debug – all on our own laptops\, which allowed us to explore the space freely and see how our apps operated as we developed them. 
\nIn this talk I will introduce Containernet\, explain how to create and set up a virtual network in it and how to configure DPDK’s EAL for communicating with other hosts\, describe the limitations and surprises we faced when running apps in Containernet\, and conclude with a short demo showing all the pieces working together. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”Implementing DPDK Based Application Container Framework with SPP”][vc_column_text]Implementing DPDK Based Application Container Framework with SPP\nYasufumi Ogawa\, NTT \nSoft Patch Panel (SPP) is a multi-process application providing an easy-to-use Service Function Chaining framework in NFV environments [1][2]. SPP enables users to connect DPDK applications running on hosts and virtual machines with several PMDs\, including ring\, vhost and PCAP. Zero-copy packet forwarding from VM to VM can achieve 10GbE throughput for 64-byte short packets. \nWe have tried to implement SPP for container networking with the latest DPDK. Implementing a multi-process application is challenging because DPDK was largely updated in v18.05 and its multi-process application support is unstable. In our presentation\, we will introduce how to implement a DPDK multi-process application with container networking support. 
\n[1] http://dpdk.org/git/apps/spp\n[2] https://www.dpdk.org/hosted-projects/ \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”Shaping the Future of IP Broadcasting with Cisco’s vMI and DPDK on Windows”][vc_column_text]Shaping the Future of IP Broadcasting with Cisco’s vMI and DPDK on Windows\nHarini Ramakrishnan\, Microsoft & Michael O’Gorman\, Cisco \nThe video broadcasting industry is undergoing a massive transformation\, moving from domain-specific Serial Digital Interface (SDI) interconnects to an IP-based network. Media software vendors are accelerating this network re-architecture\, scaling to meet the bandwidth demands of next-gen media formats. Cisco’s virtual media interface (vMI) is a software toolkit – open sourced as “Herrison” – for media vendors undergoing this transition. \nWe are pleased to announce that Cisco\, in partnership with Intel and Microsoft\, is making this software toolkit highly optimized for media applications using DPDK on Windows\, the platform of choice for media software vendors. We will talk about how vMI uses DPDK on Windows to overcome the performance limitations of kernel-mediated IO. We will then demonstrate how vMI achieves capacity at parity with legacy SDI\, scaling from 5 HD streams to 62 HD streams\, representing over 100Gbps in throughput. Lastly\, we will talk about how media appliances can incorporate this solution to reap the benefits of the efficient path to the NICs. 
\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”Improving Security and Flexibility within Windows DPDK Networking Stacks”][vc_column_text]Improving Security and Flexibility within Windows DPDK Networking Stacks\nRanjit Menon\, Intel Corporation & Omar Cardona\, Microsoft \nWindows support for DPDK was announced at the DPDK North America summit in November 2017. Since then\, the code has been made available in a ‘draft’ repo at dpdk.org. The software stack for DPDK on Windows is similar to that on other operating systems\, including the use of a Linux-style UIO driver to obtain access to the networking device. The use of a UIO driver in Windows is problematic from a multi-user/multi-process security point of view: it cannot be certified and signed independently by DPDK consumers\, and Windows certification is minimal as it does not fully utilize the capabilities of the networking device. \nThis presentation introduces a miniport pass-through Windows driver that exposes the device to a user-space application and can concurrently support DPDK and standard network functions in a shared and secure manner. These enhanced\, Windows Logo-certifiable network drivers will contain all standard functions while exposing a subset of resources for DPDK through two models: first\, a bifurcated model for devices with minimal resources\, and second\, a multi-process/multi-user secure model for server-grade NICs. \nLastly\, this presentation will also touch upon the current status of DPDK on Windows and the future roadmap. 
\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”Use DPDK to Accelerate Data Compression for Storage Applications”][vc_column_text]Use DPDK to Accelerate Data Compression for Storage Applications\nFiona Trahe & Paul Luse\, Intel\nThis presentation will showcase how the DPDK compressdev API can deliver data compression services through an accelerator-agnostic API\, enabling applications to take advantage of either software or hardware acceleration engines. As storage users also use SPDK to access DPDK services\, it will report on the work in progress to integrate compressdev with SPDK. Feedback from storage users is welcome\, to fine-tune the API to satisfy storage use-cases. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”Fine-grained Device Infrastructure for Network I/O Slicing in DPDK”][vc_column_text]Fine-grained Device Infrastructure for Network I/O Slicing in DPDK\nCunming Liang & John Mangan\, Intel \nThe mediated device framework has been introduced to allow fine-grained device partitioning in a generic manner. Kernel drivers of the parent device define the isolation boundaries and ultimately populate the mediated device instances. \nThrough the unified VFIO UAPI\, mediated device instances can be passed through to a VM just like normal VFIO devices on a physical bus (e.g. PCIe). The recent DPDK PMD is able to access the isolated driver resources transparently on top of the emulated bus. 
\nHowever\, for ubiquitous use in bare metal or container deployments\, DPDK must realize a mediated device bus\, identify the bus layout and consume VFIO mediated devices natively\, which is not yet available. Meanwhile\, we do not want to introduce a new individual PMD for mediated devices whose only difference from the existing PMD for regular devices is granularity. \nThis presentation discusses the concept\, outlines the DPDK impact and design in stages\, and shows the landscape of user-space network functions in containers. \nIt also describes some innovative use cases\, including transparent software abstraction (e.g. for NICs) and a means to securely share FPGA device resources without SR-IOV. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”Embracing Externally Allocated Memory”][vc_column_text]Embracing Externally Allocated Memory\nYongseok Koh\, Mellanox\nA few applications out there (GPU apps\, storage apps and VPP) use externally allocated memory\, and DPDK is now ready to support that. Since v18.05\, rte_mbuf supports external buffer attachment. This can be useful for storage applications which read bulk data from storage and send it out to the network\, as an mbuf can reference indirect memory allocated outside a mempool. One remaining issue was registering externally allocated memory for DMA. Thanks to Anatoly’s patchset for v18.11\, externally allocated memory can now be managed within the DPDK framework once it is registered through the DPDK API. VFIO or Mellanox’s Memory Region (MR)\, which register memory for DMA\, will work seamlessly with such external memory. I will present the latest changes\, which enable a broader range of applications for DPDK\, and make further suggestions for DMA memory management. 
\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”Accelerating DPDK Para-Virtual I/O with DMA Copy Offload Engine”][vc_column_text]Accelerating DPDK Para-Virtual I/O with DMA Copy Offload Engine\nJiayu Hu\, Intel \nVirtIO is a para-virtual I/O standard for communication between the host and VMs. In VirtIO\, the host communicates with VMs by copying packets to and from VM memory. With TCP Segmentation Offload enabled\, VMs can use very large TCP packets\, such as 64KB\, to mitigate per-packet processing overhead. However\, the overhead of copying such large chunks of data in memory makes the VirtIO host interface the I/O bottleneck. \nThe DMA copy offload engine is a PCI-enumerated device in the Intel chipset which is extremely efficient at performing memory copy operations. Through intensive benchmarking\, we analyze DMA copy offload engine and CPU memory copy performance\, and we propose an adaptation mechanism that allows different applications to fully utilize the DMA copy offload engine’s capability. In this talk\, we present the design of integrating the DMA copy offload engine into vhost-user\, along with a dma-copy API framework for different usage scenarios. To our knowledge\, our proposal is the first to use a DMA copy offload engine to mitigate the memory copy overhead for VirtIO. The experimental results show the DMA copy offload engine is capable of enhancing vhost-user throughput by up to 20%. 
\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”Revise 4K Pages Performance Impact for DPDK Applications”][vc_column_text]Revise 4K Pages Performance Impact for DPDK Applications\nLei Yao & Jiayu Hu\, Intel \nDPDK reduces TLB and IOTLB misses by using 2M and 1G pages\, but this requires DPDK applications to run as privileged users. Since the 17.11 release\, DPDK supports 4K pages\, enabling applications to run as non-root users. However\, 4K pages may hurt packet processing performance in some usage scenarios. \nIn this talk\, we present detailed guidance for DPDK applications (e.g. Open vSwitch) using 4K pages. Our guidance reveals how 4K pages impact packet I/O performance and gives best-practice deployment suggestions to mitigate the performance degradation from 4K pages. Following this guidance\, experimental results show that testpmd P2P throughput can improve by around 100%. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][vc_column_text][/vc_column_text][/toggle]\n[toggle color=”Default” title=”DPDK IPsec Library”][vc_column_text]DPDK IPsec Library\nDeclan Doherty\, Intel\nThis presentation will review the progress made in the community to enable a scalable\, high-performance IPsec library in DPDK\, which was announced at the DPDK Userspace event earlier this year\, focusing on the evolving library APIs\, the development roadmap and upstream plans for 2019. The presentation will also present a number of different example integrations of the library into data plane applications and look at the early performance indicators. 
\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][/toggle]\n[toggle color=”Default” title=”Tungsten Fabric Performance Optimization by DPDK”][vc_column_text]Tungsten Fabric Performance Optimization by DPDK\nLei Yao\, Intel\nvRouter-dpdk is the user-space dataplane solution in the Tungsten Fabric project. Although new Intel platforms and new DPDK technologies are developing rapidly\, vRouter-dpdk was designed several years ago and could not benefit from them. This presentation introduces some of the work that has been done on vRouter-dpdk. It covers CPU core extension on the new SKL platform\, tunnel acceleration with the rte_flow library powered by DDP technology on Intel NICs\, Cuckoo hash library integration for the flow table\, multi-queue support\, and batched TX/RX support. With these enhancements to vRouter-dpdk\, Tungsten Fabric becomes compatible with new hardware and software technology\, and its performance is boosted as well. \n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][/toggle]\n[toggle color=”Default” title=”DPDK Based Vswitch Upgrade”][vc_column_text]DPDK Based Vswitch Upgrade\nYuanhan Liu\, Tencent\nSoftware has bugs\, and more and more new features will be added; both require software upgrades. Unlike other software\, a vswitch upgrade has a more critical requirement: the downtime has to be as small as possible. Otherwise\, it may have a huge impact on all the virtual machines connected to it. This talk presents how we managed to greatly reduce the downtime. Initially\, we made the downtime less than 400ms. With further enhancements\, we brought it down to around 50ms. 
\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][/toggle]\n[toggle color=”Default” title=”Using New DPDK Port Representor by Switch Application like OVS”][vc_column_text]Using New DPDK Port Representor by Switch Application like OVS\nRony Efraim\, Mellanox \nA new API for port representors has been introduced in DPDK for switch applications like OVS. \nWhile running DPDK reduces the CPU overhead of interrupt-driven packet processing\, CPU cores are still not completely freed from polling packet queues. We have already implemented acceleration through HW offloads\, saving the CPU cycles consumed by flow lookups. \nTo address this challenge\, a DPDK switch application is further accelerated through internal HW switch offloads of virtual ports such as SR-IOV. Port representors for switches have already been introduced in DPDK\, and we present how OVS-DPDK will use them. \nWe introduce a classification and forwarding methodology that enables a fully offloaded datapath to the NIC hardware. \nWe present the open source work being done in the DPDK and OVS communities and the significant performance gains achieved. We also present how this work can be extended to VXLAN and other tunneled traffic. 
\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-youtube”] Watch Video »\n[icon color=”Accent-Color” size=”tiny” icon_size=”” image=”fa-file-pdf-o”] Download Slides »[/vc_column_text][/toggle]\n[/toggles][/vc_column][/vc_row][vc_row type=”in_container” full_screen_row_position=”middle” scene_position=”center” text_color=”dark” text_align=”left” overlay_strength=”0.3″ shape_divider_position=”bottom”][vc_column column_padding=”no-extra-padding” column_padding_position=”all” background_color_opacity=”1″ background_hover_color_opacity=”1″ column_shadow=”none” column_border_radius=”none” width=”1/1″ tablet_text_alignment=”default” phone_text_alignment=”default” column_border_width=”none” column_border_style=”solid”][vc_column_text]\nTHANK YOU TO OUR SPONSORS\n[/vc_column_text][/vc_column][/vc_row]
URL:https://www.dpdk.org/event/dpdk-summit-north-america-2018/
LOCATION:Club Auto Sport\, 521 Charcot Ave\, San Jose\, CA\, 95131\, United States
CATEGORIES:DPDK Summit
END:VEVENT
END:VCALENDAR