DPDK blog posts

The ANS TCP stack works with Redis and NGINX ports


Originally posted by zimeiw (zimeiw at 163.com) on the dpdk-dev mailing list


Hi,


1. The TCP/IP stack is developed on top of DPDK.
The diagram below shows the TCP/IP stack and APP deployment:
         |-------|       |-------|       |-------|
         |  APP  |       |  APP  |       |  APP  |
         |       |       |       |       |       |
         |       |       |       |       |       |
         |-------|       |-------|       |-------|
             |               |               |
--------------------------------------------------
netdpsock    |               |               |          
             fd              fd              fd
             |               |               |
--------------------------------------------------
netdp        |               |               |
         |-------|       |-------|       |-------|
         | TCP   |       |  TCP  |       | TCP   |
         |       |       |       |       |       |
         |       |       |       |       |       |
         |       |       |       |       |       |
         |---------------------------------------|       
         |               IP/ARP/ICMP             |
         |---------------------------------------|       
         |       |       |       |       |       |
         |LCORE0 |       |LCORE1 |       |LCORE2 |
         |-------|       |-------|       |-------|
             |               |               |
             ---------------RSS---------------
                             | 
         |---------------------------------------| 
         |                  NIC                  | 
         |---------------------------------------| 
The NIC distributes packets to different lcores based on RSS, so packets of the same TCP flow are always handled on the same lcore.
Each lcore runs its own TCP stack, so no data is shared between lcores and no locks are needed.
The IP/ARP/ICMP layers are shared between lcores.
When an APP process runs as a TCP server, it listens on only one lcore and accepts TCP connections from that lcore, so the number of APP processes should be larger than the number of lcores. The APP processes are distributed across the lcores automatically and evenly.
When an APP process runs as a TCP client, it can communicate with every lcore, and each TCP connection is placed on a specific lcore automatically.
APP processes can bind to the same port if reuseport is enabled, in which case incoming TCP connections are accepted by the processes in round-robin order.
If the NIC does not support multiple queues or RSS, opendp_main.c should be extended to reserve one lcore that receives and sends packets on the NIC and distributes them to the netdp TCP stack lcores using a software RSS.
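
To make the software-RSS idea concrete, here is a minimal sketch of how a dispatcher lcore might map each TCP flow to a stack lcore. The function name and the hash are illustrative only (real NIC RSS uses a Toeplitz hash, and the actual logic would live in opendp_main.c); nothing here is taken from the netdp sources.

    #include <stdint.h>

    /* Illustrative only, not from opendp_main.c: map a TCP flow's
     * 4-tuple to a stack lcore, emulating in software what NIC RSS
     * does in hardware.  Real RSS uses a Toeplitz hash; a simple mix
     * is enough to show the point: every packet of one flow hashes to
     * the same value, so one flow always stays on one per-lcore TCP
     * stack and no state is shared between lcores. */
    static inline unsigned
    flow_to_lcore(uint32_t src_ip, uint32_t dst_ip,
                  uint16_t src_port, uint16_t dst_port,
                  unsigned nb_stack_lcores)
    {
        uint32_t h = src_ip ^ dst_ip
                   ^ ((uint32_t)src_port << 16) ^ dst_port;
        h *= 0x9e3779b1u;                    /* cheap bit mixing */
        return (h >> 16) % nb_stack_lcores;  /* target lcore index */
    }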
2. netdpsock is compatible with the BSD socket API, so it is easy to port an app to run on the netdp stack.
nginx has already been ported to run on netdp with only a few code changes: https://github.com/opendp/dpdk-nginx
redis has also been ported: https://github.com/opendp/dpdk-redis
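
Since netdpsock follows the BSD socket API, ordinary socket code is what gets ported. The sketch below is plain BSD socket code, not code taken from dpdk-nginx or dpdk-redis; the idea is that a server written like this should need little or no change to run against netdpsock, and the SO_REUSEPORT option corresponds to the reuseport/round-robin mode described above (whether it is honoured exactly like this depends on the netdp build).

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Minimal BSD-socket TCP server; nothing here is netdp-specific. */
    int main(void)
    {
        int one = 1;
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) { perror("socket"); return 1; }

        /* Several APP processes can bind the same port with
         * SO_REUSEPORT so connections are spread across them. */
        setsockopt(s, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8080);
        addr.sin_addr.s_addr = htonl(INADDR_ANY);

        if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(s, 128) < 0) {
            perror("bind/listen");
            return 1;
        }

        for (;;) {
            int c = accept(s, NULL, NULL);
            if (c < 0)
                continue;
            const char resp[] =
                "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok";
            write(c, resp, sizeof(resp) - 1);
            close(c);
        }
    }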


3. Performance.
One lcore, one HTTP server, ab testing:

Concurrency Level:      500
Time taken for tests:   0.642 seconds
Complete requests:      30000
Failed requests:        0
Total transferred:      4530000 bytes
HTML transferred:       1890000 bytes
Requests per second:    46695.59 [#/sec] (mean)
Time per request:       10.708 [ms] (mean)
Time per request:       0.021 [ms] (mean, across all concurrent requests)
Transfer rate:          6885.78 [Kbytes/sec] received
One lcore, one nginx server, ab testing:
Concurrency Level:      500
Time taken for tests:   0.965 seconds
Complete requests:      30000
Failed requests:        0
Total transferred:      25320000 bytes
HTML transferred:       18360000 bytes
Requests per second:    31092.43 [#/sec] (mean)
Time per request:       16.081 [ms] (mean)
Time per request:       0.032 [ms] (mean, across all concurrent requests)
Transfer rate:          25626.97 [Kbytes/sec] received
One lcore, one redis server, redis-benchmark testing:
root@h163:~/dpdk-redis# ./src/redis-benchmark -h 2.2.2.2  -p 6379 -n 100000 -c 50 -q
PING_INLINE: 86655.11 requests per second
PING_BULK: 90497.73 requests per second
SET: 84317.03 requests per second
GET: 85106.38 requests per second
INCR: 86580.09 requests per second
LPUSH: 83263.95 requests per second
LPOP: 83612.04 requests per second
SADD: 85034.02 requests per second
SPOP: 86430.43 requests per second
LPUSH (needed to benchmark LRANGE): 84245.99 requests per second
LRANGE_100 (first 100 elements): 46948.36 requests per second
LRANGE_300 (first 300 elements): 19615.54 requests per second
LRANGE_500 (first 450 elements): 11584.80 requests per second
LRANGE_600 (first 600 elements): 10324.18 requests per second
MSET (10 keys): 66401.06 requests per second
Multi-core TCP performance has not been tested yet because of a lack of test tools and environment.


For detailed test results, please refer to https://github.com/opendp/dpdk-odp


--
Best Regards,
zimeiw

NetBSD TCP/IP port on DPDK using Rump framework


Originally posted by Antti Kantee (pooka at iki.fi) on the dpdk-dev mailing list

----
Hi,

I like the opportunities that a technology like DPDK enables, and I felt 
that the availability of an open source TCP/IP stack for DPDK could make 
things even more interesting.  I've been working on a concept called the 
anykernel, where the idea is that an OS kernel should be structured in a 
fashion which allows the driver components to be run independently of a 
monolithic kernel in so-called rump kernels.  Long story short, one of 
the "byproducts" is a run-anywhere standalone version of NetBSD kernel 
TCP/IP stack, so it was a natural progression to integrate that TCP/IP 
stack with the NIC driver layer provided by DPDK.

The published code completes "stage 1: make it work".  I want to thank 
another person who wished to remain anonymous for help with the 
implementation and testing.  Next up is "stage 2: make it fast".  I 
welcome everyone to contribute ideas / use cases / code / etc. towards 
that goal.

You can find the code for plugging DPDK into the rump kernel NIC layer 
along with instructions from:

https://github.com/anttikantee/dpdk-rumptcpip
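
For readers unfamiliar with rump kernels, here is a small, hedged sketch of what the generic rump client interface looks like from an application: rump_init() boots the in-process NetBSD kernel and the rump_sys_* wrappers take the place of host system calls. How the DPDK-backed network interface is attached and configured is specific to dpdk-rumptcpip and deliberately not shown; consult the repository's instructions for that part.

    #include <rump/rump.h>
    #include <rump/rump_syscalls.h>
    #include <stdio.h>

    /* Sketch of the generic rump-kernel client interface: boot an
     * in-process rump kernel hosting the NetBSD TCP/IP stack and open
     * a socket through it instead of through the host kernel.
     * DPDK interface attachment/configuration is omitted. */
    int main(void)
    {
        if (rump_init() != 0) {            /* bootstrap the rump kernel */
            fprintf(stderr, "rump_init failed\n");
            return 1;
        }

        /* Numeric constants used to avoid mixing host and NetBSD
         * headers: 2 == PF_INET, 1 == SOCK_STREAM. */
        int s = rump_sys_socket(2, 1, 0);
        if (s < 0) {
            fprintf(stderr, "rump_sys_socket failed\n");
            return 1;
        }

        /* ... bind/connect/read/write via the matching rump_sys_* calls ... */

        rump_sys_close(s);
        return 0;
    }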

In case anyone is interested, I tested the setup with Void Linux running 
in a VM using an emulated 82540 NIC (cf. my previous mail to this list) 
and the other party tested on Ubuntu using real hardware and an 82599 NIC.

cheers,
antti