[dpdk-dev] examples/qos_sched: fix packets dequeue operation from ring

Message ID 1472724664-1400-1-git-send-email-jasvinder.singh@intel.com (mailing list archive)
State Accepted, archived
Delegated to: Thomas Monjalon
Commit Message

Jasvinder Singh Sept. 1, 2016, 10:11 a.m. UTC
  The app_worker_thread() and app_mixed_thread() functions use
rte_ring_sc_dequeue_bulk() to dequeue packets from the software ring.
The bulk API dequeues only when the ring holds at least the requested
number of packets, so packets already sitting in the ring wait for that
threshold to be reached and incur extra latency. Replace
rte_ring_sc_dequeue_bulk() with rte_ring_sc_dequeue_burst(), which
dequeues up to the requested number of packets and returns the actual
count.

Fixes: de3cfa2c9823 ("sched: initial import")

Suggested-by: Yang, Tao Y <tao.y.yang@intel.com>
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
---
 examples/qos_sched/app_thread.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)
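
For context, here is a minimal sketch of the two dequeue patterns, assuming
the pre-17.05 single-consumer ring API that this 2016 patch is written
against (the ring function signatures changed in DPDK 17.05). RING_BURST and
the two helper functions are illustrative stand-ins for
burst_conf.ring_burst and the app threads, not part of the patch:

	#include <stdint.h>
	#include <rte_ring.h>
	#include <rte_mbuf.h>

	#define RING_BURST 64	/* stand-in for burst_conf.ring_burst */

	static void
	bulk_pattern(struct rte_ring *rx_ring, struct rte_mbuf **mbufs)
	{
		/* All-or-nothing: dequeues exactly RING_BURST packets and
		 * returns 0, or dequeues nothing and returns -ENOENT when
		 * fewer than RING_BURST packets are in the ring. */
		if (rte_ring_sc_dequeue_bulk(rx_ring, (void **)mbufs,
				RING_BURST) == 0) {
			/* exactly RING_BURST packets to process */
		}
	}

	static void
	burst_pattern(struct rte_ring *rx_ring, struct rte_mbuf **mbufs)
	{
		/* Best-effort: dequeues whatever is available, up to
		 * RING_BURST, and returns the actual count, so packets do
		 * not wait in the ring for a full burst to accumulate. */
		uint32_t nb_pkt = rte_ring_sc_dequeue_burst(rx_ring,
				(void **)mbufs, RING_BURST);
		if (nb_pkt > 0) {
			/* process nb_pkt packets */
		}
	}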
  

Comments

Thomas Monjalon Oct. 12, 2016, 9:06 p.m. UTC | #1
2016-09-01 11:11, Jasvinder Singh:
> The app_worker_thread() and app_mixed_thread() functions use
> rte_ring_sc_dequeue_bulk() to dequeue packets from the software ring.
> The bulk API dequeues only when the ring holds at least the requested
> number of packets, so packets already sitting in the ring wait for that
> threshold to be reached and incur extra latency. Replace
> rte_ring_sc_dequeue_bulk() with rte_ring_sc_dequeue_burst(), which
> dequeues up to the requested number of packets and returns the actual
> count.
> 
> Fixes: de3cfa2c9823 ("sched: initial import")
> 
> Suggested-by: Yang, Tao Y <tao.y.yang@intel.com>
> Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>

Applied, thanks
  

Patch

diff --git a/examples/qos_sched/app_thread.c b/examples/qos_sched/app_thread.c
index 3c678cc..70fdcdb 100644
--- a/examples/qos_sched/app_thread.c
+++ b/examples/qos_sched/app_thread.c
@@ -215,17 +215,16 @@  app_worker_thread(struct thread_conf **confs)
 
 	while ((conf = confs[conf_idx])) {
 		uint32_t nb_pkt;
-		int retval;
 
 		/* Read packet from the ring */
-		retval = rte_ring_sc_dequeue_bulk(conf->rx_ring, (void **)mbufs,
+		nb_pkt = rte_ring_sc_dequeue_burst(conf->rx_ring, (void **)mbufs,
 					burst_conf.ring_burst);
-		if (likely(retval == 0)) {
+		if (likely(nb_pkt)) {
 			int nb_sent = rte_sched_port_enqueue(conf->sched_port, mbufs,
-					burst_conf.ring_burst);
+					nb_pkt);
 
-			APP_STATS_ADD(conf->stat.nb_drop, burst_conf.ring_burst - nb_sent);
-			APP_STATS_ADD(conf->stat.nb_rx, burst_conf.ring_burst);
+			APP_STATS_ADD(conf->stat.nb_drop, nb_pkt - nb_sent);
+			APP_STATS_ADD(conf->stat.nb_rx, nb_pkt);
 		}
 
 		nb_pkt = rte_sched_port_dequeue(conf->sched_port, mbufs,
@@ -250,17 +249,16 @@  app_mixed_thread(struct thread_conf **confs)
 
 	while ((conf = confs[conf_idx])) {
 		uint32_t nb_pkt;
-		int retval;
 
 		/* Read packet from the ring */
-		retval = rte_ring_sc_dequeue_bulk(conf->rx_ring, (void **)mbufs,
+		nb_pkt = rte_ring_sc_dequeue_burst(conf->rx_ring, (void **)mbufs,
 					burst_conf.ring_burst);
-		if (likely(retval == 0)) {
+		if (likely(nb_pkt)) {
 			int nb_sent = rte_sched_port_enqueue(conf->sched_port, mbufs,
-					burst_conf.ring_burst);
+					nb_pkt);
 
-			APP_STATS_ADD(conf->stat.nb_drop, burst_conf.ring_burst - nb_sent);
-			APP_STATS_ADD(conf->stat.nb_rx, burst_conf.ring_burst);
+			APP_STATS_ADD(conf->stat.nb_drop, nb_pkt - nb_sent);
+			APP_STATS_ADD(conf->stat.nb_rx, nb_pkt);
 		}