[dpdk-dev,v4,1/2] net/vhost: add a new defined stats struct

Message ID 1474452829-58658-1-git-send-email-zhiyong.yang@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Yuanhan Liu

Commit Message

Yang, Zhiyong Sept. 21, 2016, 10:13 a.m. UTC
  The patch moves all stats counters into a newly defined struct vhost_stats,
shown below, in order to manage all stats counters in a unified way and to
simplify the subsequent function implementation (vhost_dev_xstats_reset).

Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
---

Changes in v4:
A queue can only be used as TX or RX, so we can use pkts instead of
rx_pkts and tx_pkts; the same applies to rx_bytes and tx_bytes.
Before modification:
struct vhost_stats {
	uint64_t rx_pkts;
	uint64_t tx_pkts;
	uint64_t missed_pkts;
	uint64_t rx_bytes;
	uint64_t tx_bytes;
};
The new struct vhost_stats definition is as follows:
struct vhost_stats {
	uint64_t pkts;
	uint64_t bytes;
	uint64_t missed_pkts;
};
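As a sketch of why the unified struct helps: a reset becomes a single memset
instead of zeroing five separate fields. (The struct mirrors the patch;
vhost_stats_reset is an illustrative helper, not the driver's actual function.)

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct vhost_stats {
	uint64_t pkts;
	uint64_t bytes;
	uint64_t missed_pkts;
};

/* With all counters gathered in one struct, clearing them is a single
 * memset; before the patch, each of the five fields had to be zeroed
 * individually in eth_stats_reset. */
static void
vhost_stats_reset(struct vhost_stats *stats)
{
	memset(stats, 0, sizeof(*stats));
}
```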
 drivers/net/vhost/rte_eth_vhost.c | 42 ++++++++++++++++++++-------------------
 1 file changed, 22 insertions(+), 20 deletions(-)
  

Comments

Yang, Zhiyong Sept. 22, 2016, 8:19 a.m. UTC | #1
Patch 1 moves all stats counters into a newly defined struct vhost_stats,
in order to manage all stats counters in a consistent way.

Patch 2 adds the pmd xstats support.

Changes in V5:

Patch 2:
For vhost_count_multicast_broadcast, pass struct rte_mbuf *buf instead
of struct rte_mbuf **buf, and remove the third parameter, uint16_t count.
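A minimal sketch of the v5 per-mbuf shape, with the burst loop kept in the
caller. (struct mbuf, struct vhost_xstats and the destination-MAC field are
stand-in assumptions here, not the driver's actual types.)

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for the frame metadata the real function inspects. */
struct mbuf {
	uint8_t dst_addr[6];	/* destination MAC of the frame */
};

/* Stand-in for the per-queue extended counters. */
struct vhost_xstats {
	uint64_t multicast;
	uint64_t broadcast;
};

/* v5 signature shape: one mbuf per call, no count parameter; the caller
 * iterates over the burst and invokes this once per packet. */
static void
vhost_count_multicast_broadcast(struct vhost_xstats *stats,
				struct mbuf *buf)
{
	static const uint8_t bcast[6] = {
		0xff, 0xff, 0xff, 0xff, 0xff, 0xff
	};

	if (memcmp(buf->dst_addr, bcast, 6) == 0)
		stats->broadcast++;
	else if (buf->dst_addr[0] & 0x01)	/* I/G bit set => multicast */
		stats->multicast++;
}
```

Taking one mbuf per call keeps the function free of its own loop; per the v4
note below, the burst iteration lives in the caller to reduce computation.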

Changes in v4:

Patch 1:
A queue can only be used as TX or RX, so we can use pkts instead of
rx_pkts and tx_pkts; the same applies to rx_bytes and tx_bytes.
Before modification:
struct vhost_stats {
	uint64_t rx_pkts;
	uint64_t tx_pkts;
	uint64_t missed_pkts;
	uint64_t rx_bytes;
	uint64_t tx_bytes;
};
The new struct vhost_stats definition is as follows:
struct vhost_stats {
	uint64_t pkts;
	uint64_t bytes;
	uint64_t missed_pkts;
};

Patch 2:
1. Add a member VHOST_XSTATS_MAX to enum vhost_xstats_pkts, so we can
define uint64_t xstats[VHOST_XSTATS_MAX]; instead of xstats[16].
2. Restore unicast_packets and update it in the function
vhost_dev_xstats_get.
3. Move the loop out of the function vhost_count_multicast_broadcast in
order to reduce the computation.
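The sentinel-member pattern in item 1 can be sketched as follows; the enum
member names other than VHOST_XSTATS_MAX are illustrative placeholders, not
the patch's actual counter list.

```c
#include <assert.h>
#include <stdint.h>

/* Keeping VHOST_XSTATS_MAX as the last member makes the array size track
 * the enum automatically: adding or removing a counter never leaves a
 * stale magic number like 16 behind. */
enum vhost_xstats_pkts {
	VHOST_UNICAST_PKT = 0,
	VHOST_BROADCAST_PKT,
	VHOST_MULTICAST_PKT,
	VHOST_UNDERSIZE_PKT,
	VHOST_FRAGMENTED_ERR_PKT,
	VHOST_XSTATS_MAX,	/* must stay last: doubles as the count */
};

static uint64_t xstats[VHOST_XSTATS_MAX];
```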

Changes in v3:
1. Rework vhost_update_packet_xstats and split it into two parts: one
   function handles the generic packet-count update, and the other
   increments the broadcast and multicast counters (including failed
   transmit packets) according to RFC 2863, page 42, ifHCOutMulticastPkts
   and ifHCOutBroadcastPkts.
2. Define enum vhost_stat_pkts to replace the magic numbers and enhance
   code readability.
3. Remove some unnecessary type casts and fix one format issue.

Changes in v2:
1. Remove the compiling switch.
2. Fix two code bugs.

---
Zhiyong Yang (2):
  net/vhost: add a new defined stats struct
  net/vhost: add pmd xstats

 drivers/net/vhost/rte_eth_vhost.c | 315 +++++++++++++++++++++++++++++++++++---
 1 file changed, 295 insertions(+), 20 deletions(-)
  
Yuanhan Liu Sept. 23, 2016, 3:56 a.m. UTC | #2
On Thu, Sep 22, 2016 at 04:19:07PM +0800, Zhiyong Yang wrote:
> Patch 1 moves all stats counters to a new defined struct vhost_stats,
> in order to manage all stats counters in a consistent way.
> 
> Patch 2 adds the pmd xstats support.

Applied to dpdk-next-virtio.

Thanks.

	--yliu
  
Yuanhan Liu Sept. 28, 2016, 2:35 a.m. UTC | #3
On Fri, Sep 23, 2016 at 11:56:17AM +0800, Yuanhan Liu wrote:
> On Thu, Sep 22, 2016 at 04:19:07PM +0800, Zhiyong Yang wrote:
> > Patch 1 moves all stats counters to a new defined struct vhost_stats,
> > in order to manage all stats counters in a consistent way.
> > 
> > Patch 2 adds the pmd xstats support.
> 
> Applied to dpdk-next-virtio.

Here is a note that this patchset has been dropped from the dpdk-next-virtio
tree: the OVS team asks for per-port xstats, while this patch does per-queue
xstats.

Zhiyong will send a new version shortly to address this request.

	--yliu
  

Patch

diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 80c3f4c..7b989ec 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -72,6 +72,12 @@  static struct ether_addr base_eth_addr = {
 	}
 };
 
+struct vhost_stats {
+	uint64_t pkts;
+	uint64_t bytes;
+	uint64_t missed_pkts;
+};
+
 struct vhost_queue {
 	int vid;
 	rte_atomic32_t allow_queuing;
@@ -80,11 +86,7 @@  struct vhost_queue {
 	struct rte_mempool *mb_pool;
 	uint8_t port;
 	uint16_t virtqueue_id;
-	uint64_t rx_pkts;
-	uint64_t tx_pkts;
-	uint64_t missed_pkts;
-	uint64_t rx_bytes;
-	uint64_t tx_bytes;
+	struct vhost_stats stats;
 };
 
 struct pmd_internal {
@@ -145,11 +147,11 @@  eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	nb_rx = rte_vhost_dequeue_burst(r->vid,
 			r->virtqueue_id, r->mb_pool, bufs, nb_bufs);
 
-	r->rx_pkts += nb_rx;
+	r->stats.pkts += nb_rx;
 
 	for (i = 0; likely(i < nb_rx); i++) {
 		bufs[i]->port = r->port;
-		r->rx_bytes += bufs[i]->pkt_len;
+		r->stats.bytes += bufs[i]->pkt_len;
 	}
 
 out:
@@ -176,11 +178,11 @@  eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	nb_tx = rte_vhost_enqueue_burst(r->vid,
 			r->virtqueue_id, bufs, nb_bufs);
 
-	r->tx_pkts += nb_tx;
-	r->missed_pkts += nb_bufs - nb_tx;
+	r->stats.pkts += nb_tx;
+	r->stats.missed_pkts += nb_bufs - nb_tx;
 
 	for (i = 0; likely(i < nb_tx); i++)
-		r->tx_bytes += bufs[i]->pkt_len;
+		r->stats.bytes += bufs[i]->pkt_len;
 
 	for (i = 0; likely(i < nb_tx); i++)
 		rte_pktmbuf_free(bufs[i]);
@@ -582,10 +584,10 @@  eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 		if (dev->data->rx_queues[i] == NULL)
 			continue;
 		vq = dev->data->rx_queues[i];
-		stats->q_ipackets[i] = vq->rx_pkts;
+		stats->q_ipackets[i] = vq->stats.pkts;
 		rx_total += stats->q_ipackets[i];
 
-		stats->q_ibytes[i] = vq->rx_bytes;
+		stats->q_ibytes[i] = vq->stats.bytes;
 		rx_total_bytes += stats->q_ibytes[i];
 	}
 
@@ -594,11 +596,11 @@  eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 		if (dev->data->tx_queues[i] == NULL)
 			continue;
 		vq = dev->data->tx_queues[i];
-		stats->q_opackets[i] = vq->tx_pkts;
-		tx_missed_total += vq->missed_pkts;
+		stats->q_opackets[i] = vq->stats.pkts;
+		tx_missed_total += vq->stats.missed_pkts;
 		tx_total += stats->q_opackets[i];
 
-		stats->q_obytes[i] = vq->tx_bytes;
+		stats->q_obytes[i] = vq->stats.bytes;
 		tx_total_bytes += stats->q_obytes[i];
 	}
 
@@ -619,16 +621,16 @@  eth_stats_reset(struct rte_eth_dev *dev)
 		if (dev->data->rx_queues[i] == NULL)
 			continue;
 		vq = dev->data->rx_queues[i];
-		vq->rx_pkts = 0;
-		vq->rx_bytes = 0;
+		vq->stats.pkts = 0;
+		vq->stats.bytes = 0;
 	}
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		if (dev->data->tx_queues[i] == NULL)
 			continue;
 		vq = dev->data->tx_queues[i];
-		vq->tx_pkts = 0;
-		vq->tx_bytes = 0;
-		vq->missed_pkts = 0;
+		vq->stats.pkts = 0;
+		vq->stats.bytes = 0;
+		vq->stats.missed_pkts = 0;
 	}
 }