[dpdk-dev,v2] ethdev: check number of queues less than RTE_ETHDEV_QUEUE_STAT_CNTRS

Message ID 1479722378-23959-1-git-send-email-alejandro.lucero@netronome.com (mailing list archive)
State Accepted, archived
Delegated to: Thomas Monjalon

Checks

Context Check Description
checkpatch/checkpatch success coding style OK

Commit Message

Alejandro Lucero Nov. 21, 2016, 9:59 a.m. UTC
  From: Bert van Leeuwen <bert.vanleeuwen@netronome.com>

Arrays inside struct rte_eth_stats have size RTE_ETHDEV_QUEUE_STAT_CNTRS.
Some devices report more queues than that, and this code blindly used
the number of queues reported by the device to fill those arrays,
indexing past their end. This patch fixes the problem by taking RTE_MIN
of the reported number of queues and RTE_ETHDEV_QUEUE_STAT_CNTRS.

Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
 lib/librte_ether/rte_ethdev.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)
  

Comments

Olivier Matz Nov. 24, 2016, 4:59 p.m. UTC | #1
Hi,

On Mon, 2016-11-21 at 09:59 +0000, Alejandro Lucero wrote:
> From: Bert van Leeuwen <bert.vanleeuwen@netronome.com>
> 
> Arrays inside rte_eth_stats have size=RTE_ETHDEV_QUEUE_STAT_CNTRS.
> Some devices report more queues than that and this code blindly uses
> the reported number of queues by the device to fill those arrays up.
> This patch fixes the problem using MIN between the reported number of
> queues and RTE_ETHDEV_QUEUE_STAT_CNTRS.
> 
> Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> 

Reviewed-by: Olivier Matz <olivier.matz@6wind.com>


As a next step, I'm wondering if it would be possible to remove
this limitation. We could replace the tables in struct rte_eth_stats
by a pointer to an array allocated dynamically at pmd setup.

It would break the API, so it should be announced first. I'm thinking
of something like:

struct rte_eth_generic_stats {
        uint64_t ipackets;
        uint64_t opackets;
        uint64_t ibytes;
        uint64_t obytes;
        uint64_t imissed;
        uint64_t ierrors;
        uint64_t oerrors;
        uint64_t rx_nombuf;
};

struct rte_eth_stats {
	struct rte_eth_generic_stats port_stats;
	struct rte_eth_generic_stats *queue_stats;
};

The queue_stats array would always be indexed by queue_id.
The xstats would continue to report the generic stats per-port and
per-queue.

About the mapping API, either we keep it as-is, or it could
become a driver-specific API.


Thomas, what do you think?

Regards,
Olivier
  
Thomas Monjalon Nov. 28, 2016, 11:13 a.m. UTC | #2
2016-11-24 17:59, Olivier Matz:
> Hi,
> 
> On Mon, 2016-11-21 at 09:59 +0000, Alejandro Lucero wrote:
> > From: Bert van Leeuwen <bert.vanleeuwen@netronome.com>
> > 
> > Arrays inside rte_eth_stats have size=RTE_ETHDEV_QUEUE_STAT_CNTRS.
> > Some devices report more queues than that and this code blindly uses
> > the reported number of queues by the device to fill those arrays up.
> > This patch fixes the problem using MIN between the reported number of
> > queues and RTE_ETHDEV_QUEUE_STAT_CNTRS.
> > 
> > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> > 
> 
> Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
> 
> 
> As a next step, I'm wondering if it would be possible to remove
> this limitation. We could replace the tables in struct rte_eth_stats
> by a pointer to an array allocated dynamically at pmd setup.

Yes that's definitely the right way to handle these statistics.

> It would break the API, so it should be announced first. I'm thinking
> of something like:
> 
> struct rte_eth_generic_stats {
>         uint64_t ipackets;
>         uint64_t opackets;
>         uint64_t ibytes;
>         uint64_t obytes;
>         uint64_t imissed;
>         uint64_t ierrors;
>         uint64_t oerrors;
> >         uint64_t rx_nombuf;
> };
> 
> struct rte_eth_stats {
> 	struct rte_eth_generic_stats port_stats;
> 	struct rte_eth_generic_stats *queue_stats;
> };
> 
> The queue_stats array would always be indexed by queue_id.
> The xstats would continue to report the generic stats per-port and
> per-queue.
> 
> About the mapping API, either we keep it as-is, or it could
> become a driver-specific API.

Yes, I agree to remove the queue statistics mapping, which is very specific.
I will send a patch with a deprecation notice to move the mapping API
to a driver-specific API.

Any objection?
  
Alejandro Lucero Dec. 1, 2016, 2:44 p.m. UTC | #3
On Mon, Nov 28, 2016 at 11:13 AM, Thomas Monjalon <thomas.monjalon@6wind.com> wrote:

> 2016-11-24 17:59, Olivier Matz:
> > Hi,
> >
> > On Mon, 2016-11-21 at 09:59 +0000, Alejandro Lucero wrote:
> > > From: Bert van Leeuwen <bert.vanleeuwen@netronome.com>
> > >
> > > Arrays inside rte_eth_stats have size=RTE_ETHDEV_QUEUE_STAT_CNTRS.
> > > Some devices report more queues than that and this code blindly uses
> > > the reported number of queues by the device to fill those arrays up.
> > > This patch fixes the problem using MIN between the reported number of
> > > queues and RTE_ETHDEV_QUEUE_STAT_CNTRS.
> > >
> > > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> > >
> >
> > Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
> >
> >
> > As a next step, I'm wondering if it would be possible to remove
> > this limitation. We could replace the tables in struct rte_eth_stats
> > by a pointer to an array allocated dynamically at pmd setup.
>
> Yes that's definitely the right way to handle these statistics.
>
>
Agree.


> > It would break the API, so it should be announced first. I'm thinking
> > of something like:
> >
> > struct rte_eth_generic_stats {
> >         uint64_t ipackets;
> >         uint64_t opackets;
> >         uint64_t ibytes;
> >         uint64_t obytes;
> >         uint64_t imissed;
> >         uint64_t ierrors;
> >         uint64_t oerrors;
> >         uint64_t rx_nombuf;
> > };
> >
> > struct rte_eth_stats {
> >       struct rte_eth_generic_stats port_stats;
> >       struct rte_eth_generic_stats *queue_stats;
> > };
> >
> > The queue_stats array would always be indexed by queue_id.
> > The xstats would continue to report the generic stats per-port and
> > per-queue.
> >
> > About the mapping API, either we keep it as-is, or it could
> > become a driver-specific API.
>
> Yes I agree to remove the queue statistics mapping which is very specific.
> I will send a patch with a deprecation notice to move the mapping API
> to a driver-specific API.
>
> Any objection?
>

No from my side.
  
Olivier Matz Dec. 5, 2016, 12:53 p.m. UTC | #4
On Thu, 24 Nov 2016 17:59:06 +0100, Olivier Matz
<olivier.matz@6wind.com> wrote:
> Hi,
> 
> On Mon, 2016-11-21 at 09:59 +0000, Alejandro Lucero wrote:
> > From: Bert van Leeuwen <bert.vanleeuwen@netronome.com>
> > 
> > Arrays inside rte_eth_stats have size=RTE_ETHDEV_QUEUE_STAT_CNTRS.
> > Some devices report more queues than that and this code blindly uses
> > the reported number of queues by the device to fill those arrays up.
> > This patch fixes the problem using MIN between the reported number
> > of queues and RTE_ETHDEV_QUEUE_STAT_CNTRS.
> > 
> > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> >   
> 
> Reviewed-by: Olivier Matz <olivier.matz@6wind.com>

Fixes: ce757f5c9a4d ("ethdev: new method to retrieve extended statistics")
  
Thomas Monjalon Dec. 6, 2016, 1:24 p.m. UTC | #5
> > > From: Bert van Leeuwen <bert.vanleeuwen@netronome.com>
> > > 
> > > Arrays inside rte_eth_stats have size=RTE_ETHDEV_QUEUE_STAT_CNTRS.
> > > Some devices report more queues than that and this code blindly uses
> > > the reported number of queues by the device to fill those arrays up.
> > > This patch fixes the problem using MIN between the reported number
> > > of queues and RTE_ETHDEV_QUEUE_STAT_CNTRS.
> > > 
> > > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> > >   
> > 
> > Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
> 
> Fixes: ce757f5c9a4d ("ethdev: new method to retrieve extended statistics")

Applied, thanks
  

Patch

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index fde8112..4209ad0 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1343,8 +1343,10 @@  get_xstats_count(uint8_t port_id)
 	} else
 		count = 0;
 	count += RTE_NB_STATS;
-	count += dev->data->nb_rx_queues * RTE_NB_RXQ_STATS;
-	count += dev->data->nb_tx_queues * RTE_NB_TXQ_STATS;
+	count += RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS) *
+		 RTE_NB_RXQ_STATS;
+	count += RTE_MIN(dev->data->nb_tx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS) *
+		 RTE_NB_TXQ_STATS;
 	return count;
 }
 
@@ -1358,6 +1360,7 @@  rte_eth_xstats_get_names(uint8_t port_id,
 	int cnt_expected_entries;
 	int cnt_driver_entries;
 	uint32_t idx, id_queue;
+	uint16_t num_q;
 
 	cnt_expected_entries = get_xstats_count(port_id);
 	if (xstats_names == NULL || cnt_expected_entries < 0 ||
@@ -1374,7 +1377,8 @@  rte_eth_xstats_get_names(uint8_t port_id,
 			"%s", rte_stats_strings[idx].name);
 		cnt_used_entries++;
 	}
-	for (id_queue = 0; id_queue < dev->data->nb_rx_queues; id_queue++) {
+	num_q = RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
+	for (id_queue = 0; id_queue < num_q; id_queue++) {
 		for (idx = 0; idx < RTE_NB_RXQ_STATS; idx++) {
 			snprintf(xstats_names[cnt_used_entries].name,
 				sizeof(xstats_names[0].name),
@@ -1384,7 +1388,8 @@  rte_eth_xstats_get_names(uint8_t port_id,
 		}
 
 	}
-	for (id_queue = 0; id_queue < dev->data->nb_tx_queues; id_queue++) {
+	num_q = RTE_MIN(dev->data->nb_tx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
+	for (id_queue = 0; id_queue < num_q; id_queue++) {
 		for (idx = 0; idx < RTE_NB_TXQ_STATS; idx++) {
 			snprintf(xstats_names[cnt_used_entries].name,
 				sizeof(xstats_names[0].name),
@@ -1420,14 +1425,18 @@  rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats,
 	unsigned count = 0, i, q;
 	signed xcount = 0;
 	uint64_t val, *stats_ptr;
+	uint16_t nb_rxqs, nb_txqs;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
+	nb_rxqs = RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
+	nb_txqs = RTE_MIN(dev->data->nb_tx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
+
 	/* Return generic statistics */
-	count = RTE_NB_STATS + (dev->data->nb_rx_queues * RTE_NB_RXQ_STATS) +
-		(dev->data->nb_tx_queues * RTE_NB_TXQ_STATS);
+	count = RTE_NB_STATS + (nb_rxqs * RTE_NB_RXQ_STATS) +
+		(nb_txqs * RTE_NB_TXQ_STATS);
 
 	/* implemented by the driver */
 	if (dev->dev_ops->xstats_get != NULL) {
@@ -1458,7 +1467,7 @@  rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats,
 	}
 
 	/* per-rxq stats */
-	for (q = 0; q < dev->data->nb_rx_queues; q++) {
+	for (q = 0; q < nb_rxqs; q++) {
 		for (i = 0; i < RTE_NB_RXQ_STATS; i++) {
 			stats_ptr = RTE_PTR_ADD(&eth_stats,
 					rte_rxq_stats_strings[i].offset +
@@ -1469,7 +1478,7 @@  rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats,
 	}
 
 	/* per-txq stats */
-	for (q = 0; q < dev->data->nb_tx_queues; q++) {
+	for (q = 0; q < nb_txqs; q++) {
 		for (i = 0; i < RTE_NB_TXQ_STATS; i++) {
 			stats_ptr = RTE_PTR_ADD(&eth_stats,
 					rte_txq_stats_strings[i].offset +