[dpdk-dev] igb: fix crash with offload on 82575 chipset

Message ID 1458901920-21677-1-git-send-email-olivier.matz@6wind.com (mailing list archive)
State Accepted, archived
Delegated to: Bruce Richardson

Commit Message

Olivier Matz March 25, 2016, 10:32 a.m. UTC
On the 82575 chipset, there is a pool of global TX contexts instead of 2
per queue as on the 82576. See Table A-1 "Changes in Programming Interface
Relative to 82575" of the Intel® 82576EB GbE Controller datasheet (*).

In the driver, the contexts are assigned to TX queues: contexts 0-1 for
txq0, 2-3 for txq1, and so on.

In igbe_set_xmit_ctx(), the variable ctx_curr contains the index of the
per-queue context (0 or 1), and ctx_idx contains the index to be given
to the hardware (0 to 7). The size of txq->ctx_cache[] is 2, and it must
be indexed with ctx_curr to avoid an out-of-bounds access.

Also, the index returned by what_advctx_update() is the per-queue
index (0 or 1), so we need to add txq->ctx_start before sending it
to the hardware.

(*) The datasheet says 16 global contexts; however, the IDX fields in TX
    descriptors are 3 bits wide, which gives a total of 8 contexts. The
    driver assumes there are 8 contexts on the 82575: 2 per queue, 4 txqs.

Fixes: 4c8db5f09a ("igb: enable TSO support")
Fixes: af75078fec ("first public release")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 drivers/net/e1000/igb_rxtx.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
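The two index spaces described in the commit message can be sketched as follows. This is a simplified, hypothetical illustration: the struct and helper names below are invented for clarity and stand in for fields the patch touches on the real struct igb_tx_queue (ctx_curr, ctx_start).

```c
#include <assert.h>
#include <stdint.h>

#define IGB_CTX_NUM 2  /* contexts cached per TX queue in the driver */

/* Simplified view of the driver's per-queue context bookkeeping;
 * illustrative only, not the real struct igb_tx_queue. */
struct ctx_state {
	uint32_t ctx_curr;  /* per-queue context index: 0 or 1 */
	uint32_t ctx_start; /* first hardware context of this queue: 2 * queue_id */
};

/* Index into the 2-entry txq->ctx_cache[]: must stay below IGB_CTX_NUM. */
static uint32_t cache_idx(const struct ctx_state *s)
{
	return s->ctx_curr;
}

/* Value written into the descriptor's 3-bit IDX field: 0 to 7. */
static uint32_t hw_idx(const struct ctx_state *s)
{
	return s->ctx_start + s->ctx_curr;
}
```

For example, txq1 has ctx_start = 2; when it uses its second context (ctx_curr = 1), the cache index is 1 but the hardware index is 3. Indexing ctx_cache[] with the hardware index, as the pre-patch code did, writes past the 2-entry array, hence the crash.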
  

Comments

Ananyev, Konstantin March 25, 2016, 2:06 p.m. UTC | #1
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier Matz
> Sent: Friday, March 25, 2016 10:32 AM
> To: dev@dpdk.org
> Cc: Lu, Wenzhuo
> Subject: [dpdk-dev] [PATCH] igb: fix crash with offload on 82575 chipset
> 
> On the 82575 chipset, there is a pool of global TX contexts instead of 2
> per queue as on the 82576. See Table A-1 "Changes in Programming Interface
> Relative to 82575" of the Intel® 82576EB GbE Controller datasheet (*).
> 
> In the driver, the contexts are assigned to TX queues: contexts 0-1 for
> txq0, 2-3 for txq1, and so on.
> 
> In igbe_set_xmit_ctx(), the variable ctx_curr contains the index of the
> per-queue context (0 or 1), and ctx_idx contains the index to be given
> to the hardware (0 to 7). The size of txq->ctx_cache[] is 2, and it must
> be indexed with ctx_curr to avoid an out-of-bounds access.
> 
> Also, the index returned by what_advctx_update() is the per-queue
> index (0 or 1), so we need to add txq->ctx_start before sending it
> to the hardware.
> 
> (*) The datasheet says 16 global contexts; however, the IDX fields in TX
>     descriptors are 3 bits wide, which gives a total of 8 contexts. The
>     driver assumes there are 8 contexts on the 82575: 2 per queue, 4 txqs.
> 
> Fixes: 4c8db5f09a ("igb: enable TSO support")
> Fixes: af75078fec ("first public release")
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
>  drivers/net/e1000/igb_rxtx.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
> index e527895..529dba4 100644
> --- a/drivers/net/e1000/igb_rxtx.c
> +++ b/drivers/net/e1000/igb_rxtx.c
> @@ -325,9 +325,9 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
>  	}
> 
>  	txq->ctx_cache[ctx_curr].flags = ol_flags;
> -	txq->ctx_cache[ctx_idx].tx_offload.data =
> +	txq->ctx_cache[ctx_curr].tx_offload.data =
>  		tx_offload_mask.data & tx_offload.data;
> -	txq->ctx_cache[ctx_idx].tx_offload_mask = tx_offload_mask;
> +	txq->ctx_cache[ctx_curr].tx_offload_mask = tx_offload_mask;
> 
>  	ctx_txd->type_tucmd_mlhl = rte_cpu_to_le_32(type_tucmd_mlhl);
>  	vlan_macip_lens = (uint32_t)tx_offload.data;
> @@ -450,7 +450,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>  			ctx = what_advctx_update(txq, tx_ol_req, tx_offload);
>  			/* Only allocate context descriptor if required*/
>  			new_ctx = (ctx == IGB_CTX_NUM);
> -			ctx = txq->ctx_curr;
> +			ctx = txq->ctx_curr + txq->ctx_start;
>  			tx_last = (uint16_t) (tx_last + new_ctx);
>  		}
>  		if (tx_last >= txq->nb_tx_desc)
> --
> 2.1.4

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
  
Bruce Richardson March 25, 2016, 3:26 p.m. UTC | #2
On Fri, Mar 25, 2016 at 02:06:51PM +0000, Ananyev, Konstantin wrote:
> 
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier Matz
> > Sent: Friday, March 25, 2016 10:32 AM
> > To: dev@dpdk.org
> > Cc: Lu, Wenzhuo
> > Subject: [dpdk-dev] [PATCH] igb: fix crash with offload on 82575 chipset
> > 
> > On the 82575 chipset, there is a pool of global TX contexts instead of 2
> > per queue as on the 82576. See Table A-1 "Changes in Programming Interface
> > Relative to 82575" of the Intel® 82576EB GbE Controller datasheet (*).
> > 
> > In the driver, the contexts are assigned to TX queues: contexts 0-1 for
> > txq0, 2-3 for txq1, and so on.
> > 
> > In igbe_set_xmit_ctx(), the variable ctx_curr contains the index of the
> > per-queue context (0 or 1), and ctx_idx contains the index to be given
> > to the hardware (0 to 7). The size of txq->ctx_cache[] is 2, and it must
> > be indexed with ctx_curr to avoid an out-of-bounds access.
> > 
> > Also, the index returned by what_advctx_update() is the per-queue
> > index (0 or 1), so we need to add txq->ctx_start before sending it
> > to the hardware.
> > 
> > (*) The datasheet says 16 global contexts; however, the IDX fields in TX
> >     descriptors are 3 bits wide, which gives a total of 8 contexts. The
> >     driver assumes there are 8 contexts on the 82575: 2 per queue, 4 txqs.
> > 
> > Fixes: 4c8db5f09a ("igb: enable TSO support")
> > Fixes: af75078fec ("first public release")
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> 
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

Applied to dpdk-next-net/rel_16_04

/Bruce
  

Patch

diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index e527895..529dba4 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -325,9 +325,9 @@  igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 	}
 
 	txq->ctx_cache[ctx_curr].flags = ol_flags;
-	txq->ctx_cache[ctx_idx].tx_offload.data =
+	txq->ctx_cache[ctx_curr].tx_offload.data =
 		tx_offload_mask.data & tx_offload.data;
-	txq->ctx_cache[ctx_idx].tx_offload_mask = tx_offload_mask;
+	txq->ctx_cache[ctx_curr].tx_offload_mask = tx_offload_mask;
 
 	ctx_txd->type_tucmd_mlhl = rte_cpu_to_le_32(type_tucmd_mlhl);
 	vlan_macip_lens = (uint32_t)tx_offload.data;
@@ -450,7 +450,7 @@  eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			ctx = what_advctx_update(txq, tx_ol_req, tx_offload);
 			/* Only allocate context descriptor if required*/
 			new_ctx = (ctx == IGB_CTX_NUM);
-			ctx = txq->ctx_curr;
+			ctx = txq->ctx_curr + txq->ctx_start;
 			tx_last = (uint16_t) (tx_last + new_ctx);
 		}
 		if (tx_last >= txq->nb_tx_desc)