[dpdk-dev] lib: change rte_ring dequeue to guarantee ordering before tail update

Message ID 20160715043951.32040-1-juhamatti.kuusisaari@coriant.com (mailing list archive)
State Accepted, archived

Commit Message

Kuusisaari, Juhamatti (Infinera - FI/Espoo) July 15, 2016, 4:39 a.m. UTC
  Consumer queue dequeuing must be guaranteed to complete fully before
the tail is updated. A read barrier does not guarantee this; change it
to a write barrier just before the tail update, which in practice
guarantees the correct ordering of reads and writes.
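
For illustration, a toy model of the ordering concern (hypothetical types
and names, not the DPDK implementation; a GCC builtin fence stands in for
the barrier under discussion):

#include <stdint.h>

/* The consumer's tail store is what tells the producer a slot may be
 * reused, so every read of that slot must be ordered before it. */
struct toy_ring {
        uint32_t mask;                  /* size - 1, size a power of two */
        volatile uint32_t prod_head;
        volatile uint32_t cons_tail;
        void *slots[];
};

/* Consumer (single-consumer case). */
static inline void
toy_sc_dequeue_one(struct toy_ring *r, void **obj, uint32_t head)
{
        *obj = r->slots[head & r->mask];   /* LOAD from the ring slot */
        /* Ordering point: the LOAD above must complete before the STORE
         * below becomes visible to the producer. */
        __atomic_thread_fence(__ATOMIC_RELEASE);
        r->cons_tail = head + 1;           /* hand the slot back */
}

/* Producer: once it sees cons_tail advance, it overwrites the slot.
 * (Its own ordering requirements are omitted here.) */
static inline void
toy_sp_enqueue_one(struct toy_ring *r, void *obj, uint32_t head)
{
        while (head - r->cons_tail > r->mask)
                ;                          /* wait for a free slot */
        r->slots[head & r->mask] = obj;    /* reuse the slot */
        r->prod_head = head + 1;
}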

Signed-off-by: Juhamatti Kuusisaari <juhamatti.kuusisaari@coriant.com>
---
 lib/librte_ring/rte_ring.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--
2.9.0


  

Comments

Ananyev, Konstantin July 15, 2016, 12:22 p.m. UTC | #1
> 
> Consumer queue dequeuing must be guaranteed to be done fully before the tail is updated. This is not guaranteed with a read barrier,
> changed to a write barrier just before tail update which in practice guarantees correct order of reads and writes.
> 
> Signed-off-by: Juhamatti Kuusisaari <juhamatti.kuusisaari@coriant.com>
> ---
>  lib/librte_ring/rte_ring.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h index eb45e41..14920af 100644
> --- a/lib/librte_ring/rte_ring.h
> +++ b/lib/librte_ring/rte_ring.h
> @@ -748,7 +748,7 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
> 
>         /* copy in table */
>         DEQUEUE_PTRS();
> -       rte_smp_rmb();
> +       rte_smp_wmb();
> 
>         __RING_STAT_ADD(r, deq_success, n);
>         r->cons.tail = cons_next;
> --

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.9.0
>
  
Thomas Monjalon July 21, 2016, 9:26 p.m. UTC | #2
> > Consumer queue dequeuing must be guaranteed to be done fully before the tail is updated. This is not guaranteed with a read barrier,
> > changed to a write barrier just before tail update which in practice guarantees correct order of reads and writes.
> > 
> > Signed-off-by: Juhamatti Kuusisaari <juhamatti.kuusisaari@coriant.com>
> 
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

Applied, thanks
  
Jerin Jacob July 23, 2016, 6:05 a.m. UTC | #3
On Thu, Jul 21, 2016 at 11:26:50PM +0200, Thomas Monjalon wrote:
> > > Consumer queue dequeuing must be guaranteed to be done fully before the tail is updated. This is not guaranteed with a read barrier,
> > > changed to a write barrier just before tail update which in practice guarantees correct order of reads and writes.
> > > 
> > > Signed-off-by: Juhamatti Kuusisaari <juhamatti.kuusisaari@coriant.com>
> > 
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 
> Applied, thanks

There was ongoing discussion on this
http://dpdk.org/ml/archives/dev/2016-July/044168.html
This change may not be required, as it has a performance impact.
  
Thomas Monjalon July 23, 2016, 9:02 a.m. UTC | #4
2016-07-23 8:05 GMT+02:00 Jerin Jacob <jerin.jacob@caviumnetworks.com>:
> On Thu, Jul 21, 2016 at 11:26:50PM +0200, Thomas Monjalon wrote:
>> > > Consumer queue dequeuing must be guaranteed to be done fully before the tail is updated. This is not guaranteed with a read barrier,
>> > > changed to a write barrier just before tail update which in practice guarantees correct order of reads and writes.
>> > >
>> > > Signed-off-by: Juhamatti Kuusisaari <juhamatti.kuusisaari@coriant.com>
>> >
>> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>>
>> Applied, thanks
>
> There was ongoing discussion on this
> http://dpdk.org/ml/archives/dev/2016-July/044168.html

Sorry Jerin, I forgot this email.
The problem is that nobody replied to your email and you did not nack
the v2 of this patch.

> This change may not be required as it has the performance impact.

We need to clearly understand, on one hand, what the performance impact is
(numbers and use cases), and on the other hand, whether this patch fixes
a real bug.

Please guys make things clear and we'll revert if needed.
  
Jerin Jacob July 23, 2016, 9:36 a.m. UTC | #5
On Sat, Jul 23, 2016 at 11:02:33AM +0200, Thomas Monjalon wrote:
> 2016-07-23 8:05 GMT+02:00 Jerin Jacob <jerin.jacob@caviumnetworks.com>:
> > On Thu, Jul 21, 2016 at 11:26:50PM +0200, Thomas Monjalon wrote:
> >> > > Consumer queue dequeuing must be guaranteed to be done fully before the tail is updated. This is not guaranteed with a read barrier,
> >> > > changed to a write barrier just before tail update which in practice guarantees correct order of reads and writes.
> >> > >
> >> > > Signed-off-by: Juhamatti Kuusisaari <juhamatti.kuusisaari@coriant.com>
> >> >
> >> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> >>
> >> Applied, thanks
> >
> > There was ongoing discussion on this
> > http://dpdk.org/ml/archives/dev/2016-July/044168.html
> 
> Sorry Jerin, I forgot this email.
> The problem is that nobody replied to your email and you did not nack
> the v2 of this patch.
> 
> > This change may not be required as it has the performance impact.
> 
> We need to clearly understand what is the performance impact
> (numbers and use cases) on one hand, and is there a real bug fixed
> by this patch on the other hand?

IMHO, there is no real bug here. rte_smp_rmb() provides the LOAD-STORE
barrier needed to make sure the tail pointer WRITE happens only after the prior LOADS.
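
To restate that in C11 atomics terms (a sketch only, not how rte_ring is
written): publishing the tail with release semantics is sufficient, since
release ordering keeps all of the consumer's earlier slot reads before the
tail update. The finer point is that, at the hardware level, only the
LOAD->STORE half of that ordering is strictly required, because the
producer never reads the consumer's other stores (e.g. into obj_table).

#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical helper, for illustration only. */
static inline void
publish_cons_tail(_Atomic uint32_t *cons_tail, uint32_t next)
{
        atomic_store_explicit(cons_tail, next, memory_order_release);
}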

Thoughts?

> 
> Please guys make things clear and we'll revert if needed.
  
Ananyev, Konstantin July 23, 2016, 10:14 a.m. UTC | #6
Hi lads,

> On Sat, Jul 23, 2016 at 11:02:33AM +0200, Thomas Monjalon wrote:
> > 2016-07-23 8:05 GMT+02:00 Jerin Jacob <jerin.jacob@caviumnetworks.com>:
> > > On Thu, Jul 21, 2016 at 11:26:50PM +0200, Thomas Monjalon wrote:
> > >> > > Consumer queue dequeuing must be guaranteed to be done fully
> > >> > > before the tail is updated. This is not guaranteed with a read barrier, changed to a write barrier just before tail update which in
> practice guarantees correct order of reads and writes.
> > >> > >
> > >> > > Signed-off-by: Juhamatti Kuusisaari
> > >> > > <juhamatti.kuusisaari@coriant.com>
> > >> >
> > >> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > >>
> > >> Applied, thanks
> > >
> > > There was ongoing discussion on this
> > > http://dpdk.org/ml/archives/dev/2016-July/044168.html
> >
> > Sorry Jerin, I forgot this email.
> > The problem is that nobody replied to your email and you did not nack
> > the v2 of this patch.

It's probably my bad.
I acked the patch before Jerin's response and forgot to reply later.

> >
> > > This change may not be required as it has the performance impact.
> >
> > We need to clearly understand what is the performance impact (numbers
> > and use cases) on one hand, and is there a real bug fixed by this
> > patch on the other hand?
> 
> IHMO, there is no real bug here. rte_smb_rmb() provides the LOAD-STORE barrier to make sure tail pointer WRITE happens only after prior
> LOADS.

Yep, from what I read at the link Jerin provided, it does indeed seem rte_smp_rmb() is enough for the arm arch here...
For ppc, as far as I can see, both rte_smp_rmb() and rte_smp_wmb() emit the same instruction.

> 
> Thoughts?

Wonder how big the performance impact is?
If there is a real one, I suppose we can revert the patch?
Konstantin 

> 
> >
> > Please guys make things clear and we'll revert if needed.
  
Jerin Jacob July 23, 2016, 10:38 a.m. UTC | #7
On Sat, Jul 23, 2016 at 10:14:51AM +0000, Ananyev, Konstantin wrote:
> Hi lads,
> 
> > On Sat, Jul 23, 2016 at 11:02:33AM +0200, Thomas Monjalon wrote:
> > > 2016-07-23 8:05 GMT+02:00 Jerin Jacob <jerin.jacob@caviumnetworks.com>:
> > > > On Thu, Jul 21, 2016 at 11:26:50PM +0200, Thomas Monjalon wrote:
> > > >> > > Consumer queue dequeuing must be guaranteed to be done fully
> > > >> > > before the tail is updated. This is not guaranteed with a read barrier, changed to a write barrier just before tail update which in
> > practice guarantees correct order of reads and writes.
> > > >> > >
> > > >> > > Signed-off-by: Juhamatti Kuusisaari
> > > >> > > <juhamatti.kuusisaari@coriant.com>
> > > >> >
> > > >> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > >>
> > > >> Applied, thanks
> > > >
> > > > There was ongoing discussion on this
> > > > http://dpdk.org/ml/archives/dev/2016-July/044168.html
> > >
> > > Sorry Jerin, I forgot this email.
> > > The problem is that nobody replied to your email and you did not nack
> > > the v2 of this patch.
> 
> It's probably my bad.
> I acked the patch before Jerin response, and forgot to reply later. 
> 
> > >
> > > > This change may not be required as it has the performance impact.
> > >
> > > We need to clearly understand what is the performance impact (numbers
> > > and use cases) on one hand, and is there a real bug fixed by this
> > > patch on the other hand?
> > 
> > IHMO, there is no real bug here. rte_smb_rmb() provides the LOAD-STORE barrier to make sure tail pointer WRITE happens only after prior
> > LOADS.
> 
> Yep, from what I read at the link Jerin provided, indeed it seems rte_smp_rmb() is enough for the arm arch here...
> For ppc, as I can see both rte_smp_rmb()/rte_smp_wmb() emits the same instruction.
> 
> > 
> > Thoughts?
> 
> Wonder how big is a performance impact?

With this change we need to wait for additional STORES to complete to the
local buffer, in addition to the LOADS from the ring buffer's memory.
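
To make that concrete, a toy version of the copy-and-publish step
(hypothetical names, not the DPDK source; GCC builtin fences stand in,
approximately, for the rte_smp_* macros):

#include <stdint.h>

static inline void
toy_dequeue_publish(void *const *slots, uint32_t mask,
                    volatile uint32_t *cons_tail,
                    uint32_t head, uint32_t n, void **obj_table)
{
        uint32_t i;

        /* The loop LOADs ring slots and STOREs into the caller's
         * obj_table; correctness only needs the LOADs ordered before
         * the tail update. */
        for (i = 0; i < n; i++)
                obj_table[i] = slots[(head + i) & mask];

#ifdef TOY_USE_WMB
        /* After this patch: the obj_table STOREs must also be ordered
         * before the tail STORE -- the extra waiting on ARM. */
        __atomic_thread_fence(__ATOMIC_RELEASE);
#else
        /* Before this patch: order the slot LOADs before everything
         * that follows, including the tail STORE. */
        __atomic_thread_fence(__ATOMIC_ACQUIRE);
#endif

        *cons_tail = head + n;   /* producer may now reuse the slots */
}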

> If there is a real one, I suppose we can revert the patch?

Request to revert this one, as there are no benefits for other architectures
and it creates additional delay waiting for STORES to complete on ARM.
Let's do the correct thing by reverting it.

Jerin

 

> Konstantin 
> 
> > 
> > >
> > > Please guys make things clear and we'll revert if needed.
  
Ananyev, Konstantin July 23, 2016, 11:15 a.m. UTC | #8
> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Saturday, July 23, 2016 11:39 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: Thomas Monjalon <thomas.monjalon@6wind.com>; Juhamatti Kuusisaari <juhamatti.kuusisaari@coriant.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] lib: change rte_ring dequeue to guarantee ordering before tail update
> 
> On Sat, Jul 23, 2016 at 10:14:51AM +0000, Ananyev, Konstantin wrote:
> > Hi lads,
> >
> > > On Sat, Jul 23, 2016 at 11:02:33AM +0200, Thomas Monjalon wrote:
> > > > 2016-07-23 8:05 GMT+02:00 Jerin Jacob <jerin.jacob@caviumnetworks.com>:
> > > > > On Thu, Jul 21, 2016 at 11:26:50PM +0200, Thomas Monjalon wrote:
> > > > >> > > Consumer queue dequeuing must be guaranteed to be done
> > > > >> > > fully before the tail is updated. This is not guaranteed
> > > > >> > > with a read barrier, changed to a write barrier just before
> > > > >> > > tail update which in
> > > practice guarantees correct order of reads and writes.
> > > > >> > >
> > > > >> > > Signed-off-by: Juhamatti Kuusisaari
> > > > >> > > <juhamatti.kuusisaari@coriant.com>
> > > > >> >
> > > > >> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > > >>
> > > > >> Applied, thanks
> > > > >
> > > > > There was ongoing discussion on this
> > > > > http://dpdk.org/ml/archives/dev/2016-July/044168.html
> > > >
> > > > Sorry Jerin, I forgot this email.
> > > > The problem is that nobody replied to your email and you did not
> > > > nack the v2 of this patch.
> >
> > It's probably my bad.
> > I acked the patch before Jerin response, and forgot to reply later.
> >
> > > >
> > > > > This change may not be required as it has the performance impact.
> > > >
> > > > We need to clearly understand what is the performance impact
> > > > (numbers and use cases) on one hand, and is there a real bug fixed
> > > > by this patch on the other hand?
> > >
> > > IHMO, there is no real bug here. rte_smb_rmb() provides the
> > > LOAD-STORE barrier to make sure tail pointer WRITE happens only after prior LOADS.
> >
> > Yep, from what I read at the link Jerin provided, indeed it seems rte_smp_rmb() is enough for the arm arch here...
> > For ppc, as I can see both rte_smp_rmb()/rte_smp_wmb() emits the same instruction.
> >
> > >
> > > Thoughts?
> >
> > Wonder how big is a performance impact?
> 
> With this change we need to wait for addtional STORES to be completed to local buffer in addtion to LOADS from ring buffers memory.

I understand that; I just wonder, did you see any real performance difference?
Probably with ring_perf_autotest/mempool_perf_autotest or something?
Konstantin 

> 
> > If there is a real one, I suppose we can revert the patch?
> 
> Request to revert this one as their no benifts for other architectures and indeed it creates addtional delay in waiting for STORES to complete
> in ARM.
> Lets do the correct thing by reverting it.
> 
> Jerin
> 
> 
> 
> > Konstantin
> >
> > >
> > > >
> > > > Please guys make things clear and we'll revert if needed.
  
Jerin Jacob July 23, 2016, 11:49 a.m. UTC | #9
On Sat, Jul 23, 2016 at 11:15:27AM +0000, Ananyev, Konstantin wrote:
> 
> 
> > -----Original Message-----
> > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > Sent: Saturday, July 23, 2016 11:39 AM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > Cc: Thomas Monjalon <thomas.monjalon@6wind.com>; Juhamatti Kuusisaari <juhamatti.kuusisaari@coriant.com>; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH] lib: change rte_ring dequeue to guarantee ordering before tail update
> > 
> > On Sat, Jul 23, 2016 at 10:14:51AM +0000, Ananyev, Konstantin wrote:
> > > Hi lads,
> > >
> > > > On Sat, Jul 23, 2016 at 11:02:33AM +0200, Thomas Monjalon wrote:
> > > > > 2016-07-23 8:05 GMT+02:00 Jerin Jacob <jerin.jacob@caviumnetworks.com>:
> > > > > > On Thu, Jul 21, 2016 at 11:26:50PM +0200, Thomas Monjalon wrote:
> > > > > >> > > Consumer queue dequeuing must be guaranteed to be done
> > > > > >> > > fully before the tail is updated. This is not guaranteed
> > > > > >> > > with a read barrier, changed to a write barrier just before
> > > > > >> > > tail update which in
> > > > practice guarantees correct order of reads and writes.
> > > > > >> > >
> > > > > >> > > Signed-off-by: Juhamatti Kuusisaari
> > > > > >> > > <juhamatti.kuusisaari@coriant.com>
> > > > > >> >
> > > > > >> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > > > >>
> > > > > >> Applied, thanks
> > > > > >
> > > > > > There was ongoing discussion on this
> > > > > > http://dpdk.org/ml/archives/dev/2016-July/044168.html
> > > > >
> > > > > Sorry Jerin, I forgot this email.
> > > > > The problem is that nobody replied to your email and you did not
> > > > > nack the v2 of this patch.
> > >
> > > It's probably my bad.
> > > I acked the patch before Jerin response, and forgot to reply later.
> > >
> > > > >
> > > > > > This change may not be required as it has the performance impact.
> > > > >
> > > > > We need to clearly understand what is the performance impact
> > > > > (numbers and use cases) on one hand, and is there a real bug fixed
> > > > > by this patch on the other hand?
> > > >
> > > > IHMO, there is no real bug here. rte_smb_rmb() provides the
> > > > LOAD-STORE barrier to make sure tail pointer WRITE happens only after prior LOADS.
> > >
> > > Yep, from what I read at the link Jerin provided, indeed it seems rte_smp_rmb() is enough for the arm arch here...
> > > For ppc, as I can see both rte_smp_rmb()/rte_smp_wmb() emits the same instruction.
> > >
> > > >
> > > > Thoughts?
> > >
> > > Wonder how big is a performance impact?
> > 
> > With this change we need to wait for addtional STORES to be completed to local buffer in addtion to LOADS from ring buffers memory.
> 
> I understand that, just wonder did you see any real performance difference?

Yeah...

> Probably with ring_perf_autotest/mempool_perf_autotest or something?

W/O change 
RTE>>ring_perf_autotest 
### Testing single element and burst enq/deq ###
SP/SC single enq/dequeue: 4
MP/MC single enq/dequeue: 16
SP/SC burst enq/dequeue (size: 8): 0
MP/MC burst enq/dequeue (size: 8): 2
SP/SC burst enq/dequeue (size: 32): 0
MP/MC burst enq/dequeue (size: 32): 0

### Testing empty dequeue ###
SC empty dequeue: 0.35
MC empty dequeue: 0.60

### Testing using a single lcore ###
SP/SC bulk enq/dequeue (size: 8): 0.93
MP/MC bulk enq/dequeue (size: 8): 2.45
SP/SC bulk enq/dequeue (size: 32): 0.58
MP/MC bulk enq/dequeue (size: 32): 0.97

### Testing using two physical cores ###
SP/SC bulk enq/dequeue (size: 8): 1.89
MP/MC bulk enq/dequeue (size: 8): 4.28
SP/SC bulk enq/dequeue (size: 32): 0.90
MP/MC bulk enq/dequeue (size: 32): 1.19
Test OK
RTE>>

With change
RTE>>ring_perf_autotest 
### Testing single element and burst enq/deq ###
SP/SC single enq/dequeue: 6
MP/MC single enq/dequeue: 16
SP/SC burst enq/dequeue (size: 8): 1
MP/MC burst enq/dequeue (size: 8): 2
SP/SC burst enq/dequeue (size: 32): 0
MP/MC burst enq/dequeue (size: 32): 0

### Testing empty dequeue ###
SC empty dequeue: 0.35
MC empty dequeue: 0.60

### Testing using a single lcore ###
SP/SC bulk enq/dequeue (size: 8): 1.28
MP/MC bulk enq/dequeue (size: 8): 2.47
SP/SC bulk enq/dequeue (size: 32): 0.64
MP/MC bulk enq/dequeue (size: 32): 0.97

### Testing using two physical cores ###
SP/SC bulk enq/dequeue (size: 8): 2.08
MP/MC bulk enq/dequeue (size: 8): 4.29
SP/SC bulk enq/dequeue (size: 32): 1.24
MP/MC bulk enq/dequeue (size: 32): 1.19
Test OK

> Konstantin 
> 
> > 
> > > If there is a real one, I suppose we can revert the patch?
> > 
> > Request to revert this one as their no benifts for other architectures and indeed it creates addtional delay in waiting for STORES to complete
> > in ARM.
> > Lets do the correct thing by reverting it.
> > 
> > Jerin
> > 
> > 
> > 
> > > Konstantin
> > >
> > > >
> > > > >
> > > > > Please guys make things clear and we'll revert if needed.
  
Ananyev, Konstantin July 23, 2016, 12:32 p.m. UTC | #10
> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Saturday, July 23, 2016 12:49 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: Thomas Monjalon <thomas.monjalon@6wind.com>; Juhamatti Kuusisaari <juhamatti.kuusisaari@coriant.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] lib: change rte_ring dequeue to guarantee ordering before tail update
> 
> On Sat, Jul 23, 2016 at 11:15:27AM +0000, Ananyev, Konstantin wrote:
> >
> >
> > > -----Original Message-----
> > > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > > Sent: Saturday, July 23, 2016 11:39 AM
> > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > Cc: Thomas Monjalon <thomas.monjalon@6wind.com>; Juhamatti
> > > Kuusisaari <juhamatti.kuusisaari@coriant.com>; dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH] lib: change rte_ring dequeue to
> > > guarantee ordering before tail update
> > >
> > > On Sat, Jul 23, 2016 at 10:14:51AM +0000, Ananyev, Konstantin wrote:
> > > > Hi lads,
> > > >
> > > > > On Sat, Jul 23, 2016 at 11:02:33AM +0200, Thomas Monjalon wrote:
> > > > > > 2016-07-23 8:05 GMT+02:00 Jerin Jacob <jerin.jacob@caviumnetworks.com>:
> > > > > > > On Thu, Jul 21, 2016 at 11:26:50PM +0200, Thomas Monjalon wrote:
> > > > > > >> > > Consumer queue dequeuing must be guaranteed to be done
> > > > > > >> > > fully before the tail is updated. This is not
> > > > > > >> > > guaranteed with a read barrier, changed to a write
> > > > > > >> > > barrier just before tail update which in
> > > > > practice guarantees correct order of reads and writes.
> > > > > > >> > >
> > > > > > >> > > Signed-off-by: Juhamatti Kuusisaari
> > > > > > >> > > <juhamatti.kuusisaari@coriant.com>
> > > > > > >> >
> > > > > > >> > Acked-by: Konstantin Ananyev
> > > > > > >> > <konstantin.ananyev@intel.com>
> > > > > > >>
> > > > > > >> Applied, thanks
> > > > > > >
> > > > > > > There was ongoing discussion on this
> > > > > > > http://dpdk.org/ml/archives/dev/2016-July/044168.html
> > > > > >
> > > > > > Sorry Jerin, I forgot this email.
> > > > > > The problem is that nobody replied to your email and you did
> > > > > > not nack the v2 of this patch.
> > > >
> > > > It's probably my bad.
> > > > I acked the patch before Jerin response, and forgot to reply later.
> > > >
> > > > > >
> > > > > > > This change may not be required as it has the performance impact.
> > > > > >
> > > > > > We need to clearly understand what is the performance impact
> > > > > > (numbers and use cases) on one hand, and is there a real bug
> > > > > > fixed by this patch on the other hand?
> > > > >
> > > > > IHMO, there is no real bug here. rte_smb_rmb() provides the
> > > > > LOAD-STORE barrier to make sure tail pointer WRITE happens only after prior LOADS.
> > > >
> > > > Yep, from what I read at the link Jerin provided, indeed it seems rte_smp_rmb() is enough for the arm arch here...
> > > > For ppc, as I can see both rte_smp_rmb()/rte_smp_wmb() emits the same instruction.
> > > >
> > > > >
> > > > > Thoughts?
> > > >
> > > > Wonder how big is a performance impact?
> > >
> > > With this change we need to wait for addtional STORES to be completed to local buffer in addtion to LOADS from ring buffers memory.
> >
> > I understand that, just wonder did you see any real performance difference?
> 
> Yeah...

Ok, then I don't see any good reason why we shouldn't revert it.
I suppose the best way would be to submit a new patch for RC5 to revert the changes.
Do you prefer to submit it yourself and I'll ack it, or vice versa?
Thanks
Konstantin 

> 
> > Probably with ring_perf_autotest/mempool_perf_autotest or something?
> 
> W/O change
> RTE>>ring_perf_autotest
> ### Testing single element and burst enq/deq ###
> SP/SC single enq/dequeue: 4
> MP/MC single enq/dequeue: 16
> SP/SC burst enq/dequeue (size: 8): 0
> MP/MC burst enq/dequeue (size: 8): 2
> SP/SC burst enq/dequeue (size: 32): 0
> MP/MC burst enq/dequeue (size: 32): 0
> 
> ### Testing empty dequeue ###
> SC empty dequeue: 0.35
> MC empty dequeue: 0.60
> 
> ### Testing using a single lcore ###
> SP/SC bulk enq/dequeue (size: 8): 0.93
> MP/MC bulk enq/dequeue (size: 8): 2.45
> SP/SC bulk enq/dequeue (size: 32): 0.58
> MP/MC bulk enq/dequeue (size: 32): 0.97
> 
> ### Testing using two physical cores ###
> SP/SC bulk enq/dequeue (size: 8): 1.89
> MP/MC bulk enq/dequeue (size: 8): 4.28
> SP/SC bulk enq/dequeue (size: 32): 0.90
> MP/MC bulk enq/dequeue (size: 32): 1.19
> Test OK
> RTE>>
> 
> With change
> RTE>>ring_perf_autotest
> ### Testing single element and burst enq/deq ###
> SP/SC single enq/dequeue: 6
> MP/MC single enq/dequeue: 16
> SP/SC burst enq/dequeue (size: 8): 1
> MP/MC burst enq/dequeue (size: 8): 2
> SP/SC burst enq/dequeue (size: 32): 0
> MP/MC burst enq/dequeue (size: 32): 0
> 
> ### Testing empty dequeue ###
> SC empty dequeue: 0.35
> MC empty dequeue: 0.60
> 
> ### Testing using a single lcore ###
> SP/SC bulk enq/dequeue (size: 8): 1.28
> MP/MC bulk enq/dequeue (size: 8): 2.47
> SP/SC bulk enq/dequeue (size: 32): 0.64
> MP/MC bulk enq/dequeue (size: 32): 0.97
> 
> ### Testing using two physical cores ###
> SP/SC bulk enq/dequeue (size: 8): 2.08
> MP/MC bulk enq/dequeue (size: 8): 4.29
> SP/SC bulk enq/dequeue (size: 32): 1.24
> MP/MC bulk enq/dequeue (size: 32): 1.19
> Test OK
> 
> > Konstantin
> >
> > >
> > > > If there is a real one, I suppose we can revert the patch?
> > >
> > > Request to revert this one as their no benifts for other
> > > architectures and indeed it creates addtional delay in waiting for STORES to complete in ARM.
> > > Lets do the correct thing by reverting it.
> > >
> > > Jerin
> > >
> > >
> > >
> > > > Konstantin
> > > >
> > > > >
> > > > > >
> > > > > > Please guys make things clear and we'll revert if needed.
  
Jerin Jacob July 23, 2016, 12:35 p.m. UTC | #11
On Sat, Jul 23, 2016 at 12:32:01PM +0000, Ananyev, Konstantin wrote:
> 
> 
> > -----Original Message-----
> > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > Sent: Saturday, July 23, 2016 12:49 PM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > Cc: Thomas Monjalon <thomas.monjalon@6wind.com>; Juhamatti Kuusisaari <juhamatti.kuusisaari@coriant.com>; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH] lib: change rte_ring dequeue to guarantee ordering before tail update
> > 
> > On Sat, Jul 23, 2016 at 11:15:27AM +0000, Ananyev, Konstantin wrote:
> > >
> > >
> > > > -----Original Message-----
> > > > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > > > Sent: Saturday, July 23, 2016 11:39 AM
> > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > Cc: Thomas Monjalon <thomas.monjalon@6wind.com>; Juhamatti
> > > > Kuusisaari <juhamatti.kuusisaari@coriant.com>; dev@dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH] lib: change rte_ring dequeue to
> > > > guarantee ordering before tail update
> > > >
> > > > On Sat, Jul 23, 2016 at 10:14:51AM +0000, Ananyev, Konstantin wrote:
> > > > > Hi lads,
> > > > >
> > > > > > On Sat, Jul 23, 2016 at 11:02:33AM +0200, Thomas Monjalon wrote:
> > > > > > > 2016-07-23 8:05 GMT+02:00 Jerin Jacob <jerin.jacob@caviumnetworks.com>:
> > > > > > > > On Thu, Jul 21, 2016 at 11:26:50PM +0200, Thomas Monjalon wrote:
> > > > > > > >> > > Consumer queue dequeuing must be guaranteed to be done
> > > > > > > >> > > fully before the tail is updated. This is not
> > > > > > > >> > > guaranteed with a read barrier, changed to a write
> > > > > > > >> > > barrier just before tail update which in
> > > > > > practice guarantees correct order of reads and writes.
> > > > > > > >> > >
> > > > > > > >> > > Signed-off-by: Juhamatti Kuusisaari
> > > > > > > >> > > <juhamatti.kuusisaari@coriant.com>
> > > > > > > >> >
> > > > > > > >> > Acked-by: Konstantin Ananyev
> > > > > > > >> > <konstantin.ananyev@intel.com>
> > > > > > > >>
> > > > > > > >> Applied, thanks
> > > > > > > >
> > > > > > > > There was ongoing discussion on this
> > > > > > > > http://dpdk.org/ml/archives/dev/2016-July/044168.html
> > > > > > >
> > > > > > > Sorry Jerin, I forgot this email.
> > > > > > > The problem is that nobody replied to your email and you did
> > > > > > > not nack the v2 of this patch.
> > > > >
> > > > > It's probably my bad.
> > > > > I acked the patch before Jerin response, and forgot to reply later.
> > > > >
> > > > > > >
> > > > > > > > This change may not be required as it has the performance impact.
> > > > > > >
> > > > > > > We need to clearly understand what is the performance impact
> > > > > > > (numbers and use cases) on one hand, and is there a real bug
> > > > > > > fixed by this patch on the other hand?
> > > > > >
> > > > > > IHMO, there is no real bug here. rte_smb_rmb() provides the
> > > > > > LOAD-STORE barrier to make sure tail pointer WRITE happens only after prior LOADS.
> > > > >
> > > > > Yep, from what I read at the link Jerin provided, indeed it seems rte_smp_rmb() is enough for the arm arch here...
> > > > > For ppc, as I can see both rte_smp_rmb()/rte_smp_wmb() emits the same instruction.
> > > > >
> > > > > >
> > > > > > Thoughts?
> > > > >
> > > > > Wonder how big is a performance impact?
> > > >
> > > > With this change we need to wait for addtional STORES to be completed to local buffer in addtion to LOADS from ring buffers memory.
> > >
> > > I understand that, just wonder did you see any real performance difference?
> > 
> > Yeah...
> 
> Ok, then I don't see any good reason why we shouldn't revert it.
> I suppose the best way would be to submit a new patch for RC5 to revert the changes.
> Do you prefer to submit it yourself and I'll ack it or visa-versa?

OK. I will submit it then

> Thanks
> Konstantin 
> 
> > 
> > > Probably with ring_perf_autotest/mempool_perf_autotest or something?
> > 
> > W/O change
> > RTE>>ring_perf_autotest
> > ### Testing single element and burst enq/deq ###
> > SP/SC single enq/dequeue: 4
> > MP/MC single enq/dequeue: 16
> > SP/SC burst enq/dequeue (size: 8): 0
> > MP/MC burst enq/dequeue (size: 8): 2
> > SP/SC burst enq/dequeue (size: 32): 0
> > MP/MC burst enq/dequeue (size: 32): 0
> > 
> > ### Testing empty dequeue ###
> > SC empty dequeue: 0.35
> > MC empty dequeue: 0.60
> > 
> > ### Testing using a single lcore ###
> > SP/SC bulk enq/dequeue (size: 8): 0.93
> > MP/MC bulk enq/dequeue (size: 8): 2.45
> > SP/SC bulk enq/dequeue (size: 32): 0.58
> > MP/MC bulk enq/dequeue (size: 32): 0.97
> > 
> > ### Testing using two physical cores ###
> > SP/SC bulk enq/dequeue (size: 8): 1.89
> > MP/MC bulk enq/dequeue (size: 8): 4.28
> > SP/SC bulk enq/dequeue (size: 32): 0.90
> > MP/MC bulk enq/dequeue (size: 32): 1.19
> > Test OK
> > RTE>>
> > 
> > With change
> > RTE>>ring_perf_autotest
> > ### Testing single element and burst enq/deq ###
> > SP/SC single enq/dequeue: 6
> > MP/MC single enq/dequeue: 16
> > SP/SC burst enq/dequeue (size: 8): 1
> > MP/MC burst enq/dequeue (size: 8): 2
> > SP/SC burst enq/dequeue (size: 32): 0
> > MP/MC burst enq/dequeue (size: 32): 0
> > 
> > ### Testing empty dequeue ###
> > SC empty dequeue: 0.35
> > MC empty dequeue: 0.60
> > 
> > ### Testing using a single lcore ###
> > SP/SC bulk enq/dequeue (size: 8): 1.28
> > MP/MC bulk enq/dequeue (size: 8): 2.47
> > SP/SC bulk enq/dequeue (size: 32): 0.64
> > MP/MC bulk enq/dequeue (size: 32): 0.97
> > 
> > ### Testing using two physical cores ###
> > SP/SC bulk enq/dequeue (size: 8): 2.08
> > MP/MC bulk enq/dequeue (size: 8): 4.29
> > SP/SC bulk enq/dequeue (size: 32): 1.24
> > MP/MC bulk enq/dequeue (size: 32): 1.19
> > Test OK
> > 
> > > Konstantin
> > >
> > > >
> > > > > If there is a real one, I suppose we can revert the patch?
> > > >
> > > > Request to revert this one as their no benifts for other
> > > > architectures and indeed it creates addtional delay in waiting for STORES to complete in ARM.
> > > > Lets do the correct thing by reverting it.
> > > >
> > > > Jerin
> > > >
> > > >
> > > >
> > > > > Konstantin
> > > > >
> > > > > >
> > > > > > >
> > > > > > > Please guys make things clear and we'll revert if needed.
  

Patch

diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index eb45e41..14920af 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -748,7 +748,7 @@  __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,

        /* copy in table */
        DEQUEUE_PTRS();
-       rte_smp_rmb();
+       rte_smp_wmb();

        __RING_STAT_ADD(r, deq_success, n);
        r->cons.tail = cons_next;