Message ID | 1445922658-4955-1-git-send-email-xutao.sun@intel.com (mailing list archive) |
---|---|
State | Superseded, archived |
Headers |
From: Xutao Sun <xutao.sun@intel.com> To: dev@dpdk.org Date: Tue, 27 Oct 2015 13:10:58 +0800 Subject: [dpdk-dev] [PATCH v2] examples/vmdq: Fix the core dump issue when mem_pool is more than 34 Message-Id: <1445922658-4955-1-git-send-email-xutao.sun@intel.com> In-Reply-To: <1444721365-1065-1-git-send-email-xutao.sun@intel.com> References: <1444721365-1065-1-git-send-email-xutao.sun@intel.com> |
Commit Message
Xutao Sun
Oct. 27, 2015, 5:10 a.m. UTC
Macro MAX_QUEUES was defined as 128, which in theory allows only 16
vmdq_pools. Running the vmdq app with more than 34 vmdq_pools causes a
core dump.
Changing MAX_QUEUES to 1024 solves this issue.
Signed-off-by: Xutao Sun <xutao.sun@intel.com>
---
v2:
- rectify the NUM_MBUFS_PER_PORT since MAX_QUEUES has been changed
examples/vmdq/main.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
Comments
Hi Xutao,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Xutao Sun
> Sent: Tuesday, October 27, 2015 5:11 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v2] examples/vmdq: Fix the core dump issue
> when mem_pool is more than 34
>
> [...]
>
> -#define MAX_QUEUES 128
> +#define MAX_QUEUES 1024
> /*
>  * For 10 GbE, 128 queues require roughly
>  * 128*512 (RX/TX_queue_nb * RX/TX_ring_descriptors_nb) per port.
>  */
> -#define NUM_MBUFS_PER_PORT (128*512)
> +#define NUM_MBUFS_PER_PORT (MAX_QUEUES * RTE_MAX(RTE_TEST_RX_DESC_DEFAULT, \
> +						RTE_TEST_TX_DESC_DEFAULT))
>
> [...]

Please change the comment above the definition, as you have changed the code related to it, i.e. it is not 128*512 anymore.

Pablo
> -----Original Message-----
> From: De Lara Guarch, Pablo
> Sent: Tuesday, October 27, 2015 3:55 PM
> To: Sun, Xutao; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2] examples/vmdq: Fix the core dump issue
> when mem_pool is more than 34
>
> [...]
>
> Please, change the comment above, as you have change code related to it,
> i.e. it is not 128*512 anymore.
>
> Pablo

Hi, Pablo

I described how I changed the code in version 2 below "v2". I think the main purpose of the patch is to modify MAX_QUEUES, so I only described the other changes briefly.

Thanks,
Xutao
> -----Original Message-----
> From: Sun, Xutao
> Sent: Tuesday, October 27, 2015 1:11 PM
> To: dev@dpdk.org
> Cc: Wu, Jingjing; Zhang, Helin; Sun, Xutao
> Subject: [PATCH v2] examples/vmdq: Fix the core dump issue when
> mem_pool is more than 34
>
> Macro MAX_QUEUES was defined to 128, only allow 16 vmdq_pools in
> theory.
> When running vmdq_app with more than 34 vmdq_pools, it will cause the
> core_dump issue.
> Change MAX_QUEUES to 1024 will solve this issue.
>
> Signed-off-by: Xutao Sun <xutao.sun@intel.com>

Acked-by: Jingjing Wu <jingjing.wu@intel.com>
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index a142d49..bba5164 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -69,12 +69,13 @@
 #include <rte_mbuf.h>
 #include <rte_memcpy.h>
 
-#define MAX_QUEUES 128
+#define MAX_QUEUES 1024
 /*
  * For 10 GbE, 128 queues require roughly
  * 128*512 (RX/TX_queue_nb * RX/TX_ring_descriptors_nb) per port.
  */
-#define NUM_MBUFS_PER_PORT (128*512)
+#define NUM_MBUFS_PER_PORT (MAX_QUEUES * RTE_MAX(RTE_TEST_RX_DESC_DEFAULT, \
+						RTE_TEST_TX_DESC_DEFAULT))
 #define MBUF_CACHE_SIZE 64
 
 #define MAX_PKT_BURST 32