[dpdk-dev,v4] eal: make hugetlb initialization more robust

Message ID 1463013881-27985-1-git-send-email-jianfeng.tan@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon

Commit Message

Jianfeng Tan May 12, 2016, 12:44 a.m. UTC
  This patch adds an option, --huge-trybest, to use a recovery mechanism for
the case where fewer hugepages are actually usable than declared in sysfs.
It relies on a memory access to fault in hugepages and, if that access fails
with SIGBUS, recovers to the previously saved stack environment with
siglongjmp().

Besides, this solution fixes an issue when hugetlbfs is mounted with a
size option. Currently DPDK does not respect the quota of a hugetlbfs
mount: it fails to initialize the EAL because it tries to map the number
of free hugepages in the system rather than the number specified by the
quota for that mount.

It is still an open issue with CONFIG_RTE_EAL_SINGLE_FILE_SEGMENTS. In
that case (such as the IVSHMEM target), hugetlbfs mounts with a quota will
fail to remap hugepages, as the code relies on having mapped all free
hugepages in the system.

Test example:
  a. cgcreate -g hugetlb:/test-subgroup
  b. cgset -r hugetlb.1GB.limit_in_bytes=2147483648 test-subgroup
  c. cgexec -g hugetlb:test-subgroup \
	  ./examples/helloworld/build/helloworld -c 0x2 -n 4 --huge-trybest
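To check that step b took effect, the limit can be read back from the cgroup's hugetlb controller file. A sketch, assuming the cgroup v1 hugetlb layout under /sys/fs/cgroup/hugetlb (the helper name and path are illustrative, not part of the patch):

```c
#include <stdio.h>

/* Read an unsigned limit value from a cgroup file such as
 * hugetlb.1GB.limit_in_bytes; returns 0 on any error. */
static unsigned long long read_limit(const char *path)
{
	unsigned long long v = 0;
	FILE *f = fopen(path, "r");

	if (f == NULL)
		return 0;
	if (fscanf(f, "%llu", &v) != 1)
		v = 0;
	fclose(f);
	return v;
}
```

After step b, read_limit("/sys/fs/cgroup/hugetlb/test-subgroup/hugetlb.1GB.limit_in_bytes") should return 2147483648 on a system laid out this way.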

Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
---
v4:
 - Change map_all_hugepages to return unsigned instead of int.
v3:
 - Reword commit message to include it fixes the hugetlbfs quota issue.
 - setjmp -> sigsetjmp.
 - Change the RTE_LOG level from ERR to DEBUG, as the condition does not
   indicate an init error at this point.
 - Fix the return value check of the second map_all_hugepages() call.
v2:
 - Address the compiling error by moving setjmp into a wrapper function.

 lib/librte_eal/common/eal_common_options.c |   4 +
 lib/librte_eal/common/eal_internal_cfg.h   |   1 +
 lib/librte_eal/common/eal_options.h        |   2 +
 lib/librte_eal/linuxapp/eal/eal.c          |   1 +
 lib/librte_eal/linuxapp/eal/eal_memory.c   | 118 +++++++++++++++++++++++++----
 5 files changed, 112 insertions(+), 14 deletions(-)
  

Comments

David Marchand May 17, 2016, 4:39 p.m. UTC | #1
Hello Jianfeng,

On Thu, May 12, 2016 at 2:44 AM, Jianfeng Tan <jianfeng.tan@intel.com> wrote:
> This patch adds an option, --huge-trybest, to use a recover mechanism to
> the case that there are not so many hugepages (declared in sysfs), which
> can be used. It relys on a mem access to fault-in hugepages, and if fails
> with SIGBUS, recover to previously saved stack environment with
> siglongjmp().
>
> Besides, this solution fixes an issue when hugetlbfs is specified with an
> option of size. Currently DPDK does not respect the quota of a hugetblfs
> mount. It fails to init the EAL because it tries to map the number of free
> hugepages in the system rather than using the number specified in the quota
> for that mount.
>
> It's still an open issue with CONFIG_RTE_EAL_SINGLE_FILE_SEGMENTS. Under
> this case (such as IVSHMEM target), having hugetlbfs mounts with quota will
> fail to remap hugepages as it relies on having mapped all free hugepages
> in the system.

For such a case, maybe having some warning log message when it
fails would help the user.
+ a known issue in the release notes ?


> diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
> index 5b9132c..8c77010 100644
> --- a/lib/librte_eal/linuxapp/eal/eal_memory.c
> +++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
> @@ -417,12 +434,33 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl,
>                         hugepg_tbl[i].final_va = virtaddr;
>                 }
>
> +               if (orig && internal_config.huge_trybest) {
> +                       /* In linux, hugetlb limitations, like cgroup, are
> +                        * enforced at fault time instead of mmap(), even
> +                        * with the option of MAP_POPULATE. Kernel will send
> +                        * a SIGBUS signal. To avoid to be killed, save stack
> +                        * environment here, if SIGBUS happens, we can jump
> +                        * back here.
> +                        */
> +                       if (wrap_sigsetjmp()) {
> +                               RTE_LOG(DEBUG, EAL, "SIGBUS: Cannot mmap more "
> +                                       "hugepages of size %u MB\n",
> +                                       (unsigned)(hugepage_sz / 0x100000));
> +                               munmap(virtaddr, hugepage_sz);
> +                               close(fd);
> +                               unlink(hugepg_tbl[i].filepath);
> +                               return i;
> +                       }
> +                       *(int *)virtaddr = 0;
> +               }
> +
> +
>                 /* set shared flock on the file. */
>                 if (flock(fd, LOCK_SH | LOCK_NB) == -1) {
> -                       RTE_LOG(ERR, EAL, "%s(): Locking file failed:%s \n",
> +                       RTE_LOG(DEBUG, EAL, "%s(): Locking file failed:%s \n",
>                                 __func__, strerror(errno));
>                         close(fd);
> -                       return -1;
> +                       return i;
>                 }
>
>                 close(fd);

Maybe I missed something, but we are writing into some hugepage before
the flock has been called.
Are we sure there is nobody else using this hugepage ?

Especially, can't this cause trouble to a running primary process if
we start the exact same primary process?
  
Thomas Monjalon May 17, 2016, 4:40 p.m. UTC | #2
2016-05-12 00:44, Jianfeng Tan:
> This patch adds an option, --huge-trybest, to use a recover mechanism to
> the case that there are not so many hugepages (declared in sysfs), which
> can be used. It relys on a mem access to fault-in hugepages, and if fails

relys -> relies

> with SIGBUS, recover to previously saved stack environment with
> siglongjmp().
> 
> Besides, this solution fixes an issue when hugetlbfs is specified with an
> option of size. Currently DPDK does not respect the quota of a hugetblfs
> mount. It fails to init the EAL because it tries to map the number of free
> hugepages in the system rather than using the number specified in the quota
> for that mount.

This looks like a bug. Why add an option?
What is the benefit of the old behaviour of not using --try-best?

> +static sigjmp_buf jmpenv;
> +
> +static void sigbus_handler(int signo __rte_unused)
> +{
> +	siglongjmp(jmpenv, 1);
> +}
> +
> +/* Put setjmp into a wrap method to avoid compiling error. Any non-volatile,
> + * non-static local variable in the stack frame calling sigsetjmp might be
> + * clobbered by a call to longjmp.
> + */
> +static int wrap_sigsetjmp(void)
> +{
> +	return sigsetjmp(jmpenv, 1);
> +}

Please add the word "huge" to these variables and functions.

> +static struct sigaction action_old;
> +static int need_recover;
> +
> +static void
> +register_sigbus(void)
> +{
> +	sigset_t mask;
> +	struct sigaction action;
> +
> +	sigemptyset(&mask);
> +	sigaddset(&mask, SIGBUS);
> +	action.sa_flags = 0;
> +	action.sa_mask = mask;
> +	action.sa_handler = sigbus_handler;
> +
> +	need_recover = !sigaction(SIGBUS, &action, &action_old);
> +}
> +
> +static void
> +recover_sigbus(void)
> +{
> +	if (need_recover) {
> +		sigaction(SIGBUS, &action_old, NULL);
> +		need_recover = 0;
> +	}
> +}

Idem, Please add the word "huge".
  
Sergio Gonzalez Monroy May 18, 2016, 7:56 a.m. UTC | #3
On 17/05/2016 17:39, David Marchand wrote:
> Hello Jianfeng,
>
> On Thu, May 12, 2016 at 2:44 AM, Jianfeng Tan <jianfeng.tan@intel.com> wrote:
>> This patch adds an option, --huge-trybest, to use a recover mechanism to
>> the case that there are not so many hugepages (declared in sysfs), which
>> can be used. It relys on a mem access to fault-in hugepages, and if fails
>> with SIGBUS, recover to previously saved stack environment with
>> siglongjmp().
>>
>> Besides, this solution fixes an issue when hugetlbfs is specified with an
>> option of size. Currently DPDK does not respect the quota of a hugetblfs
>> mount. It fails to init the EAL because it tries to map the number of free
>> hugepages in the system rather than using the number specified in the quota
>> for that mount.
>>
>> It's still an open issue with CONFIG_RTE_EAL_SINGLE_FILE_SEGMENTS. Under
>> this case (such as IVSHMEM target), having hugetlbfs mounts with quota will
>> fail to remap hugepages as it relies on having mapped all free hugepages
>> in the system.
> For such a case, maybe having some warning log message when it
> fails would help the user.
> + a known issue in the release notes ?
>
>
>> diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
>> index 5b9132c..8c77010 100644
>> --- a/lib/librte_eal/linuxapp/eal/eal_memory.c
>> +++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
>> @@ -417,12 +434,33 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl,
>>                          hugepg_tbl[i].final_va = virtaddr;
>>                  }
>>
>> +               if (orig && internal_config.huge_trybest) {
>> +                       /* In linux, hugetlb limitations, like cgroup, are
>> +                        * enforced at fault time instead of mmap(), even
>> +                        * with the option of MAP_POPULATE. Kernel will send
>> +                        * a SIGBUS signal. To avoid to be killed, save stack
>> +                        * environment here, if SIGBUS happens, we can jump
>> +                        * back here.
>> +                        */
>> +                       if (wrap_sigsetjmp()) {
>> +                               RTE_LOG(DEBUG, EAL, "SIGBUS: Cannot mmap more "
>> +                                       "hugepages of size %u MB\n",
>> +                                       (unsigned)(hugepage_sz / 0x100000));
>> +                               munmap(virtaddr, hugepage_sz);
>> +                               close(fd);
>> +                               unlink(hugepg_tbl[i].filepath);
>> +                               return i;
>> +                       }
>> +                       *(int *)virtaddr = 0;
>> +               }
>> +
>> +
>>                  /* set shared flock on the file. */
>>                  if (flock(fd, LOCK_SH | LOCK_NB) == -1) {
>> -                       RTE_LOG(ERR, EAL, "%s(): Locking file failed:%s \n",
>> +                       RTE_LOG(DEBUG, EAL, "%s(): Locking file failed:%s \n",
>>                                  __func__, strerror(errno));
>>                          close(fd);
>> -                       return -1;
>> +                       return i;
>>                  }
>>
>>                  close(fd);
> Maybe I missed something, but we are writing into some hugepage before
> the flock has been called.
> Are we sure there is nobody else using this hugepage ?
>
> Especially, can't this cause trouble to a primary process running if
> we start the exact same primary process ?
>

We lock the hugepage directory during eal_hugepage_info_init(), and we do
not unlock it until eal_memory_init() has finished.

I think that takes care of that case.

Sergio
  
Sergio Gonzalez Monroy May 18, 2016, 8:06 a.m. UTC | #4
On 17/05/2016 17:40, Thomas Monjalon wrote:
> 2016-05-12 00:44, Jianfeng Tan:
>> This patch adds an option, --huge-trybest, to use a recover mechanism to
>> the case that there are not so many hugepages (declared in sysfs), which
>> can be used. It relys on a mem access to fault-in hugepages, and if fails
> relys -> relies
>
>> with SIGBUS, recover to previously saved stack environment with
>> siglongjmp().
>>
>> Besides, this solution fixes an issue when hugetlbfs is specified with an
>> option of size. Currently DPDK does not respect the quota of a hugetblfs
>> mount. It fails to init the EAL because it tries to map the number of free
>> hugepages in the system rather than using the number specified in the quota
>> for that mount.
> It looks to be a bug. Why adding an option?
> What is the benefit of the old behaviour, not using --try-best?

I do not see any benefit to the old behavior.
Given that we need the signal handling for the cgroup use case, I would be
inclined to use this method as the default instead of trying to figure out
how many hugepages we have free, etc.

Thoughts?

Sergio

>> +static sigjmp_buf jmpenv;
>> +
>> +static void sigbus_handler(int signo __rte_unused)
>> +{
>> +	siglongjmp(jmpenv, 1);
>> +}
>> +
>> +/* Put setjmp into a wrap method to avoid compiling error. Any non-volatile,
>> + * non-static local variable in the stack frame calling sigsetjmp might be
>> + * clobbered by a call to longjmp.
>> + */
>> +static int wrap_sigsetjmp(void)
>> +{
>> +	return sigsetjmp(jmpenv, 1);
>> +}
> Please add the word "huge" to these variables and functions.
>
>> +static struct sigaction action_old;
>> +static int need_recover;
>> +
>> +static void
>> +register_sigbus(void)
>> +{
>> +	sigset_t mask;
>> +	struct sigaction action;
>> +
>> +	sigemptyset(&mask);
>> +	sigaddset(&mask, SIGBUS);
>> +	action.sa_flags = 0;
>> +	action.sa_mask = mask;
>> +	action.sa_handler = sigbus_handler;
>> +
>> +	need_recover = !sigaction(SIGBUS, &action, &action_old);
>> +}
>> +
>> +static void
>> +recover_sigbus(void)
>> +{
>> +	if (need_recover) {
>> +		sigaction(SIGBUS, &action_old, NULL);
>> +		need_recover = 0;
>> +	}
>> +}
> Idem, Please add the word "huge".
>
  
David Marchand May 18, 2016, 9:34 a.m. UTC | #5
Hello Sergio,

On Wed, May 18, 2016 at 9:56 AM, Sergio Gonzalez Monroy
<sergio.gonzalez.monroy@intel.com> wrote:
> On 17/05/2016 17:39, David Marchand wrote:
>>> diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c
>>> b/lib/librte_eal/linuxapp/eal/eal_memory.c
>>> index 5b9132c..8c77010 100644
>>> --- a/lib/librte_eal/linuxapp/eal/eal_memory.c
>>> +++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
>>> @@ -417,12 +434,33 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl,
>>>                          hugepg_tbl[i].final_va = virtaddr;
>>>                  }
>>>
>>> +               if (orig && internal_config.huge_trybest) {
>>> +                       /* In linux, hugetlb limitations, like cgroup,
>>> are
>>> +                        * enforced at fault time instead of mmap(), even
>>> +                        * with the option of MAP_POPULATE. Kernel will
>>> send
>>> +                        * a SIGBUS signal. To avoid to be killed, save
>>> stack
>>> +                        * environment here, if SIGBUS happens, we can
>>> jump
>>> +                        * back here.
>>> +                        */
>>> +                       if (wrap_sigsetjmp()) {
>>> +                               RTE_LOG(DEBUG, EAL, "SIGBUS: Cannot mmap
>>> more "
>>> +                                       "hugepages of size %u MB\n",
>>> +                                       (unsigned)(hugepage_sz /
>>> 0x100000));
>>> +                               munmap(virtaddr, hugepage_sz);
>>> +                               close(fd);
>>> +                               unlink(hugepg_tbl[i].filepath);
>>> +                               return i;
>>> +                       }
>>> +                       *(int *)virtaddr = 0;
>>> +               }
>>> +
>>> +
>>>                  /* set shared flock on the file. */
>>>                  if (flock(fd, LOCK_SH | LOCK_NB) == -1) {
>>> -                       RTE_LOG(ERR, EAL, "%s(): Locking file failed:%s
>>> \n",
>>> +                       RTE_LOG(DEBUG, EAL, "%s(): Locking file failed:%s
>>> \n",
>>>                                  __func__, strerror(errno));
>>>                          close(fd);
>>> -                       return -1;
>>> +                       return i;
>>>                  }
>>>
>>>                  close(fd);
>>
>> Maybe I missed something, but we are writing into some hugepage before
>> the flock has been called.
>> Are we sure there is nobody else using this hugepage ?
>>
>> Especially, can't this cause trouble to a primary process running if
>> we start the exact same primary process ?
>>
>
> We lock the hugepage directory during eal_hugepage_info_init(), and we do
> not unlock
> until we have finished eal_memory_init.
>
> I think that takes care of that case.

Yes, thanks.
  
David Marchand May 18, 2016, 9:38 a.m. UTC | #6
On Wed, May 18, 2016 at 10:06 AM, Sergio Gonzalez Monroy
<sergio.gonzalez.monroy@intel.com> wrote:
> On 17/05/2016 17:40, Thomas Monjalon wrote:
>>
>> 2016-05-12 00:44, Jianfeng Tan:
>>>
>>> This patch adds an option, --huge-trybest, to use a recover mechanism to
>>> the case that there are not so many hugepages (declared in sysfs), which
>>> can be used. It relys on a mem access to fault-in hugepages, and if fails
>>
>> relys -> relies
>>
>>> with SIGBUS, recover to previously saved stack environment with
>>> siglongjmp().
>>>
>>> Besides, this solution fixes an issue when hugetlbfs is specified with an
>>> option of size. Currently DPDK does not respect the quota of a hugetblfs
>>> mount. It fails to init the EAL because it tries to map the number of
>>> free
>>> hugepages in the system rather than using the number specified in the
>>> quota
>>> for that mount.
>>
>> It looks to be a bug. Why adding an option?
>> What is the benefit of the old behaviour, not using --try-best?
>
>
> I do not see any benefit to the old behavior.
> Given that we need the signal handling for the cgroup use case, I would be
> inclined to use
> this method as the default instead of trying to figure out how many
> hugepages we have free, etc.

+1
  
Jianfeng Tan May 19, 2016, 2 a.m. UTC | #7
Hi David,


On 5/18/2016 12:39 AM, David Marchand wrote:
> Hello Jianfeng,
>
> On Thu, May 12, 2016 at 2:44 AM, Jianfeng Tan <jianfeng.tan@intel.com> wrote:
>> This patch adds an option, --huge-trybest, to use a recover mechanism to
>> the case that there are not so many hugepages (declared in sysfs), which
>> can be used. It relys on a mem access to fault-in hugepages, and if fails
>> with SIGBUS, recover to previously saved stack environment with
>> siglongjmp().
>>
>> Besides, this solution fixes an issue when hugetlbfs is specified with an
>> option of size. Currently DPDK does not respect the quota of a hugetblfs
>> mount. It fails to init the EAL because it tries to map the number of free
>> hugepages in the system rather than using the number specified in the quota
>> for that mount.
>>
>> It's still an open issue with CONFIG_RTE_EAL_SINGLE_FILE_SEGMENTS. Under
>> this case (such as IVSHMEM target), having hugetlbfs mounts with quota will
>> fail to remap hugepages as it relies on having mapped all free hugepages
>> in the system.
>
>
>
>> diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
>> index 5b9132c..8c77010 100644
>> --- a/lib/librte_eal/linuxapp/eal/eal_memory.c
>> +++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
>> @@ -417,12 +434,33 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl,
>>                          hugepg_tbl[i].final_va = virtaddr;
>>                  }
>>
>> +               if (orig && internal_config.huge_trybest) {
>> +                       /* In linux, hugetlb limitations, like cgroup, are
>> +                        * enforced at fault time instead of mmap(), even
>> +                        * with the option of MAP_POPULATE. Kernel will send
>> +                        * a SIGBUS signal. To avoid to be killed, save stack
>> +                        * environment here, if SIGBUS happens, we can jump
>> +                        * back here.
>> +                        */
>> +                       if (wrap_sigsetjmp()) {
>> +                               RTE_LOG(DEBUG, EAL, "SIGBUS: Cannot mmap more "
>> +                                       "hugepages of size %u MB\n",
>> +                                       (unsigned)(hugepage_sz / 0x100000));
> For such a case, maybe having some warning log message when it
> fails would help the user.
> + a known issue in the release notes ?

Do you mean that when SIGBUS is triggered, as here, we should warn the user
that it "fails to hold all free hugepages as sysfs shows", and
#ifdef RTE_EAL_SINGLE_FILE_SEGMENTS
/*we need to return error from rte_eal_init_memory */
#endif

Thanks,
Jianfeng
  
Jianfeng Tan May 19, 2016, 2:11 a.m. UTC | #8
Hi Thomas & Sergio,


On 5/18/2016 4:06 PM, Sergio Gonzalez Monroy wrote:
> On 17/05/2016 17:40, Thomas Monjalon wrote:
>> 2016-05-12 00:44, Jianfeng Tan:
>>> This patch adds an option, --huge-trybest, to use a recover 
>>> mechanism to
>>> the case that there are not so many hugepages (declared in sysfs), 
>>> which
>>> can be used. It relys on a mem access to fault-in hugepages, and if 
>>> fails
>> relys -> relies
>>
>>> with SIGBUS, recover to previously saved stack environment with
>>> siglongjmp().
>>>
>>> Besides, this solution fixes an issue when hugetlbfs is specified 
>>> with an
>>> option of size. Currently DPDK does not respect the quota of a 
>>> hugetblfs
>>> mount. It fails to init the EAL because it tries to map the number 
>>> of free
>>> hugepages in the system rather than using the number specified in 
>>> the quota
>>> for that mount.
>> It looks to be a bug. Why adding an option?
>> What is the benefit of the old behaviour, not using --try-best?
>
> I do not see any benefit to the old behavior.
> Given that we need the signal handling for the cgroup use case, I 
> would be inclined to use
> this method as the default instead of trying to figure out how many 
> hugepages we have free, etc.
>
> Thoughts?

I tend to make this method the default too, with some warning logs as
suggested by David, and to return an error from rte_eal_memory() when SIGBUS
is triggered in the RTE_EAL_SINGLE_FILE_SEGMENTS case.

Thomas, all other trivial issues will be fixed in the next version. Thank you!

Thanks,
Jianfeng

>
> Sergio
>
>>> +static sigjmp_buf jmpenv;
>>> +
>>> +static void sigbus_handler(int signo __rte_unused)
>>> +{
>>> +    siglongjmp(jmpenv, 1);
>>> +}
>>> +
>>> +/* Put setjmp into a wrap method to avoid compiling error. Any 
>>> non-volatile,
>>> + * non-static local variable in the stack frame calling sigsetjmp 
>>> might be
>>> + * clobbered by a call to longjmp.
>>> + */
>>> +static int wrap_sigsetjmp(void)
>>> +{
>>> +    return sigsetjmp(jmpenv, 1);
>>> +}
>> Please add the word "huge" to these variables and functions.
>>
>>> +static struct sigaction action_old;
>>> +static int need_recover;
>>> +
>>> +static void
>>> +register_sigbus(void)
>>> +{
>>> +    sigset_t mask;
>>> +    struct sigaction action;
>>> +
>>> +    sigemptyset(&mask);
>>> +    sigaddset(&mask, SIGBUS);
>>> +    action.sa_flags = 0;
>>> +    action.sa_mask = mask;
>>> +    action.sa_handler = sigbus_handler;
>>> +
>>> +    need_recover = !sigaction(SIGBUS, &action, &action_old);
>>> +}
>>> +
>>> +static void
>>> +recover_sigbus(void)
>>> +{
>>> +    if (need_recover) {
>>> +        sigaction(SIGBUS, &action_old, NULL);
>>> +        need_recover = 0;
>>> +    }
>>> +}
>> Idem, Please add the word "huge".
>>
>
  

Patch

diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c
index 3efc90f..e9a111d 100644
--- a/lib/librte_eal/common/eal_common_options.c
+++ b/lib/librte_eal/common/eal_common_options.c
@@ -95,6 +95,7 @@  eal_long_options[] = {
 	{OPT_VFIO_INTR,         1, NULL, OPT_VFIO_INTR_NUM        },
 	{OPT_VMWARE_TSC_MAP,    0, NULL, OPT_VMWARE_TSC_MAP_NUM   },
 	{OPT_XEN_DOM0,          0, NULL, OPT_XEN_DOM0_NUM         },
+	{OPT_HUGE_TRYBEST,      0, NULL, OPT_HUGE_TRYBEST_NUM     },
 	{0,                     0, NULL, 0                        }
 };
 
@@ -899,6 +900,9 @@  eal_parse_common_option(int opt, const char *optarg,
 			return -1;
 		}
 		break;
+	case OPT_HUGE_TRYBEST_NUM:
+		internal_config.huge_trybest = 1;
+		break;
 
 	/* don't know what to do, leave this to caller */
 	default:
diff --git a/lib/librte_eal/common/eal_internal_cfg.h b/lib/librte_eal/common/eal_internal_cfg.h
index 5f1367e..90a3533 100644
--- a/lib/librte_eal/common/eal_internal_cfg.h
+++ b/lib/librte_eal/common/eal_internal_cfg.h
@@ -64,6 +64,7 @@  struct internal_config {
 	volatile unsigned force_nchannel; /**< force number of channels */
 	volatile unsigned force_nrank;    /**< force number of ranks */
 	volatile unsigned no_hugetlbfs;   /**< true to disable hugetlbfs */
+	volatile unsigned huge_trybest;   /**< try best to allocate hugepages */
 	unsigned hugepage_unlink;         /**< true to unlink backing files */
 	volatile unsigned xen_dom0_support; /**< support app running on Xen Dom0*/
 	volatile unsigned no_pci;         /**< true to disable PCI */
diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h
index a881c62..02397c5 100644
--- a/lib/librte_eal/common/eal_options.h
+++ b/lib/librte_eal/common/eal_options.h
@@ -83,6 +83,8 @@  enum {
 	OPT_VMWARE_TSC_MAP_NUM,
 #define OPT_XEN_DOM0          "xen-dom0"
 	OPT_XEN_DOM0_NUM,
+#define OPT_HUGE_TRYBEST      "huge-trybest"
+	OPT_HUGE_TRYBEST_NUM,
 	OPT_LONG_MAX_NUM
 };
 
diff --git a/lib/librte_eal/linuxapp/eal/eal.c b/lib/librte_eal/linuxapp/eal/eal.c
index 8aafd51..eeb1d4e 100644
--- a/lib/librte_eal/linuxapp/eal/eal.c
+++ b/lib/librte_eal/linuxapp/eal/eal.c
@@ -343,6 +343,7 @@  eal_usage(const char *prgname)
 	       "  --"OPT_CREATE_UIO_DEV"    Create /dev/uioX (usually done by hotplug)\n"
 	       "  --"OPT_VFIO_INTR"         Interrupt mode for VFIO (legacy|msi|msix)\n"
 	       "  --"OPT_XEN_DOM0"          Support running on Xen dom0 without hugetlbfs\n"
+	       "  --"OPT_HUGE_TRYBEST"      Try best to accommodate hugepages\n"
 	       "\n");
 	/* Allow the application to print its usage message too if hook is set */
 	if ( rte_application_usage_hook ) {
diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index 5b9132c..8c77010 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -80,6 +80,8 @@ 
 #include <errno.h>
 #include <sys/ioctl.h>
 #include <sys/time.h>
+#include <signal.h>
+#include <setjmp.h>
 
 #include <rte_log.h>
 #include <rte_memory.h>
@@ -309,6 +311,21 @@  get_virtual_area(size_t *size, size_t hugepage_sz)
 	return addr;
 }
 
+static sigjmp_buf jmpenv;
+
+static void sigbus_handler(int signo __rte_unused)
+{
+	siglongjmp(jmpenv, 1);
+}
+
+/* Put setjmp into a wrap method to avoid compiling error. Any non-volatile,
+ * non-static local variable in the stack frame calling sigsetjmp might be
+ * clobbered by a call to longjmp.
+ */
+static int wrap_sigsetjmp(void)
+{
+	return sigsetjmp(jmpenv, 1);
+}
 /*
  * Mmap all hugepages of hugepage table: it first open a file in
  * hugetlbfs, then mmap() hugepage_sz data in it. If orig is set, the
@@ -316,7 +333,7 @@  get_virtual_area(size_t *size, size_t hugepage_sz)
  * in hugepg_tbl[i].final_va. The second mapping (when orig is 0) tries to
  * map continguous physical blocks in contiguous virtual blocks.
  */
-static int
+static unsigned
 map_all_hugepages(struct hugepage_file *hugepg_tbl,
 		struct hugepage_info *hpi, int orig)
 {
@@ -394,9 +411,9 @@  map_all_hugepages(struct hugepage_file *hugepg_tbl,
 		/* try to create hugepage file */
 		fd = open(hugepg_tbl[i].filepath, O_CREAT | O_RDWR, 0755);
 		if (fd < 0) {
-			RTE_LOG(ERR, EAL, "%s(): open failed: %s\n", __func__,
+			RTE_LOG(DEBUG, EAL, "%s(): open failed: %s\n", __func__,
 					strerror(errno));
-			return -1;
+			return i;
 		}
 
 		/* map the segment, and populate page tables,
@@ -404,10 +421,10 @@  map_all_hugepages(struct hugepage_file *hugepg_tbl,
 		virtaddr = mmap(vma_addr, hugepage_sz, PROT_READ | PROT_WRITE,
 				MAP_SHARED | MAP_POPULATE, fd, 0);
 		if (virtaddr == MAP_FAILED) {
-			RTE_LOG(ERR, EAL, "%s(): mmap failed: %s\n", __func__,
+			RTE_LOG(DEBUG, EAL, "%s(): mmap failed: %s\n", __func__,
 					strerror(errno));
 			close(fd);
-			return -1;
+			return i;
 		}
 
 		if (orig) {
@@ -417,12 +434,33 @@  map_all_hugepages(struct hugepage_file *hugepg_tbl,
 			hugepg_tbl[i].final_va = virtaddr;
 		}
 
+		if (orig && internal_config.huge_trybest) {
+			/* In linux, hugetlb limitations, like cgroup, are
+			 * enforced at fault time instead of mmap(), even
+			 * with the option of MAP_POPULATE. Kernel will send
+			 * a SIGBUS signal. To avoid to be killed, save stack
+			 * environment here, if SIGBUS happens, we can jump
+			 * back here.
+			 */
+			if (wrap_sigsetjmp()) {
+				RTE_LOG(DEBUG, EAL, "SIGBUS: Cannot mmap more "
+					"hugepages of size %u MB\n",
+					(unsigned)(hugepage_sz / 0x100000));
+				munmap(virtaddr, hugepage_sz);
+				close(fd);
+				unlink(hugepg_tbl[i].filepath);
+				return i;
+			}
+			*(int *)virtaddr = 0;
+		}
+
+
 		/* set shared flock on the file. */
 		if (flock(fd, LOCK_SH | LOCK_NB) == -1) {
-			RTE_LOG(ERR, EAL, "%s(): Locking file failed:%s \n",
+			RTE_LOG(DEBUG, EAL, "%s(): Locking file failed:%s \n",
 				__func__, strerror(errno));
 			close(fd);
-			return -1;
+			return i;
 		}
 
 		close(fd);
@@ -430,7 +468,8 @@  map_all_hugepages(struct hugepage_file *hugepg_tbl,
 		vma_addr = (char *)vma_addr + hugepage_sz;
 		vma_len -= hugepage_sz;
 	}
-	return 0;
+
+	return i;
 }
 
 #ifdef RTE_EAL_SINGLE_FILE_SEGMENTS
@@ -1036,6 +1075,33 @@  calc_num_pages_per_socket(uint64_t * memory,
 	return total_num_pages;
 }
 
+static struct sigaction action_old;
+static int need_recover;
+
+static void
+register_sigbus(void)
+{
+	sigset_t mask;
+	struct sigaction action;
+
+	sigemptyset(&mask);
+	sigaddset(&mask, SIGBUS);
+	action.sa_flags = 0;
+	action.sa_mask = mask;
+	action.sa_handler = sigbus_handler;
+
+	need_recover = !sigaction(SIGBUS, &action, &action_old);
+}
+
+static void
+recover_sigbus(void)
+{
+	if (need_recover) {
+		sigaction(SIGBUS, &action_old, NULL);
+		need_recover = 0;
+	}
+}
+
 /*
  * Prepare physical memory mapping: fill configuration structure with
  * these infos, return 0 on success.
@@ -1122,8 +1188,12 @@  rte_eal_hugepage_init(void)
 
 	hp_offset = 0; /* where we start the current page size entries */
 
+	if (internal_config.huge_trybest)
+		register_sigbus();
+
 	/* map all hugepages and sort them */
 	for (i = 0; i < (int)internal_config.num_hugepage_sizes; i ++){
+		unsigned pages_old, pages_new;
 		struct hugepage_info *hpi;
 
 		/*
@@ -1137,10 +1207,24 @@  rte_eal_hugepage_init(void)
 			continue;
 
 		/* map all hugepages available */
-		if (map_all_hugepages(&tmp_hp[hp_offset], hpi, 1) < 0){
-			RTE_LOG(DEBUG, EAL, "Failed to mmap %u MB hugepages\n",
-					(unsigned)(hpi->hugepage_sz / 0x100000));
-			goto fail;
+		pages_old = hpi->num_pages[0];
+		pages_new = map_all_hugepages(&tmp_hp[hp_offset], hpi, 1);
+		if (pages_new < pages_old) {
+			RTE_LOG(DEBUG, EAL,
+				"%d not %d hugepages of size %u MB allocated\n",
+				pages_new, pages_old,
+				(unsigned)(hpi->hugepage_sz / 0x100000));
+			if (internal_config.huge_trybest) {
+				int pages = pages_old - pages_new;
+
+				internal_config.memory -=
+					hpi->hugepage_sz * pages;
+				nr_hugepages -= pages;
+				hpi->num_pages[0] = pages_new;
+				if (pages_new == 0)
+					continue;
+			} else
+				goto fail;
 		}
 
 		/* find physical addresses and sockets for each hugepage */
@@ -1172,8 +1256,9 @@  rte_eal_hugepage_init(void)
 		hp_offset += new_pages_count[i];
 #else
 		/* remap all hugepages */
-		if (map_all_hugepages(&tmp_hp[hp_offset], hpi, 0) < 0){
-			RTE_LOG(DEBUG, EAL, "Failed to remap %u MB pages\n",
+		if (map_all_hugepages(&tmp_hp[hp_offset], hpi, 0) !=
+		    hpi->num_pages[0]) {
+			RTE_LOG(ERR, EAL, "Failed to remap %u MB pages\n",
 					(unsigned)(hpi->hugepage_sz / 0x100000));
 			goto fail;
 		}
@@ -1187,6 +1272,9 @@  rte_eal_hugepage_init(void)
 #endif
 	}
 
+	if (internal_config.huge_trybest)
+		recover_sigbus();
+
 #ifdef RTE_EAL_SINGLE_FILE_SEGMENTS
 	nr_hugefiles = 0;
 	for (i = 0; i < (int) internal_config.num_hugepage_sizes; i++) {
@@ -1373,6 +1461,8 @@  rte_eal_hugepage_init(void)
 	return 0;
 
 fail:
+	if (internal_config.huge_trybest)
+		recover_sigbus();
 	free(tmp_hp);
 	return -1;
 }