From: Baoquan He <bhe@redhat.com>
To: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>,
akpm@linux-foundation.org, mhocko@kernel.org,
rientjes@google.com, mgorman@suse.de, walken@google.com,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Jianchao Guo <guojianchao@bytedance.com>
Subject: Re: [PATCH v4] mm/hugetlb: add mempolicy check in the reservation routine
Date: Wed, 29 Jul 2020 18:33:59 +0800 [thread overview]
Message-ID: <20200729103359.GE14854@MiWiFi-R3L-srv> (raw)
In-Reply-To: <1b507031-d475-b495-bb4a-2cd9e665d02f@oracle.com>
On 07/28/20 at 09:46am, Mike Kravetz wrote:
> On 7/28/20 6:24 AM, Baoquan He wrote:
> > Hi Muchun,
> >
> > On 07/28/20 at 11:49am, Muchun Song wrote:
> >> In the reservation routine, we only check whether the cpuset meets
> >> the memory allocation requirements, but we ignore the mempolicy of
> >> the MPOL_BIND case. If an mmap() of hugetlb memory succeeds, the
> >> subsequent memory allocation may still fail due to mempolicy
> >> restrictions, and the process receives a SIGBUS signal. This can be
> >> reproduced by the following steps.
> >>
> >> 1) Compile the test case.
> >> cd tools/testing/selftests/vm/
> >> gcc map_hugetlb.c -o map_hugetlb
> >>
> >> 2) Pre-allocate huge pages. Suppose there are 2 numa nodes in the
> >> system. Each node will pre-allocate one huge page.
> >> echo 2 > /proc/sys/vm/nr_hugepages
> >>
> >> 3) Run the test case (mmap 4MB). We receive the SIGBUS signal.
> >> numactl --membind=0 ./map_hugetlb 4
> >
> > I think supporting the mempolicy of the MPOL_BIND case is a good idea.
> > I am wondering about the other mempolicy cases, e.g. MPOL_INTERLEAVE and
> > MPOL_PREFERRED. Asking because we already have similar handling for
> > writing to the sysfs and proc nr_hugepages_mempolicy interfaces. Please
> > see __nr_hugepages_store_common() for details.
>
> There is a high level difference in the function of this code and the code
> called by the sysfs and proc interfaces. This patch is dealing with reserving
> huge pages in the pool for later use. The sysfs and proc interfaces are
> allocating huge pages to be added to the pool.
>
> Using mempolicy to decide how to allocate huge pages is pretty
> straightforward. Using mempolicy to reserve pages is almost impossible to get
> correct. The comment at the beginning of hugetlb_acct_memory() and modified
> by this patch summarizes the issues.
>
> IMO, at this time it makes little sense to perform checks for more than
> MPOL_BIND at reservation time. If we ever take on the monumental task of
> supporting mempolicy directed per-node reservations throughout the life of
> a process, support for other policies will need to be taken into account.
I haven't fully worked out the difficulty of using mempolicy here; I will
read more of the code and digest your explanation. Thanks a lot for these
details.
Thanks
Baoquan
Thread overview: 7+ messages
2020-07-28 3:49 Muchun Song
2020-07-28 13:24 ` Baoquan He
2020-07-28 14:16 ` [External] " Muchun Song
2020-07-28 16:46 ` Mike Kravetz
2020-07-29 10:33 ` Baoquan He [this message]
2020-08-06 7:45 ` Muchun Song
2020-08-07 1:22 ` Andrew Morton