From: Muchun Song <songmuchun@bytedance.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: mike.kravetz@oracle.com,
	Andrew Morton <akpm@linux-foundation.org>,
	 David Rientjes <rientjes@google.com>,
	mgorman@suse.de, walken@google.com,
	 Linux Memory Management List <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	 Jianchao Guo <guojianchao@bytedance.com>
Subject: Re: [Phishing Risk] [External] Re: [PATCH v2] mm/hugetlb: add mempolicy check in the reservation routine
Date: Fri, 24 Jul 2020 21:56:29 +0800
Message-ID: <CAMZfGtUVHN4HA45d18zxQVUJvWyVPimvKG=y3YDPJBhu4ocLPQ@mail.gmail.com>
In-Reply-To: <20200724113415.GG4061@dhcp22.suse.cz>

On Fri, Jul 24, 2020 at 7:34 PM Michal Hocko <mhocko@kernel.org> wrote:
>
> On Fri 24-07-20 18:03:06, Muchun Song wrote:
> > In the reservation routine, we only check whether the cpuset meets
> > the memory allocation requirements, but we ignore the mempolicy in
> > the MPOL_BIND case. As a result, an mmap of hugetlb memory can
> > succeed while the subsequent page allocation fails due to mempolicy
> > restrictions, and the process receives a SIGBUS signal. This can be
> > reproduced by the following steps.
> >
> >  1) Compile the test case.
> >     cd tools/testing/selftests/vm/
> >     gcc map_hugetlb.c -o map_hugetlb
> >
> >  2) Pre-allocate huge pages. Suppose there are 2 NUMA nodes in the
> >     system; each node will pre-allocate one huge page.
> >     echo 2 > /proc/sys/vm/nr_hugepages
> >
> >  3) Run the test case (mmap 4MB). We receive a SIGBUS signal.
> >     numactl --membind=0 ./map_hugetlb 4
> >
> > With this patch applied, the mmap in step 3) fails with
> > "mmap: Cannot allocate memory".
> >
> > Reported-by: Jianchao Guo <guojianchao@bytedance.com>
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >
> > changelog in v2:
> >  1) Reuse policy_nodemask().
> >
> >  include/linux/mempolicy.h |  1 +
> >  mm/hugetlb.c              | 19 ++++++++++++++++---
> >  mm/mempolicy.c            |  2 +-
> >  3 files changed, 18 insertions(+), 4 deletions(-)
> >
> > diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
> > index ea9c15b60a96..6b9640f1c990 100644
> > --- a/include/linux/mempolicy.h
> > +++ b/include/linux/mempolicy.h
> > @@ -152,6 +152,7 @@ extern int huge_node(struct vm_area_struct *vma,
> >  extern bool init_nodemask_of_mempolicy(nodemask_t *mask);
> >  extern bool mempolicy_nodemask_intersects(struct task_struct *tsk,
> >                               const nodemask_t *mask);
> > +extern nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy);
> >  extern unsigned int mempolicy_slab_node(void);
> >
> >  extern enum zone_type policy_zone;
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 589c330df4db..a753fe8591b4 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -3463,12 +3463,25 @@ static int __init default_hugepagesz_setup(char *s)
> >  }
> >  __setup("default_hugepagesz=", default_hugepagesz_setup);
> >
> > -static unsigned int cpuset_mems_nr(unsigned int *array)
> > +static unsigned int allowed_mems_nr(struct hstate *h)
> >  {
> >       int node;
> >       unsigned int nr = 0;
> > +     struct mempolicy *mpol = get_task_policy(current);
> > +     nodemask_t *mpol_allowed, *mems_allowed, nodemask;
> > +     unsigned int *array = h->free_huge_pages_node;
> > +     gfp_t gfp_mask = htlb_alloc_mask(h);
> > +
> > +     mpol_allowed = policy_nodemask(gfp_mask, mpol);
> > +     if (mpol_allowed) {
> > +             nodes_and(nodemask, cpuset_current_mems_allowed,
> > +                       *mpol_allowed);
> > +             mems_allowed = &nodemask;
> > +     } else {
> > +             mems_allowed = &cpuset_current_mems_allowed;
> > +     }
>
> I believe you can simplify this and use a pattern similar to the page
> allocator's. Something like
>
>         for_each_node_mask(node, *mpol_allowed) {
>                 if (node_isset(node, cpuset_current_mems_allowed))
>                         nr += array[node];
>         }
>
> There shouldn't be any need to allocate a potentially large nodemask on
> the stack.

An unsigned long can hold 64 nodes, so I think the nodemask uses only a
little stack memory. Right?
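
That said, if we do want to avoid the on-stack nodemask entirely, the
helper could iterate the cpuset mask and filter by mempolicy instead.
A rough, untested sketch based on the patch above (policy_nodemask()
returns NULL unless the policy is an applicable MPOL_BIND, hence the
NULL check):

    static unsigned int allowed_mems_nr(struct hstate *h)
    {
            int node;
            unsigned int nr = 0;
            unsigned int *array = h->free_huge_pages_node;
            gfp_t gfp_mask = htlb_alloc_mask(h);
            nodemask_t *mpol_allowed = policy_nodemask(gfp_mask,
                                            get_task_policy(current));

            /* Walk the nodes allowed by the cpuset and count free huge
             * pages only on nodes the mempolicy also permits. */
            for_each_node_mask(node, cpuset_current_mems_allowed) {
                    if (!mpol_allowed || node_isset(node, *mpol_allowed))
                            nr += array[node];
            }

            return nr;
    }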

> --
> Michal Hocko
> SUSE Labs



-- 
Yours,
Muchun

