From: "Kasireddy, Vivek" <vivek.kasireddy@intel.com>
To: Steve Sistare <steven.sistare@oracle.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>
Cc: Muchun Song <muchun.song@linux.dev>,
Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
Peter Xu <peterx@redhat.com>,
David Hildenbrand <david@redhat.com>,
Jason Gunthorpe <jgg@nvidia.com>
Subject: RE: [PATCH V1 3/5] mm/hugetlb: fix memfd_pin_folios resv_huge_pages leak
Date: Wed, 4 Sep 2024 01:04:30 +0000 [thread overview]
Message-ID: <IA0PR11MB71859E8FB22F495695AB5210F89C2@IA0PR11MB7185.namprd11.prod.outlook.com> (raw)
In-Reply-To: <1725373521-451395-4-git-send-email-steven.sistare@oracle.com>
Hi Steve,
> Subject: [PATCH V1 3/5] mm/hugetlb: fix memfd_pin_folios resv_huge_pages leak
>
> memfd_pin_folios followed by unpin_folios leaves resv_huge_pages elevated
> if the pages were not already faulted in. During a normal page fault,
> resv_huge_pages is consumed here:
>
>   hugetlb_fault()
>     alloc_hugetlb_folio()
>       dequeue_hugetlb_folio_vma()
>         dequeue_hugetlb_folio_nodemask()
>           dequeue_hugetlb_folio_node_exact()
>             free_huge_pages--
>       resv_huge_pages--
>
> During memfd_pin_folios, the page is created by calling
> alloc_hugetlb_folio_nodemask instead of alloc_hugetlb_folio, and
> resv_huge_pages is not modified:
>
>   memfd_alloc_folio()
>     alloc_hugetlb_folio_nodemask()
>       dequeue_hugetlb_folio_nodemask()
>         dequeue_hugetlb_folio_node_exact()
>           free_huge_pages--
>
> alloc_hugetlb_folio_nodemask has other callers that must not modify
> resv_huge_pages. Therefore, to fix, define an alternate version of
> alloc_hugetlb_folio_nodemask for this call site that adjusts
> resv_huge_pages.
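This split makes sense to me. Just to make sure I follow the accounting, here is a
toy model of the imbalance as I understand it (my own illustration, not kernel code;
the helper names below are made up, and the increment in release_folio() is only a
stand-in for however the reserve actually gets re-credited on the unpin/teardown path):

#include <assert.h>

/* One free huge page that also has an outstanding reservation. */
static int free_huge_pages = 1;
static int resv_huge_pages = 1;

/* Fault path: dequeue consumes a free page and a reservation. */
static void fault_path_alloc(void)
{
        free_huge_pages--;
        resv_huge_pages--;
}

/* Pre-fix pin path: dequeue only, reservation left untouched. */
static void pin_path_alloc_old(void)
{
        free_huge_pages--;
}

/* Post-fix pin path (alloc_hugetlb_folio_reserve): mirrors the fault path. */
static void pin_path_alloc_new(void)
{
        free_huge_pages--;
        resv_huge_pages--;
}

/* Teardown: page returns to the pool and the reserve is re-credited. */
static void release_folio(void)
{
        free_huge_pages++;
        resv_huge_pages++;
}

int main(void)
{
        pin_path_alloc_old();
        release_folio();
        assert(resv_huge_pages == 2);   /* leaked: one reservation too many */

        resv_huge_pages = 1;            /* reset the model */
        pin_path_alloc_new();
        release_folio();
        assert(resv_huge_pages == 1);   /* balanced, matching the fault path */
        return 0;
}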
>
> Fixes: 89c1905d9c14 ("mm/gup: introduce memfd_pin_folios() for pinning memfd folios")
>
> Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
> ---
>  include/linux/hugetlb.h | 10 ++++++++++
>  mm/hugetlb.c            | 17 +++++++++++++++++
>  mm/memfd.c              |  9 ++++-----
>  3 files changed, 31 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 45bf05a..3ddd69b 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -695,6 +695,9 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>  struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
>                                  nodemask_t *nmask, gfp_t gfp_mask,
>                                  bool allow_alloc_fallback);
> +struct folio *alloc_hugetlb_folio_reserve(struct hstate *h, int preferred_nid,
> +                                nodemask_t *nmask, gfp_t gfp_mask);
> +
>  int hugetlb_add_to_page_cache(struct folio *folio, struct address_space *mapping,
>                          pgoff_t idx);
>  void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
> @@ -1062,6 +1065,13 @@ static inline struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>  }
>  
>  static inline struct folio *
> +alloc_hugetlb_folio_reserve(struct hstate *h, int preferred_nid,
> +                                nodemask_t *nmask, gfp_t gfp_mask)
> +{
> +        return NULL;
> +}
> +
> +static inline struct folio *
>  alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
>                                  nodemask_t *nmask, gfp_t gfp_mask,
>                                  bool allow_alloc_fallback)
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index aaf508b..c2d44a1 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2564,6 +2564,23 @@ struct folio *alloc_buddy_hugetlb_folio_with_mpol(struct hstate *h,
>          return folio;
>  }
>
> +struct folio *alloc_hugetlb_folio_reserve(struct hstate *h, int preferred_nid,
> +                                nodemask_t *nmask, gfp_t gfp_mask)
> +{
> +        struct folio *folio;
> +
> +        spin_lock_irq(&hugetlb_lock);
> +        folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask, preferred_nid,
> +                                               nmask);
I am assuming a check for available_huge_pages(h) before calling dequeue would be
redundant here, since dequeue_hugetlb_folio_nodemask() simply returns NULL if no
huge pages are available?
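For reference, my rough recollection of how the existing nodemask allocator guards
the dequeue is below; this is a sketch from memory, may not match the tree exactly,
and the fallback allocation path is elided:

/*
 * Sketch of the existing alloc_hugetlb_folio_nodemask(), for comparison
 * with the new helper above; the available_huge_pages() check is only a
 * fast-path guard before the same dequeue call.
 */
struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
                nodemask_t *nmask, gfp_t gfp_mask, bool allow_alloc_fallback)
{
        spin_lock_irq(&hugetlb_lock);
        if (available_huge_pages(h)) {
                struct folio *folio;

                folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
                                                       preferred_nid, nmask);
                if (folio) {
                        spin_unlock_irq(&hugetlb_lock);
                        return folio;
                }
        }
        spin_unlock_irq(&hugetlb_lock);

        /* ...otherwise fall back to allocating a fresh folio (elided)... */
        return NULL;
}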
Acked-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
Thanks,
Vivek
> +        if (folio) {
> +                VM_BUG_ON(!h->resv_huge_pages);
> +                h->resv_huge_pages--;
> +        }
> +
> +        spin_unlock_irq(&hugetlb_lock);
> +        return folio;
> +}
> +
>  /* folio migration callback function */
>  struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
>                  nodemask_t *nmask, gfp_t gfp_mask, bool allow_alloc_fallback)
> diff --git a/mm/memfd.c b/mm/memfd.c
> index e7b7c52..bfe0e71 100644
> --- a/mm/memfd.c
> +++ b/mm/memfd.c
> @@ -82,11 +82,10 @@ struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx)
>                  gfp_mask = htlb_alloc_mask(hstate_file(memfd));
>                  gfp_mask &= ~(__GFP_HIGHMEM | __GFP_MOVABLE);
>
> -                folio = alloc_hugetlb_folio_nodemask(hstate_file(memfd),
> -                                                     numa_node_id(),
> -                                                     NULL,
> -                                                     gfp_mask,
> -                                                     false);
> +                folio = alloc_hugetlb_folio_reserve(hstate_file(memfd),
> +                                                    numa_node_id(),
> +                                                    NULL,
> +                                                    gfp_mask);
>                  if (folio && folio_try_get(folio)) {
>                          err = hugetlb_add_to_page_cache(folio, memfd->f_mapping,
> --
> 1.8.3.1
Thread overview: 16+ messages
2024-09-03 14:25 [PATCH V1 0/5] memfd-pin huge page fixes Steve Sistare
2024-09-03 14:25 ` [PATCH V1 1/5] mm/filemap: fix filemap_get_folios_contig THP panic Steve Sistare
2024-09-03 14:25 ` [PATCH V1 2/5] mm/hugetlb: fix memfd_pin_folios free_huge_pages leak Steve Sistare
2024-09-04 0:45 ` Kasireddy, Vivek
2024-09-04 14:52 ` Steven Sistare
2024-09-03 14:25 ` [PATCH V1 3/5] mm/hugetlb: fix memfd_pin_folios resv_huge_pages leak Steve Sistare
2024-09-04 1:04 ` Kasireddy, Vivek [this message]
2024-09-04 14:52 ` Steven Sistare
2024-09-03 14:25 ` [PATCH V1 4/5] mm/gup: fix memfd_pin_folios hugetlb page allocation Steve Sistare
2024-09-04 1:06 ` Kasireddy, Vivek
2024-09-04 14:51 ` Steven Sistare
2024-09-03 14:25 ` [PATCH V1 5/5] mm/gup: fix memfd_pin_folios alloc race panic Steve Sistare
2024-09-04 1:07 ` Kasireddy, Vivek
2024-09-04 1:12 ` [PATCH V1 0/5] memfd-pin huge page fixes Kasireddy, Vivek
2024-09-04 14:51 ` Steven Sistare
2024-09-06 8:09 ` Kasireddy, Vivek