From: Matthew Wilcox <willy@infradead.org>
To: Miaohe Lin <linmiaohe@huawei.com>
Cc: linux-mm@kvack.org, David Hildenbrand <david@redhat.com>,
Vlastimil Babka <vbabka@suse.cz>,
Muchun Song <muchun.song@linux.dev>,
Oscar Salvador <osalvador@suse.de>,
Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH 1/9] mm: Always initialise folio->_deferred_list
Date: Fri, 22 Mar 2024 13:00:04 +0000
Message-ID: <Zf2A1OWa-ibDVRlq@casper.infradead.org>
In-Reply-To: <41ea6bcc-f40c-a651-2aae-e68fdaeb1021@huawei.com>
On Fri, Mar 22, 2024 at 04:23:59PM +0800, Miaohe Lin wrote:
> > +++ b/mm/hugetlb.c
> > @@ -1796,7 +1796,8 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
> >  		destroy_compound_gigantic_folio(folio, huge_page_order(h));
> >  		free_gigantic_folio(folio, huge_page_order(h));
> >  	} else {
> > -		__free_pages(&folio->page, huge_page_order(h));
> > +		INIT_LIST_HEAD(&folio->_deferred_list);
> 
> Will it be better to add a comment to explain why INIT_LIST_HEAD is needed ?

Maybe?  Something like

	/* We reused this space for our own purposes */
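
With that comment in place, the else branch from patch 1/9 would end up
reading roughly like this (just a sketch pieced together from the hunk
quoted above; the closing brace is implied rather than quoted):

	} else {
		/* We reused this space for our own purposes */
		INIT_LIST_HEAD(&folio->_deferred_list);
		folio_put(folio);
	}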

> > +	folio_put(folio);
> 
> Can all __free_pages be replaced with folio_put in mm/hugetlb.c?

There's only one left, and indeed it can!
I'll drop this into my tree and send it as a proper patch later.
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 333f6278ef63..43cc7e6bc374 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2177,13 +2177,13 @@ static struct folio *alloc_buddy_hugetlb_folio(struct hstate *h,
 		nodemask_t *node_alloc_noretry)
 {
 	int order = huge_page_order(h);
-	struct page *page;
+	struct folio *folio;
 	bool alloc_try_hard = true;
 	bool retry = true;
 
 	/*
-	 * By default we always try hard to allocate the page with
-	 * __GFP_RETRY_MAYFAIL flag. However, if we are allocating pages in
+	 * By default we always try hard to allocate the folio with
+	 * __GFP_RETRY_MAYFAIL flag. However, if we are allocating folios in
 	 * a loop (to adjust global huge page counts) and previous allocation
 	 * failed, do not continue to try hard on the same node. Use the
 	 * node_alloc_noretry bitmap to manage this state information.
@@ -2196,43 +2196,42 @@ static struct folio *alloc_buddy_hugetlb_folio(struct hstate *h,
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
 retry:
-	page = __alloc_pages(gfp_mask, order, nid, nmask);
+	folio = __folio_alloc(gfp_mask, order, nid, nmask);
 
-	/* Freeze head page */
-	if (page && !page_ref_freeze(page, 1)) {
-		__free_pages(page, order);
+	if (folio && !folio_ref_freeze(folio, 1)) {
+		folio_put(folio);
 		if (retry) {	/* retry once */
 			retry = false;
 			goto retry;
 		}
 		/* WOW! twice in a row. */
-		pr_warn("HugeTLB head page unexpected inflated ref count\n");
-		page = NULL;
+		pr_warn("HugeTLB unexpected inflated folio ref count\n");
+		folio = NULL;
 	}
 
 	/*
-	 * If we did not specify __GFP_RETRY_MAYFAIL, but still got a page this
-	 * indicates an overall state change. Clear bit so that we resume
-	 * normal 'try hard' allocations.
+	 * If we did not specify __GFP_RETRY_MAYFAIL, but still got a
+	 * folio this indicates an overall state change. Clear bit so
+	 * that we resume normal 'try hard' allocations.
 	 */
-	if (node_alloc_noretry && page && !alloc_try_hard)
+	if (node_alloc_noretry && folio && !alloc_try_hard)
 		node_clear(nid, *node_alloc_noretry);
 
 	/*
-	 * If we tried hard to get a page but failed, set bit so that
+	 * If we tried hard to get a folio but failed, set bit so that
 	 * subsequent attempts will not try as hard until there is an
 	 * overall state change.
 	 */
-	if (node_alloc_noretry && !page && alloc_try_hard)
+	if (node_alloc_noretry && !folio && alloc_try_hard)
 		node_set(nid, *node_alloc_noretry);
 
-	if (!page) {
+	if (!folio) {
 		__count_vm_event(HTLB_BUDDY_PGALLOC_FAIL);
 		return NULL;
 	}
 
 	__count_vm_event(HTLB_BUDDY_PGALLOC);
-	return page_folio(page);
+	return folio;
 }
 
 static struct folio *__alloc_fresh_hugetlb_folio(struct hstate *h,
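
In case the ref-freeze dance above looks odd: it is the usual
allocate-and-freeze pattern, roughly the sketch below.  Illustration
only: the helper name is made up, it assumes the usual mm headers, and
it is not part of the patch.

	/*
	 * A freshly allocated folio has a refcount of 1, so
	 * folio_ref_freeze(folio, 1) drops it to 0 unless somebody took
	 * a transient reference in the meantime.  If the freeze fails,
	 * hand our reference back with folio_put() and let the caller
	 * retry.
	 */
	static struct folio *alloc_frozen_folio_sketch(gfp_t gfp,
			unsigned int order, int nid, nodemask_t *nmask)
	{
		struct folio *folio = __folio_alloc(gfp, order, nid, nmask);

		if (folio && !folio_ref_freeze(folio, 1)) {
			folio_put(folio);
			folio = NULL;
		}
		return folio;
	}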