From: Johannes Weiner <hannes@cmpxchg.org>
To: alexlzhu@fb.com
Cc: linux-mm@kvack.org, kernel-team@fb.com, willy@infradead.org,
riel@surriel.com
Subject: Re: [PATCH v3 3/3] mm: THP low utilization shrinker
Date: Thu, 13 Oct 2022 12:56:19 -0400
Message-ID: <Y0hDM0wBrsGW4RkA@cmpxchg.org>
In-Reply-To: <Y0g/WA+Q1g+zTO4M@cmpxchg.org>
Oof, fatfingered send vs postpone. Here is the rest ;)
On Thu, Oct 13, 2022 at 12:39:53PM -0400, Johannes Weiner wrote:
> On Wed, Oct 12, 2022 at 03:51:47PM -0700, alexlzhu@fb.com wrote:
> > +
> > +	if (get_page_unless_zero(head)) {
> > +		if (!trylock_page(head)) {
> > +			put_page(head);
> > +			return LRU_SKIP;
> > +		}
>
> The trylock could use a comment:
/* Inverse lock order from add_underutilized_thp() */
> > +		list_lru_isolate(lru, item);
> > +		spin_unlock_irq(lock);
> > +		num_utilized_pages = thp_number_utilized_pages(head);
> > +		bucket = thp_utilization_bucket(num_utilized_pages);
> > +		if (bucket < THP_UTIL_BUCKET_NR - 1) {
> > +			split_huge_page(head);
> > +			spin_lock_irq(lock);
The re-lock also needs to be unconditional, or you'll get a double
unlock in the not-split case.
> > @@ -2737,6 +2800,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> >  	struct folio *folio = page_folio(page);
> >  	struct deferred_split *ds_queue = get_deferred_split_queue(&folio->page);
> >  	XA_STATE(xas, &folio->mapping->i_pages, folio->index);
> > +	struct list_head *underutilized_thp_list = page_underutilized_thp_list(&folio->page);
> >  	struct anon_vma *anon_vma = NULL;
> >  	struct address_space *mapping = NULL;
> >  	int extra_pins, ret;
> > @@ -2844,6 +2908,9 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> >  		list_del(page_deferred_list(&folio->page));
> >  	}
> >  	spin_unlock(&ds_queue->split_queue_lock);
A brief comment would be useful:
/* Frozen refs lock out additions, test can be lockless */
> > +	if (!list_empty(underutilized_thp_list))
> > +		list_lru_del_page(&huge_low_util_page_lru, &folio->page,
> > +				  underutilized_thp_list);
> >  	if (mapping) {
> >  		int nr = folio_nr_pages(folio);
> >
> > @@ -2886,6 +2953,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> >  void free_transhuge_page(struct page *page)
> >  {
> >  	struct deferred_split *ds_queue = get_deferred_split_queue(page);
> > +	struct list_head *underutilized_thp_list = page_underutilized_thp_list(page);
> >  	unsigned long flags;
> >
> >  	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
> > @@ -2894,6 +2962,12 @@ void free_transhuge_page(struct page *page)
> >  		list_del(page_deferred_list(page));
> >  	}
> >  	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
Here as well:
/* A dead page cannot be re-added, test can be lockless */
> > +	if (!list_empty(underutilized_thp_list))
> > +		list_lru_del_page(&huge_low_util_page_lru, page, underutilized_thp_list);
> > +
> > +	if (PageLRU(page))
> > +		__folio_clear_lru_flags(page_folio(page));
> > +
> >  	free_compound_page(page);
> >  }
> >
> > @@ -2934,6 +3008,39 @@ void deferred_split_huge_page(struct page *page)
> >  	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
> >  }
> >
> > +void add_underutilized_thp(struct page *page)
> > +{
> > +	VM_BUG_ON_PAGE(!PageTransHuge(page), page);
> > +
> > +	if (PageSwapCache(page))
> > +		return;
Why is that?
> > +
> > +	/*
> > +	 * Need to take a reference on the page to prevent the page from getting free'd from
> > +	 * under us while we are adding the THP to the shrinker.
> > +	 */
> > +	if (!get_page_unless_zero(page))
> > +		return;
> > +
> > +	if (!is_anon_transparent_hugepage(page))
> > +		goto out_put;
> > +
> > +	if (is_huge_zero_page(page))
> > +		goto out_put;
> > +
And here:
/* Stabilize page->memcg to allocate and add to the same list */
> > +	lock_page(page);
> > +
> > +	if (memcg_list_lru_alloc(page_memcg(page), &huge_low_util_page_lru, GFP_KERNEL))
> > +		goto out_unlock;
The testbot pointed this out already, but this allocation call needs
an #ifdef CONFIG_MEMCG_KMEM guard.