From: Barry Song <21cnbao@gmail.com>
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	David Hildenbrand <david@redhat.com>,
	 Matthew Wilcox <willy@infradead.org>,
	Huang Ying <ying.huang@intel.com>, Gao Xiang <xiang@kernel.org>,
	 Yu Zhao <yuzhao@google.com>, Yang Shi <shy828301@gmail.com>,
	Michal Hocko <mhocko@suse.com>,
	 Kefeng Wang <wangkefeng.wang@huawei.com>,
	Chris Li <chrisl@kernel.org>,  Lance Yang <ioworker0@gmail.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	 Barry Song <v-songbaohua@oppo.com>
Subject: Re: [PATCH v5 5/6] mm: vmscan: Avoid split during shrink_folio_list()
Date: Fri, 5 Apr 2024 17:06:03 +1300	[thread overview]
Message-ID: <CAGsJ_4xocWy7PyHbgWhaK1gQeHADMAng3cFtnPHFW4MGB7qkBA@mail.gmail.com> (raw)
In-Reply-To: <63c9caf4-3af4-4149-b3c2-e677788cb11f@arm.com>

On Wed, Apr 3, 2024 at 2:10 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 28/03/2024 08:18, Barry Song wrote:
> > On Thu, Mar 28, 2024 at 3:45 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
> >>
> >> Now that swap supports storing all mTHP sizes, avoid splitting large
> >> folios before swap-out. This benefits performance of the swap-out path
> >> by eliding split_folio_to_list(), which is expensive, and also sets us
> >> up for swapping in large folios in a future series.
> >>
> >> If the folio is partially mapped, we continue to split it since we want
> >> to avoid the extra IO overhead and storage of writing out pages
> >> unnecessarily.
> >>
> >> Reviewed-by: David Hildenbrand <david@redhat.com>
> >> Reviewed-by: Barry Song <v-songbaohua@oppo.com>
> >> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> >> ---
> >>  mm/vmscan.c | 9 +++++----
> >>  1 file changed, 5 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/mm/vmscan.c b/mm/vmscan.c
> >> index 00adaf1cb2c3..293120fe54f3 100644
> >> --- a/mm/vmscan.c
> >> +++ b/mm/vmscan.c
> >> @@ -1223,11 +1223,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
> >>                                         if (!can_split_folio(folio, NULL))
> >>                                                 goto activate_locked;
> >>                                         /*
> >> -                                        * Split folios without a PMD map right
> >> -                                        * away. Chances are some or all of the
> >> -                                        * tail pages can be freed without IO.
> >> +                                        * Split partially mapped folios right
> >> +                                        * away. We can free the unmapped pages
> >> +                                        * without IO.
> >>                                          */
> >> -                                       if (!folio_entire_mapcount(folio) &&
> >> +                                       if (data_race(!list_empty(
> >> +                                               &folio->_deferred_list)) &&
> >>                                             split_folio_to_list(folio,
> >>                                                                 folio_list))
> >>                                                 goto activate_locked;
> >
> > Hi Ryan,
> >
> > Sorry for bringing up another minor issue at this late stage.
>
> No problem - I'd rather take a bit longer and get it right, rather than rush it
> and get it wrong!
>
> >
> > During the debugging of thp counter patch v2, I noticed the discrepancy between
> > THP_SWPOUT_FALLBACK and THP_SWPOUT.
> >
> > Should we make adjustments to the counter?
>
> Yes, agreed; we want to be consistent here with all the other existing THP
> counters; they only refer to PMD-sized THP. I'll make the change for the next
> version.
>
> I guess we will eventually want equivalent counters for per-size mTHP using the
> framework you are adding.

Hi Ryan,

Today, I created counters for per-order SWPOUT and SWPOUT_FALLBACK.
I'd appreciate any suggestions you might have before I submit this as
patch 2/2 of my mTHP counters series.
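
As a side note on the THP_SWPOUT_FALLBACK consistency point above, I am
assuming the adjustment will end up looking roughly like the sketch below,
i.e. only counting the legacy event for PMD-sized folios (just my guess at
the shape; the real change is of course yours to make in the next version):

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
		/* legacy counter: PMD-sized THP only */
		if (folio_test_pmd_mappable(folio))
			count_vm_event(THP_SWPOUT_FALLBACK);
#endif

The per-order counters I have so far are below: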

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index cc13fa14aa32..762a6d8759b9 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -267,6 +267,8 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
 enum thp_stat_item {
        THP_STAT_ANON_ALLOC,
        THP_STAT_ANON_ALLOC_FALLBACK,
+       THP_STAT_ANON_SWPOUT,
+       THP_STAT_ANON_SWPOUT_FALLBACK,
        __THP_STAT_COUNT
 };

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e704b4408181..7f2b5d2852cc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -554,10 +554,14 @@ static struct kobj_attribute _name##_attr = __ATTR_RO(_name)

 THP_STATE_ATTR(anon_alloc, THP_STAT_ANON_ALLOC);
 THP_STATE_ATTR(anon_alloc_fallback, THP_STAT_ANON_ALLOC_FALLBACK);
+THP_STATE_ATTR(anon_swpout, THP_STAT_ANON_SWPOUT);
+THP_STATE_ATTR(anon_swpout_fallback, THP_STAT_ANON_SWPOUT_FALLBACK);

 static struct attribute *stats_attrs[] = {
        &anon_alloc_attr.attr,
        &anon_alloc_fallback_attr.attr,
+       &anon_swpout_attr.attr,
+       &anon_swpout_fallback_attr.attr,
        NULL,
 };

diff --git a/mm/page_io.c b/mm/page_io.c
index a9a7c236aecc..be4f822b39f8 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -212,13 +212,16 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)

 static inline void count_swpout_vm_event(struct folio *folio)
 {
+       long nr_pages = folio_nr_pages(folio);
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
        if (unlikely(folio_test_pmd_mappable(folio))) {
                count_memcg_folio_events(folio, THP_SWPOUT, 1);
                count_vm_event(THP_SWPOUT);
        }
+       if (nr_pages > 0 && nr_pages <= HPAGE_PMD_NR)
+               count_thp_state(folio_order(folio), THP_STAT_ANON_SWPOUT);
 #endif
-       count_vm_events(PSWPOUT, folio_nr_pages(folio));
+       count_vm_events(PSWPOUT, nr_pages);
 }

 #if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index ffc4553c8615..b7c5fbd830b6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1247,6 +1247,10 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
                                                count_vm_event(
                                                        THP_SWPOUT_FALLBACK);
                                        }
+                                       if (nr_pages > 0 && nr_pages <= HPAGE_PMD_NR)
+                                               count_thp_state(folio_order(folio),
+                                                               THP_STAT_ANON_SWPOUT_FALLBACK);
+
 #endif
                                        if (!add_to_swap(folio))
                                                goto activate_locked_split;
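
For reference, count_thp_state() used above comes from the counting
framework in patch 1/2 of the series (not included in this mail). Roughly,
it just bumps a per-CPU, per-order counter for the given item; a minimal
sketch of the idea, with the struct/variable names assumed purely for
illustration:

struct thp_stat {
	unsigned long stats[HPAGE_PMD_ORDER + 1][__THP_STAT_COUNT];
};
static DEFINE_PER_CPU(struct thp_stat, thp_stats) = {{{0}}};

static void count_thp_state(int order, enum thp_stat_item item)
{
	/* ignore order-0 and anything beyond PMD order */
	if (order <= 0 || order > HPAGE_PMD_ORDER)
		return;
	this_cpu_inc(thp_stats.stats[order][item]);
}

With the sysfs attributes added in mm/huge_memory.c above, the values would
then be readable per size from the transparent_hugepage sysfs hierarchy,
e.g. something like
/sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/anon_swpout
(the exact path is defined by patch 1/2).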


Thanks
Barry

