linux-mm.kvack.org archive mirror
From: Jaewon Kim <jaewon31.kim@samsung.com>
To: Yu Zhao <yuzhao@google.com>
Cc: "rostedt@goodmis.org" <rostedt@goodmis.org>,
	"tjmercier@google.com" <tjmercier@google.com>,
	"kaleshsingh@google.com" <kaleshsingh@google.com>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"vbabka@suse.cz" <vbabka@suse.cz>,
	"hannes@cmpxchg.org" <hannes@cmpxchg.org>,
	"sj@kernel.org" <sj@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-trace-kernel@vger.kernel.org"
	<linux-trace-kernel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"jaewon31.kim@gmail.com" <jaewon31.kim@gmail.com>
Subject: RE: [PATCH v4] vmscan: add trace events for lru_gen
Date: Tue, 26 Sep 2023 23:15:19 +0900	[thread overview]
Message-ID: <20230926141519epcms1p5b7808c768df48647516f458529e4e3c8@epcms1p5> (raw)
In-Reply-To: <20230926073333epcms1p14c9798232b395007eb20becb5dbc4b4e@epcms1p1>

>>>On Mon, Sep 25, 2023 at 10:20 PM Jaewon Kim <jaewon31.kim@samsung.com> wrote:
>>>>
>>>> As the legacy lru provides, the lru_gen needs some trace events for
>>>> debugging.
>>>>
>>>> This commit introduces 2 trace events.
>>>>   trace_mm_vmscan_lru_gen_scan
>>>>   trace_mm_vmscan_lru_gen_evict
>>>>
>>>> Each event is similar to the following legacy events.
>>>>   trace_mm_vmscan_lru_isolate,
>>>>   trace_mm_vmscan_lru_shrink_[in]active
>>>
>>>We should just reuse trace_mm_vmscan_lru_isolate and
>>>trace_mm_vmscan_lru_shrink_inactive instead of adding new tracepoints.
>>>
>>>To reuse trace_mm_vmscan_lru_isolate, we'd just need to append two new
>>>names to LRU_NAMES.
>>>
>>>The naming of trace_mm_vmscan_lru_shrink_inactive might seem confusing
>>>but it's how MGLRU maintains the compatibility, e.g., the existing
>>>active/inactive counters in /proc/vmstat.
>>
>>
>>Hello
>>
>>Actually, I had tried to reuse them, but some values were not compatible.
>>Let me try that way again.
>>
>>>
>
>Hello Yu Zhao
>
>Could you look into what I tried below? I reused the legacy trace events as you recommended.
>
>For the nr_scanned passed to trace_mm_vmscan_lru_shrink_inactive, I just used the scanned value returned from isolate_folios.
>I thought this was right, as scan_folios also uses its isolated count:
>  __count_vm_events(PGSCAN_ANON + type, isolated);
>But I guess the scanned in scan_folios is actually the one used in shrink_inactive_list.

Please ignore the nr_scanned comment above; I just misread the code.

Here is an example of the resulting trace output; I think it works well.

 mm_vmscan_lru_isolate: isolate_mode=0 classzone=2 order=0 nr_requested=4096 nr_scanned=64 nr_skipped=0 nr_taken=64 lru=inactive_file
 mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=64 nr_reclaimed=63 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate_anon=0 nr_activate_file=1 nr_ref_keep=0 nr_unmap_fail=0 priority=2 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC

>
>I tested this on both 0 and 7 of /sys/kernel/mm/lru_gen/enabled
>
>
>diff --git a/mm/vmscan.c b/mm/vmscan.c
>index a4e44f1c97c1..b61a0156559c 100644
>--- a/mm/vmscan.c
>+++ b/mm/vmscan.c
>@@ -4328,6 +4328,7 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
>        int sorted = 0;
>        int scanned = 0;
>        int isolated = 0;
>+       int skipped = 0;
>        int remaining = MAX_LRU_BATCH;
>        struct lru_gen_folio *lrugen = &lruvec->lrugen;
>        struct mem_cgroup *memcg = lruvec_memcg(lruvec);
>@@ -4341,7 +4342,7 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
> 
>        for (i = MAX_NR_ZONES; i > 0; i--) {
>                LIST_HEAD(moved);
>-               int skipped = 0;
>+               int skipped_zone = 0;
>                int zone = (sc->reclaim_idx + i) % MAX_NR_ZONES;
>                struct list_head *head = &lrugen->folios[gen][type][zone];
> 
>@@ -4363,16 +4364,17 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
>                                isolated += delta;
>                        } else {
>                                list_move(&folio->lru, &moved);
>-                               skipped += delta;
>+                               skipped_zone += delta;
>                        }
> 
>-                       if (!--remaining || max(isolated, skipped) >= MIN_LRU_BATCH)
>+                       if (!--remaining || max(isolated, skipped_zone) >= MIN_LRU_BATCH)
>                                break;
>                }
> 
>-               if (skipped) {
>+               if (skipped_zone) {
>                        list_splice(&moved, head);
>-                       __count_zid_vm_events(PGSCAN_SKIP, zone, skipped);
>+                       __count_zid_vm_events(PGSCAN_SKIP, zone, skipped_zone);
>+                       skipped += skipped_zone;
>                }
> 
>                if (!remaining || isolated >= MIN_LRU_BATCH)
>@@ -4387,6 +4389,9 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
>        __count_memcg_events(memcg, item, isolated);
>        __count_memcg_events(memcg, PGREFILL, sorted);
>        __count_vm_events(PGSCAN_ANON + type, isolated);
>+       trace_mm_vmscan_lru_isolate(sc->reclaim_idx, sc->order, MAX_LRU_BATCH,
>+                                   scanned, skipped, isolated,
>+                                   type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
> 
>        /*
>         * There might not be eligible folios due to reclaim_idx. Check the
>@@ -4517,6 +4522,9 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
> retry:
>        reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false);
>        sc->nr_reclaimed += reclaimed;
>+       trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
>+                       scanned, reclaimed, &stat, sc->priority,
>+                       type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
> 
>        list_for_each_entry_safe_reverse(folio, next, &list, lru) {
>                if (!folio_evictable(folio)) {
>



Thread overview: 7+ messages
     [not found] <CGME20230926042019epcas1p11c28533f7b7db99db9f9d8a03ddd332c@epcas1p1.samsung.com>
2023-09-26  4:22 ` Jaewon Kim
2023-09-26  4:42   ` Yu Zhao
     [not found]   ` <CGME20230926042019epcas1p11c28533f7b7db99db9f9d8a03ddd332c@epcms1p3>
2023-09-26  5:10     ` Jaewon Kim
     [not found]   ` <CGME20230926042019epcas1p11c28533f7b7db99db9f9d8a03ddd332c@epcms1p1>
2023-09-26  7:33     ` Jaewon Kim
     [not found]   ` <CGME20230926042019epcas1p11c28533f7b7db99db9f9d8a03ddd332c@epcms1p5>
2023-09-26 14:15     ` Jaewon Kim [this message]
2023-10-01 23:41       ` Jaewon Kim
2023-10-02  3:26         ` Yu Zhao
