From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 21 Jun 2023 18:04:53 +0000
In-Reply-To: <20230621180454.973862-1-yuanchu@google.com>
Mime-Version: 1.0
References: <20230621180454.973862-1-yuanchu@google.com>
X-Mailer: git-send-email 2.41.0.162.gfafddb0af9-goog
Message-ID: <20230621180454.973862-6-yuanchu@google.com>
Subject: [RFC PATCH v2 5/6] mm: add per-memcg reaccess histogram
From: Yuanchu Xie <yuanchu@google.com>
To: Greg Kroah-Hartman, "Rafael J. Wysocki", "Michael S. Tsirkin",
	David Hildenbrand, Jason Wang, Andrew Morton, Johannes Weiner,
	Michal Hocko, Roman Gushchin, Shakeel Butt, Muchun Song, Yu Zhao,
	Kefeng Wang, Kairui Song, Yosry Ahmed, Yuanchu Xie, "T. J. Alumbaugh"
Cc: Wei Xu, SeongJae Park, Sudarshan Rajagopalan, kai.huang@intel.com,
	hch@lst.de, jon@nutanix.com, Aneesh Kumar K V, Matthew Wilcox,
	Vasily Averin, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, linux-mm@kvack.org,
	cgroups@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

A reaccess is an access detected on a page, via refault or access bit
harvesting, after the page's initial access. Similar to the working set
histogram, the reaccess histogram breaks reaccesses down into
user-defined bins. Currently it only tracks reaccesses from access bit
harvesting; the plan is to fold refaults into the same histogram by
pulling information from the shadow entries in folio->mapping->i_pages
for swapped out pages.

Signed-off-by: T.J. Alumbaugh
Signed-off-by: Yuanchu Xie
---
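A quick illustration of the two new interface files (the numbers are
made up, and the "N<nid>=" write syntax is an assumption carried over
from the working set intervals file, whose exact grammar comes from
memory_wsr_intervals_ms_parse() earlier in this series):

  # three finite bins, [0,1s) [1s,10s) [10s,100s), plus the catch-all
  $ echo N0=1000,10000,100000 > memory.reaccess.intervals_ms
  $ cat memory.reaccess.intervals_ms
  N0=1000,10000,100000,9223372036854775807
  $ cat memory.reaccess.histogram
  N0
  1000 anon=153 file=482
  10000 anon=71 file=95
  100000 anon=12 file=39
  9223372036854775807 anon=0 file=4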
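The bin-splitting arithmetic in collect_reaccess_locked() below can be
hard to follow from the diff alone, so here is a standalone userspace
sketch of the same scheme. It models timestamps as plain integers
(dropping the time_before() jiffies wrap-around handling) and uses
signed counters for brevity; it is an illustration, not kernel code:

  /*
   * Userspace model of the bin split in collect_reaccess_locked(): a
   * generation of reaccessed pages that straddles several bin
   * boundaries donates its delta to each bin in proportion to the
   * share of the generation's lifetime that the bin covers.
   */
  #include <stdio.h>

  #define MAX_NR_BINS 4

  struct ws_bin {
          long idle_age;  /* bin boundary as an age; -1 terminates */
          long nr_pages;
  };

  static void split_delta(struct ws_bin *bin, long delta,
                          long birth, long gen_start, long now)
  {
          long gen_len = gen_start - birth;
          long error = delta;

          /* skip bins whose boundary the whole generation already exceeds */
          while (bin->idle_age != -1 && gen_start + bin->idle_age < now)
                  bin++;

          /* the generation's oldest pages exceed this bin's boundary */
          while (bin->idle_age != -1 && birth + bin->idle_age < now) {
                  long proportion = gen_start - (now - bin->idle_age);

                  if (!gen_len)
                          break;
                  if (proportion > 0) {
                          /* same truncating order as the kernel code */
                          long split = delta / gen_len * proportion;

                          bin->nr_pages += split;
                          error -= split;
                  }
                  gen_start = now - bin->idle_age;
                  bin++;
          }
          bin->nr_pages += error;
  }

  int main(void)
  {
          /* boundaries at ages 100 and 1000; third entry terminates */
          struct ws_bin bins[MAX_NR_BINS] = {
                  { .idle_age = 100 }, { .idle_age = 1000 }, { .idle_age = -1 },
          };
          int i;

          /* 3000 reaccesses from a generation alive from t=8950 to t=9950 */
          split_delta(bins, 3000, 8950, 9950, 10000);

          for (i = 0; bins[i].idle_age != -1; i++)
                  printf("<=%ld: %ld pages\n", bins[i].idle_age, bins[i].nr_pages);
          printf("older: %ld pages\n", bins[i].nr_pages);
          return 0;
  }

Running it splits the 3000 reaccessed pages 150/2700/150 across the
bins: because delta / gen_len truncates before the multiply, rounding
losses accumulate in "error" and are credited to the bin where the
walk stops.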
 include/linux/wsr.h |   9 +++-
 mm/memcontrol.c     |  89 ++++++++++++++++++++++++++++++++++++++
 mm/vmscan.c         |   6 ++-
 mm/wsr.c            | 101 ++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 203 insertions(+), 2 deletions(-)

diff --git a/include/linux/wsr.h b/include/linux/wsr.h
index d45f7cc0672ac..68246734679cd 100644
--- a/include/linux/wsr.h
+++ b/include/linux/wsr.h
@@ -26,11 +26,14 @@ struct ws_bin {
 struct wsr {
         /* protects bins */
         struct mutex bins_lock;
+        /* protects reaccess_bins */
+        struct mutex reaccess_bins_lock;
         struct kernfs_node *notifier;
         unsigned long timestamp;
         unsigned long report_threshold;
         unsigned long refresh_threshold;
         struct ws_bin bins[MAX_NR_BINS];
+        struct ws_bin reaccess_bins[MAX_NR_BINS];
 };
 
 void wsr_register_node(struct node *node);
@@ -48,6 +51,7 @@ ssize_t wsr_intervals_ms_parse(char *src, struct ws_bin *bins);
  */
 void wsr_refresh(struct wsr *wsr, struct mem_cgroup *root,
                  struct pglist_data *pgdat);
+void report_reaccess(struct lruvec *lruvec, struct lru_gen_mm_walk *walk);
 void report_ws(struct pglist_data *pgdat, struct scan_control *sc);
 #else
 struct ws_bin;
@@ -71,7 +75,10 @@ static inline ssize_t wsr_intervals_ms_parse(char *src, struct ws_bin *bins)
         return -EINVAL;
 }
 static inline void wsr_refresh(struct wsr *wsr, struct mem_cgroup *root,
-                               struct pglist_data *pgdat)
+                               struct pglist_data *pgdat)
+{
+}
+static inline void report_reaccess(struct lruvec *lruvec, struct lru_gen_mm_walk *walk)
 {
 }
 static inline void report_ws(struct pglist_data *pgdat, struct scan_control *sc)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index edf5bb31bb19c..b901982d659d2 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6736,6 +6736,56 @@ static ssize_t memory_wsr_intervals_ms_write(struct kernfs_open_file *of,
         return err ?: nbytes;
 }
 
+static int memory_reaccess_intervals_ms_show(struct seq_file *m, void *v)
+{
+        int nid;
+        struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
+
+        for_each_node_state(nid, N_MEMORY) {
+                struct wsr *wsr;
+                struct ws_bin *bin;
+
+                wsr = lruvec_wsr(mem_cgroup_lruvec(memcg, NODE_DATA(nid)));
+                mutex_lock(&wsr->reaccess_bins_lock);
+                seq_printf(m, "N%d=", nid);
+                for (bin = wsr->reaccess_bins; bin->idle_age != -1; bin++)
+                        seq_printf(m, "%u,", jiffies_to_msecs(bin->idle_age));
+                mutex_unlock(&wsr->reaccess_bins_lock);
+
+                seq_printf(m, "%lld ", LLONG_MAX);
+        }
+        seq_putc(m, '\n');
+
+        return 0;
+}
+
+static ssize_t memory_reaccess_intervals_ms_write(struct kernfs_open_file *of,
+                                                  char *buf, size_t nbytes,
+                                                  loff_t off)
+{
+        unsigned int nid;
+        int err;
+        struct wsr *wsr;
+        struct ws_bin *bins;
+        struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
+
+        bins = kzalloc(sizeof(wsr->reaccess_bins), GFP_KERNEL);
+        if (!bins)
+                return -ENOMEM;
+
+        err = memory_wsr_intervals_ms_parse(of, buf, nbytes, &nid, bins);
+        if (err)
+                goto failed;
+
+        wsr = lruvec_wsr(mem_cgroup_lruvec(memcg, NODE_DATA(nid)));
+        mutex_lock(&wsr->reaccess_bins_lock);
+        memcpy(wsr->reaccess_bins, bins, sizeof(wsr->reaccess_bins));
+        mutex_unlock(&wsr->reaccess_bins_lock);
+failed:
+        kfree(bins);
+        return err ?: nbytes;
+}
+
 static int memory_wsr_refresh_ms_show(struct seq_file *m, void *v)
 {
         int nid;
@@ -6874,6 +6924,34 @@ __poll_t memory_wsr_histogram_poll(struct kernfs_open_file *of,
                 return DEFAULT_POLLMASK | EPOLLPRI;
         return DEFAULT_POLLMASK;
 }
+
+static int memory_reaccess_histogram_show(struct seq_file *m, void *v)
+{
+        int nid;
+        struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
+
+        for_each_node_state(nid, N_MEMORY) {
+                struct wsr *wsr =
+                        lruvec_wsr(mem_cgroup_lruvec(memcg, NODE_DATA(nid)));
+                struct ws_bin *bin;
+
+                seq_printf(m, "N%d\n", nid);
+
+                mutex_lock(&wsr->reaccess_bins_lock);
+                wsr_refresh(wsr, memcg, NODE_DATA(nid));
+                for (bin = wsr->reaccess_bins; bin->idle_age != -1; bin++)
+                        seq_printf(m, "%u anon=%lu file=%lu\n",
+                                   jiffies_to_msecs(bin->idle_age),
+                                   bin->nr_pages[0], bin->nr_pages[1]);
+
+                seq_printf(m, "%lld anon=%lu file=%lu\n", LLONG_MAX,
+                           bin->nr_pages[0], bin->nr_pages[1]);
+
+                mutex_unlock(&wsr->reaccess_bins_lock);
+        }
+
+        return 0;
+}
 #endif
 
 static struct cftype memory_files[] = {
@@ -6969,6 +7047,17 @@ static struct cftype memory_files[] = {
                 .seq_show = memory_wsr_histogram_show,
                 .poll = memory_wsr_histogram_poll,
         },
+        {
+                .name = "reaccess.intervals_ms",
+                .flags = CFTYPE_NOT_ON_ROOT | CFTYPE_NS_DELEGATABLE,
+                .seq_show = memory_reaccess_intervals_ms_show,
+                .write = memory_reaccess_intervals_ms_write,
+        },
+        {
+                .name = "reaccess.histogram",
+                .flags = CFTYPE_NOT_ON_ROOT | CFTYPE_NS_DELEGATABLE,
+                .seq_show = memory_reaccess_histogram_show,
+        },
 #endif
         {} /* terminate */
 };
diff --git a/mm/vmscan.c b/mm/vmscan.c
index ba254b6e91e19..bc8c026ceef0d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4226,6 +4226,7 @@ static void walk_mm(struct lruvec *lruvec, struct mm_struct *mm, struct lru_gen_mm_walk *walk)
         mem_cgroup_unlock_pages();
 
         if (walk->batched) {
+                report_reaccess(lruvec, walk);
                 spin_lock_irq(&lruvec->lru_lock);
                 reset_batch_size(lruvec, walk);
                 spin_unlock_irq(&lruvec->lru_lock);
@@ -5079,11 +5080,14 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness)
                 sc->nr_scanned -= folio_nr_pages(folio);
         }
 
+        walk = current->reclaim_state->mm_walk;
+        if (walk && walk->batched)
+                report_reaccess(lruvec, walk);
+
         spin_lock_irq(&lruvec->lru_lock);
 
         move_folios_to_lru(lruvec, &list);
 
-        walk = current->reclaim_state->mm_walk;
         if (walk && walk->batched)
                 reset_batch_size(lruvec, walk);
 
diff --git a/mm/wsr.c b/mm/wsr.c
index cd045ade5e9ba..a63d678e64f8b 100644
--- a/mm/wsr.c
+++ b/mm/wsr.c
@@ -23,8 +23,10 @@ void wsr_init(struct lruvec *lruvec)
         struct wsr *wsr = lruvec_wsr(lruvec);
 
         mutex_init(&wsr->bins_lock);
+        mutex_init(&wsr->reaccess_bins_lock);
         wsr->bins[0].idle_age = -1;
         wsr->notifier = NULL;
+        wsr->reaccess_bins[0].idle_age = -1;
 }
 
 void wsr_destroy(struct lruvec *lruvec)
@@ -32,6 +34,7 @@ void wsr_destroy(struct lruvec *lruvec)
         struct wsr *wsr = lruvec_wsr(lruvec);
 
         mutex_destroy(&wsr->bins_lock);
+        mutex_destroy(&wsr->reaccess_bins_lock);
         memset(wsr, 0, sizeof(*wsr));
 }
 
@@ -172,6 +175,104 @@ void refresh_wsr(struct wsr *wsr, struct mem_cgroup *root,
                 cond_resched();
         } while ((memcg = mem_cgroup_iter(root, memcg, NULL)));
 }
+
+static void collect_reaccess_locked(struct wsr *wsr,
+                                    struct lru_gen_struct *lrugen,
+                                    struct lru_gen_mm_walk *walk)
+{
+        int gen, type, zone;
+        unsigned long curr_timestamp = jiffies;
+        unsigned long max_seq = READ_ONCE(walk->max_seq);
+        unsigned long min_seq[ANON_AND_FILE] = {
+                READ_ONCE(lrugen->min_seq[LRU_GEN_ANON]),
+                READ_ONCE(lrugen->min_seq[LRU_GEN_FILE]),
+        };
+
+        for (type = 0; type < ANON_AND_FILE; type++) {
+                unsigned long seq;
+                struct ws_bin *bin = wsr->reaccess_bins;
+
+                lockdep_assert_held(&wsr->reaccess_bins_lock);
+                /* Skip max_seq because a reaccess moves a page from another seq
+                 * to max_seq. We use the negative change in page count from
+                 * other seqs to track the number of reaccesses.
+                 */
+                for (seq = max_seq - 1; seq + 1 > min_seq[type]; seq--) {
+                        long error;
+                        int next_gen;
+                        unsigned long birth, gen_start;
+                        long delta = 0;
+
+                        gen = lru_gen_from_seq(seq);
+
+                        for (zone = 0; zone < MAX_NR_ZONES; zone++) {
+                                long nr_pages = walk->nr_pages[gen][type][zone];
+
+                                if (nr_pages < 0)
+                                        delta += -nr_pages;
+                        }
+
+                        birth = READ_ONCE(lrugen->timestamps[gen]);
+                        next_gen = lru_gen_from_seq(seq + 1);
+                        gen_start = READ_ONCE(lrugen->timestamps[next_gen]);
+
+                        /* ensure gen_start is within idle_age of bin */
+                        while (bin->idle_age != -1 &&
+                               time_before(gen_start + bin->idle_age,
+                                           curr_timestamp))
+                                bin++;
+
+                        error = delta;
+                        /* gen exceeds the idle_age of bin */
+                        while (bin->idle_age != -1 &&
+                               time_before(birth + bin->idle_age,
+                                           curr_timestamp)) {
+                                unsigned long proportion =
+                                        gen_start -
+                                        (curr_timestamp - bin->idle_age);
+                                unsigned long gen_len = gen_start - birth;
+
+                                if (!gen_len)
+                                        break;
+                                if (proportion) {
+                                        unsigned long split_bin =
+                                                delta / gen_len * proportion;
+                                        bin->nr_pages[type] += split_bin;
+                                        error -= split_bin;
+                                }
+                                gen_start = curr_timestamp - bin->idle_age;
+                                bin++;
+                        }
+                        bin->nr_pages[type] += error;
+                }
+        }
+}
+
+static void collect_reaccess(struct wsr *wsr,
+                             struct lru_gen_struct *lrugen,
+                             struct lru_gen_mm_walk *walk)
+{
+        if (READ_ONCE(wsr->reaccess_bins->idle_age) == -1)
+                return;
+
+        mutex_lock(&wsr->reaccess_bins_lock);
+        collect_reaccess_locked(wsr, lrugen, walk);
+        mutex_unlock(&wsr->reaccess_bins_lock);
+}
+
+void report_reaccess(struct lruvec *lruvec, struct lru_gen_mm_walk *walk)
+{
+        struct lru_gen_struct *lrugen = &lruvec->lrugen;
+        struct mem_cgroup *memcg = lruvec_memcg(lruvec);
+
+        while (memcg) {
+                collect_reaccess(lruvec_wsr(mem_cgroup_lruvec(
+                                         memcg, lruvec_pgdat(lruvec))),
+                                 lrugen, walk);
+                memcg = parent_mem_cgroup(memcg);
+        }
+}
+
 static struct pglist_data *kobj_to_pgdat(struct kobject *kobj)
 {
         int nid = IS_ENABLED(CONFIG_NUMA) ? kobj_to_dev(kobj)->id :
-- 
2.41.0.162.gfafddb0af9-goog