From: Yang Shi
Date: Tue, 14 Sep 2021 15:41:49 -0700
Subject: Re: [PATCH -V8 2/6] memory tiering: add page promotion counter
To: Huang Ying
Cc: Linux Kernel Mailing List, Andrew Morton, Michal Hocko, Rik van Riel,
 Mel Gorman, Peter Zijlstra, Dave Hansen, Zi Yan, Wei Xu, osalvador,
 Shakeel Butt, Linux MM
In-Reply-To: <20210914013701.344956-3-ying.huang@intel.com>

On Mon, Sep 13, 2021 at 6:37 PM Huang Ying wrote:
>
> To distinguish the number of the memory tiering promoted pages from
> that of the originally inter-socket NUMA balancing migrated pages.
> The counter is per-node (count in the target node). So this can be
> used to identify promotion imbalance among the NUMA nodes.

I'd like this patch to be the very first one in the series, since we
need such counters regardless of all the optimizations. Actually, I
think this patch could go with the already-merged "migration in lieu
of discard" patchset.
>
> Signed-off-by: "Huang, Ying"
> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: Rik van Riel
> Cc: Mel Gorman
> Cc: Peter Zijlstra
> Cc: Dave Hansen
> Cc: Yang Shi
> Cc: Zi Yan
> Cc: Wei Xu
> Cc: osalvador
> Cc: Shakeel Butt
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---
>  include/linux/mmzone.h |  3 +++
>  include/linux/node.h   |  5 +++++
>  mm/migrate.c           | 11 +++++++++--
>  mm/vmstat.c            |  3 +++
>  4 files changed, 20 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 6a1d79d84675..37ccd6158765 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -209,6 +209,9 @@ enum node_stat_item {
>  	NR_PAGETABLE,		/* used for pagetables */
>  #ifdef CONFIG_SWAP
>  	NR_SWAPCACHE,
> +#endif
> +#ifdef CONFIG_NUMA_BALANCING
> +	PGPROMOTE_SUCCESS,	/* promote successfully */
>  #endif
>  	NR_VM_NODE_STAT_ITEMS
>  };
> diff --git a/include/linux/node.h b/include/linux/node.h
> index 8e5a29897936..26e96fcc66af 100644
> --- a/include/linux/node.h
> +++ b/include/linux/node.h
> @@ -181,4 +181,9 @@ static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
>
>  #define to_node(device) container_of(device, struct node, dev)
>
> +static inline bool node_is_toptier(int node)
> +{
> +	return node_state(node, N_CPU);
> +}
> +
>  #endif /* _LINUX_NODE_H_ */
> diff --git a/mm/migrate.c b/mm/migrate.c
> index a159a36dd412..6f7a6e2ef41f 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2163,6 +2163,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>  	pg_data_t *pgdat = NODE_DATA(node);
>  	int isolated;
>  	int nr_remaining;
> +	int nr_succeeded;
>  	LIST_HEAD(migratepages);
>  	new_page_t *new;
>  	bool compound;
> @@ -2201,7 +2202,8 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>
>  	list_add(&page->lru, &migratepages);
>  	nr_remaining = migrate_pages(&migratepages, *new, NULL, node,
> -				     MIGRATE_ASYNC, MR_NUMA_MISPLACED, NULL);
> +				     MIGRATE_ASYNC, MR_NUMA_MISPLACED,
> +				     &nr_succeeded);
>  	if (nr_remaining) {
>  		if (!list_empty(&migratepages)) {
>  			list_del(&page->lru);
> @@ -2210,8 +2212,13 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>  			putback_lru_page(page);
>  		}
>  		isolated = 0;
> -	} else
> +	} else {
>  		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_pages);
> +		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
> +		    !node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
> +			mod_node_page_state(NODE_DATA(node), PGPROMOTE_SUCCESS,
> +					    nr_succeeded);
> +	}
>  	BUG_ON(!list_empty(&migratepages));
>  	return isolated;
>
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 8ce2620344b2..fff0ec94d795 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1236,6 +1236,9 @@ const char * const vmstat_text[] = {
>  #ifdef CONFIG_SWAP
>  	"nr_swapcached",
>  #endif
> +#ifdef CONFIG_NUMA_BALANCING
> +	"pgpromote_success",
> +#endif
>
>  	/* enum writeback_stat_item counters */
>  	"nr_dirty_threshold",
> --
> 2.30.2
>