From mboxrd@z Thu Jan 1 00:00:00 1970
From: Shakeel Butt
Date: Sun, 27 Dec 2020 10:16:34 -0800
Subject: Re: [PATCH 2/2] mm: fix numa stats for thp migration
To: Muchun Song, Naoya Horiguchi, Andrew Morton
Cc: Kirill A. Shutemov, Johannes Weiner, Roman Gushchin, Cgroups,
 Linux MM, LKML, stable@vger.kernel.org
In-Reply-To: <20201227181310.3235210-2-shakeelb@google.com>
References: <20201227181310.3235210-1-shakeelb@google.com>
 <20201227181310.3235210-2-shakeelb@google.com>

On Sun, Dec 27, 2020 at 10:14 AM Shakeel Butt wrote:
>
> Currently the kernel is not correctly updating the numa stats for
> NR_FILE_PAGES and NR_SHMEM on THP migration. Fix that. For NR_FILE_DIRTY
> and NR_ZONE_WRITE_PENDING there is currently no need to handle THP
> migration, since the kernel does not yet have write support for file
> THP, but to be more future-proof this patch adds THP support for those
> stats as well.
>
> Fixes: e71769ae52609 ("mm: enable thp migration for shmem thp")
> Signed-off-by: Shakeel Butt
> Cc: <stable@vger.kernel.org>
> ---
>  mm/migrate.c | 23 ++++++++++++-----------
>  1 file changed, 12 insertions(+), 11 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 613794f6a433..ade163c6ecdf 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -402,6 +402,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
>  	struct zone *oldzone, *newzone;
>  	int dirty;
>  	int expected_count = expected_page_refs(mapping, page) + extra_count;
> +	int nr = thp_nr_pages(page);
>
>  	if (!mapping) {
>  		/* Anonymous page without mapping */
> @@ -437,7 +438,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
>  	 */
>  	newpage->index = page->index;
>  	newpage->mapping = page->mapping;
> -	page_ref_add(newpage, thp_nr_pages(page)); /* add cache reference */
> +	page_ref_add(newpage, nr); /* add cache reference */
>  	if (PageSwapBacked(page)) {
>  		__SetPageSwapBacked(newpage);
>  		if (PageSwapCache(page)) {
> @@ -459,7 +460,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
>  	if (PageTransHuge(page)) {
>  		int i;
>
> -		for (i = 1; i < HPAGE_PMD_NR; i++) {
> +		for (i = 1; i < nr; i++) {
>  			xas_next(&xas);
>  			xas_store(&xas, newpage);
>  		}
> @@ -470,7 +471,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
>  	 * to one less reference.
>  	 * We know this isn't the last reference.
>  	 */
> -	page_ref_unfreeze(page, expected_count - thp_nr_pages(page));
> +	page_ref_unfreeze(page, expected_count - nr);
>
>  	xas_unlock(&xas);
>  	/* Leave irq disabled to prevent preemption while updating stats */
> @@ -493,17 +494,17 @@ int migrate_page_move_mapping(struct address_space *mapping,
>  		old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
>  		new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
>
> -		__dec_lruvec_state(old_lruvec, NR_FILE_PAGES);
> -		__inc_lruvec_state(new_lruvec, NR_FILE_PAGES);
> +		__mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr);
> +		__mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr);
>  		if (PageSwapBacked(page) && !PageSwapCache(page)) {
> -			__dec_lruvec_state(old_lruvec, NR_SHMEM);
> -			__inc_lruvec_state(new_lruvec, NR_SHMEM);
> +			__mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
> +			__mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
>  		}
>  		if (dirty && mapping_can_writeback(mapping)) {
> -			__dec_lruvec_state(old_lruvec, NR_FILE_DIRTY);
> -			__dec_zone_state(oldzone, NR_ZONE_WRITE_PENDING);
> -			__inc_lruvec_state(new_lruvec, NR_FILE_DIRTY);
> -			__inc_zone_state(newzone, NR_ZONE_WRITE_PENDING);
> +			__mod_lruvec_state(old_lruvec, NR_FILE_DIRTY, -nr);
> +			__mod_zone_page_tate(oldzone, NR_ZONE_WRITE_PENDING, -nr);

This should be __mod_zone_page_state(). I fixed it locally but sent the
older patch by mistake.

> +			__mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr);
> +			__mod_zone_page_state(newzone, NR_ZONE_WRITE_PENDING, nr);
>  		}
>  	}
>  	local_irq_enable();
> --
> 2.29.2.729.g45daf8777d-goog
>
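[Editor's note] To make the accounting error concrete: before this patch,
migrating a THP moved nr subpages between nodes but adjusted the per-node
counters by only 1, skewing the stats by nr - 1 pages per migration. Below
is a minimal userspace sketch in C, not kernel code; the per-node array and
the simplified thp_nr_pages() are hypothetical stand-ins for the kernel's
per-lruvec NR_FILE_PAGES counter and its helper of the same name.

/*
 * Userspace sketch of the stat skew fixed by this patch. The buggy
 * variant mirrors the old __dec_lruvec_state()/__inc_lruvec_state()
 * calls (always +/-1); the fixed variant mirrors the new
 * __mod_lruvec_state(..., -nr) / __mod_lruvec_state(..., nr) calls.
 */
#include <stdio.h>

#define NR_NODES 2

static long nr_file_pages[NR_NODES];	/* per-node NR_FILE_PAGES analogue */

/* Like the kernel helper: 1 for a base page, 512 for a 2MB THP on x86-64. */
static int thp_nr_pages(int is_thp)
{
	return is_thp ? 512 : 1;
}

/* Old behavior: counters move by one page regardless of page size. */
static void migrate_stats_buggy(int src, int dst)
{
	nr_file_pages[src] -= 1;
	nr_file_pages[dst] += 1;
}

/* Fixed behavior: counters move by the full subpage count. */
static void migrate_stats_fixed(int src, int dst, int nr)
{
	nr_file_pages[src] -= nr;
	nr_file_pages[dst] += nr;
}

int main(void)
{
	int nr = thp_nr_pages(1);	/* migrate one file THP: node 0 -> 1 */

	nr_file_pages[0] = nr;		/* the THP starts on node 0 */
	nr_file_pages[1] = 0;
	migrate_stats_buggy(0, 1);
	printf("buggy: node0=%ld node1=%ld (want 0 and %d)\n",
	       nr_file_pages[0], nr_file_pages[1], nr);

	nr_file_pages[0] = nr;		/* reset and migrate correctly */
	nr_file_pages[1] = 0;
	migrate_stats_fixed(0, 1, nr);
	printf("fixed: node0=%ld node1=%ld\n",
	       nr_file_pages[0], nr_file_pages[1]);
	return 0;
}

Migrating a 512-page THP, the buggy variant prints node0=511 node1=1, while
the fixed variant prints node0=0 node1=512, matching what actually moved.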