From: Yang Shi
Date: Tue, 1 Sep 2020 08:41:11 -0700
Subject: Re: [PATCH 5/5] mlock: fix unevictable_pgs event counts on THP
To: Hugh Dickins
Cc: Andrew Morton, Alex Shi, Johannes Weiner, Michal Hocko, Mike Kravetz,
    Shakeel Butt, Matthew Wilcox, Qian Cai, Linux Kernel Mailing List, Linux MM
Content-Type: text/plain; charset="UTF-8"

On Sun, Aug 30, 2020 at 2:09 PM Hugh Dickins wrote:
>
> 5.8 commit 5d91f31faf8e ("mm: swap: fix vmstats for huge page") has
> established that vm_events should count every subpage of a THP,
> including unevictable_pgs_culled and unevictable_pgs_rescued; but
> lru_cache_add_inactive_or_unevictable() was not doing so for
> unevictable_pgs_mlocked, and mm/mlock.c was not doing so for
> unevictable_pgs mlocked, munlocked, cleared and stranded.
>
> Fix them; but THPs don't go the pagevec way in mlock.c,
> so no fixes needed on that path.
>
> Fixes: 5d91f31faf8e ("mm: swap: fix vmstats for huge page")
> Signed-off-by: Hugh Dickins

Acked-by: Yang Shi

> ---
> I've only checked UNEVICTABLEs: there may be more inconsistencies left.
> The check_move_unevictable_pages() patch brought me to this one, but
> this is more important because mlock works on all THPs, without needing
> special testing "force". But, it's still just monotonically increasing
> event counts, so not all that important.
>
>  mm/mlock.c | 24 +++++++++++++++---------
>  mm/swap.c  |  6 +++---
>  2 files changed, 18 insertions(+), 12 deletions(-)
>
> --- 5.9-rc2/mm/mlock.c	2020-08-16 17:32:50.665507048 -0700
> +++ linux/mm/mlock.c	2020-08-28 17:42:07.975278411 -0700
> @@ -58,11 +58,14 @@ EXPORT_SYMBOL(can_do_mlock);
>   */
>  void clear_page_mlock(struct page *page)
>  {
> +	int nr_pages;
> +
>  	if (!TestClearPageMlocked(page))
>  		return;
>
> -	mod_zone_page_state(page_zone(page), NR_MLOCK, -thp_nr_pages(page));
> -	count_vm_event(UNEVICTABLE_PGCLEARED);
> +	nr_pages = thp_nr_pages(page);
> +	mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
> +	count_vm_events(UNEVICTABLE_PGCLEARED, nr_pages);
>  	/*
>  	 * The previous TestClearPageMlocked() corresponds to the smp_mb()
>  	 * in __pagevec_lru_add_fn().
> @@ -76,7 +79,7 @@ void clear_page_mlock(struct page)
>  		 * We lost the race. the page already moved to evictable list.
>  		 */
>  		if (PageUnevictable(page))
> -			count_vm_event(UNEVICTABLE_PGSTRANDED);
> +			count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
>  	}
>  }
>
> @@ -93,9 +96,10 @@ void mlock_vma_page(struct page *page)
>  	VM_BUG_ON_PAGE(PageCompound(page) && PageDoubleMap(page), page);
>
>  	if (!TestSetPageMlocked(page)) {
> -		mod_zone_page_state(page_zone(page), NR_MLOCK,
> -				    thp_nr_pages(page));
> -		count_vm_event(UNEVICTABLE_PGMLOCKED);
> +		int nr_pages = thp_nr_pages(page);
> +
> +		mod_zone_page_state(page_zone(page), NR_MLOCK, nr_pages);
> +		count_vm_events(UNEVICTABLE_PGMLOCKED, nr_pages);
>  		if (!isolate_lru_page(page))
>  			putback_lru_page(page);
>  	}
> @@ -138,7 +142,7 @@ static void __munlock_isolated_page(stru
>
>  	/* Did try_to_unlock() succeed or punt? */
>  	if (!PageMlocked(page))
> -		count_vm_event(UNEVICTABLE_PGMUNLOCKED);
> +		count_vm_events(UNEVICTABLE_PGMUNLOCKED, thp_nr_pages(page));
>
>  	putback_lru_page(page);
>  }
> @@ -154,10 +158,12 @@ static void __munlock_isolated_page(stru
>   */
>  static void __munlock_isolation_failed(struct page *page)
>  {
> +	int nr_pages = thp_nr_pages(page);
> +
>  	if (PageUnevictable(page))
> -		__count_vm_event(UNEVICTABLE_PGSTRANDED);
> +		__count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
>  	else
> -		__count_vm_event(UNEVICTABLE_PGMUNLOCKED);
> +		__count_vm_events(UNEVICTABLE_PGMUNLOCKED, nr_pages);
>  }
>
>  /**
> --- 5.9-rc2/mm/swap.c	2020-08-16 17:32:50.709507284 -0700
> +++ linux/mm/swap.c	2020-08-28 17:42:07.975278411 -0700
> @@ -494,14 +494,14 @@ void lru_cache_add_inactive_or_unevictab
>
>  	unevictable = (vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED;
>  	if (unlikely(unevictable) && !TestSetPageMlocked(page)) {
> +		int nr_pages = thp_nr_pages(page);
>  		/*
>  		 * We use the irq-unsafe __mod_zone_page_stat because this
>  		 * counter is not modified from interrupt context, and the pte
>  		 * lock is held(spinlock), which implies preemption disabled.
>  		 */
> -		__mod_zone_page_state(page_zone(page), NR_MLOCK,
> -				      thp_nr_pages(page));
> -		count_vm_event(UNEVICTABLE_PGMLOCKED);
> +		__mod_zone_page_state(page_zone(page), NR_MLOCK, nr_pages);
> +		count_vm_events(UNEVICTABLE_PGMLOCKED, nr_pages);
>  	}
>  	lru_cache_add(page);
>  }
>
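
For context, the counting change boils down to one thing: for a THP,
count_vm_event() bumps a counter by 1, while count_vm_events(...,
thp_nr_pages(page)) bumps it once per subpage, as 5d91f31faf8e requires.
Below is a minimal, self-contained userspace sketch of that difference --
not kernel code; thp_nr_pages(), count_vm_event() and count_vm_events()
here are stand-ins modeled on the kernel helpers of the same names, and
512 assumes a 2MB THP made of 4kB base pages.

/* Illustrative only -- a userspace model, not the kernel implementation. */
#include <stdio.h>

enum vm_event { UNEVICTABLE_PGMLOCKED, NR_VM_EVENTS };

static unsigned long vm_events[NR_VM_EVENTS];

/* A 2MB THP on x86-64 spans 512 4kB subpages; an ordinary page is 1. */
static int thp_nr_pages(int is_thp) { return is_thp ? 512 : 1; }

static void count_vm_event(enum vm_event e)           { vm_events[e] += 1; }
static void count_vm_events(enum vm_event e, long nr) { vm_events[e] += nr; }

int main(void)
{
	/* Old behaviour: mlocking a THP bumps UNEVICTABLE_PGMLOCKED by 1. */
	count_vm_event(UNEVICTABLE_PGMLOCKED);
	printf("per-THP count:     %lu\n", vm_events[UNEVICTABLE_PGMLOCKED]);

	vm_events[UNEVICTABLE_PGMLOCKED] = 0;

	/* Fixed behaviour: bump it once per subpage, so 512 for one THP. */
	count_vm_events(UNEVICTABLE_PGMLOCKED, thp_nr_pages(1));
	printf("per-subpage count: %lu\n", vm_events[UNEVICTABLE_PGMLOCKED]);

	return 0;
}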