Date: Wed, 4 Nov 2020 12:46:03 -0500
From: Johannes Weiner
To: Alex Shi
Cc: Matthew Wilcox, akpm@linux-foundation.org, mgorman@techsingularity.net,
 tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru,
 daniel.m.jordan@oracle.com, lkp@intel.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, shakeelb@google.com,
 iamjoonsoo.kim@lge.com, richard.weiyang@gmail.com, kirill@shutemov.name,
 alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com,
 vdavydov.dev@gmail.com, shy828301@gmail.com, Vlastimil Babka, Minchan Kim
Subject: Re: [PATCH v20 08/20] mm: page_idle_get_page() does not need lru_lock
Message-ID: <20201104174603.GB744831@cmpxchg.org>
References: <1603968305-8026-1-git-send-email-alex.shi@linux.alibaba.com>
 <1603968305-8026-9-git-send-email-alex.shi@linux.alibaba.com>
 <20201102144110.GB724984@cmpxchg.org>
 <20201102144927.GN27442@casper.infradead.org>
 <20201102202003.GA740958@cmpxchg.org>

On Wed, Nov 04, 2020 at 07:27:21PM +0800, Alex Shi wrote:
> On 2020/11/3 4:20 AM, Johannes Weiner wrote:
> > On Mon, Nov 02, 2020 at 02:49:27PM +0000, Matthew Wilcox wrote:
> >> On Mon, Nov 02, 2020 at 09:41:10AM -0500, Johannes Weiner wrote:
> >>> On Thu, Oct 29, 2020 at 06:44:53PM +0800, Alex Shi wrote:
> >>>> From: Hugh Dickins
> >>>>
> >>>> It is necessary for page_idle_get_page() to recheck PageLRU() after
> >>>> get_page_unless_zero(), but holding lru_lock around that serves no
> >>>> useful purpose, and adds to lru_lock contention: delete it.
> >>>>
> >>>> See https://lore.kernel.org/lkml/20150504031722.GA2768@blaptop for the
> >>>> discussion that led to lru_lock there; but __page_set_anon_rmap() now
> >>>> uses WRITE_ONCE(),
> >>>
> >>> That doesn't seem to be the case in Linus's or Andrew's tree. Am I
> >>> missing a dependent patch series?
> >>>
> >>>> and I see no other risk in page_idle_clear_pte_refs() using
> >>>> rmap_walk() (beyond the risk of racing PageAnon->PageKsm, mostly but
> >>>> not entirely prevented by the page_count() check in ksm.c's
> >>>> write_protect_page(): that risk being shared with page_referenced()
> >>>> and not helped by lru_lock).
> >>>
> >>> Isn't it possible, as per Minchan's description, for page->mapping to
> >>> point to a struct anon_vma without PAGE_MAPPING_ANON set, and rmap
> >>> thinking it's looking at a struct address_space?
> >>
> >> I don't think it can point to an anon_vma without the ANON bit set.
> >> Minchan's concern in that email was that it might still be NULL.
> >
> > Hm, no, the thread is a lengthy discussion about whether the store
> > could be split such that page->mapping is actually pointing to
> > something invalid (anon_vma without the PageAnon bit).
> >
> > From his email:
> >
> > CPU 0                                          CPU 1
> >
> > do_anonymous_page
> >   __page_set_anon_rmap
> >   /* out of order happened so SetPageLRU is done ahead */
> >   SetPageLRU(page)
>
> This SetPageLRU is done in __pagevec_lru_add_fn(), which runs under
> lru_lock protection, so the original memory barrier or lock concern
> isn't correct. That means the SetPageLRU can't happen here, and then
> there is no worry about the 'CPU 1' side of the problem.

The SetPageLRU is done under lru_lock, but the store to page->mapping
is not, so what ensures ordering between them?
And what prevents the compiler from tearing the store to page->mapping?

The writer does this:

CPU 0

page_add_new_anon_rmap()
  page->mapping = anon_vma + PAGE_MAPPING_ANON
lru_cache_add_inactive_or_unevictable()
  spin_lock(lruvec->lock)
  SetPageLRU()
  spin_unlock(lruvec->lock)

The concern is what CPU 1 will observe at page->mapping after
observing PageLRU set on the page.

1. anon_vma + PAGE_MAPPING_ANON

   That's the in-order scenario and is fine.

2. NULL

   That's possible if the page->mapping store gets reordered to occur
   after SetPageLRU. That's fine too because we check for it.

3. anon_vma without the PAGE_MAPPING_ANON bit

   That would be a problem and could lead to all kinds of undesirable
   behavior including crashes and data corruption.

   Is it possible? AFAICT the compiler is allowed to tear the store to
   page->mapping and I don't see anything that would prevent it.

That said, I also don't see how the reader testing PageLRU under the
lru_lock would prevent that in the first place. AFAICT we need that
WRITE_ONCE() around the page->mapping assignment that's referenced in
the changelog of this patch but I cannot find in any tree or patch.