From: Johannes Weiner <hannes@cmpxchg.org>
To: Muchun Song <songmuchun@bytedance.com>
Cc: mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
    cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    duanxiongchun@bytedance.com, longman@redhat.com
Date: Tue, 24 May 2022 15:27:20 -0400
Subject: Re: [PATCH v4 03/11] mm: memcontrol: make lruvec lock safe when LRU pages are reparented
References: <20220524060551.80037-1-songmuchun@bytedance.com> <20220524060551.80037-4-songmuchun@bytedance.com>
In-Reply-To: <20220524060551.80037-4-songmuchun@bytedance.com>

On Tue, May 24, 2022 at 02:05:43PM +0800, Muchun Song wrote:
> The diagram below shows how to make the folio lruvec lock safe when LRU
> pages are reparented.
>
> folio_lruvec_lock(folio)
>     retry:
>         lruvec = folio_lruvec(folio);
>
>         // The folio is reparented at this time.
>         spin_lock(&lruvec->lru_lock);
>
>         if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio)))
>             // Acquired the wrong lruvec lock and need to retry.
>             // Because this folio is on the parent memcg lruvec list.
>             goto retry;
>
>         // If we reach here, it means that folio_memcg(folio) is stable.
>
> memcg_reparent_objcgs(memcg)
>     // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
>     spin_lock(&lruvec->lru_lock);
>     spin_lock(&lruvec_parent->lru_lock);
>
>     // Move all the pages from the lruvec list to the parent lruvec list.
>
>     spin_unlock(&lruvec_parent->lru_lock);
>     spin_unlock(&lruvec->lru_lock);
>
> After we acquire the lruvec lock, we need to check whether the folio is
> reparented. If so, we need to reacquire the new lruvec lock. On the
> routine of the LRU pages reparenting, we will also acquire the lruvec
> lock (will be implemented in the later patch). So folio_memcg() cannot
> be changed when we hold the lruvec lock.
>
> Since lruvec_memcg(lruvec) is always equal to folio_memcg(folio) after
> we hold the lruvec lock, lruvec_memcg_debug() check is pointless. So
> remove it.
>
> This is a preparation for reparenting the LRU pages.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

This looks good to me. Just one question:

> @@ -1230,10 +1213,23 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
>   */
>  struct lruvec *folio_lruvec_lock(struct folio *folio)
>  {
> -	struct lruvec *lruvec = folio_lruvec(folio);
> +	struct lruvec *lruvec;
>
> +	rcu_read_lock();
> +retry:
> +	lruvec = folio_lruvec(folio);
>  	spin_lock(&lruvec->lru_lock);
> -	lruvec_memcg_debug(lruvec, folio);
> +
> +	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
> +		spin_unlock(&lruvec->lru_lock);
> +		goto retry;
> +	}
> +
> +	/*
> +	 * Preemption is disabled in the internal of spin_lock, which can serve
> +	 * as RCU read-side critical sections.
> +	 */
> +	rcu_read_unlock();

The code looks right to me, but I don't understand the comment: why do
we care that the rcu read-side continues? With the lru_lock held,
reparenting is on hold and the lruvec cannot be rcu-freed anyway, no?
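
To make the lock-then-recheck pattern under discussion concrete outside
the kernel, here is a minimal standalone sketch of the same idiom. All
names are illustrative stand-ins, not kernel code: "struct home" plays
the role of the memcg/lruvec a folio belongs to, a pthread mutex
replaces lru_lock, and object_lock_home() mirrors folio_lruvec_lock()
above. Note the analogy is deliberately incomplete: the RCU protection
that keeps the lruvec alive between the lookup and the lock acquisition
(the subject of the question above) has no counterpart here, because the
sketch assumes homes are never freed.

    /* Minimal sketch of the lock-then-recheck idiom; hypothetical names. */
    #include <pthread.h>
    #include <stdatomic.h>

    struct home {
            pthread_mutex_t lock;           /* stands in for lruvec->lru_lock */
    };

    struct object {
            _Atomic(struct home *) home;    /* stands in for folio_memcg(folio) */
    };

    /*
     * Lock the object's current home; retry if the object was moved to a
     * different home between the lookup and the lock acquisition.
     */
    static struct home *object_lock_home(struct object *obj)
    {
            for (;;) {
                    struct home *h = atomic_load(&obj->home);

                    pthread_mutex_lock(&h->lock);
                    /*
                     * Recheck under the lock: a concurrent "reparenting"
                     * that moves obj to another home must also take
                     * h->lock, so if the home is still h at this point it
                     * cannot change until we unlock.
                     */
                    if (atomic_load(&obj->home) == h)
                            return h;

                    pthread_mutex_unlock(&h->lock);
            }
    }

A concurrent mover in this sketch would take the locks of both the old
and the new home before updating obj->home, mirroring the two
spin_lock() calls in memcg_reparent_objcgs() in the diagram above; that
is what makes the recheck under h->lock sufficient.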