Date: Wed, 25 May 2022 10:48:54 -0400
From: Johannes Weiner <hannes@cmpxchg.org>
To: Muchun Song <songmuchun@bytedance.com>
Cc: mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
	cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	duanxiongchun@bytedance.com, longman@redhat.com
Subject: Re: [PATCH v4 03/11] mm: memcontrol: make lruvec lock safe when LRU
 pages are reparented
References: <20220524060551.80037-1-songmuchun@bytedance.com>
 <20220524060551.80037-4-songmuchun@bytedance.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
On Wed, May 25, 2022 at 09:03:59PM +0800, Muchun Song wrote:
> On Wed, May 25, 2022 at 08:30:15AM -0400, Johannes Weiner wrote:
> > On Wed, May 25, 2022 at 05:53:30PM +0800, Muchun Song wrote:
> > > On Tue, May 24, 2022 at 03:27:20PM -0400, Johannes Weiner wrote:
> > > > On Tue, May 24, 2022 at 02:05:43PM +0800, Muchun Song wrote:
> > > > > @@ -1230,10 +1213,23 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
> > > > >   */
> > > > >  struct lruvec *folio_lruvec_lock(struct folio *folio)
> > > > >  {
> > > > > -	struct lruvec *lruvec = folio_lruvec(folio);
> > > > > +	struct lruvec *lruvec;
> > > > >
> > > > > +	rcu_read_lock();
> > > > > +retry:
> > > > > +	lruvec = folio_lruvec(folio);
> > > > >  	spin_lock(&lruvec->lru_lock);
> > > > > -	lruvec_memcg_debug(lruvec, folio);
> > > > > +
> > > > > +	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
> > > > > +		spin_unlock(&lruvec->lru_lock);
> > > > > +		goto retry;
> > > > > +	}
> > > > > +
> > > > > +	/*
> > > > > +	 * Preemption is disabled in the internal of spin_lock, which can serve
> > > > > +	 * as RCU read-side critical sections.
> > > > > +	 */
> > > > > +	rcu_read_unlock();
> > > >
> > > > The code looks right to me, but I don't understand the comment: why do
> > > > we care that the rcu read-side continues? With the lru_lock held,
> > > > reparenting is on hold and the lruvec cannot be rcu-freed anyway, no?
> > >
> > > Right. We could hold rcu read lock until end of reparenting. So you mean
> > > we do rcu_read_unlock in folio_lruvec_lock()?
> >
> > The comment seems to suggest that disabling preemption is what keeps
> > the lruvec alive. But it's the lru_lock that keeps it alive. The
> > cgroup destruction path tries to take the lru_lock long before it even
> > gets to synchronize_rcu(). Once you hold the lru_lock, having an
> > implied read-side critical section as well doesn't seem to matter.
>
> Well, I thought that spinlocks have implicit read-side critical sections
> because they disable preemption (I learned from the comments above
> synchronize_rcu() that say regions where interrupts, preemption, or
> softirqs are disabled also serve as RCU read-side critical sections).
> So I have a question: is it still true in a PREEMPT_RT kernel (I am not
> familiar with this)?

Yes, but you're missing my point.

> > Should the comment be deleted?
>
> I think we could remove the comments. If the above question is false, it
> seems like we should continue holding the rcu read lock.

It's true.

But assume it's false for a second. Why would you need to continue
holding it? What would it protect? The lruvec would be pinned by the
spinlock even if it DIDN'T imply an RCU lock, right?

So I don't understand the point of the comment. If the implied RCU
lock is protecting something not covered by the bare spinlock itself,
it should be added to the comment. Otherwise, the comment should go.
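For anyone following along, the lock-then-revalidate pattern being debated
can be sketched in userspace C. This is an illustrative analogue, not kernel
code: the names obj, owner, obj_owner_lock(), and reparent() are all invented
here, a pthread mutex stands in for lruvec->lru_lock, and since nothing in
this sketch is RCU-freed there is no analogue of the rcu_read_lock() that the
real folio_lruvec_lock() needs to keep a stale lruvec from being freed
between the lookup and the spin_lock:

```c
#include <pthread.h>
#include <stddef.h>

struct owner {
	pthread_mutex_t lock;
};

struct obj {
	struct owner *owner;	/* may be switched by reparent() */
};

/*
 * Lock the owner that currently covers obj. The snapshot taken before
 * acquiring the lock may be stale, so recheck afterwards -- the same
 * shape as the folio_memcg() check after spin_lock(&lruvec->lru_lock).
 * (A plain load stands in for the kernel's careful dereference here.)
 */
static struct owner *obj_owner_lock(struct obj *o)
{
	for (;;) {
		struct owner *ow = o->owner;	/* possibly stale snapshot */

		pthread_mutex_lock(&ow->lock);
		if (ow == o->owner)
			return ow;	/* still the owner: the lock now pins it */
		pthread_mutex_unlock(&ow->lock);	/* lost the race, retry */
	}
}

/*
 * The reparenting side must take the current owner's lock before it can
 * move the object. This is why a reader holding that lock has the
 * association pinned with no help from an RCU read-side section.
 */
static void reparent(struct obj *o, struct owner *to)
{
	struct owner *from = obj_owner_lock(o);

	o->owner = to;
	pthread_mutex_unlock(&from->lock);
}
```

Once obj_owner_lock() returns, reparent() is excluded until the caller
unlocks, which is the shape of the argument above: the spinlock alone, not
the implied read-side critical section, is what keeps the owner stable.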