Date: Wed, 25 May 2022 17:38:41 +0800
From: Muchun Song <songmuchun@bytedance.com>
To: Johannes Weiner
Cc: mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
    cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    duanxiongchun@bytedance.com, longman@redhat.com
Subject: Re: [PATCH v4 02/11] mm: memcontrol: introduce compact_folio_lruvec_lock_irqsave
References: <20220524060551.80037-1-songmuchun@bytedance.com> <20220524060551.80037-3-songmuchun@bytedance.com>

On Tue, May 24, 2022 at 03:22:55PM -0400, Johannes Weiner wrote:
> On Tue, May 24, 2022 at 02:05:42PM +0800, Muchun Song wrote:
> > If we reuse the objcg APIs to charge LRU pages, folio_memcg() can
> > change when the LRU pages are reparented. In this case, we need to
> > acquire the new lruvec lock.
> >
> >         lruvec = folio_lruvec(folio);
> >
> >         // The page is reparented.
> >
> >         compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
> >
> >         // Acquired the wrong lruvec lock and need to retry.
> >
> > But compact_lock_irqsave() only takes the lruvec lock as a parameter,
> > so we cannot detect this change. If it instead took the page as a
> > parameter to acquire the lruvec lock, then when the page's memcg
> > changes we could use folio_memcg() to detect whether we need to
> > reacquire the new lruvec lock. So compact_lock_irqsave() is not
> > suitable for us. Similar to folio_lruvec_lock_irqsave(), introduce
> > compact_folio_lruvec_lock_irqsave() to acquire the lruvec lock in
> > the compaction routine.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>
> This looks generally good to me.
>
> It did raise the question how dereferencing lruvec is safe before the
> lock is acquired when reparenting can race. The answer is in the next
> patch when you add the rcu_read_lock(). Since the patches aren't big,
> it would probably be better to merge them.
>

Will do in v5.

> > @@ -509,6 +509,29 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
> >  	return true;
> >  }
> >
> > +static struct lruvec *
> > +compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
> > +				  struct compact_control *cc)
> > +{
> > +	struct lruvec *lruvec;
> > +
> > +	lruvec = folio_lruvec(folio);
> > +
> > +	/* Track if the lock is contended in async mode */
> > +	if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
> > +		if (spin_trylock_irqsave(&lruvec->lru_lock, *flags))
> > +			goto out;
> > +
> > +		cc->contended = true;
> > +	}
> > +
> > +	spin_lock_irqsave(&lruvec->lru_lock, *flags);
>
> Can you implement this on top of the existing one?
>
> 	lruvec = folio_lruvec(folio);
> 	compact_lock_irqsave(&lruvec->lru_lock, flags);
> 	lruvec_memcg_debug(lruvec, folio);
> 	return lruvec;
>

I'll give it a try. Thanks for your suggestions.
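
For reference, below is a minimal sketch of what building the helper on top
of the existing compact_lock_irqsave(), combined with the reparent recheck
described in the commit message (the rcu_read_lock() that the next patch in
the series adds), might look like. It is illustrative only, not the final
code: it assumes the compact_lock_irqsave() signature quoted in the hunk
above, and the recheck of folio_memcg() against lruvec_memcg() follows the
pattern argued for in the changelog.

static struct lruvec *
compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
				  struct compact_control *cc)
{
	struct lruvec *lruvec;

	rcu_read_lock();
retry:
	lruvec = folio_lruvec(folio);

	/* Reuse the existing helper so async contention tracking stays in one place. */
	compact_lock_irqsave(&lruvec->lru_lock, flags, cc);

	/* The folio may have been reparented while waiting for the lock. */
	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
		spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
		goto retry;
	}

	/* With lru_lock held and the memcg check passed, the lruvec is stable. */
	rcu_read_unlock();

	lruvec_memcg_debug(lruvec, folio);
	return lruvec;
}

The recheck after the lock is taken closes the window in which reparenting
can change folio_memcg() between the folio_lruvec() lookup and the lock
acquisition, which is exactly the race the commit message describes.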