Date: Tue, 24 May 2022 15:22:55 -0400
From: Johannes Weiner
To: Muchun Song
Cc: mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com
Subject: Re: [PATCH v4 02/11] mm: memcontrol: introduce compact_folio_lruvec_lock_irqsave
In-Reply-To: <20220524060551.80037-3-songmuchun@bytedance.com>

On Tue, May 24, 2022 at 02:05:42PM +0800, Muchun Song wrote:
> If we reuse the objcg APIs to charge LRU pages, folio_memcg() can
> change when the LRU pages are reparented. In this case, we need to
> acquire the new lruvec lock.
>
>     lruvec = folio_lruvec(folio);
>
>     // The page is reparented.
>
>     compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
>
>     // Acquired the wrong lruvec lock and need to retry.
>
> But compact_lock_irqsave() only takes the lruvec lock as a parameter,
> so it cannot detect this change. If it took the folio as a parameter
> to acquire the lruvec lock, then when the folio's memcg changes we
> could use folio_memcg() to detect whether we need to reacquire the
> new lruvec lock. So compact_lock_irqsave() is not suitable for us.
> Similar to folio_lruvec_lock_irqsave(), introduce
> compact_folio_lruvec_lock_irqsave() to acquire the lruvec lock in
> the compaction routine.
>
> Signed-off-by: Muchun Song

This looks generally good to me.

It did raise the question of how dereferencing lruvec is safe before
the lock is acquired when reparenting can race. The answer is in the
next patch, where you add the rcu_read_lock(). Since the patches
aren't big, it would probably be better to merge them.

> @@ -509,6 +509,29 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
>  	return true;
>  }
>  
> +static struct lruvec *
> +compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
> +				  struct compact_control *cc)
> +{
> +	struct lruvec *lruvec;
> +
> +	lruvec = folio_lruvec(folio);
> +
> +	/* Track if the lock is contended in async mode */
> +	if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
> +		if (spin_trylock_irqsave(&lruvec->lru_lock, *flags))
> +			goto out;
> +
> +		cc->contended = true;
> +	}
> +
> +	spin_lock_irqsave(&lruvec->lru_lock, *flags);

Can you implement this on top of the existing one?

	lruvec = folio_lruvec(folio);
	compact_lock_irqsave(&lruvec->lru_lock, flags, cc);
	lruvec_memcg_debug(lruvec, folio);
	return lruvec;
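
Something like the following is roughly what I have in mind, with the
retry from your next patch folded in. This is only a sketch: the
lruvec_memcg() != folio_memcg() re-check is my assumption of how the
reparenting race would be caught, not code taken from the series.

	static struct lruvec *
	compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
					  struct compact_control *cc)
	{
		struct lruvec *lruvec;

		rcu_read_lock();
	retry:
		lruvec = folio_lruvec(folio);

		/* Trylocks or spins depending on cc->mode; tracks contention. */
		compact_lock_irqsave(&lruvec->lru_lock, flags, cc);

		/* The folio may have been reparented while we waited for the lock. */
		if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
			spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
			goto retry;
		}

		/*
		 * Assumption: holding lru_lock blocks further reparenting,
		 * so it is safe to leave the RCU read-side section here.
		 */
		rcu_read_unlock();
		lruvec_memcg_debug(lruvec, folio);

		return lruvec;
	}

That would keep the async-contention accounting in one place instead
of duplicating it in the new helper.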