Date: Wed, 25 May 2022 19:43:15 +0800
From: Muchun Song <songmuchun@bytedance.com>
To: Waiman Long
Cc: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev,
 shakeelb@google.com, cgroups@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com
Subject: Re: [PATCH v4 04/11] mm: vmscan: rework move_pages_to_lru()
References: <20220524060551.80037-1-songmuchun@bytedance.com>
 <20220524060551.80037-5-songmuchun@bytedance.com>
 <78de6197-7de6-9fe7-9567-1321c06c6e9b@redhat.com>
In-Reply-To: <78de6197-7de6-9fe7-9567-1321c06c6e9b@redhat.com>

On Tue, May 24, 2022 at 03:52:22PM -0400, Waiman Long wrote:
> On 5/24/22 02:05, Muchun Song wrote:
> > In a later patch, we will reparent the LRU pages. Pages that have
> > already been moved to the appropriate LRU list can be reparented
> > while move_pages_to_lru() is running, so it is wrong for the caller
> > to hold a lruvec lock across the whole call. Use the more general
> > interface folio_lruvec_relock_irq() instead to acquire the correct
> > lruvec lock for each folio.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  mm/vmscan.c | 49 +++++++++++++++++++++++++------------------------
> >  1 file changed, 25 insertions(+), 24 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 1678802e03e7..761d5e0dd78d 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2230,23 +2230,28 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
> >   * move_pages_to_lru() moves pages from private @list to appropriate LRU list.
> >   * On return, @list is reused as a list of pages to be freed by the caller.
> >   *
> > - * Returns the number of pages moved to the given lruvec.
> > + * Returns the number of pages moved to the appropriate LRU list.
> > + *
> > + * Note: The caller must not hold any lruvec lock.
> >   */
> > -static unsigned int move_pages_to_lru(struct lruvec *lruvec,
> > -				      struct list_head *list)
> > +static unsigned int move_pages_to_lru(struct list_head *list)
> >  {
> > -	int nr_pages, nr_moved = 0;
> > +	int nr_moved = 0;
> > +	struct lruvec *lruvec = NULL;
> >  	LIST_HEAD(pages_to_free);
> > -	struct page *page;
> >
> >  	while (!list_empty(list)) {
> > -		page = lru_to_page(list);
> > +		int nr_pages;
> > +		struct folio *folio = lru_to_folio(list);
> > +		struct page *page = &folio->page;
> > +
> > +		lruvec = folio_lruvec_relock_irq(folio, lruvec);
> >  		VM_BUG_ON_PAGE(PageLRU(page), page);
> >  		list_del(&page->lru);
> >  		if (unlikely(!page_evictable(page))) {
> > -			spin_unlock_irq(&lruvec->lru_lock);
> > +			unlock_page_lruvec_irq(lruvec);
> >  			putback_lru_page(page);
> > -			spin_lock_irq(&lruvec->lru_lock);
> > +			lruvec = NULL;
> >  			continue;
> >  		}
> >
> > @@ -2267,20 +2272,16 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
> >  			__clear_page_lru_flags(page);
> >
> >  			if (unlikely(PageCompound(page))) {
> > -				spin_unlock_irq(&lruvec->lru_lock);
> > +				unlock_page_lruvec_irq(lruvec);
> >  				destroy_compound_page(page);
> > -				spin_lock_irq(&lruvec->lru_lock);
> > +				lruvec = NULL;
> >  			} else
> >  				list_add(&page->lru, &pages_to_free);
> >
> >  			continue;
> >  		}
> >
> > -		/*
> > -		 * All pages were isolated from the same lruvec (and isolation
> > -		 * inhibits memcg migration).
> > -		 */
> > -		VM_BUG_ON_PAGE(!folio_matches_lruvec(page_folio(page), lruvec), page);
> > +		VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page);
> >  		add_page_to_lru_list(page, lruvec);
> >  		nr_pages = thp_nr_pages(page);
> >  		nr_moved += nr_pages;
> > @@ -2288,6 +2289,8 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
> >  			workingset_age_nonresident(lruvec, nr_pages);
> >  	}
> >
> > +	if (lruvec)
> > +		unlock_page_lruvec_irq(lruvec);
> >  	/*
> >  	 * To save our caller's stack, now use input list for pages to free.
> >  	 */
> > @@ -2359,16 +2362,16 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> >
> >  	nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, &stat, false);
> >
> > -	spin_lock_irq(&lruvec->lru_lock);
> > -	move_pages_to_lru(lruvec, &page_list);
> > +	move_pages_to_lru(&page_list);
> >
> > +	local_irq_disable();
> >  	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
> >  	item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
> >  	if (!cgroup_reclaim(sc))
> >  		__count_vm_events(item, nr_reclaimed);
> >  	__count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
> >  	__count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
> > -	spin_unlock_irq(&lruvec->lru_lock);
> > +	local_irq_enable();
> >
> >  	lru_note_cost(lruvec, file, stat.nr_pageout);
> >  	mem_cgroup_uncharge_list(&page_list);
> > @@ -2498,18 +2501,16 @@ static void shrink_active_list(unsigned long nr_to_scan,
> >  	/*
> >  	 * Move pages back to the lru list.
> >  	 */
> > -	spin_lock_irq(&lruvec->lru_lock);
> > -
> > -	nr_activate = move_pages_to_lru(lruvec, &l_active);
> > -	nr_deactivate = move_pages_to_lru(lruvec, &l_inactive);
> > +	nr_activate = move_pages_to_lru(&l_active);
> > +	nr_deactivate = move_pages_to_lru(&l_inactive);
> >
> >  	/* Keep all free pages in l_active list */
> >  	list_splice(&l_inactive, &l_active);
> >
> > +	local_irq_disable();
> >  	__count_vm_events(PGDEACTIVATE, nr_deactivate);
> >  	__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
> > -
> >  	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
> > -	spin_unlock_irq(&lruvec->lru_lock);
> > +	local_irq_enable();
> >
> >  	mem_cgroup_uncharge_list(&l_active);
> >  	free_unref_page_list(&l_active);
>
> Note that the RT engineers will likely change the
> local_irq_disable()/local_irq_enable() to
> local_lock_irq()/local_unlock_irq().
>
Thanks. I'll replace them with local_lock/unlock_irq().
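
For context, folio_lruvec_relock_irq() is what lets the reworked loop
take the lruvec lock lazily: the lock stays held across consecutive
folios that belong to the same lruvec and is only cycled when the loop
crosses a lruvec boundary. A sketch of its expected shape, modeled on
the helper in include/linux/memcontrol.h (an illustration, not
necessarily the exact upstream source):

	static inline struct lruvec *folio_lruvec_relock_irq(struct folio *folio,
			struct lruvec *locked_lruvec)
	{
		if (locked_lruvec) {
			/* Same lruvec as the previous folio: keep the lock. */
			if (folio_matches_lruvec(folio, locked_lruvec))
				return locked_lruvec;

			/* Crossed a lruvec boundary: drop the old lock... */
			unlock_page_lruvec_irq(locked_lruvec);
		}

		/* ...and take the lock of this folio's lruvec, IRQs off. */
		return folio_lruvec_lock_irq(folio);
	}

This is also why move_pages_to_lru() resets lruvec to NULL after every
unlock: the next iteration then relocks from scratch.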
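
A minimal sketch of the RT-friendly pattern Waiman refers to, assuming
a stand-alone per-CPU local_lock_t; the lock name vmstat_lock and its
placement here are illustrative assumptions, not part of the patch:

	#include <linux/local_lock.h>

	/* Hypothetical per-CPU lock guarding the stat updates. */
	static DEFINE_PER_CPU(local_lock_t, vmstat_lock) =
		INIT_LOCAL_LOCK(vmstat_lock);

	/*
	 * On !PREEMPT_RT this compiles down to local_irq_disable(),
	 * i.e. the same behavior as the patch above; on PREEMPT_RT it
	 * becomes a per-CPU spinlock, so the section stays preemptible.
	 */
	local_lock_irq(&vmstat_lock);
	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
	__count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
	local_unlock_irq(&vmstat_lock);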