Subject: Re: [PATCH v14 15/20] mm/swap: serialize memcg changes during pagevec_lru_move_fn
From: Konstantin Khlebnikov
Date: Fri, 3 Jul 2020 12:13:14 +0300
To: Alex Shi
Cc: Andrew Morton, Mel Gorman, Tejun Heo, Hugh Dickins, Konstantin Khlebnikov, daniel.m.jordan@oracle.com, yang.shi@linux.alibaba.com, Matthew Wilcox, Johannes Weiner, lkp@intel.com, linux-mm@kvack.org, Linux Kernel Mailing List, Cgroups, shakeelb@google.com, Joonsoo Kim, richard.weiyang@gmail.com
In-Reply-To: <1593752873-4493-16-git-send-email-alex.shi@linux.alibaba.com>

On Fri, Jul 3, 2020 at 8:09 AM Alex Shi wrote:
>
> Hugh Dickins found a memcg change bug in the original version:
> if we want to change pgdat->lru_lock to memcg's lruvec lock, we have
> to serialize mem_cgroup_move_account against pagevec_lru_move_fn. The
> possible bad scenario would look like this:
>
>     cpu 0                                cpu 1
>     lruvec = mem_cgroup_page_lruvec()
>                                          if (!isolate_lru_page())
>                                                  mem_cgroup_move_account
>
>     spin_lock_irqsave(&lruvec->lru_lock)  <== wrong lock
>
> So we need ClearPageLRU to block isolate_lru_page(), and then
> serialize the memcg change here.
>
> Reported-by: Hugh Dickins
> Signed-off-by: Alex Shi
> Cc: Andrew Morton
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
>  mm/swap.c | 31 +++++++++++++++++++------------
>  1 file changed, 19 insertions(+), 12 deletions(-)
>
> diff --git a/mm/swap.c b/mm/swap.c
> index b24d5f69b93a..55eb2c2eed03 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -203,7 +203,7 @@ int get_kernel_page(unsigned long start, int write, struct page **pages)
>  EXPORT_SYMBOL_GPL(get_kernel_page);
>
>  static void pagevec_lru_move_fn(struct pagevec *pvec,
> -	void (*move_fn)(struct page *page, struct lruvec *lruvec))
> +	void (*move_fn)(struct page *page, struct lruvec *lruvec), bool add)
>  {
>  	int i;
>  	struct pglist_data *pgdat = NULL;
> @@ -221,8 +221,15 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
>  			spin_lock_irqsave(&pgdat->lru_lock, flags);
>  		}
>
> +		/* new page add to lru or page moving between lru */
> +		if (!add && !TestClearPageLRU(page))
> +			continue;
> +
>  		lruvec = mem_cgroup_page_lruvec(page, pgdat);
>  		(*move_fn)(page, lruvec);
> +
> +		if (!add)
> +			SetPageLRU(page);
>  	}
>  	if (pgdat)
>  		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
> @@ -259,7 +266,7 @@ void rotate_reclaimable_page(struct page *page)
>  		local_lock_irqsave(&lru_rotate.lock, flags);
>  		pvec = this_cpu_ptr(&lru_rotate.pvec);
>  		if (!pagevec_add(pvec, page) || PageCompound(page))
> -			pagevec_lru_move_fn(pvec, pagevec_move_tail_fn);
> +			pagevec_lru_move_fn(pvec, pagevec_move_tail_fn, false);
>  		local_unlock_irqrestore(&lru_rotate.lock, flags);
>  	}
>  }
> @@ -328,7 +335,7 @@ static void activate_page_drain(int cpu)
>  	struct pagevec *pvec = &per_cpu(lru_pvecs.activate_page, cpu);
>
>  	if (pagevec_count(pvec))
> -		pagevec_lru_move_fn(pvec, __activate_page);
> +		pagevec_lru_move_fn(pvec, __activate_page, false);
>  }
>
>  static bool need_activate_page_drain(int cpu)
> @@ -346,7 +353,7 @@ void activate_page(struct page *page)
>  		pvec = this_cpu_ptr(&lru_pvecs.activate_page);
>  		get_page(page);
>  		if (!pagevec_add(pvec, page) || PageCompound(page))
> -			pagevec_lru_move_fn(pvec, __activate_page);
> +			pagevec_lru_move_fn(pvec, __activate_page, false);
>  		local_unlock(&lru_pvecs.lock);
>  	}
>  }
> @@ -621,21 +628,21 @@ void lru_add_drain_cpu(int cpu)
>
>  		/* No harm done if a racing interrupt already did this */
>  		local_lock_irqsave(&lru_rotate.lock, flags);
> -		pagevec_lru_move_fn(pvec, pagevec_move_tail_fn);
> +		pagevec_lru_move_fn(pvec, pagevec_move_tail_fn, false);
>  		local_unlock_irqrestore(&lru_rotate.lock, flags);
>  	}
>
>  	pvec = &per_cpu(lru_pvecs.lru_deactivate_file, cpu);
>  	if (pagevec_count(pvec))
> -		pagevec_lru_move_fn(pvec, lru_deactivate_file_fn);
> +		pagevec_lru_move_fn(pvec, lru_deactivate_file_fn, false);
>
>  	pvec = &per_cpu(lru_pvecs.lru_deactivate, cpu);
>  	if (pagevec_count(pvec))
> -		pagevec_lru_move_fn(pvec, lru_deactivate_fn);
> +		pagevec_lru_move_fn(pvec, lru_deactivate_fn, false);
>
>  	pvec = &per_cpu(lru_pvecs.lru_lazyfree, cpu);
>  	if (pagevec_count(pvec))
> -		pagevec_lru_move_fn(pvec, lru_lazyfree_fn);
> +		pagevec_lru_move_fn(pvec, lru_lazyfree_fn, false);
>
>  	activate_page_drain(cpu);
>  }
> @@ -664,7 +671,7 @@ void deactivate_file_page(struct page *page)
>  		pvec = this_cpu_ptr(&lru_pvecs.lru_deactivate_file);
>
>  		if (!pagevec_add(pvec, page) || PageCompound(page))
> -			pagevec_lru_move_fn(pvec, lru_deactivate_file_fn);
> +			pagevec_lru_move_fn(pvec, lru_deactivate_file_fn, false);
>  		local_unlock(&lru_pvecs.lock);
>  	}
>  }
> @@ -686,7 +693,7 @@ void deactivate_page(struct page *page)
>  		pvec = this_cpu_ptr(&lru_pvecs.lru_deactivate);
>  		get_page(page);
>  		if (!pagevec_add(pvec, page) || PageCompound(page))
> -			pagevec_lru_move_fn(pvec, lru_deactivate_fn);
> +			pagevec_lru_move_fn(pvec, lru_deactivate_fn, false);
>  		local_unlock(&lru_pvecs.lock);
>  	}
>  }
> @@ -708,7 +715,7 @@ void mark_page_lazyfree(struct page *page)
>  		pvec = this_cpu_ptr(&lru_pvecs.lru_lazyfree);
>  		get_page(page);
>  		if (!pagevec_add(pvec, page) || PageCompound(page))
> -			pagevec_lru_move_fn(pvec, lru_lazyfree_fn);
> +			pagevec_lru_move_fn(pvec, lru_lazyfree_fn, false);
>  		local_unlock(&lru_pvecs.lock);
>  	}
>  }
> @@ -976,7 +983,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
>   */
>  void __pagevec_lru_add(struct pagevec *pvec)
>  {
> -	pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn);
> +	pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn, true);
>  }

It seems better to open-code a version for lru_add than to add a bool
argument which is true for just one user.

Also, with this new lru protection logic, lru_add could be optimized:
it could prepare a list of pages and, under lru_lock, do only a list
splice and bump the counter. Since PageLRU isn't set yet, nobody else
could touch these pages on the lru. After that, lru_add could iterate
the pages from first to last without lru_lock, setting PageLRU and
dropping the reference. So lru_add would do O(1) work under lru_lock
regardless of the count of pages it added.

Actually, the per-cpu vector for adding could be replaced with per-cpu
lists and/or a per-lruvec atomic slist. Then incoming pages would
already be in a list structure rather than a page vector. This would
allow accumulating more pages and offloading the adding to kswapd or
direct reclaim.

>
> /**
> --
> 1.8.3.1