Date: Mon, 13 Dec 2021 15:14:10 -0800
From: Minchan Kim
To: Andrew Morton
Cc: Michal Hocko, David Hildenbrand, linux-mm, LKML,
	Suren Baghdasaryan, John Dias
Subject: Re: [PATCH] mm: don't call lru draining in the nested lru_cache_disable
References: <20211206221006.946661-1-minchan@kernel.org>
 <20211206150421.fc06972fac949a5f6bc8b725@linux-foundation.org>

On Mon, Dec 06, 2021 at 03:46:26PM -0800, Minchan Kim wrote:

< snip >

Hi,

Any chance of getting this reviewed and merged into the -next tree for testing?

> From 0874e108b4708355d703927716a49670b989e960 Mon Sep 17 00:00:00 2001
> From: Minchan Kim
> Date: Mon, 6 Dec 2021 11:59:36 -0800
> Subject: [PATCH v2] mm: don't call lru draining in the nested lru_cache_disable
>
> lru_cache_disable() involves IPIs to drain the pagevec of each core,
> which sometimes takes quite a long time to complete depending on how
> busy the CPUs are, making allocation too slow: up to several hundred
> milliseconds. Furthermore, the repeated draining in alloc_contig_range()
> makes things worse, since callers of alloc_contig_range() usually retry
> multiple times in a loop.
>
> This patch makes lru_cache_disable() aware of the fact that the pagevec
> was already disabled. With that, users of alloc_contig_range() can
> disable the lru cache in advance in their own context across the
> repeated trials, so they avoid the multiple costly drains during CMA
> allocation.
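
For clarity, the intended caller pattern after this change is roughly the
sketch below. It is illustrative only: lru_cache_disable()/lru_cache_enable()
and alloc_contig_range() come from the patch, while MAX_RETRIES and the loop
shape are hypothetical:

	/* Disable once up front so each failed attempt skips the IPI drain. */
	lru_cache_disable();
	for (i = 0; i < MAX_RETRIES; i++) {
		ret = alloc_contig_range(start, start + nr_pages,
					 MIGRATE_CMA, GFP_KERNEL);
		if (ret != -EBUSY)
			break;
	}
	lru_cache_enable();	/* drop our reference; nested users keep theirs */

Since lru_cache_disable() is now reference counted, the disable that
cma_alloc() itself does in the hunks below only bumps the count in that
case instead of redoing the drain.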
>
> Signed-off-by: Minchan Kim
> ---
> * from v1 - https://lore.kernel.org/lkml/20211206221006.946661-1-minchan@kernel.org/
>   * fix lru_cache_disable race - akpm
>
>  include/linux/swap.h | 14 ++------------
>  mm/cma.c             |  5 +++++
>  mm/swap.c            | 26 ++++++++++++++++++++++++--
>  3 files changed, 31 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index ba52f3a3478e..fe18e86a4f13 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -348,19 +348,9 @@ extern void lru_note_cost_page(struct page *);
>  extern void lru_cache_add(struct page *);
>  extern void mark_page_accessed(struct page *);
>
> -extern atomic_t lru_disable_count;
> -
> -static inline bool lru_cache_disabled(void)
> -{
> -	return atomic_read(&lru_disable_count);
> -}
> -
> -static inline void lru_cache_enable(void)
> -{
> -	atomic_dec(&lru_disable_count);
> -}
> -
> +extern bool lru_cache_disabled(void);
>  extern void lru_cache_disable(void);
> +extern void lru_cache_enable(void);
>  extern void lru_add_drain(void);
>  extern void lru_add_drain_cpu(int cpu);
>  extern void lru_add_drain_cpu_zone(struct zone *zone);
> diff --git a/mm/cma.c b/mm/cma.c
> index 995e15480937..60be555c5b95 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -30,6 +30,7 @@
>  #include <linux/cma.h>
>  #include <linux/highmem.h>
>  #include <linux/io.h>
> +#include <linux/swap.h>
>  #include <linux/kmemleak.h>
>  #include <trace/events/cma.h>
>
> @@ -453,6 +454,8 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
>  	if (bitmap_count > bitmap_maxno)
>  		goto out;
>
> +	lru_cache_disable();
> +
>  	for (;;) {
>  		spin_lock_irq(&cma->lock);
>  		bitmap_no = bitmap_find_next_zero_area_off(cma->bitmap,
> @@ -492,6 +495,8 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
>  		start = bitmap_no + mask + 1;
>  	}
>
> +	lru_cache_enable();
> +
>  	trace_cma_alloc_finish(cma->name, pfn, page, count, align);
>
>  	/*
> diff --git a/mm/swap.c b/mm/swap.c
> index af3cad4e5378..edcfcd6cf38e 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -847,7 +847,17 @@ void lru_add_drain_all(void)
>  }
>  #endif /* CONFIG_SMP */
>
> -atomic_t lru_disable_count = ATOMIC_INIT(0);
> +static atomic_t lru_disable_count = ATOMIC_INIT(0);
> +
> +bool lru_cache_disabled(void)
> +{
> +	return atomic_read(&lru_disable_count) != 0;
> +}
> +
> +void lru_cache_enable(void)
> +{
> +	atomic_dec(&lru_disable_count);
> +}
>
>  /*
>   * lru_cache_disable() needs to be called before we start compiling
> @@ -859,7 +869,17 @@ atomic_t lru_disable_count = ATOMIC_INIT(0);
>   */
>  void lru_cache_disable(void)
>  {
> -	atomic_inc(&lru_disable_count);
> +	static DEFINE_MUTEX(lock);
> +
> +	mutex_lock(&lock);
> +	/*
> +	 * If someone has already disabled the lru cache, just increase
> +	 * lru_disable_count and return.
> +	 */
> +	if (atomic_inc_not_zero(&lru_disable_count)) {
> +		mutex_unlock(&lock);
> +		return;
> +	}
>  #ifdef CONFIG_SMP
>  	/*
>  	 * lru_add_drain_all in the force mode will schedule draining on
> @@ -873,6 +893,8 @@ void lru_cache_disable(void)
>  #else
>  	lru_add_and_bh_lrus_drain();
>  #endif
> +	atomic_inc(&lru_disable_count);
> +	mutex_unlock(&lock);
>  }
>
>  /**
> --
> 2.34.1.400.ga245620fadb-goog
>
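
As a side note for reviewers, the disable path above boils down to a
"first disabler pays for the drain" refcount pattern. Below is a compressed
userspace illustration; every name in it is a hypothetical stand-in
(disable_count mirrors lru_disable_count, expensive_drain() mirrors the
IPI-based drain), and since every increment here happens under the mutex,
a plain load stands in for the kernel's atomic_inc_not_zero():

	#include <stdatomic.h>
	#include <stdio.h>
	#include <pthread.h>

	static atomic_int disable_count;
	static pthread_mutex_t disable_lock = PTHREAD_MUTEX_INITIALIZER;

	static void expensive_drain(void)
	{
		puts("draining (runs once per disable window)");
	}

	static void cache_disable(void)
	{
		pthread_mutex_lock(&disable_lock);
		if (atomic_load(&disable_count) > 0) {
			/* Already disabled: just take another reference. */
			atomic_fetch_add(&disable_count, 1);
		} else {
			expensive_drain();
			/*
			 * Publish only after draining, so concurrent readers
			 * never observe "disabled" before the drain is done.
			 */
			atomic_fetch_add(&disable_count, 1);
		}
		pthread_mutex_unlock(&disable_lock);
	}

	static void cache_enable(void)
	{
		atomic_fetch_sub(&disable_count, 1);
	}

	int main(void)
	{
		cache_disable();	/* outer user: pays for the drain */
		cache_disable();	/* nested user: only bumps the count */
		cache_enable();
		cache_enable();
		return 0;
	}

The mutex serializes concurrent first-disablers so the drain runs exactly
once per zero-to-one transition of the count, which is the race v2 fixes.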