From mboxrd@z Thu Jan 1 00:00:00 1970
From: Minchan Kim
To: Andrew Morton
Cc: Michal Hocko, David Hildenbrand, linux-mm, LKML, Suren Baghdasaryan, John Dias, Minchan Kim
Subject: [PATCH] mm: don't call lru draining in the nested lru_cache_disable
Date: Mon, 6 Dec 2021 14:10:06 -0800
Message-Id: <20211206221006.946661-1-minchan@kernel.org>

lru_cache_disable involves IPIs to drain the pagevec of each CPU, which
can take quite a long time to complete depending on how busy the CPUs
are, slowing allocation down by up to several hundred milliseconds.
Furthermore, the repeated draining in alloc_contig_range makes things
worse, since callers of alloc_contig_range usually retry multiple times
in a loop.

This patch makes lru_cache_disable aware that the pagevecs have already
been disabled, so nested calls skip the drain. With that, users of
alloc_contig_range can disable the LRU cache in advance in their own
context before the repeated trials, avoiding the multiple costly drains
during CMA allocation.

Signed-off-by: Minchan Kim
---
 include/linux/swap.h | 14 ++------------
 mm/cma.c             |  5 +++++
 mm/swap.c            | 20 ++++++++++++++++++--
 3 files changed, 25 insertions(+), 14 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index ba52f3a3478e..fe18e86a4f13 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -348,19 +348,9 @@ extern void lru_note_cost_page(struct page *);
 extern void lru_cache_add(struct page *);
 extern void mark_page_accessed(struct page *);
 
-extern atomic_t lru_disable_count;
-
-static inline bool lru_cache_disabled(void)
-{
-	return atomic_read(&lru_disable_count);
-}
-
-static inline void lru_cache_enable(void)
-{
-	atomic_dec(&lru_disable_count);
-}
-
+extern bool lru_cache_disabled(void);
 extern void lru_cache_disable(void);
+extern void lru_cache_enable(void);
 extern void lru_add_drain(void);
 extern void lru_add_drain_cpu(int cpu);
 extern void lru_add_drain_cpu_zone(struct zone *zone);
diff --git a/mm/cma.c b/mm/cma.c
index 995e15480937..60be555c5b95 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -30,6 +30,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -453,6 +454,8 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 	if (bitmap_count > bitmap_maxno)
 		goto out;
 
+	lru_cache_disable();
+
 	for (;;) {
 		spin_lock_irq(&cma->lock);
 		bitmap_no = bitmap_find_next_zero_area_off(cma->bitmap,
@@ -492,6 +495,8 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 		start = bitmap_no + mask + 1;
 	}
 
+	lru_cache_enable();
+
 	trace_cma_alloc_finish(cma->name, pfn, page, count, align);
 
 	/*
diff --git a/mm/swap.c b/mm/swap.c
index af3cad4e5378..24bc909e84a9 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -847,7 +847,17 @@ void lru_add_drain_all(void)
 }
 #endif /* CONFIG_SMP */
 
-atomic_t lru_disable_count = ATOMIC_INIT(0);
+static atomic_t lru_disable_count = ATOMIC_INIT(0);
+
+bool lru_cache_disabled(void)
+{
+	return atomic_read(&lru_disable_count) != 0;
+}
+
+void lru_cache_enable(void)
+{
+	atomic_dec(&lru_disable_count);
+}
 
 /*
  * lru_cache_disable() needs to be called before we start compiling
@@ -859,7 +869,12 @@ atomic_t lru_disable_count = ATOMIC_INIT(0);
  */
 void lru_cache_disable(void)
 {
-	atomic_inc(&lru_disable_count);
+	/*
+	 * If someone else has already disabled the LRU cache, just return
+	 * after incrementing lru_disable_count.
+	 */
+	if (atomic_inc_not_zero(&lru_disable_count))
+		return;
 #ifdef CONFIG_SMP
 	/*
 	 * lru_add_drain_all in the force mode will schedule draining on
@@ -873,6 +888,7 @@ void lru_cache_disable(void)
 #else
 	lru_add_and_bh_lrus_drain();
 #endif
+	atomic_inc(&lru_disable_count);
 }
 
 /**
-- 
2.34.1.400.ga245620fadb-goog
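
As a side note, the nested-disable behaviour above boils down to a simple
reference count: only the first disabler pays for the drain, nested callers
merely bump the count, and enable() decrements it. The following stand-alone
user-space C sketch (not kernel code; cache_disable(), cache_enable() and
drain_all() are illustrative names, not kernel APIs) models that pattern:

#include <stdatomic.h>
#include <stdio.h>

static atomic_int disable_count;

/* Stand-in for the expensive, IPI-based lru_add_drain_all() step. */
static void drain_all(void)
{
	puts("draining per-cpu pagevecs (expensive)");
}

/* Models lru_cache_disable(): only the first (outermost) caller drains. */
static void cache_disable(void)
{
	int old = atomic_load(&disable_count);

	/* Equivalent of atomic_inc_not_zero(): skip the drain when nested. */
	while (old != 0) {
		if (atomic_compare_exchange_weak(&disable_count, &old, old + 1))
			return;
	}

	drain_all();
	atomic_fetch_add(&disable_count, 1);
}

/* Models lru_cache_enable(): drop one level of nesting. */
static void cache_enable(void)
{
	atomic_fetch_sub(&disable_count, 1);
}

int main(void)
{
	cache_disable();	/* outermost user: pays for the one drain */
	cache_disable();	/* nested user (e.g. a cma_alloc retry): no drain */
	cache_enable();
	cache_enable();
	return 0;
}

This mirrors why cma_alloc can now call lru_cache_disable() once around its
retry loop: any draining triggered inside the loop becomes a cheap counter
increment instead of another round of IPIs.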