Date: Mon, 6 Dec 2021 15:04:21 -0800
From: Andrew Morton
To: Minchan Kim
Cc: Michal Hocko, David Hildenbrand, linux-mm, LKML, Suren Baghdasaryan, John Dias
Subject: Re: [PATCH] mm: don't call lru draining in the nested lru_cache_disable
Message-Id: <20211206150421.fc06972fac949a5f6bc8b725@linux-foundation.org>
In-Reply-To: <20211206221006.946661-1-minchan@kernel.org>
References: <20211206221006.946661-1-minchan@kernel.org>

On Mon, 6 Dec 2021 14:10:06 -0800 Minchan Kim wrote:

> lru_cache_disable involves IPIs to drain the pagevec of each core,
> which sometimes takes quite a long time to complete depending
> on each CPU's busyness, making allocation slow, up to
> several hundred milliseconds. Furthermore, the repeated draining
> in alloc_contig_range makes things worse, considering that callers
> of alloc_contig_range usually retry multiple times in a loop.
>
> This patch makes lru_cache_disable aware of the fact that the
> pagevec was already disabled.
> With that, users of alloc_contig_range
> can disable the lru cache in advance in their context during the
> repeated trials, so they can avoid the multiple costly drainings
> in CMA allocation.

Isn't this racy?

> ...
>
> @@ -859,7 +869,12 @@ atomic_t lru_disable_count = ATOMIC_INIT(0);
>   */
>  void lru_cache_disable(void)
>  {
> -	atomic_inc(&lru_disable_count);
> +	/*
> +	 * If someone has already disabled the lru cache, just return
> +	 * after incrementing lru_disable_count.
> +	 */
> +	if (atomic_inc_not_zero(&lru_disable_count))
> +		return;
>  #ifdef CONFIG_SMP
>  	/*
>  	 * lru_add_drain_all in the force mode will schedule draining on
> @@ -873,6 +888,7 @@ void lru_cache_disable(void)
>  #else
>  	lru_add_and_bh_lrus_drain();
>  #endif

There's a window here where lru_disable_count==0 and new pages can get
added to the lru?

> +	atomic_inc(&lru_disable_count);
> }