Message-ID: <962b75502b6456a8b698a4ca89d6deedec118ef6.camel@linux.intel.com>
Subject: Re: [PATCH] mm: remove redundant lru_add_drain() prior to unmapping pages
From: Tim Chen
To: Jianfeng Wang, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Thu, 14 Dec 2023 09:57:39 -0800
References: <20231213072805.74201-1-jianfeng.w.wang@oracle.com> <3c7d9b8878220571cb7e0760c3a463951252b762.camel@linux.intel.com>

On Wed, 2023-12-13 at 17:03 -0800, Jianfeng Wang wrote:
> On 12/13/23 2:57 PM, Tim Chen wrote:
> > On Tue, 2023-12-12 at 23:28 -0800, Jianfeng Wang wrote:
> > > When unmapping VMA pages, pages will be gathered in batch and released by
> > > tlb_finish_mmu() if CONFIG_MMU_GATHER_NO_GATHER is not set. The function
> > > tlb_finish_mmu() is responsible for calling free_pages_and_swap_cache(),
> > > which calls lru_add_drain() to drain cached pages in folio_batch before
> > > releasing gathered pages. Thus, it is redundant to call lru_add_drain()
> > > before gathering pages, if CONFIG_MMU_GATHER_NO_GATHER is not set.
> > >
> > > Remove lru_add_drain() prior to gathering and unmapping pages in
> > > exit_mmap() and unmap_region() if CONFIG_MMU_GATHER_NO_GATHER is not set.
> > >
> > > Note that the page unmapping process in oom_killer (e.g., in
> > > __oom_reap_task_mm()) also uses tlb_finish_mmu() and does not have a
> > > redundant lru_add_drain(). So, this commit makes the code more consistent.
> > >
> > > Signed-off-by: Jianfeng Wang
> > > ---
> > >  mm/mmap.c | 4 ++++
> > >  1 file changed, 4 insertions(+)
> > >
> > > diff --git a/mm/mmap.c b/mm/mmap.c
> > > index 1971bfffcc03..0451285dee4f 100644
> > > --- a/mm/mmap.c
> > > +++ b/mm/mmap.c
> > > @@ -2330,7 +2330,9 @@ static void unmap_region(struct mm_struct *mm, struct ma_state *mas,
> > >  	struct mmu_gather tlb;
> > >  	unsigned long mt_start = mas->index;
> > >
> > > +#ifdef CONFIG_MMU_GATHER_NO_GATHER
> >
> > In your comment you say skip lru_add_drain() when CONFIG_MMU_GATHER_NO_GATHER
> > is *not* set. So shouldn't this be
> >
> > #ifndef CONFIG_MMU_GATHER_NO_GATHER ?
> >
> Hi Tim,
>
> The mmu_gather feature is used to gather pages produced by unmap_vmas() and
> release them in batch in tlb_finish_mmu(). The feature is *on* if
> CONFIG_MMU_GATHER_NO_GATHER is *not* set. Note that tlb_finish_mmu() will call
> free_pages_and_swap_cache()/lru_add_drain() only when the feature is on.

Thanks for the explanation.

Looking at the code, lru_add_drain() is executed under #ifndef CONFIG_MMU_GATHER_NO_GATHER
in tlb_finish_mmu(), so the logic of your patch is fine. The #ifndef CONFIG_MMU_GATHER_NO_GATHER
means the mmu_gather feature is on. The double negative threw me off on my first read
of your commit log.

Suggest that you add a comment in the code to make it easier for future code maintenance:

/* defer lru_add_drain() to tlb_finish_mmu() for ifndef CONFIG_MMU_GATHER_NO_GATHER */

Is your change of skipping the extra lru_add_drain() motivated by some performance reason
in a workload? Wonder whether it is worth adding an extra ifdef in the code.

Tim

>
> Yes, this commit aims to skip lru_add_drain() when CONFIG_MMU_GATHER_NO_GATHER
> is *not* set (i.e. when the mmu_gather feature is on) because it is redundant.
>
> If CONFIG_MMU_GATHER_NO_GATHER is set, pages will be released in unmap_vmas().
> tlb_finish_mmu() will not call lru_add_drain(). So, it is still necessary to
> keep the lru_add_drain() call to clear cached pages before unmap_vmas(), as
> folio_batchs hold a reference count for pages in them.
>
> The same applies to the other case.
>
> Thanks,
> - Jianfeng
>
> > >  	lru_add_drain();
> > > +#endif
> > >  	tlb_gather_mmu(&tlb, mm);
> > >  	update_hiwater_rss(mm);
> > >  	unmap_vmas(&tlb, mas, vma, start, end, tree_end, mm_wr_locked);
> > > @@ -3300,7 +3302,9 @@ void exit_mmap(struct mm_struct *mm)
> > >  		return;
> > >  	}
> > >
> > > +#ifdef CONFIG_MMU_GATHER_NO_GATHER
> >
> > same question as above.
> >
> > >  	lru_add_drain();
> > > +#endif
> > >  	flush_cache_mm(mm);
> > >  	tlb_gather_mmu_fullmm(&tlb, mm);
> > >  	/* update_hiwater_rss(mm) here? but nobody should be looking */
> >
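As a rough illustration of the call flow being discussed: the following is a minimal
userspace sketch, not kernel code. The *_model names are hypothetical stand-ins, and
only the control flow mirrors unmap_region()/exit_mmap() with the proposed patch
applied.

    /*
     * Toy model of the two unmap paths (assumption: simplified, userspace only).
     * MMU_GATHER_NO_GATHER_MODEL emulates CONFIG_MMU_GATHER_NO_GATHER:
     * 0 = gathering enabled, 1 = pages freed directly in unmap_vmas().
     */
    #include <stdio.h>

    #define MMU_GATHER_NO_GATHER_MODEL 0

    static void lru_add_drain_model(void)
    {
    	puts("lru_add_drain(): drain per-CPU folio batches");
    }

    static void unmap_vmas_model(void)
    {
    #if MMU_GATHER_NO_GATHER_MODEL
    	/* No gathering: pages are freed immediately while unmapping. */
    	puts("unmap_vmas(): freeing pages one by one");
    #else
    	/* Gathering: pages are only collected here, freed later in batch. */
    	puts("unmap_vmas(): gathering pages into mmu_gather");
    #endif
    }

    static void tlb_finish_mmu_model(void)
    {
    #if !MMU_GATHER_NO_GATHER_MODEL
    	/*
    	 * The batched release path drains the per-CPU LRU batches itself
    	 * before freeing the gathered pages, so an earlier drain by the
    	 * caller is redundant in this configuration.
    	 */
    	lru_add_drain_model();
    	puts("tlb_finish_mmu(): release gathered pages in batch");
    #else
    	puts("tlb_finish_mmu(): nothing was gathered");
    #endif
    }

    int main(void)
    {
    	/* Mirrors unmap_region()/exit_mmap() with the proposed patch: */
    #if MMU_GATHER_NO_GATHER_MODEL
    	/* Pages get freed directly in unmap_vmas(), so drain up front. */
    	lru_add_drain_model();
    #endif
    	unmap_vmas_model();
    	tlb_finish_mmu_model();
    	return 0;
    }

Building and running this with MMU_GATHER_NO_GATHER_MODEL set to 0 or 1 shows exactly
one drain on either path, which is the consistency the patch is aiming for.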