Subject: Re: [PATCH] mm: add maybe_lru_add_drain() that only drains when threshold is exceeded
From: Rik van Riel
To: Shakeel Butt
Cc: Andrew Morton, "Huang, Ying", Chris Li, Ryan Roberts, David Hildenbrand, "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, linux-mm@kvack.org, kernel-team@meta.com
Date: Wed, 18 Dec 2024 22:13:30 -0500
In-Reply-To: <43o2dqigz6cap75h7y25jz6qbdzoinyq3ntxx4sm5cn3y4dddm@mwyjjygrxhmm>
References: <20241218115604.7e56bedb@fangorn> <43o2dqigz6cap75h7y25jz6qbdzoinyq3ntxx4sm5cn3y4dddm@mwyjjygrxhmm>
On Wed, 2024-12-18 at 12:20 -0800, Shakeel Butt wrote:
> On Wed, Dec 18, 2024 at 11:56:04AM -0500, Rik van Riel wrote:
> [...]
> >
> > +static bool should_lru_add_drain(void)
> > +{
> > +	struct cpu_fbatches *fbatches = this_cpu_ptr(&cpu_fbatches);
>
> You will need either a local_lock or preempt_disable to access the
> per cpu batches.
>

Why is that? Can the per-cpu batches disappear on us while we're
trying to access them, without that local_lock or preempt_disable?
I'm not trying to protect against accidentally reading the wrong
CPU's numbers, since we could be preempted and migrated to another
CPU immediately after returning from should_lru_add_drain(), but I
do want to keep things safe against other potential issues.

> > +	int pending = folio_batch_count(&fbatches->lru_add);
> > +	pending += folio_batch_count(&fbatches->lru_deactivate);
> > +	pending += folio_batch_count(&fbatches->lru_deactivate_file);
> > +	pending += folio_batch_count(&fbatches->lru_lazyfree);
> > +
> > +	/* Don't bother draining unless we have several pages pending. */
> > +	return pending > SWAP_CLUSTER_MAX;
> > +}
> > +
> > +void maybe_lru_add_drain(void)
>
> Later it might also make sense to see if other users of
> lru_add_drain() should be fine with maybe_lru_add_drain() as well.

Agreed. I think there are a few other users where this could make
sense, including munmap.

-- 
All Rights Reversed.