Date: Thu, 23 Jan 2025 18:17:23 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Vinay Banakar
Cc: SeongJae Park, Bharata B Rao, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
	mgorman@suse.de, Wei Xu, Greg Thelen
Subject: Re: [PATCH] mm: Optimize TLB flushes during page reclaim
References: <20250122200545.43513-1-sj@kernel.org>
On Thu, Jan 23, 2025 at 11:11:13AM -0600, Vinay Banakar wrote:
> Another option would be to modify shrink_folio_list() to force batch
> flushes for up to N pages (512) at a time, rather than relying on
> callers to do the batching via folio_list.
If you really want to improve performance, consider converting
shrink_folio_list() to take a folio_batch. I did this for page freeing:

https://lore.kernel.org/all/20240227174254.710559-1-willy@infradead.org/

and we did in fact see a regression (due to shrinking the batch size
from 32 to 15). We then increased the batch size from 15 to 31 and saw
a 12.5% performance improvement on the page_fault2 benchmark over the
original batch size of 32. Commit 9cecde80aae0 and

https://lore.kernel.org/oe-lkp/202403151058.7048f6a8-oliver.sang@intel.com/

have some more details.

I think you should be able to see a noticeable improvement by going
from a list with 512 entries on it to a batch of 31 entries.