Date: Fri, 5 Jan 2024 17:00:43 +0000
From: Matthew Wilcox <willy@infradead.org>
To: linux-mm@kvack.org
Cc: Mel Gorman
Subject: Re: [RFC PATCH 11/14] mm: Free folios in a batch in shrink_folio_list()
References: <20230825135918.4164671-1-willy@infradead.org>
 <20230825135918.4164671-12-willy@infradead.org>

On Mon, Sep 04, 2023 at 04:43:22AM +0100, Matthew Wilcox wrote:
> On Fri, Aug 25, 2023 at 02:59:15PM +0100, Matthew Wilcox (Oracle) wrote:
> > Use free_unref_page_batch() to free the folios. This may increase
> > the number of IPIs from calling try_to_unmap_flush() more often,
> > but that's going to be very workload-dependent.
>
> I'd like to propose this as a replacement for this patch. Queue the
> mapped folios up so we can flush them all in one go. Free the unmapped
> ones, and the mapped ones after the flush.

Any reaction to this patch? I'm putting together a v2 for posting
after the merge window, and I got no feedback on whether the former
version or this one is better.

> It does change the ordering of mem_cgroup_uncharge_folios() and
> the page flush. I think that's OK. This is only build-tested;
> something has messed up my laptop and I can no longer launch VMs.
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 6f13394b112e..526d5bb84622 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1706,14 +1706,16 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  		struct pglist_data *pgdat, struct scan_control *sc,
>  		struct reclaim_stat *stat, bool ignore_references)
>  {
> +	struct folio_batch free_folios;
>  	LIST_HEAD(ret_folios);
> -	LIST_HEAD(free_folios);
> +	LIST_HEAD(mapped_folios);
>  	LIST_HEAD(demote_folios);
>  	unsigned int nr_reclaimed = 0;
>  	unsigned int pgactivate = 0;
>  	bool do_demote_pass;
>  	struct swap_iocb *plug = NULL;
>
> +	folio_batch_init(&free_folios);
>  	memset(stat, 0, sizeof(*stat));
>  	cond_resched();
>  	do_demote_pass = can_demote(pgdat->node_id, sc);
> @@ -1723,7 +1725,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  		struct address_space *mapping;
>  		struct folio *folio;
>  		enum folio_references references = FOLIOREF_RECLAIM;
> -		bool dirty, writeback;
> +		bool dirty, writeback, mapped = false;
>  		unsigned int nr_pages;
>
>  		cond_resched();
> @@ -1957,6 +1959,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  				stat->nr_lazyfree_fail += nr_pages;
>  				goto activate_locked;
>  			}
> +			mapped = true;
>  		}
>
>  		/*
> @@ -2111,14 +2114,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  		 */
>  		nr_reclaimed += nr_pages;
>
> -		/*
> -		 * Is there need to periodically free_folio_list? It would
> -		 * appear not as the counts should be low
> -		 */
> -		if (unlikely(folio_test_large(folio)))
> -			destroy_large_folio(folio);
> -		else
> -			list_add(&folio->lru, &free_folios);
> +		if (mapped) {
> +			list_add(&folio->lru, &mapped_folios);
> +		} else if (folio_batch_add(&free_folios, folio) == 0) {
> +			mem_cgroup_uncharge_folios(&free_folios);
> +			free_unref_folios(&free_folios);
> +		}
>  		continue;
>
>  activate_locked_split:
> @@ -2182,9 +2183,22 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>
>  	pgactivate = stat->nr_activate[0] + stat->nr_activate[1];
>
> -	mem_cgroup_uncharge_list(&free_folios);
>  	try_to_unmap_flush();
> -	free_unref_page_list(&free_folios);
> +	while (!list_empty(&mapped_folios)) {
> +		struct folio *folio = list_first_entry(&mapped_folios,
> +						struct folio, lru);
> +
> +		list_del(&folio->lru);
> +		if (folio_batch_add(&free_folios, folio) > 0)
> +			continue;
> +		mem_cgroup_uncharge_folios(&free_folios);
> +		free_unref_folios(&free_folios);
> +	}
> +
> +	if (free_folios.nr) {
> +		mem_cgroup_uncharge_folios(&free_folios);
> +		free_unref_folios(&free_folios);
> +	}
>
>  	list_splice(&ret_folios, folio_list);
>  	count_vm_events(PGACTIVATE, pgactivate);
>
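To see the fill-and-flush pattern of the patch in isolation, here is a
minimal, self-contained userspace C sketch of the same logic. Every name
in it is an illustrative stand-in, not a kernel API: batch_add() mimics
folio_batch_add(), which returns the number of slots still free (0 means
the batch is full and must be drained), drain_batch() stands in for the
mem_cgroup_uncharge_folios()/free_unref_folios() pair, and the lone
printf marks where try_to_unmap_flush() would issue its single batched
TLB flush. The capacity of 31 matches the kernel's folio_batch at the
time of this patch.

#include <stdio.h>

#define BATCH_SIZE 31	/* capacity of a kernel folio_batch */

struct batch {
	unsigned int nr;
	int items[BATCH_SIZE];
};

/* Mimics folio_batch_add(): add, then return slots left (0 == full). */
static unsigned int batch_add(struct batch *b, int item)
{
	b->items[b->nr++] = item;
	return BATCH_SIZE - b->nr;
}

/* Stands in for mem_cgroup_uncharge_folios() + free_unref_folios(). */
static void drain_batch(struct batch *b)
{
	for (unsigned int i = 0; i < b->nr; i++)
		printf("freeing item %d\n", b->items[i]);
	b->nr = 0;
}

int main(void)
{
	struct batch free_batch = { 0 };
	int deferred[16];		/* models the mapped_folios list */
	int nr_deferred = 0;

	/* Main pass: defer "mapped" items, batch-free the rest eagerly. */
	for (int i = 0; i < 100; i++) {
		int mapped = (i % 10 == 0);	/* pretend every 10th was mapped */

		if (mapped)
			deferred[nr_deferred++] = i;
		else if (batch_add(&free_batch, i) == 0)
			drain_batch(&free_batch);
	}

	/* One flush covers every deferred item; then free them batch-wise. */
	printf("try_to_unmap_flush()\n");
	for (int i = 0; i < nr_deferred; i++) {
		if (batch_add(&free_batch, deferred[i]) > 0)
			continue;
		drain_batch(&free_batch);
	}

	/* Drain the remainder, like the final if (free_folios.nr) block. */
	if (free_batch.nr)
		drain_batch(&free_batch);

	return 0;
}

The shape of the split is the point: unmapped folios can be freed as soon
as a batch fills, while folios that were unmapped from page tables must
wait for the flush; deferring those and then reusing the same batch keeps
both the IPI count and the free calls amortized.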