Date: Mon, 4 Sep 2023 04:43:22 +0100
From: Matthew Wilcox <willy@infradead.org>
To: linux-mm@kvack.org
Cc: Mel Gorman
Subject: Re: [RFC PATCH 11/14] mm: Free folios in a batch in shrink_folio_list()
References: <20230825135918.4164671-1-willy@infradead.org> <20230825135918.4164671-12-willy@infradead.org>
In-Reply-To: <20230825135918.4164671-12-willy@infradead.org>

On Fri, Aug 25, 2023 at 02:59:15PM +0100, Matthew Wilcox (Oracle) wrote:
> Use free_unref_page_batch() to free the folios.  This may increase
> the number of IPIs from calling try_to_unmap_flush() more often,
> but that's going to be very workload-dependent.

I'd like to propose this as a replacement for this patch.  Queue the
mapped folios up so we can flush them all in one go; free the unmapped
ones as we go, and the mapped ones after the flush.  It does change
the ordering of mem_cgroup_uncharge_folios() relative to the TLB
flush, but I think that's OK.  This is only build-tested; something
has messed up my laptop and I can no longer launch VMs.
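The pattern the patch leans on is the folio_batch fill-and-drain
idiom: folio_batch_add() returns the number of slots still available,
so a return value of 0 means the batch just became full and has to be
drained before more folios can be queued.  As an illustrative sketch
only (not part of the patch; the helper name is made up, and it
assumes free_unref_folios() from earlier in this series leaves the
batch reinitialised):

	/* Sketch: queue a folio for freeing; drain the batch once full. */
	static void queue_free_folio(struct folio_batch *fbatch,
			struct folio *folio)
	{
		/* folio_batch_add() returns the slots still free */
		if (folio_batch_add(fbatch, folio) == 0) {
			mem_cgroup_uncharge_folios(fbatch);
			free_unref_folios(fbatch);
		}
	}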
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6f13394b112e..526d5bb84622 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1706,14 +1706,16 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		struct pglist_data *pgdat, struct scan_control *sc,
 		struct reclaim_stat *stat, bool ignore_references)
 {
+	struct folio_batch free_folios;
 	LIST_HEAD(ret_folios);
-	LIST_HEAD(free_folios);
+	LIST_HEAD(mapped_folios);
 	LIST_HEAD(demote_folios);
 	unsigned int nr_reclaimed = 0;
 	unsigned int pgactivate = 0;
 	bool do_demote_pass;
 	struct swap_iocb *plug = NULL;
 
+	folio_batch_init(&free_folios);
 	memset(stat, 0, sizeof(*stat));
 	cond_resched();
 	do_demote_pass = can_demote(pgdat->node_id, sc);
@@ -1723,7 +1725,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		struct address_space *mapping;
 		struct folio *folio;
 		enum folio_references references = FOLIOREF_RECLAIM;
-		bool dirty, writeback;
+		bool dirty, writeback, mapped = false;
 		unsigned int nr_pages;
 
 		cond_resched();
@@ -1957,6 +1959,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 				stat->nr_lazyfree_fail += nr_pages;
 				goto activate_locked;
 			}
+			mapped = true;
 		}
 
 		/*
@@ -2111,14 +2114,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		 */
 		nr_reclaimed += nr_pages;
 
-		/*
-		 * Is there need to periodically free_folio_list? It would
-		 * appear not as the counts should be low
-		 */
-		if (unlikely(folio_test_large(folio)))
-			destroy_large_folio(folio);
-		else
-			list_add(&folio->lru, &free_folios);
+		if (mapped) {
+			list_add(&folio->lru, &mapped_folios);
+		} else if (folio_batch_add(&free_folios, folio) == 0) {
+			mem_cgroup_uncharge_folios(&free_folios);
+			free_unref_folios(&free_folios);
+		}
 		continue;
 
 activate_locked_split:
@@ -2182,9 +2183,22 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 
 	pgactivate = stat->nr_activate[0] + stat->nr_activate[1];
 
-	mem_cgroup_uncharge_list(&free_folios);
 	try_to_unmap_flush();
-	free_unref_page_list(&free_folios);
+	while (!list_empty(&mapped_folios)) {
+		struct folio *folio = list_first_entry(&mapped_folios,
+					struct folio, lru);
+
+		list_del(&folio->lru);
+		if (folio_batch_add(&free_folios, folio) > 0)
+			continue;
+		mem_cgroup_uncharge_folios(&free_folios);
+		free_unref_folios(&free_folios);
+	}
+
+	if (free_folios.nr) {
+		mem_cgroup_uncharge_folios(&free_folios);
+		free_unref_folios(&free_folios);
+	}
 
 	list_splice(&ret_folios, folio_list);
 	count_vm_events(PGACTIVATE, pgactivate);
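The net effect is that unmapped folios get freed in batch-sized chunks
as we walk the list, while mapped folios wait on mapped_folios for the
single try_to_unmap_flush(), so we shouldn't see the extra IPIs the
original commit message worried about.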