From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 18 Nov 2024 04:03:26 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Chen Ridong
Cc: akpm@linux-foundation.org, mhocko@suse.com, hannes@cmpxchg.org,
	yosryahmed@google.com, yuzhao@google.com, david@redhat.com,
	ryan.roberts@arm.com, baohua@kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, chenridong@huawei.com,
	wangweiyang2@huawei.com, xieym_ict@hotmail.com
Subject: Re: [RFC PATCH v2 1/1] mm/vmscan: move the written-back folios to
	the tail of LRU after shrinking
References: <20241116091658.1983491-1-chenridong@huaweicloud.com>
	<20241116091658.1983491-2-chenridong@huaweicloud.com>
In-Reply-To: <20241116091658.1983491-2-chenridong@huaweicloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Sat, Nov 16, 2024 at 09:16:58AM +0000, Chen Ridong wrote:
> 2. In the shrink_page_list function, if folioN is a THP (2MB), it may be
> split and added to the swap cache folio by folio. After being added to
> the swap cache, I/O is submitted to write each folio back to swap, which
> is asynchronous. When shrink_page_list finishes, the isolated folio list
> is moved back to the head of the inactive LRU. The inactive LRU may then
> look like this, with 512 folios having been moved to its head.

I was hoping that we'd be able to stop splitting the folio when adding
to the swap cache.  Ideally, we'd add the whole 2MB and write it back
as a single unit.

This is going to become much more important with memdescs.  We'd have
to allocate 512 struct folios to do this, which would be about 10 4kB
pages, and if we're trying to swap out memory, we're probably low on
memory.  So I don't like this solution you have at all because it
doesn't help us get to the solution we're going to need in about a
year's time.
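
For anyone skimming the thread, here is a minimal userspace sketch of
the head-vs-tail putback difference the quoted text and the patch
subject describe.  It is not kernel code: the struct, list helpers and
the under_writeback flag are invented for illustration only.

/*
 * Toy model (not mm/vmscan.c): after a shrink pass, folios that are
 * still under asynchronous swap writeback cannot be reclaimed yet.
 * Putting them back at the head of the inactive LRU makes the next
 * pass scan them again first; the proposed change sends them to the
 * tail instead.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct folio {
	bool under_writeback;
	struct folio *next;		/* singly linked for brevity */
};

struct lru {
	struct folio *head;
	struct folio *tail;
};

static void lru_add_head(struct lru *lru, struct folio *f)
{
	f->next = lru->head;
	lru->head = f;
	if (!lru->tail)
		lru->tail = f;
}

static void lru_add_tail(struct lru *lru, struct folio *f)
{
	f->next = NULL;
	if (lru->tail)
		lru->tail->next = f;
	else
		lru->head = f;
	lru->tail = f;
}

int main(void)
{
	struct lru inactive = { NULL, NULL };

	/* 512 subpages of a split 2MB THP, all with swap I/O pending */
	for (int i = 0; i < 512; i++) {
		struct folio *f = calloc(1, sizeof(*f));

		f->under_writeback = true;
		if (f->under_writeback)
			lru_add_tail(&inactive, f);	/* proposed */
		else
			lru_add_head(&inactive, f);	/* reclaimable */
	}
	printf("folio at the LRU head is %sunder writeback\n",
	       inactive.head->under_writeback ? "" : "not ");
	return 0;
}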
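
And the back-of-the-envelope arithmetic behind the "about 10 4kB pages"
figure above, assuming sizeof(struct folio) is roughly 80 bytes (the
real size depends on the kernel config):

/* 2MB / 4kB = 512 folios; 512 * 80 bytes = 40960 bytes = 10 pages */
#include <stdio.h>

int main(void)
{
	const unsigned long thp_size = 2UL << 20;	/* 2MB THP */
	const unsigned long page_size = 4096;		/* 4kB base page */
	const unsigned long folio_struct = 80;		/* assumed size */

	unsigned long nr_folios = thp_size / page_size;
	unsigned long bytes = nr_folios * folio_struct;
	unsigned long pages = (bytes + page_size - 1) / page_size;

	printf("%lu struct folios, %lu bytes, ~%lu pages\n",
	       nr_folios, bytes, pages);
	return 0;
}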