Message-ID: <99ba4e0d-ef36-4516-a275-014cf5eb22fd@linux.alibaba.com>
Date: Tue, 11 Jun 2024 11:31:39 +0800
Subject: Re: [PATCH 5/7] mm: add new 'orders' parameter for find_get_entries() and find_lock_entries()
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Daniel Gomez
Cc: akpm@linux-foundation.org, hughd@google.com, willy@infradead.org,
 david@redhat.com, wangkefeng.wang@huawei.com, chrisl@kernel.org,
 ying.huang@intel.com, 21cnbao@gmail.com, ryan.roberts@arm.com,
 shy828301@gmail.com, ziy@nvidia.com, ioworker0@gmail.com,
 Pankaj Raghav, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <5304c4c54868336985b396d2c46132c2e0cdf803.1717673614.git.baolin.wang@linux.alibaba.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
On 2024/6/10 23:23, Daniel Gomez wrote:
> Hi Baolin,
>
> On Thu, Jun 06, 2024 at 07:58:55PM +0800, Baolin Wang wrote:
>> In the following patches, shmem will support the swap out of large folios,
>> which means the shmem mappings may contain large order swap entries, so an
>> 'orders' array is added for find_get_entries() and find_lock_entries() to
>> obtain the order size of shmem swap entries, which will help in the release
>> of shmem large folio swap entries.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>>  mm/filemap.c  | 27 +++++++++++++++++++++++++--
>>  mm/internal.h |  4 ++--
>>  mm/shmem.c    | 17 +++++++++--------
>>  mm/truncate.c |  8 ++++----
>>  4 files changed, 40 insertions(+), 16 deletions(-)
>>
>> diff --git a/mm/filemap.c b/mm/filemap.c
>> index 37061aafd191..47fcd9ee6012 100644
>> --- a/mm/filemap.c
>> +++ b/mm/filemap.c
>> @@ -2036,14 +2036,24 @@ static inline struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
>>   * Return: The number of entries which were found.
>>   */
>>  unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
>> -		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices)
>> +		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices,
>> +		int *orders)
>>  {
>>  	XA_STATE(xas, &mapping->i_pages, *start);
>>  	struct folio *folio;
>> +	int order;
>>
>>  	rcu_read_lock();
>>  	while ((folio = find_get_entry(&xas, end, XA_PRESENT)) != NULL) {
>>  		indices[fbatch->nr] = xas.xa_index;
>> +		if (orders) {
>> +			if (!xa_is_value(folio))
>> +				order = folio_order(folio);
>> +			else
>> +				order = xa_get_order(xas.xa, xas.xa_index);
>> +
>> +			orders[fbatch->nr] = order;
>> +		}
>>  		if (!folio_batch_add(fbatch, folio))
>>  			break;
>>  	}
>> @@ -2056,6 +2066,8 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
>>  		folio = fbatch->folios[idx];
>>  		if (!xa_is_value(folio))
>>  			nr = folio_nr_pages(folio);
>> +		else if (orders)
>> +			nr = 1 << orders[idx];
>>  		*start = indices[idx] + nr;
>>  	}
>>  	return folio_batch_count(fbatch);
>> @@ -2082,10 +2094,12 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
>>   * Return: The number of entries which were found.
>>   */
>>  unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
>> -		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices)
>> +		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices,
>> +		int *orders)
>>  {
>>  	XA_STATE(xas, &mapping->i_pages, *start);
>>  	struct folio *folio;
>> +	int order;
>>
>>  	rcu_read_lock();
>>  	while ((folio = find_get_entry(&xas, end, XA_PRESENT))) {
>> @@ -2099,9 +2113,16 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
>>  			if (folio->mapping != mapping ||
>>  			    folio_test_writeback(folio))
>>  				goto unlock;
>> +			if (orders)
>> +				order = folio_order(folio);
>>  			VM_BUG_ON_FOLIO(!folio_contains(folio, xas.xa_index),
>>  					folio);
>> +		} else if (orders) {
>> +			order = xa_get_order(xas.xa, xas.xa_index);
>>  		}
>> +
>> +		if (orders)
>> +			orders[fbatch->nr] = order;
>>  		indices[fbatch->nr] = xas.xa_index;
>>  		if (!folio_batch_add(fbatch, folio))
>>  			break;
>> @@ -2120,6 +2141,8 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
>>  		folio = fbatch->folios[idx];
>>  		if (!xa_is_value(folio))
>>  			nr = folio_nr_pages(folio);
>> +		else if (orders)
>> +			nr = 1 << orders[idx];
>>  		*start = indices[idx] + nr;
>>  	}
>>  	return folio_batch_count(fbatch);
>> diff --git a/mm/internal.h b/mm/internal.h
>> index 3419c329b3bc..0b5adb6c33cc 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -339,9 +339,9 @@ static inline void force_page_cache_readahead(struct address_space *mapping,
>>  }
>>
>>  unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
>> -		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
>> +		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices, int *orders);
>>  unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
>> -		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
>> +		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices, int *orders);
>>  void filemap_free_folio(struct address_space *mapping, struct folio *folio);
>>  int truncate_inode_folio(struct address_space *mapping, struct folio *folio);
>>  bool truncate_inode_partial_folio(struct folio *folio, loff_t start,
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index 0ac71580decb..28ba603d87b8 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -840,14 +840,14 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
>>   * Remove swap entry from page cache, free the swap and its page cache.
>>   */
>>  static int shmem_free_swap(struct address_space *mapping,
>> -			pgoff_t index, void *radswap)
>> +			pgoff_t index, void *radswap, int order)
>>  {
>>  	void *old;
>
> Matthew Wilcox suggested [1] returning the number of pages freed in shmem_free_swap().
>
> [1] https://lore.kernel.org/all/ZQRf2pGWurrE0uO+@casper.infradead.org/
>
> Which I submitted here:
> https://lore.kernel.org/all/20231028211518.3424020-5-da.gomez@samsung.com/
>
> Do you agree with the suggestion? If so, could we update my patch to use
> free_swap_and_cache_nr() or include that here?

Yes, this looks good to me. But we still need some modifications to
find_lock_entries() and find_get_entries() so that '*start' is updated
correctly. I will fold your changes into this patch in the next version.
Thanks.