From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Kairui Song <ryncsn@gmail.com>, linux-mm@kvack.org
Cc: Andrew Morton, Hugh Dickins, Matthew Wilcox, Kemeng Shi, Chris Li,
 Nhat Pham, Baoquan He, Barry Song, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 4/9] mm/shmem, swap: tidy up swap entry splitting
Date: Sun, 6 Jul 2025 11:35:30 +0800
Message-ID: <452cad4b-e0c7-4792-9272-69199fa52a55@linux.alibaba.com>
In-Reply-To: <20250704181748.63181-5-ryncsn@gmail.com>
References: <20250704181748.63181-1-ryncsn@gmail.com>
 <20250704181748.63181-5-ryncsn@gmail.com>

On 2025/7/5 02:17, Kairui Song wrote:
> From: Kairui Song
>
> Instead of keeping different paths of splitting the entry before the
> swapin starts, move the entry splitting to after the swapin has put
> the folio in the swap cache (or set the SWAP_HAS_CACHE bit). This way
> we only need one place and one unified way to split the large entry.
> Whenever swapin brings in a folio smaller than the shmem swap entry,
> split the entry and recalculate the entry and index for verification.
>
> This removes duplicated code and function calls, reduces LOC, and the
> split is less racy as it is guarded by the swap cache now, so it has
> a lower chance of repeated faults due to a raced split.
> The compiler is also able to optimize the code further:
>
> bloat-o-meter results with GCC 14:
>
> With DEBUG_SECTION_MISMATCH (-fno-inline-functions-called-once):
> ./scripts/bloat-o-meter mm/shmem.o.old mm/shmem.o
> add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-82 (-82)
> Function                                     old     new   delta
> shmem_swapin_folio                          2361    2279     -82
> Total: Before=33151, After=33069, chg -0.25%
>
> With !DEBUG_SECTION_MISMATCH:
> ./scripts/bloat-o-meter mm/shmem.o.old mm/shmem.o
> add/remove: 0/1 grow/shrink: 1/0 up/down: 949/-750 (199)
> Function                                     old     new   delta
> shmem_swapin_folio                          2878    3827    +949
> shmem_split_large_entry.isra                 750       -    -750
> Total: Before=33086, After=33285, chg +0.60%
>
> Since shmem_split_large_entry is now only called in one place, the
> compiler will either generate more compact code or inline it for
> better performance.
>
> Signed-off-by: Kairui Song
> ---
>  mm/shmem.c | 53 +++++++++++++++++++++--------------------------------
>  1 file changed, 21 insertions(+), 32 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index e43becfa04b3..217264315842 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2266,14 +2266,15 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  	struct address_space *mapping = inode->i_mapping;
>  	struct mm_struct *fault_mm = vma ? vma->vm_mm : NULL;
>  	struct shmem_inode_info *info = SHMEM_I(inode);
> +	swp_entry_t swap, index_entry;
>  	struct swap_info_struct *si;
>  	struct folio *folio = NULL;
>  	bool skip_swapcache = false;
> -	swp_entry_t swap;
>  	int error, nr_pages, order, split_order;
> +	pgoff_t offset;
>
>  	VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
> -	swap = radix_to_swp_entry(*foliop);
> +	swap = index_entry = radix_to_swp_entry(*foliop);
>  	*foliop = NULL;
>
>  	if (is_poisoned_swp_entry(swap))
> @@ -2321,46 +2322,35 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  		}
>
>  		/*
> -		 * Now swap device can only swap in order 0 folio, then we
> -		 * should split the large swap entry stored in the pagecache
> -		 * if necessary.
> -		 */
> -		split_order = shmem_split_large_entry(inode, index, swap, gfp);
> -		if (split_order < 0) {
> -			error = split_order;
> -			goto failed;
> -		}
> -
> -		/*
> -		 * If the large swap entry has already been split, it is
> +		 * Now swap device can only swap in order 0 folio, it is
>  		 * necessary to recalculate the new swap entry based on
> -		 * the old order alignment.
> +		 * the offset, as the swapin index might be unaligned.
>  		 */
> -		if (split_order > 0) {
> -			pgoff_t offset = index - round_down(index, 1 << split_order);
> -
> +		if (order) {
> +			offset = index - round_down(index, 1 << order);
>  			swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
>  		}
>
> -		/* Here we actually start the io */
>  		folio = shmem_swapin_cluster(swap, gfp, info, index);
>  		if (!folio) {
>  			error = -ENOMEM;
>  			goto failed;
>  		}
> -	} else if (order > folio_order(folio)) {
> +	}
> +alloced:
> +	if (order > folio_order(folio)) {
>  		/*
> -		 * Swap readahead may swap in order 0 folios into swapcache
> +		 * Swapin may get smaller folios for various reasons:
> +		 * it may fall back to order 0 due to memory pressure or race,
> +		 * swap readahead may swap in order 0 folios into swapcache
>  		 * asynchronously, while the shmem mapping can still store
>  		 * large swap entries. In such cases, we should split the
>  		 * large swap entry to prevent possible data corruption.
>  		 */
> -		split_order = shmem_split_large_entry(inode, index, swap, gfp);
> +		split_order = shmem_split_large_entry(inode, index, index_entry, gfp);
>  		if (split_order < 0) {
> -			folio_put(folio);
> -			folio = NULL;
>  			error = split_order;
> -			goto failed;
> +			goto failed_nolock;
>  		}
>
>  		/*
> @@ -2369,15 +2359,13 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  		 * the old order alignment.
>  		 */
>  		if (split_order > 0) {
> -			pgoff_t offset = index - round_down(index, 1 << split_order);
> -
> +			offset = index - round_down(index, 1 << split_order);
>  			swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);

Obviously, you should use the original swap value 'index_entry' to
calculate the new swap value. With the following fix, you can add:

Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>

diff --git a/mm/shmem.c b/mm/shmem.c
index d530df550f7f..1e8422ac863e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2361,7 +2361,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 		 */
 		if (split_order > 0) {
 			offset = index - round_down(index, 1 << split_order);
-			swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
+			swap = swp_entry(swp_type(swap), swp_offset(index_entry) + offset);
 		}
 	} else if (order < folio_order(folio)) {
 		swap.val = round_down(swap.val, 1 << folio_order(folio));
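To make the arithmetic concrete, below is a minimal userspace model of
the two recalculations (made-up numbers, with plain integers standing
in for swp_entry_t and the swp_* helpers; not the kernel code itself).
Assume an order-4 entry at swap offset 0x100 backing indices 32..47 and
a fault at index 35: the direct swapin path first advances 'swap' by
offset 3, and if swapin then falls back to order 0 and the entry is
split, recalculating from the already-advanced 'swap' applies the
offset twice, while basing it on 'index_entry' gives the right result:

#include <assert.h>
#include <stdio.h>

int main(void)
{
	unsigned long index = 35;          /* faulting page index */
	unsigned int order = 4;            /* order of the large swap entry */
	unsigned long index_entry = 0x100; /* swap offset of the whole entry */

	/* First recalculation, before the order-0 swapin starts: */
	unsigned long offset = index - (index & ~((1UL << order) - 1));
	unsigned long swap = index_entry + offset; /* 0x103, entry for index 35 */

	/* Swapin fell back to order 0, so the large entry is split
	 * (split_order == order) and the entry is recalculated again: */
	offset = index - (index & ~((1UL << order) - 1));

	unsigned long buggy = swap + offset;        /* 0x106: offset applied twice */
	unsigned long fixed = index_entry + offset; /* 0x103: from the original entry */

	printf("buggy=0x%lx fixed=0x%lx\n", buggy, fixed);
	assert(fixed == 0x103 && buggy == 0x106);
	return 0;
}

Basing the second recalculation on 'index_entry' keeps the two
adjustments independent, so the path that already advanced 'swap'
cannot skew the post-split entry.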