Date: Wed, 16 Jul 2025 15:09:47 +0800
From: Baoquan He <bhe@redhat.com>
To: Kairui Song
Cc: linux-mm@kvack.org, Andrew Morton, Hugh Dickins, Baolin Wang,
	Matthew Wilcox, Kemeng Shi, Chris Li, Nhat Pham, Barry Song,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 4/8] mm/shmem, swap: tidy up swap entry splitting
References: <20250710033706.71042-1-ryncsn@gmail.com>
	<20250710033706.71042-5-ryncsn@gmail.com>
In-Reply-To: <20250710033706.71042-5-ryncsn@gmail.com>

On 07/10/25 at 11:37am, Kairui Song wrote:
......snip...
> @@ -2321,46 +2323,35 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  		}
> 
>  		/*
> -		 * Now swap device can only swap in order 0 folio, then we
> -		 * should split the large swap entry stored in the pagecache
> -		 * if necessary.
> -		 */
> -		split_order = shmem_split_large_entry(inode, index, swap, gfp);
> -		if (split_order < 0) {
> -			error = split_order;
> -			goto failed;
> -		}
> -
> -		/*
> -		 * If the large swap entry has already been split, it is
> +		 * Now swap device can only swap in order 0 folio, it is
>  		 * necessary to recalculate the new swap entry based on
> -		 * the old order alignment.
> +		 * the offset, as the swapin index might be unalgined.
>  		 */
> -		if (split_order > 0) {
> -			pgoff_t offset = index - round_down(index, 1 << split_order);
> -
> +		if (order) {
> +			offset = index - round_down(index, 1 << order);
>  			swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
>  		}
> 
> -		/* Here we actually start the io */
>  		folio = shmem_swapin_cluster(swap, gfp, info, index);
>  		if (!folio) {
>  			error = -ENOMEM;
>  			goto failed;
>  		}
> -	} else if (order > folio_order(folio)) {
> +	}
> +alloced:

Here, only the synchronous device path jumps to the 'alloced' label, and
its folio is already allocated at 'order'. Maybe we should move the label
down below this if/else conditional checking and handling? (A small toy
sketch at the end of this mail illustrates what I mean.) Anyway, this is
an intermediate patch and the code will be changed later, so no strong
opinion.

> +	if (order > folio_order(folio)) {
>  		/*
> -		 * Swap readahead may swap in order 0 folios into swapcache
> +		 * Swapin may get smaller folios due to various reasons:
> +		 * It may fallback to order 0 due to memory pressure or race,
> +		 * swap readahead may swap in order 0 folios into swapcache
>  		 * asynchronously, while the shmem mapping can still stores
>  		 * large swap entries. In such cases, we should split the
>  		 * large swap entry to prevent possible data corruption.
>  		 */
> -		split_order = shmem_split_large_entry(inode, index, swap, gfp);
> +		split_order = shmem_split_large_entry(inode, index, index_entry, gfp);
>  		if (split_order < 0) {
> -			folio_put(folio);
> -			folio = NULL;
>  			error = split_order;
> -			goto failed;
> +			goto failed_nolock;
>  		}
> 
>  		/*

...snip...
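Purely to illustrate the label remark above: a tiny userspace toy model,
not the real shmem_swapin_folio() and with made-up helper names, showing
why the synchronous path always ends up with a folio of the full entry
order and so never needs the split check it currently jumps to.

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model only -- not kernel code. Sync swapin gets a folio of the full
 * entry order, while readahead may fall back to order 0, so only the
 * readahead path can ever need the large-entry split.
 */
static int toy_folio_order(bool sync_path, int entry_order)
{
	return sync_path ? entry_order : 0;	/* worst case for readahead */
}

int main(void)
{
	int entry_order = 4;	/* a 16-page large swap entry */

	for (int sync = 0; sync <= 1; sync++) {
		int forder = toy_folio_order(sync, entry_order);

		printf("%-9s path: folio order %d -> split %s\n",
		       sync ? "sync" : "readahead", forder,
		       entry_order > forder ? "needed" : "not needed");
	}
	return 0;
}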
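And a side note on the hunk that recalculates the swap entry: the offset
arithmetic is easy to check in userspace. The snippet below reimplements
round_down() locally and uses made-up order/offset/index values, so it is
only an illustration, not kernel code.

#include <stdio.h>

/* Local stand-in for the kernel's round_down(); illustration only. */
#define round_down(x, y)	((x) & ~((y) - 1UL))

int main(void)
{
	unsigned long order = 4;		/* large entry covers 16 pages */
	unsigned long base_swp_offset = 0x100;	/* made-up start of the entry */
	unsigned long index = 35;		/* unaligned faulting page index */

	/* 35 rounds down to 32, so the fault is 3 pages into the entry. */
	unsigned long offset = index - round_down(index, 1UL << order);

	printf("offset within large entry: %lu\n", offset);
	printf("swap offset to read:       0x%lx\n", base_swp_offset + offset);
	return 0;
}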