From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Hugh Dickins, Baolin Wang, Matthew Wilcox, Kemeng Shi,
	Chris Li, Nhat Pham, Baoquan He, Barry Song,
	linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v5 5/8] mm/shmem, swap: never use swap cache and readahead for SWP_SYNCHRONOUS_IO
Date: Thu, 10 Jul 2025 11:37:03 +0800
Message-ID: <20250710033706.71042-6-ryncsn@gmail.com>
X-Mailer: git-send-email 2.50.0
In-Reply-To: <20250710033706.71042-1-ryncsn@gmail.com>
References: <20250710033706.71042-1-ryncsn@gmail.com>
Reply-To: Kairui Song
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
From: Kairui Song

For SWP_SYNCHRONOUS_IO devices, if a cache bypassing THP swapin fails
due to reasons like memory pressure, a partially conflicting swap cache,
or ZSWAP being enabled, shmem will fall back to a cached order 0 swapin.

Right now the swap cache still has a non-trivial overhead, and readahead
is not helpful for SWP_SYNCHRONOUS_IO devices, so we should always skip
the readahead and swap cache even if the swapin falls back to order 0.

So handle the fallback logic without falling back to the cached read.

Signed-off-by: Kairui Song
---
 mm/shmem.c | 41 ++++++++++++++++++++++++++++-------------
 1 file changed, 28 insertions(+), 13 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 97db1097f7de..847e6f128485 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1982,6 +1982,7 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	int nr_pages = 1 << order;
 	struct folio *new;
+	gfp_t alloc_gfp;
 	void *shadow;
 
 	/*
@@ -1989,6 +1990,7 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
 	 * limit chance of success with further cpuset and node constraints.
 	 */
 	gfp &= ~GFP_CONSTRAINT_MASK;
+	alloc_gfp = gfp;
 	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
 		if (WARN_ON_ONCE(order))
 			return ERR_PTR(-EINVAL);
@@ -2003,19 +2005,22 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
 		if ((vma && unlikely(userfaultfd_armed(vma))) ||
 		     !zswap_never_enabled() ||
 		     non_swapcache_batch(entry, nr_pages) != nr_pages)
-			return ERR_PTR(-EINVAL);
+			goto fallback;
 
-		gfp = limit_gfp_mask(vma_thp_gfp_mask(vma), gfp);
+		alloc_gfp = limit_gfp_mask(vma_thp_gfp_mask(vma), gfp);
+	}
+retry:
+	new = shmem_alloc_folio(alloc_gfp, order, info, index);
+	if (!new) {
+		new = ERR_PTR(-ENOMEM);
+		goto fallback;
 	}
-
-	new = shmem_alloc_folio(gfp, order, info, index);
-	if (!new)
-		return ERR_PTR(-ENOMEM);
 
 	if (mem_cgroup_swapin_charge_folio(new, vma ? vma->vm_mm : NULL,
-					   gfp, entry)) {
+					   alloc_gfp, entry)) {
 		folio_put(new);
-		return ERR_PTR(-ENOMEM);
+		new = ERR_PTR(-ENOMEM);
+		goto fallback;
 	}
 
 	/*
@@ -2030,7 +2035,9 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
 	 */
 	if (swapcache_prepare(entry, nr_pages)) {
 		folio_put(new);
-		return ERR_PTR(-EEXIST);
+		new = ERR_PTR(-EEXIST);
+		/* Try smaller folio to avoid cache conflict */
+		goto fallback;
 	}
 
 	__folio_set_locked(new);
@@ -2044,6 +2051,15 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
 	folio_add_lru(new);
 	swap_read_folio(new, NULL);
 	return new;
+fallback:
+	/* Order 0 swapin failed, nothing to fallback to, abort */
+	if (!order)
+		return new;
+	entry.val += index - round_down(index, nr_pages);
+	alloc_gfp = gfp;
+	nr_pages = 1;
+	order = 0;
+	goto retry;
 }
 
 /*
@@ -2313,13 +2329,12 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 		}
 
 		/*
-		 * Fallback to swapin order-0 folio unless the swap entry
-		 * already exists.
+		 * Direct swapin handled order 0 fallback already,
+		 * if it failed, abort.
		 */
		error = PTR_ERR(folio);
		folio = NULL;
-		if (error == -EEXIST)
-			goto failed;
+		goto failed;
 	}
 
 	/*
-- 
2.50.0