From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Hugh Dickins, Baolin Wang, Matthew Wilcox, Kemeng Shi, Chris Li, Nhat Pham, Baoquan He, Barry Song, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v4 3/9] mm/shmem, swap: tidy up THP swapin checks
Date: Sat, 5 Jul 2025 02:17:42 +0800
Message-ID: <20250704181748.63181-4-ryncsn@gmail.com>
In-Reply-To: <20250704181748.63181-1-ryncsn@gmail.com>
References: <20250704181748.63181-1-ryncsn@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
From: Kairui Song

Move all THP swapin related checks under CONFIG_TRANSPARENT_HUGEPAGE,
so they will be trimmed off by the compiler if not needed.

And add a WARN if shmem sees an order > 0 entry when
CONFIG_TRANSPARENT_HUGEPAGE is disabled; that should never happen unless
things went very wrong.

There should be no observable feature change except the newly added WARN.

Signed-off-by: Kairui Song
Reviewed-by: Baolin Wang
---
 mm/shmem.c | 39 ++++++++++++++++++---------------------
 1 file changed, 18 insertions(+), 21 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 033dc7a3435d..e43becfa04b3 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1980,26 +1980,38 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
 		swp_entry_t entry, int order, gfp_t gfp)
 {
 	struct shmem_inode_info *info = SHMEM_I(inode);
+	int nr_pages = 1 << order;
 	struct folio *new;
 	void *shadow;
-	int nr_pages;
 
 	/*
 	 * We have arrived here because our zones are constrained, so don't
 	 * limit chance of success with further cpuset and node constraints.
 	 */
 	gfp &= ~GFP_CONSTRAINT_MASK;
-	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && order > 0) {
-		gfp_t huge_gfp = vma_thp_gfp_mask(vma);
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
+		if (WARN_ON_ONCE(order))
+			return ERR_PTR(-EINVAL);
+	} else if (order) {
+		/*
+		 * If uffd is active for the vma, we need per-page fault
+		 * fidelity to maintain the uffd semantics, then fallback
+		 * to swapin order-0 folio, as well as for zswap case.
+		 * Any existing sub folio in the swap cache also blocks
+		 * mTHP swapin.
+		 */
+		if ((vma && unlikely(userfaultfd_armed(vma))) ||
+		    !zswap_never_enabled() ||
+		    non_swapcache_batch(entry, nr_pages) != nr_pages)
+			return ERR_PTR(-EINVAL);
 
-		gfp = limit_gfp_mask(huge_gfp, gfp);
+		gfp = limit_gfp_mask(vma_thp_gfp_mask(vma), gfp);
 	}
 
 	new = shmem_alloc_folio(gfp, order, info, index);
 	if (!new)
 		return ERR_PTR(-ENOMEM);
 
-	nr_pages = folio_nr_pages(new);
 	if (mem_cgroup_swapin_charge_folio(new, vma ? vma->vm_mm : NULL,
 					   gfp, entry)) {
 		folio_put(new);
@@ -2283,9 +2295,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	/* Look it up and read it in.. */
 	folio = swap_cache_get_folio(swap, NULL, 0);
 	if (!folio) {
-		int nr_pages = 1 << order;
-		bool fallback_order0 = false;
-
 		/* Or update major stats only when swapin succeeds?? */
 		if (fault_type) {
 			*fault_type |= VM_FAULT_MAJOR;
@@ -2293,20 +2302,8 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 			count_memcg_event_mm(fault_mm, PGMAJFAULT);
 		}
 
-		/*
-		 * If uffd is active for the vma, we need per-page fault
-		 * fidelity to maintain the uffd semantics, then fallback
-		 * to swapin order-0 folio, as well as for zswap case.
-		 * Any existing sub folio in the swap cache also blocks
-		 * mTHP swapin.
-		 */
-		if (order > 0 && ((vma && unlikely(userfaultfd_armed(vma))) ||
-				  !zswap_never_enabled() ||
-				  non_swapcache_batch(swap, nr_pages) != nr_pages))
-			fallback_order0 = true;
-
 		/* Skip swapcache for synchronous device. */
-		if (!fallback_order0 && data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
+		if (data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
 			folio = shmem_swap_alloc_folio(inode, vma, index, swap,
 						       order, gfp);
 			if (!IS_ERR(folio)) {
 				skip_swapcache = true;
-- 
2.50.0