From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Hugh Dickins, Baolin Wang, Matthew Wilcox, Kemeng Shi,
    Chris Li, Nhat Pham, Baoquan He, Barry Song, linux-kernel@vger.kernel.org,
    Kairui Song
Subject: [PATCH v5 3/8] mm/shmem, swap: tidy up THP swapin checks
Date: Thu, 10 Jul 2025 11:37:01 +0800
Message-ID: <20250710033706.71042-4-ryncsn@gmail.com>
X-Mailer: git-send-email 2.50.0
In-Reply-To: <20250710033706.71042-1-ryncsn@gmail.com>
References: <20250710033706.71042-1-ryncsn@gmail.com>
Reply-To: Kairui Song <ryncsn@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Kairui Song <ryncsn@gmail.com>

Move all THP swapin related checks under CONFIG_TRANSPARENT_HUGEPAGE,
so they will be trimmed off by the compiler if not needed.

Also add a WARN if shmem sees an order > 0 entry when
CONFIG_TRANSPARENT_HUGEPAGE is disabled; that should never happen
unless things have gone very wrong.

There should be no observable behaviour change except the newly
added WARN.

Signed-off-by: Kairui Song <ryncsn@gmail.com>
Reviewed-by: Baolin Wang
---
 mm/shmem.c | 39 ++++++++++++++++++---------------------
 1 file changed, 18 insertions(+), 21 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 85ecc6709b5f..d8c872ab3570 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1980,26 +1980,38 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
 		swp_entry_t entry, int order, gfp_t gfp)
 {
 	struct shmem_inode_info *info = SHMEM_I(inode);
+	int nr_pages = 1 << order;
 	struct folio *new;
 	void *shadow;
-	int nr_pages;
 
 	/*
 	 * We have arrived here because our zones are constrained, so don't
 	 * limit chance of success with further cpuset and node constraints.
 	 */
 	gfp &= ~GFP_CONSTRAINT_MASK;
-	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && order > 0) {
-		gfp_t huge_gfp = vma_thp_gfp_mask(vma);
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
+		if (WARN_ON_ONCE(order))
+			return ERR_PTR(-EINVAL);
+	} else if (order) {
+		/*
+		 * If uffd is active for the vma, we need per-page fault
+		 * fidelity to maintain the uffd semantics, then fallback
+		 * to swapin order-0 folio, as well as for zswap case.
+		 * Any existing sub folio in the swap cache also blocks
+		 * mTHP swapin.
+		 */
+		if ((vma && unlikely(userfaultfd_armed(vma))) ||
+		    !zswap_never_enabled() ||
+		    non_swapcache_batch(entry, nr_pages) != nr_pages)
+			return ERR_PTR(-EINVAL);
 
-		gfp = limit_gfp_mask(huge_gfp, gfp);
+		gfp = limit_gfp_mask(vma_thp_gfp_mask(vma), gfp);
 	}
 
 	new = shmem_alloc_folio(gfp, order, info, index);
 	if (!new)
 		return ERR_PTR(-ENOMEM);
 
-	nr_pages = folio_nr_pages(new);
 	if (mem_cgroup_swapin_charge_folio(new, vma ? vma->vm_mm : NULL,
 					   gfp, entry)) {
 		folio_put(new);
@@ -2283,9 +2295,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	/* Look it up and read it in.. */
 	folio = swap_cache_get_folio(swap, NULL, 0);
 	if (!folio) {
-		int nr_pages = 1 << order;
-		bool fallback_order0 = false;
-
 		/* Or update major stats only when swapin succeeds?? */
 		if (fault_type) {
 			*fault_type |= VM_FAULT_MAJOR;
@@ -2293,20 +2302,8 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 			count_memcg_event_mm(fault_mm, PGMAJFAULT);
 		}
 
-		/*
-		 * If uffd is active for the vma, we need per-page fault
-		 * fidelity to maintain the uffd semantics, then fallback
-		 * to swapin order-0 folio, as well as for zswap case.
-		 * Any existing sub folio in the swap cache also blocks
-		 * mTHP swapin.
-		 */
-		if (order > 0 && ((vma && unlikely(userfaultfd_armed(vma))) ||
-				  !zswap_never_enabled() ||
-				  non_swapcache_batch(swap, nr_pages) != nr_pages))
-			fallback_order0 = true;
-
 		/* Skip swapcache for synchronous device. */
-		if (!fallback_order0 && data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
+		if (data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
 			folio = shmem_swap_alloc_folio(inode, vma, index, swap,
 						       order, gfp);
 			if (!IS_ERR(folio)) {
 				skip_swapcache = true;
-- 
2.50.0
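
Editor's note: below is a minimal, self-contained userspace sketch (not part of
the patch) of the idiom the commit message relies on when it says the checks
"will be trimmed off by the compiler if not needed": an IS_ENABLED()-style test
that folds to a compile-time constant 0 lets the optimizer drop the disabled
branch while the code inside it is still parsed and type-checked.
CONFIG_DEMO_FEATURE, DEMO_IS_ENABLED() and demo_handle_order() are hypothetical
stand-ins for illustration only, not kernel APIs.

#include <stdio.h>

/* Hypothetical stand-in for a Kconfig option; set to 1 to "enable" it. */
#define CONFIG_DEMO_FEATURE 0

/* Simplified stand-in for the kernel's IS_ENABLED() idiom. */
#define DEMO_IS_ENABLED(option) (option)

static int demo_handle_order(int order)
{
	if (!DEMO_IS_ENABLED(CONFIG_DEMO_FEATURE)) {
		/* Mirrors the WARN_ON_ONCE(order) path: order > 0 is unexpected here. */
		if (order)
			return -1;
	} else if (order) {
		/*
		 * Still compiled and type-checked, but with the option constant 0
		 * the optimizer removes this branch from the binary.
		 */
		printf("large-order path, order=%d\n", order);
	}
	return 0;
}

int main(void)
{
	printf("order 0 -> %d\n", demo_handle_order(0));
	printf("order 2 -> %d\n", demo_handle_order(2));
	return 0;
}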