From mboxrd@z Thu Jan  1 00:00:00 1970
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Hugh Dickins, Baolin Wang, Matthew Wilcox, Kemeng Shi,
	Chris Li, Nhat Pham, Baoquan He, Barry Song,
	linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v3 3/7] mm/shmem, swap: tidy up THP swapin checks
Date: Fri, 27 Jun 2025 14:20:16 +0800
Message-ID: <20250627062020.534-4-ryncsn@gmail.com>
X-Mailer: git-send-email 2.50.0
In-Reply-To: <20250627062020.534-1-ryncsn@gmail.com>
References: <20250627062020.534-1-ryncsn@gmail.com>
Reply-To: Kairui Song
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Kairui Song

Move all THP swapin related checks under CONFIG_TRANSPARENT_HUGEPAGE, so
they will be trimmed off by
the compiler if not needed.

Also add a WARN if shmem sees an order > 0 entry when
CONFIG_TRANSPARENT_HUGEPAGE is disabled; that should never happen unless
something went very wrong.

There should be no observable feature change except the newly added WARN.

Signed-off-by: Kairui Song
---
 mm/shmem.c | 42 ++++++++++++++++++++----------------------
 1 file changed, 20 insertions(+), 22 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 033dc7a3435d..f85a985167c5 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1980,26 +1980,39 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
 		swp_entry_t entry, int order, gfp_t gfp)
 {
 	struct shmem_inode_info *info = SHMEM_I(inode);
+	int nr_pages = 1 << order;
 	struct folio *new;
 	void *shadow;
-	int nr_pages;
 
 	/*
 	 * We have arrived here because our zones are constrained, so don't
 	 * limit chance of success with further cpuset and node constraints.
 	 */
 	gfp &= ~GFP_CONSTRAINT_MASK;
-	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && order > 0) {
-		gfp_t huge_gfp = vma_thp_gfp_mask(vma);
-
-		gfp = limit_gfp_mask(huge_gfp, gfp);
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
+		if (WARN_ON_ONCE(order))
+			return ERR_PTR(-EINVAL);
+	} else if (order) {
+		/*
+		 * If uffd is active for the vma, we need per-page fault
+		 * fidelity to maintain the uffd semantics, then fallback
+		 * to swapin order-0 folio, as well as for zswap case.
+		 * Any existing sub folio in the swap cache also blocks
+		 * mTHP swapin.
+		 */
+		if ((vma && unlikely(userfaultfd_armed(vma))) ||
+		    !zswap_never_enabled() ||
+		    non_swapcache_batch(entry, nr_pages) != nr_pages) {
+			return ERR_PTR(-EINVAL);
+		} else {
+			gfp = limit_gfp_mask(vma_thp_gfp_mask(vma), gfp);
+		}
 	}
 
 	new = shmem_alloc_folio(gfp, order, info, index);
 	if (!new)
 		return ERR_PTR(-ENOMEM);
 
-	nr_pages = folio_nr_pages(new);
 	if (mem_cgroup_swapin_charge_folio(new, vma ? vma->vm_mm : NULL,
 					   gfp, entry)) {
 		folio_put(new);
@@ -2283,9 +2296,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	/* Look it up and read it in.. */
 	folio = swap_cache_get_folio(swap, NULL, 0);
 	if (!folio) {
-		int nr_pages = 1 << order;
-		bool fallback_order0 = false;
-
 		/* Or update major stats only when swapin succeeds?? */
 		if (fault_type) {
 			*fault_type |= VM_FAULT_MAJOR;
@@ -2293,20 +2303,8 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 			count_memcg_event_mm(fault_mm, PGMAJFAULT);
 		}
 
-		/*
-		 * If uffd is active for the vma, we need per-page fault
-		 * fidelity to maintain the uffd semantics, then fallback
-		 * to swapin order-0 folio, as well as for zswap case.
-		 * Any existing sub folio in the swap cache also blocks
-		 * mTHP swapin.
-		 */
-		if (order > 0 && ((vma && unlikely(userfaultfd_armed(vma))) ||
-				  !zswap_never_enabled() ||
-				  non_swapcache_batch(swap, nr_pages) != nr_pages))
-			fallback_order0 = true;
-
 		/* Skip swapcache for synchronous device. */
-		if (!fallback_order0 && data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
+		if (data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
 			folio = shmem_swap_alloc_folio(inode, vma, index, swap,
 					order, gfp);
 			if (!IS_ERR(folio)) {
 				skip_swapcache = true;
-- 
2.50.0