Message-ID: <00b95366-aa42-4051-9457-04a009aedbb2@linux.alibaba.com>
Date: Mon, 30 Jun 2025 12:47:43 +0800
Subject: Re: [PATCH v3 3/7] mm/shmem, swap: tidy up THP swapin checks
From: Baolin Wang
To: Kairui Song, linux-mm@kvack.org
Cc: Andrew Morton, Hugh Dickins, Matthew Wilcox, Kemeng Shi, Chris Li,
 Nhat Pham, Baoquan He, Barry Song, linux-kernel@vger.kernel.org
References: <20250627062020.534-1-ryncsn@gmail.com>
 <20250627062020.534-4-ryncsn@gmail.com>
In-Reply-To: <20250627062020.534-4-ryncsn@gmail.com>

On 2025/6/27 14:20, Kairui Song wrote:
> From: Kairui Song
>
> Move all THP swapin-related checks under CONFIG_TRANSPARENT_HUGEPAGE,
> so they will be trimmed off by the compiler if not needed.
>
> And add a WARN if shmem sees an order > 0 entry when
> CONFIG_TRANSPARENT_HUGEPAGE is disabled; that should never happen
> unless things went very wrong.
>
> There should be no observable feature change except the newly added
> WARN.
>
> Signed-off-by: Kairui Song

LGTM. Thanks.

Reviewed-by: Baolin Wang

> ---
>  mm/shmem.c | 42 ++++++++++++++++++++----------------------
>  1 file changed, 20 insertions(+), 22 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 033dc7a3435d..f85a985167c5 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1980,26 +1980,39 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
>  		swp_entry_t entry, int order, gfp_t gfp)
>  {
>  	struct shmem_inode_info *info = SHMEM_I(inode);
> +	int nr_pages = 1 << order;
>  	struct folio *new;
>  	void *shadow;
> -	int nr_pages;
>
>  	/*
>  	 * We have arrived here because our zones are constrained, so don't
>  	 * limit chance of success with further cpuset and node constraints.
>  	 */
>  	gfp &= ~GFP_CONSTRAINT_MASK;
> -	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && order > 0) {
> -		gfp_t huge_gfp = vma_thp_gfp_mask(vma);
> -
> -		gfp = limit_gfp_mask(huge_gfp, gfp);
> +	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
> +		if (WARN_ON_ONCE(order))
> +			return ERR_PTR(-EINVAL);
> +	} else if (order) {
> +		/*
> +		 * If uffd is active for the vma, we need per-page fault
> +		 * fidelity to maintain the uffd semantics, then fallback
> +		 * to swapin order-0 folio, as well as for zswap case.
> +		 * Any existing sub folio in the swap cache also blocks
> +		 * mTHP swapin.
> +		 */
> +		if ((vma && unlikely(userfaultfd_armed(vma))) ||
> +		     !zswap_never_enabled() ||
> +		     non_swapcache_batch(entry, nr_pages) != nr_pages) {
> +			return ERR_PTR(-EINVAL);
> +		} else {
> +			gfp = limit_gfp_mask(vma_thp_gfp_mask(vma), gfp);
> +		}
>  	}
>
>  	new = shmem_alloc_folio(gfp, order, info, index);
>  	if (!new)
>  		return ERR_PTR(-ENOMEM);
>
> -	nr_pages = folio_nr_pages(new);
>  	if (mem_cgroup_swapin_charge_folio(new, vma ? vma->vm_mm : NULL,
>  					   gfp, entry)) {
>  		folio_put(new);
> @@ -2283,9 +2296,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  	/* Look it up and read it in.. */
>  	folio = swap_cache_get_folio(swap, NULL, 0);
>  	if (!folio) {
> -		int nr_pages = 1 << order;
> -		bool fallback_order0 = false;
> -
>  		/* Or update major stats only when swapin succeeds?? */
>  		if (fault_type) {
>  			*fault_type |= VM_FAULT_MAJOR;
>  			count_memcg_event_mm(fault_mm, PGMAJFAULT);
>  		}
>
> -		/*
> -		 * If uffd is active for the vma, we need per-page fault
> -		 * fidelity to maintain the uffd semantics, then fallback
> -		 * to swapin order-0 folio, as well as for zswap case.
> -		 * Any existing sub folio in the swap cache also blocks
> -		 * mTHP swapin.
> -		 */
> -		if (order > 0 && ((vma && unlikely(userfaultfd_armed(vma))) ||
> -				  !zswap_never_enabled() ||
> -				  non_swapcache_batch(swap, nr_pages) != nr_pages))
> -			fallback_order0 = true;
> -
>  		/* Skip swapcache for synchronous device. */
> -		if (!fallback_order0 && data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
> +		if (data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
>  			folio = shmem_swap_alloc_folio(inode, vma, index, swap, order, gfp);
>  			if (!IS_ERR(folio)) {
>  				skip_swapcache = true;
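
A side note on the compile-time trimming the commit message relies on:
IS_ENABLED() expands to a constant 0 or 1, so the arm guarded by
!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) is eliminated as dead code, and
the THP-only helpers called in the other arm need no definition in
!THP builds. A minimal standalone sketch of the pattern, mirroring the
patch's shape but not its code (CONFIG_FOO and huge_path() are made-up
names for illustration):

	/*
	 * Sketch only: IS_ENABLED(CONFIG_FOO) is a compile-time
	 * constant, so exactly one arm survives in the object code.
	 */
	static int alloc_sketch(int order)
	{
		if (!IS_ENABLED(CONFIG_FOO)) {
			/* Should be unreachable; warn once, fail cleanly. */
			if (WARN_ON_ONCE(order))
				return -EINVAL;
		} else if (order) {
			/* Compiled out entirely when CONFIG_FOO=n. */
			return huge_path(order);
		}
		return 0;
	}

Unlike an #ifdef, both arms are still parsed and type-checked in every
configuration, so bitrot in the disabled path is caught at build time.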