From mboxrd@z Thu Jan 1 00:00:00 1970
From: Barry Song
Date: Wed, 21 Aug 2024 09:30:21 +1200
Subject: Re: [PATCH v4 4/6] mm: Introduce a pageflag for partially mapped folios
To: Usama Arif
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, hannes@cmpxchg.org,
	riel@surriel.com, shakeel.butt@linux.dev, roman.gushchin@linux.dev,
	yuzhao@google.com, david@redhat.com, ryan.roberts@arm.com,
	rppt@kernel.org, willy@infradead.org, cerasuolodomenico@gmail.com,
	ryncsn@gmail.com, corbet@lwn.net, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org, kernel-team@meta.com
In-Reply-To: <953d398d-58be-41c6-bf30-4c9df597de77@gmail.com>
References: <20240819023145.2415299-1-usamaarif642@gmail.com>
	<20240819023145.2415299-5-usamaarif642@gmail.com>
	<9a58e794-2156-4a9f-a383-1cdfc07eee5e@gmail.com>
	<953d398d-58be-41c6-bf30-4c9df597de77@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

On Wed, Aug 21, 2024 at 7:35 AM Usama Arif wrote:
>
>
>
> On 19/08/2024 22:55, Barry Song wrote:
> > On Tue, Aug 20, 2024 at 9:34 AM Barry Song wrote:
> >>
> >> On Tue, Aug 20, 2024 at 8:16 AM Usama Arif wrote:
> >>>
> >>>
> >>>
> >>> On 19/08/2024 20:00, Barry Song wrote:
> >>>> On Tue, Aug 20, 2024 at 2:17 AM Usama Arif wrote:
> >>>>>
> >>>>>
> >>>>>
> >>>>> On 19/08/2024 09:29, Barry Song wrote:
> >>>>>> Hi Usama,
> >>>>>>
> >>>>>> I feel it is much better now! thanks!
> >>>>>>
> >>>>>> On Mon, Aug 19, 2024 at 2:31 PM Usama Arif wrote:
> >>>>>>>
> >>>>>>> Currently folio->_deferred_list is used to keep track of
> >>>>>>> partially_mapped folios that are going to be split under memory
> >>>>>>> pressure. In the next patch, all THPs that are faulted in and collapsed
> >>>>>>> by khugepaged are also going to be tracked using _deferred_list.
> >>>>>>>
> >>>>>>> This patch introduces a pageflag to be able to distinguish between
> >>>>>>> partially mapped folios and others in the deferred_list at split time in
> >>>>>>> deferred_split_scan. It's needed as __folio_remove_rmap decrements
> >>>>>>> _mapcount, _large_mapcount and _entire_mapcount, hence it won't be
> >>>>>>> possible to distinguish between partially mapped folios and others in
> >>>>>>> deferred_split_scan.
> >>>>>>>
> >>>>>>> Even though it introduces an extra flag to track if the folio is
> >>>>>>> partially mapped, there is no functional change intended with this
> >>>>>>> patch and the flag is not useful in this patch itself; it will
> >>>>>>> become useful in the next patch when _deferred_list has non-partially
> >>>>>>> mapped folios.
> >>>>>>>
> >>>>>>> Signed-off-by: Usama Arif
> >>>>>>> ---
> >>>>>>>  include/linux/huge_mm.h    |  4 ++--
> >>>>>>>  include/linux/page-flags.h | 11 +++++++++++
> >>>>>>>  mm/huge_memory.c           | 23 ++++++++++++++++-------
> >>>>>>>  mm/internal.h              |  4 +++-
> >>>>>>>  mm/memcontrol.c            |  3 ++-
> >>>>>>>  mm/migrate.c               |  3 ++-
> >>>>>>>  mm/page_alloc.c            |  5 +++--
> >>>>>>>  mm/rmap.c                  |  5 +++--
> >>>>>>>  mm/vmscan.c                |  3 ++-
> >>>>>>>  9 files changed, 44 insertions(+), 17 deletions(-)
> >>>>>>>
> >>>>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> >>>>>>> index 4c32058cacfe..969f11f360d2 100644
> >>>>>>> --- a/include/linux/huge_mm.h
> >>>>>>> +++ b/include/linux/huge_mm.h
> >>>>>>> @@ -321,7 +321,7 @@ static inline int split_huge_page(struct page *page)
> >>>>>>>  {
> >>>>>>>         return split_huge_page_to_list_to_order(page, NULL, 0);
> >>>>>>>  }
> >>>>>>> -void deferred_split_folio(struct folio *folio);
> >>>>>>> +void deferred_split_folio(struct folio *folio, bool partially_mapped);
> >>>>>>>
> >>>>>>>  void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> >>>>>>>                 unsigned long address, bool freeze, struct folio *folio);
> >>>>>>> @@ -495,7 +495,7 @@ static inline int split_huge_page(struct page *page)
> >>>>>>>  {
> >>>>>>>         return 0;
> >>>>>>>  }
> >>>>>>> -static inline void deferred_split_folio(struct folio *folio) {}
> >>>>>>> +static inline void deferred_split_folio(struct folio *folio, bool partially_mapped) {}
> >>>>>>>  #define split_huge_pmd(__vma, __pmd, __address) \
> >>>>>>>         do { } while (0)
> >>>>>>>
> >>>>>>> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> >>>>>>> index a0a29bd092f8..c3bb0e0da581 100644
> >>>>>>> --- a/include/linux/page-flags.h
> >>>>>>> +++ b/include/linux/page-flags.h
> >>>>>>> @@ -182,6 +182,7 @@ enum pageflags {
> >>>>>>>         /* At least one page in this folio has the hwpoison flag set */
> >>>>>>>         PG_has_hwpoisoned = PG_active,
> >>>>>>>         PG_large_rmappable = PG_workingset, /* anon or file-backed */
> >>>>>>> +       PG_partially_mapped = PG_reclaim, /* was identified to be partially mapped */
> >>>>>>>  };
> >>>>>>>
> >>>>>>>  #define PAGEFLAGS_MASK         ((1UL << NR_PAGEFLAGS) - 1)
> >>>>>>> @@ -861,8 +862,18 @@ static inline void ClearPageCompound(struct page *page)
> >>>>>>>         ClearPageHead(page);
> >>>>>>>  }
> >>>>>>>  FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE)
> >>>>>>> +FOLIO_TEST_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> >>>>>>> +/*
> >>>>>>> + * PG_partially_mapped is protected by deferred_split split_queue_lock,
> >>>>>>> + * so it's safe to use non-atomic set/clear.
> >>>>>>> + */
> >>>>>>> +__FOLIO_SET_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> >>>>>>> +__FOLIO_CLEAR_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> >>>>>>>  #else
> >>>>>>>  FOLIO_FLAG_FALSE(large_rmappable)
> >>>>>>> +FOLIO_TEST_FLAG_FALSE(partially_mapped)
> >>>>>>> +__FOLIO_SET_FLAG_NOOP(partially_mapped)
> >>>>>>> +__FOLIO_CLEAR_FLAG_NOOP(partially_mapped)
> >>>>>>>  #endif
> >>>>>>>
> >>>>>>>  #define PG_head_mask ((1UL << PG_head))
> >>>>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >>>>>>> index 2d77b5d2291e..70ee49dfeaad 100644
> >>>>>>> --- a/mm/huge_memory.c
> >>>>>>> +++ b/mm/huge_memory.c
> >>>>>>> @@ -3398,6 +3398,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
> >>>>>>>                  * page_deferred_list.
> >>>>>>>                  */
> >>>>>>>                 list_del_init(&folio->_deferred_list);
> >>>>>>> +               __folio_clear_partially_mapped(folio);
> >>>>>>>         }
> >>>>>>>         spin_unlock(&ds_queue->split_queue_lock);
> >>>>>>>         if (mapping) {
> >>>>>>> @@ -3454,11 +3455,13 @@ void __folio_undo_large_rmappable(struct folio *folio)
> >>>>>>>         if (!list_empty(&folio->_deferred_list)) {
> >>>>>>>                 ds_queue->split_queue_len--;
> >>>>>>>                 list_del_init(&folio->_deferred_list);
> >>>>>>> +               __folio_clear_partially_mapped(folio);
> >>>>>>
> >>>>>> is it possible to make things clearer by
> >>>>>>
> >>>>>>  if (folio_clear_partially_mapped)
> >>>>>>     __folio_clear_partially_mapped(folio);
> >>>>>>
> >>>>>> While writing without conditions isn't necessarily wrong, adding a condition
> >>>>>> will improve the readability of the code and enhance the clarity of my mTHP
> >>>>>> counters series. It would also help decrease SMP cache sync if we can avoid
> >>>>>> unnecessary writing?
> >>>>>>
> >>>>>
> >>>>> Do you mean if (folio_test_partially_mapped(folio))?
> >>>>>
> >>>>> I don't like this idea. I think it makes the readability worse? If I was
> >>>>> looking at if (test) -> clear for the first time, I would become confused
> >>>>> why it's being tested if it's going to be cleared at the end anyway.
> >>>>
> >>>> In the pmd-order case, the majority of folios are not partially mapped.
> >>>> Unconditional writes will trigger cache synchronization across all
> >>>> CPUs (related to the MESI protocol), making them more costly. By
> >>>> using conditional writes, such as "if (test) write", we can avoid
> >>>> most unnecessary writes, which is much more efficient. Additionally,
> >>>> we only need to manage nr_split_deferred when the condition
> >>>> is met. We are carefully evaluating all scenarios to determine
> >>>> if modifications to the partially_mapped flag are necessary.
> >>>>
> >>>
> >>>
> >>> Hmm okay, as you said it's needed for nr_split_deferred anyway. Something
> >>> like below is ok to fold in?
> >>>
> >>> commit 4ae9e2067346effd902b342296987b97dee29018 (HEAD)
> >>> Author: Usama Arif
> >>> Date:   Mon Aug 19 21:07:16 2024 +0100
> >>>
> >>>     mm: Introduce a pageflag for partially mapped folios fix
> >>>
> >>>     Test partially_mapped flag before clearing it. This should
> >>>     avoid unnecessary writes and will be needed in the nr_split_deferred
> >>>     series.
> >>>
> >>>     Signed-off-by: Usama Arif
> >>>
> >>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >>> index 5d67d3b3c1b2..ccde60aaaa0f 100644
> >>> --- a/mm/huge_memory.c
> >>> +++ b/mm/huge_memory.c
> >>> @@ -3479,7 +3479,8 @@ void __folio_undo_large_rmappable(struct folio *folio)
> >>>                 if (!list_empty(&folio->_deferred_list)) {
> >>>                         ds_queue->split_queue_len--;
> >>>                         list_del_init(&folio->_deferred_list);
> >>> -                       __folio_clear_partially_mapped(folio);
> >>> +                       if (folio_test_partially_mapped(folio))
> >>> +                               __folio_clear_partially_mapped(folio);
> >>>                 }
> >>>                 spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
> >>>  }
> >>> @@ -3610,7 +3611,8 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
> >>>                         } else {
> >>>                                 /* We lost race with folio_put() */
> >>>                                 list_del_init(&folio->_deferred_list);
> >>> -                               __folio_clear_partially_mapped(folio);
> >>> +                               if (folio_test_partially_mapped(folio))
> >>> +                                       __folio_clear_partially_mapped(folio);
> >>>                                 ds_queue->split_queue_len--;
> >>>                         }
> >>>                         if (!--sc->nr_to_scan)
> >>>
> >>
> >> Do we also need if (folio_test_partially_mapped(folio)) in
> >> split_huge_page_to_list_to_order()?
> >>
> >> I recall that in Yu Zhao's TAO, there's a chance of splitting (shattering)
> >> non-partially-mapped folios. To be future-proof, we might want to handle
> >> both cases equally.
> >
> > I recall we also have a real case which can split an entirely_mapped
> > folio:
> >
> > mm: huge_memory: enable debugfs to split huge pages to any order
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=fc4d182316bd5309b4066fd9ef21529ea397a7d4
> >
> >>
> >> By the way, we might not need to clear the flag for a new folio. This differs
> >> from the init_list, which is necessary. If a new folio has the partially_mapped
> >> flag, it indicates that we failed to clear it when freeing the folio to
> >> the buddy system, which is a bug we need to fix in the free path.
> >>
> >> Thanks
> >> Barry
>
> I believe the below fixlet should address all concerns:

Hi Usama,
thanks!

I can't judge whether we need this partially_mapped flag, but if we do,
the code looks correct to me. I'd like to leave this to David and other
experts to ack.

An alternative approach might be two lists: one for entirely_mapped
folios, the other for split_deferred ones. That also seems ugly, though;
a rough sketch of what I mean is below.
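Only as an untested sketch of the idea, reusing the existing
get_deferred_split_queue() helper in mm/huge_memory.c - the
underused_queue/underused_len names are invented here, and the simplified
deferred_split_folio() below leaves out moving a folio between the two
lists once it becomes partially mapped:

struct deferred_split {
        spinlock_t split_queue_lock;
        struct list_head split_queue;      /* partially mapped, split these first */
        struct list_head underused_queue;  /* entirely mapped THPs from fault/khugepaged */
        unsigned long split_queue_len;
        unsigned long underused_len;
};

void deferred_split_folio(struct folio *folio, bool partially_mapped)
{
        struct deferred_split *ds_queue = get_deferred_split_queue(folio);
        unsigned long flags;

        spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
        /* a folio only ever sits on one of the two lists */
        if (list_empty(&folio->_deferred_list)) {
                if (partially_mapped) {
                        list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
                        ds_queue->split_queue_len++;
                } else {
                        list_add_tail(&folio->_deferred_list, &ds_queue->underused_queue);
                        ds_queue->underused_len++;
                }
        }
        spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
}

A folio could still reuse folio->_deferred_list since it is only ever on
one list at a time, and the shrinker could drain split_queue before
touching underused_queue; the ugly part is that every queue/unqueue path
then has to know which list the folio is sitting on.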
On the other hand, when we want to extend your patchset to mTHP other
than PMD-order, will the single deferred_list create huge lock
contention while adding or removing folios from it?

>
>
> From 95492a51b1929ea274b4e5b78fc74e7736645d58 Mon Sep 17 00:00:00 2001
> From: Usama Arif
> Date: Mon, 19 Aug 2024 21:07:16 +0100
> Subject: [PATCH] mm: Introduce a pageflag for partially mapped folios fix
>
> Test partially_mapped flag before clearing it. This should
> avoid unnecessary writes and will be needed in the nr_split_deferred
> series.
> Also no need to clear partially_mapped when prepping the compound head,
> as it should already start cleared.
>
> Signed-off-by: Usama Arif
> ---
>  include/linux/page-flags.h | 2 +-
>  mm/huge_memory.c           | 9 ++++++---
>  mm/internal.h              | 4 +---
>  3 files changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index c3bb0e0da581..f1602695daf2 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -1182,7 +1182,7 @@ static __always_inline void __ClearPageAnonExclusive(struct page *page)
>   */
>  #define PAGE_FLAGS_SECOND                                              \
>         (0xffUL /* order */          | 1UL << PG_has_hwpoisoned |       \
> -        1UL << PG_large_rmappable)
> +        1UL << PG_large_rmappable | 1UL << PG_partially_mapped)
>
>  #define PAGE_FLAGS_PRIVATE                             \
>         (1UL << PG_private | 1UL << PG_private_2)
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 5d67d3b3c1b2..402b9d933de0 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3422,7 +3422,8 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>                  * page_deferred_list.
>                  */
>                 list_del_init(&folio->_deferred_list);
> -               __folio_clear_partially_mapped(folio);
> +               if (folio_test_partially_mapped(folio))
> +                       __folio_clear_partially_mapped(folio);
>         }
>         spin_unlock(&ds_queue->split_queue_lock);
>         if (mapping) {
> @@ -3479,7 +3480,8 @@ void __folio_undo_large_rmappable(struct folio *folio)
>                 if (!list_empty(&folio->_deferred_list)) {
>                         ds_queue->split_queue_len--;
>                         list_del_init(&folio->_deferred_list);
> -                       __folio_clear_partially_mapped(folio);
> +                       if (folio_test_partially_mapped(folio))
> +                               __folio_clear_partially_mapped(folio);
>                 }
>                 spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
>  }
> @@ -3610,7 +3612,8 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
>                         } else {
>                                 /* We lost race with folio_put() */
>                                 list_del_init(&folio->_deferred_list);
> -                               __folio_clear_partially_mapped(folio);
> +                               if (folio_test_partially_mapped(folio))
> +                                       __folio_clear_partially_mapped(folio);
>                                 ds_queue->split_queue_len--;
>                         }
>                         if (!--sc->nr_to_scan)
> diff --git a/mm/internal.h b/mm/internal.h
> index 27cbb5365841..52f7fc4e8ac3 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -662,10 +662,8 @@ static inline void prep_compound_head(struct page *page, unsigned int order)
>         atomic_set(&folio->_entire_mapcount, -1);
>         atomic_set(&folio->_nr_pages_mapped, 0);
>         atomic_set(&folio->_pincount, 0);
> -       if (order > 1) {
> +       if (order > 1)
>                 INIT_LIST_HEAD(&folio->_deferred_list);
> -               __folio_clear_partially_mapped(folio);
> -       }
>  }
>
>  static inline void prep_compound_tail(struct page *head, int tail_idx)
> --
> 2.43.5
>
>

Thanks
Barry