Message-ID: <59725862-f4fc-456c-bafb-cbd302777881@gmail.com>
Date: Wed, 14 Aug 2024 12:20:23 +0100
Subject: Re: [PATCH v3 4/6] mm: Introduce a pageflag for partially mapped folios
From: Usama Arif
To: Barry Song
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, hannes@cmpxchg.org,
 riel@surriel.com, shakeel.butt@linux.dev, roman.gushchin@linux.dev,
 yuzhao@google.com, david@redhat.com, ryan.roberts@arm.com, rppt@kernel.org,
 willy@infradead.org, cerasuolodomenico@gmail.com, corbet@lwn.net,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kernel-team@meta.com
References: <20240813120328.1275952-1-usamaarif642@gmail.com>
 <20240813120328.1275952-5-usamaarif642@gmail.com>

On 14/08/2024 12:10, Barry Song wrote:
> On Wed, Aug 14, 2024 at 12:03 AM Usama Arif wrote:
>>
>> Currently folio->_deferred_list is used to keep track of
>> partially_mapped folios that are going to be split under memory
>> pressure. In the next patch, all THPs that are faulted in and
>> collapsed by khugepaged are also going to be tracked using
>> _deferred_list.
>>
>> This patch introduces a pageflag to be able to distinguish between
>> partially mapped folios and others in the deferred_list at split
>> time in deferred_split_scan. It's needed because __folio_remove_rmap
>> decrements _mapcount, _large_mapcount and _entire_mapcount, so it
>> won't be possible to distinguish between partially mapped folios and
>> others in deferred_split_scan.
>>
>> Even though it introduces an extra flag to track whether the folio is
>> partially mapped, there is no functional change intended with this
>> patch. The flag is not useful in this patch by itself; it will become
>> useful in the next patch, when _deferred_list also holds folios that
>> are not partially mapped.
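
For reference, a minimal sketch of the kind of check this flag enables
(my simplification, not the actual deferred_split_scan() hunk from the
next patch; locking, refcounting and the shrinker plumbing are omitted):

static void sketch_scan_deferred_list(struct list_head *list)
{
	struct folio *folio, *next;

	list_for_each_entry_safe(folio, next, list, _deferred_list) {
		/*
		 * __folio_remove_rmap() has already decremented
		 * _mapcount/_large_mapcount/_entire_mapcount by the time
		 * the scan runs, so the flag is the only remaining way
		 * to tell partially mapped folios apart.
		 */
		if (!folio_test_partially_mapped(folio))
			continue;	/* not a partial-unmap candidate */
		/* ... trylock and split_folio(folio) would go here ... */
	}
}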
>>
>> Signed-off-by: Usama Arif
>> ---
>>  include/linux/huge_mm.h    |  4 ++--
>>  include/linux/page-flags.h |  3 +++
>>  mm/huge_memory.c           | 21 +++++++++++++--------
>>  mm/hugetlb.c               |  1 +
>>  mm/internal.h              |  4 +++-
>>  mm/memcontrol.c            |  3 ++-
>>  mm/migrate.c               |  3 ++-
>>  mm/page_alloc.c            |  5 +++--
>>  mm/rmap.c                  |  3 ++-
>>  mm/vmscan.c                |  3 ++-
>>  10 files changed, 33 insertions(+), 17 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index 4c32058cacfe..969f11f360d2 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -321,7 +321,7 @@ static inline int split_huge_page(struct page *page)
>>  {
>>         return split_huge_page_to_list_to_order(page, NULL, 0);
>>  }
>> -void deferred_split_folio(struct folio *folio);
>> +void deferred_split_folio(struct folio *folio, bool partially_mapped);
>>
>>  void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
>>                 unsigned long address, bool freeze, struct folio *folio);
>> @@ -495,7 +495,7 @@ static inline int split_huge_page(struct page *page)
>>  {
>>         return 0;
>>  }
>> -static inline void deferred_split_folio(struct folio *folio) {}
>> +static inline void deferred_split_folio(struct folio *folio, bool partially_mapped) {}
>>  #define split_huge_pmd(__vma, __pmd, __address) \
>>         do { } while (0)
>>
>> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
>> index a0a29bd092f8..cecc1bad7910 100644
>> --- a/include/linux/page-flags.h
>> +++ b/include/linux/page-flags.h
>> @@ -182,6 +182,7 @@ enum pageflags {
>>         /* At least one page in this folio has the hwpoison flag set */
>>         PG_has_hwpoisoned = PG_active,
>>         PG_large_rmappable = PG_workingset, /* anon or file-backed */
>> +       PG_partially_mapped, /* was identified to be partially mapped */
>>  };
>>
>>  #define PAGEFLAGS_MASK ((1UL << NR_PAGEFLAGS) - 1)
>> @@ -861,8 +862,10 @@ static inline void ClearPageCompound(struct page *page)
>>         ClearPageHead(page);
>>  }
>>  FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE)
>> +FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
>>  #else
>>  FOLIO_FLAG_FALSE(large_rmappable)
>> +FOLIO_FLAG_FALSE(partially_mapped)
>>  #endif
>>
>>  #define PG_head_mask   ((1UL << PG_head))
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 6df0e9f4f56c..c024ab0f745c 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3397,6 +3397,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>>                          * page_deferred_list.
>>                          */
>>                         list_del_init(&folio->_deferred_list);
>> +                       folio_clear_partially_mapped(folio);
>>                 }
>>                 spin_unlock(&ds_queue->split_queue_lock);
>>                 if (mapping) {
>> @@ -3453,11 +3454,12 @@ void __folio_undo_large_rmappable(struct folio *folio)
>>         if (!list_empty(&folio->_deferred_list)) {
>>                 ds_queue->split_queue_len--;
>>                 list_del_init(&folio->_deferred_list);
>> +               folio_clear_partially_mapped(folio);
>>         }
>>         spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
>>  }
>>
>> -void deferred_split_folio(struct folio *folio)
>> +void deferred_split_folio(struct folio *folio, bool partially_mapped)
>>  {
>>         struct deferred_split *ds_queue = get_deferred_split_queue(folio);
>>  #ifdef CONFIG_MEMCG
>> @@ -3485,14 +3487,17 @@ void deferred_split_folio(struct folio *folio)
>>         if (folio_test_swapcache(folio))
>>                 return;
>>
>> -       if (!list_empty(&folio->_deferred_list))
>> -               return;
>> -
>>         spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
>> +       if (partially_mapped)
>> +               folio_set_partially_mapped(folio);
>> +       else
>> +               folio_clear_partially_mapped(folio);
>>         if (list_empty(&folio->_deferred_list)) {
>> -               if (folio_test_pmd_mappable(folio))
>> -                       count_vm_event(THP_DEFERRED_SPLIT_PAGE);
>> -               count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
>> +               if (partially_mapped) {
>> +                       if (folio_test_pmd_mappable(folio))
>> +                               count_vm_event(THP_DEFERRED_SPLIT_PAGE);
>> +                       count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
>
> This code completely broke MTHP_STAT_SPLIT_DEFERRED for PMD_ORDER. It
> added the folio to the deferred_list as entirely_mapped
> (partially_mapped == false). However, when partially_mapped later
> becomes true, there is no opportunity to count it again, because the
> folio is already on the list. Are you consistently seeing the counter
> for PMD_ORDER as 0?
>

Ah, I see it. This should fix it?

-void deferred_split_folio(struct folio *folio)
+/* partially_mapped=false won't clear PG_partially_mapped folio flag */
+void deferred_split_folio(struct folio *folio, bool partially_mapped)
 {
        struct deferred_split *ds_queue = get_deferred_split_queue(folio);
 #ifdef CONFIG_MEMCG
@@ -3485,14 +3488,14 @@ void deferred_split_folio(struct folio *folio)
        if (folio_test_swapcache(folio))
                return;

-       if (!list_empty(&folio->_deferred_list))
-               return;
-
        spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
-       if (list_empty(&folio->_deferred_list)) {
+       if (partially_mapped) {
+               folio_set_partially_mapped(folio);
                if (folio_test_pmd_mappable(folio))
                        count_vm_event(THP_DEFERRED_SPLIT_PAGE);
                count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
+       }
+       if (list_empty(&folio->_deferred_list)) {
                list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
                ds_queue->split_queue_len++;
 #ifdef CONFIG_MEMCG
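
To make the intent of the new argument concrete, here is a rough sketch
of the two kinds of call sites the series description implies (the
function names are hypothetical placeholders, not the real hunks):

/* Partial unmap of a large folio, as done from __folio_remove_rmap(): */
static void example_partial_unmap(struct folio *folio)
{
	/* Mark it so deferred_split_scan() will try to split it. */
	deferred_split_folio(folio, true);
}

/* Fault-in/khugepaged path added by the next patch in the series: */
static void example_track_full_thp(struct folio *folio)
{
	/* Tracked on _deferred_list, but not flagged partially mapped. */
	deferred_split_folio(folio, false);
}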