Message-ID: <925804e4-0b33-45eb-905d-e00f67192828@gmail.com>
Date: Wed, 14 Aug 2024 12:11:26 +0100
Subject: Re: [PATCH v3 4/6] mm: Introduce a pageflag for partially mapped folios
From: Usama Arif <usamaarif642@gmail.com>
To: Barry Song
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, hannes@cmpxchg.org,
 riel@surriel.com, shakeel.butt@linux.dev, roman.gushchin@linux.dev,
 yuzhao@google.com, david@redhat.com, ryan.roberts@arm.com, rppt@kernel.org,
 willy@infradead.org, cerasuolodomenico@gmail.com, corbet@lwn.net,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kernel-team@meta.com
References: <20240813120328.1275952-1-usamaarif642@gmail.com>
 <20240813120328.1275952-5-usamaarif642@gmail.com>
On 14/08/2024 11:44, Barry Song wrote:
> On Wed, Aug 14, 2024 at 12:03 AM Usama Arif wrote:
>>
>> Currently folio->_deferred_list is used to keep track of
>> partially_mapped folios that are going to be split under memory
>> pressure. In the next patch, all THPs that are faulted in and collapsed
>> by khugepaged are also going to be tracked using _deferred_list.
>>
>> This patch introduces a pageflag to be able to distinguish between
>> partially mapped folios and others in the deferred_list at split time
>> in deferred_split_scan. It's needed because __folio_remove_rmap
>> decrements _mapcount, _large_mapcount and _entire_mapcount, so it
>> would otherwise not be possible to distinguish partially mapped folios
>> from others in deferred_split_scan.
>>
>> Even though this introduces an extra flag to track whether the folio
>> is partially mapped, there is no functional change intended with this
>> patch. The flag is not useful in this patch itself; it becomes useful
>> in the next patch, when _deferred_list also holds non-partially-mapped
>> folios.
>>
>> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
>> ---
>>  include/linux/huge_mm.h    |  4 ++--
>>  include/linux/page-flags.h |  3 +++
>>  mm/huge_memory.c           | 21 +++++++++++++--------
>>  mm/hugetlb.c               |  1 +
>>  mm/internal.h              |  4 +++-
>>  mm/memcontrol.c            |  3 ++-
>>  mm/migrate.c               |  3 ++-
>>  mm/page_alloc.c            |  5 +++--
>>  mm/rmap.c                  |  3 ++-
>>  mm/vmscan.c                |  3 ++-
>>  10 files changed, 33 insertions(+), 17 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index 4c32058cacfe..969f11f360d2 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -321,7 +321,7 @@ static inline int split_huge_page(struct page *page)
>>  {
>>         return split_huge_page_to_list_to_order(page, NULL, 0);
>>  }
>> -void deferred_split_folio(struct folio *folio);
>> +void deferred_split_folio(struct folio *folio, bool partially_mapped);
>>
>>  void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
>>                 unsigned long address, bool freeze, struct folio *folio);
>> @@ -495,7 +495,7 @@ static inline int split_huge_page(struct page *page)
>>  {
>>         return 0;
>>  }
>> -static inline void deferred_split_folio(struct folio *folio) {}
>> +static inline void deferred_split_folio(struct folio *folio, bool partially_mapped) {}
>>  #define split_huge_pmd(__vma, __pmd, __address) \
>>         do { } while (0)
>>
>> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
>> index a0a29bd092f8..cecc1bad7910 100644
>> --- a/include/linux/page-flags.h
>> +++ b/include/linux/page-flags.h
>> @@ -182,6 +182,7 @@ enum pageflags {
>>         /* At least one page in this folio has the hwpoison flag set */
>>         PG_has_hwpoisoned = PG_active,
>>         PG_large_rmappable = PG_workingset, /* anon or file-backed */
>> +       PG_partially_mapped, /* was identified to be partially mapped */
>>  };
>>
>>  #define PAGEFLAGS_MASK ((1UL << NR_PAGEFLAGS) - 1)
>> @@ -861,8 +862,10 @@ static inline void ClearPageCompound(struct page *page)
>>         ClearPageHead(page);
>>  }
>>  FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE)
>> +FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
>>  #else
>>  FOLIO_FLAG_FALSE(large_rmappable)
>> +FOLIO_FLAG_FALSE(partially_mapped)
>>  #endif
>>
>>  #define PG_head_mask ((1UL << PG_head))
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 6df0e9f4f56c..c024ab0f745c 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3397,6 +3397,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>>                  * page_deferred_list.
>>                  */
>>                 list_del_init(&folio->_deferred_list);
>> +               folio_clear_partially_mapped(folio);
>>         }
>>         spin_unlock(&ds_queue->split_queue_lock);
>>         if (mapping) {
>> @@ -3453,11 +3454,12 @@ void __folio_undo_large_rmappable(struct folio *folio)
>>         if (!list_empty(&folio->_deferred_list)) {
>>                 ds_queue->split_queue_len--;
>>                 list_del_init(&folio->_deferred_list);
>> +               folio_clear_partially_mapped(folio);
>>         }
>>         spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
>>  }
>>
>> -void deferred_split_folio(struct folio *folio)
>> +void deferred_split_folio(struct folio *folio, bool partially_mapped)
>>  {
>>         struct deferred_split *ds_queue = get_deferred_split_queue(folio);
>>  #ifdef CONFIG_MEMCG
>> @@ -3485,14 +3487,17 @@ void deferred_split_folio(struct folio *folio)
>>         if (folio_test_swapcache(folio))
>>                 return;
>>
>> -       if (!list_empty(&folio->_deferred_list))
>> -               return;
>> -
>>         spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
>> +       if (partially_mapped)
>> +               folio_set_partially_mapped(folio);
>> +       else
>> +               folio_clear_partially_mapped(folio);
>
> Hi Usama,
>
> Do we need this? When can a partially_mapped folio on the deferred_list
> become non-partially-mapped and need a clear? I understand the transition
> from entirely_mapped to partially_mapped is a one-way process, i.e.
> partially_mapped folios can't go back to entirely_mapped?
>

Hi Barry,

The deferred_split_folio function is called in three places after this
series: at fault, at collapse, and on partial mapping. Partial mapping can
only happen after fault/collapse, and we have
FOLIO_FLAG_FALSE(partially_mapped), i.e. the flag is initialized to false,
so technically the clear is not needed. A partially_mapped folio on the
deferred list won't become non-partially-mapped. I only did it as a
precaution, in case someone ever changes the kernel and calls
deferred_split_folio with partially_mapped set to false after it had been
true. The function arguments of deferred_split_folio make it look as though
passing partially_mapped=false would clear the flag, which is why I cleared
it as well.

I could change the patch to something like the below if it makes things
better, i.e. add a comment at the top of the function:

-void deferred_split_folio(struct folio *folio)
+/* partially_mapped=false won't clear PG_partially_mapped folio flag */
+void deferred_split_folio(struct folio *folio, bool partially_mapped)
 {
        struct deferred_split *ds_queue = get_deferred_split_queue(folio);
 #ifdef CONFIG_MEMCG
@@ -3485,14 +3488,15 @@ void deferred_split_folio(struct folio *folio)
        if (folio_test_swapcache(folio))
                return;

-       if (!list_empty(&folio->_deferred_list))
-               return;
-
        spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+       if (partially_mapped)
+               folio_set_partially_mapped(folio);
        if (list_empty(&folio->_deferred_list)) {
-               if (folio_test_pmd_mappable(folio))
-                       count_vm_event(THP_DEFERRED_SPLIT_PAGE);
-               count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
+               if (partially_mapped) {
+                       if (folio_test_pmd_mappable(folio))
+                               count_vm_event(THP_DEFERRED_SPLIT_PAGE);
+                       count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
+               }
                list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
                ds_queue->split_queue_len++;
 #ifdef CONFIG_MEMCG

> I am trying to rebase my NR_SPLIT_DEFERRED counter on top of your
> work, but this "clear" makes that job quite tricky, as I am not sure
> whether this patch is going to clear the partially_mapped flag for
> folios on the deferred_list.
>
> Otherwise, I can simply do the below whenever a folio is leaving the
> deferred_list:
>
>         ds_queue->split_queue_len--;
>         list_del_init(&folio->_deferred_list);
>         if (folio_test_clear_partially_mapped(folio))
>                 mod_mthp_stat(folio_order(folio),
>                               MTHP_STAT_NR_SPLIT_DEFERRED, -1);
>
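A minimal sketch of how such a counter could stay exact on top of this
series, keyed to the flag transitions rather than to list membership. To be
clear about what is assumed rather than taken from the thread:
MTHP_STAT_NR_SPLIT_DEFERRED and mod_mthp_stat() are from Barry's proposed
(not yet merged) counter work, and the test-and-set/test-and-clear accessors
are hypothetical, since FOLIO_FLAG() as added in this patch generates only
the plain folio_test/set/clear_partially_mapped() variants, so the flag
would also need FOLIO_TEST_SET_FLAG()/FOLIO_TEST_CLEAR_FLAG() declarations:

/*
 * Sketch only, not code from this series: counter bookkeeping driven by
 * PG_partially_mapped transitions. Assumes the proposed
 * MTHP_STAT_NR_SPLIT_DEFERRED counter, the proposed mod_mthp_stat()
 * helper, and test-and-set/test-and-clear variants of the flag accessors.
 */

/* Add side; caller holds ds_queue->split_queue_lock. */
static void deferred_split_queue_add(struct deferred_split *ds_queue,
                                     struct folio *folio,
                                     bool partially_mapped)
{
        /* Count each folio once, on its false -> true flag transition. */
        if (partially_mapped && !folio_test_set_partially_mapped(folio))
                mod_mthp_stat(folio_order(folio),
                              MTHP_STAT_NR_SPLIT_DEFERRED, 1);

        if (list_empty(&folio->_deferred_list)) {
                list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
                ds_queue->split_queue_len++;
        }
}

/* Remove side; caller holds the lock and the folio is on the list. */
static void deferred_split_queue_del(struct deferred_split *ds_queue,
                                     struct folio *folio)
{
        ds_queue->split_queue_len--;
        list_del_init(&folio->_deferred_list);

        /*
         * test-and-clear makes the decrement idempotent: folios queued at
         * fault/collapse time with the flag still false never touch the
         * counter.
         */
        if (folio_test_clear_partially_mapped(folio))
                mod_mthp_stat(folio_order(folio),
                              MTHP_STAT_NR_SPLIT_DEFERRED, -1);
}

With both sides keyed to the flag, the two folio_clear_partially_mapped()
sites this patch adds (in split_huge_page_to_list_to_order and
__folio_undo_large_rmappable) would fold into deferred_split_queue_del()
without any extra bookkeeping.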