Message-ID: <6c986b83-e00d-46fe-8c88-374f8e6bd0fa@arm.com>
Date: Wed, 14 Feb 2024 16:22:46 +0000
Subject: Re: [PATCH v4 5/7] mm: thp: split huge page to any lower order pages (except order-1).
From: Ryan Roberts
To: Zi Yan
Cc: "Pankaj Raghav (Samsung)", linux-mm@kvack.org, "Matthew Wilcox (Oracle)",
 David Hildenbrand, Yang Shi, Yu Zhao, "Kirill A. Shutemov", Michal Koutný,
 Roman Gushchin, Zach O'Keefe, Hugh Dickins, Mcgrof Chamberlain,
 Andrew Morton, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
References: <20240213215520.1048625-1-zi.yan@sent.com>
 <20240213215520.1048625-6-zi.yan@sent.com>
 <6859C8DA-5B7F-458E-895C-763BA782F4B9@nvidia.com>
In-Reply-To: <6859C8DA-5B7F-458E-895C-763BA782F4B9@nvidia.com>

On 14/02/2024 16:11, Zi Yan wrote:
> On 14 Feb 2024, at 5:38, Ryan Roberts wrote:
>
>> On 13/02/2024 21:55, Zi Yan wrote:
>>> From: Zi Yan
>>>
>>> To split a THP to any lower order (except order-1) pages, we need to
>>> reform THPs on subpages at given order and add page refcount based on the
>>> new page order. Also we need to reinitialize page_deferred_list after
>>> removing the page from the split_queue, otherwise a subsequent split will
>>> see list corruption when checking the page_deferred_list again.
>>>
>>> It has many uses, like minimizing the number of pages after
>>> truncating a huge pagecache page. For anonymous THPs, we can only split
>>> them to order-0 like before until we add support for any size anonymous
>>> THPs.
>>
>> multi-size THP is now upstream. Not sure if this comment still makes sense.
> Will change it to reflect the fact that multi-size THP is already upstream.
>
>> Still it's not completely clear to me how you would integrate this new
>> machinery and decide what non-zero order to split anon THP to?
>
> Originally, it was developed along with my 1GB THP support. So it was intended
> to split order-18 to order-9. But for now, like you and David said in the cover
> letter email thread, we might not want to use it for anonymous large folios
> until we find a necessary use case.
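
Fair enough. For the pagecache case, I'm imagining the call pattern would end
up looking something like the sketch below (untested; shrink_file_folio() is a
made-up caller just for illustration, split_folio_to_order() is the helper
added by this patch):

        /*
         * Try to reduce a large pagecache folio to a smaller order instead of
         * splitting it all the way down to order-0. The caller is expected to
         * hold the folio lock and a reference, as for split_folio() today.
         */
        static int shrink_file_folio(struct folio *folio, unsigned int new_order)
        {
                if (folio_order(folio) <= new_order)
                        return 0;

                /* order-1 is not supported by the split code; fall back to order-0 */
                if (new_order == 1)
                        new_order = 0;

                return split_folio_to_order(folio, new_order);
        }
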
>
>>>
>>> Order-1 folio is not supported because _deferred_list, which is used by
>>> partially mapped folios, is stored in subpage 2 and an order-1 folio only
>>> has subpage 0 and 1.
>>>
>>> Signed-off-by: Zi Yan
>>> ---
>>>  include/linux/huge_mm.h |  21 +++++---
>>>  mm/huge_memory.c        | 114 +++++++++++++++++++++++++++++++---------
>>>  2 files changed, 101 insertions(+), 34 deletions(-)
>>>
>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>> index 5adb86af35fc..de0c89105076 100644
>>> --- a/include/linux/huge_mm.h
>>> +++ b/include/linux/huge_mm.h
>>> @@ -265,10 +265,11 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
>>>
>>>  void folio_prep_large_rmappable(struct folio *folio);
>>>  bool can_split_folio(struct folio *folio, int *pextra_pins);
>>> -int split_huge_page_to_list(struct page *page, struct list_head *list);
>>> +int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>>> +                unsigned int new_order);
>>>  static inline int split_huge_page(struct page *page)
>>>  {
>>> -        return split_huge_page_to_list(page, NULL);
>>> +        return split_huge_page_to_list_to_order(page, NULL, 0);
>>>  }
>>>  void deferred_split_folio(struct folio *folio);
>>>
>>> @@ -422,7 +423,8 @@ can_split_folio(struct folio *folio, int *pextra_pins)
>>>          return false;
>>>  }
>>>  static inline int
>>> -split_huge_page_to_list(struct page *page, struct list_head *list)
>>> +split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>>> +                unsigned int new_order)
>>>  {
>>>          return 0;
>>>  }
>>> @@ -519,17 +521,20 @@ static inline bool thp_migration_supported(void)
>>>  }
>>>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>>>
>>> -static inline int split_folio_to_list(struct folio *folio,
>>> -                struct list_head *list)
>>> +static inline int split_folio_to_list_to_order(struct folio *folio,
>>> +                struct list_head *list, int new_order)
>>>  {
>>> -        return split_huge_page_to_list(&folio->page, list);
>>> +        return split_huge_page_to_list_to_order(&folio->page, list, new_order);
>>>  }
>>>
>>> -static inline int split_folio(struct folio *folio)
>>> +static inline int split_folio_to_order(struct folio *folio, int new_order)
>>>  {
>>> -        return split_folio_to_list(folio, NULL);
>>> +        return split_folio_to_list_to_order(folio, NULL, new_order);
>>>  }
>>>
>>> +#define split_folio_to_list(f, l) split_folio_to_list_to_order(f, l, 0)
>>> +#define split_folio(f) split_folio_to_order(f, 0)
>>> +
>>>  /*
>>>   * archs that select ARCH_WANTS_THP_SWAP but don't support THP_SWP due to
>>>   * limitations in the implementation like arm64 MTE can override this to
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index ad7133c97428..d0e555a8ea98 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -2718,11 +2718,14 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
>>>
>>>  static void unmap_folio(struct folio *folio)
>>>  {
>>> -        enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
>>> -                TTU_SYNC | TTU_BATCH_FLUSH;
>>> +        enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SYNC |
>>> +                TTU_BATCH_FLUSH;
>>>
>>>          VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>>>
>>> +        if (folio_test_pmd_mappable(folio))
>>> +                ttu_flags |= TTU_SPLIT_HUGE_PMD;
>>
>> Should we split this change out? I think it makes sense independent of this
>> series?
>>
>
> Sure. Since multi-size THP is upstream, this avoids an unnecessary code path
> if the THP is not PMD-mapped.
>
>>> +
>>>          /*
>>>           * Anon pages need migration entries to preserve them, but file
>>>           * pages can simply be left unmapped, then faulted back on demand.
>>> @@ -2756,7 +2759,6 @@ static void lru_add_page_tail(struct page *head, struct page *tail,
>>>                  struct lruvec *lruvec, struct list_head *list)
>>>  {
>>>          VM_BUG_ON_PAGE(!PageHead(head), head);
>>> -        VM_BUG_ON_PAGE(PageCompound(tail), head);
>>>          VM_BUG_ON_PAGE(PageLRU(tail), head);
>>>          lockdep_assert_held(&lruvec->lru_lock);
>>>
>>> @@ -2777,7 +2779,8 @@ static void lru_add_page_tail(struct page *head, struct page *tail,
>>>  }
>>>
>>>  static void __split_huge_page_tail(struct folio *folio, int tail,
>>> -                struct lruvec *lruvec, struct list_head *list)
>>> +                struct lruvec *lruvec, struct list_head *list,
>>> +                unsigned int new_order)
>>>  {
>>>          struct page *head = &folio->page;
>>>          struct page *page_tail = head + tail;
>>> @@ -2847,10 +2850,15 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
>>>           * which needs correct compound_head().
>>>           */
>>>          clear_compound_head(page_tail);
>>> +        if (new_order) {
>>> +                prep_compound_page(page_tail, new_order);
>>> +                folio_prep_large_rmappable(page_folio(page_tail));
>>> +        }
>>>
>>>          /* Finally unfreeze refcount. Additional reference from page cache. */
>>> -        page_ref_unfreeze(page_tail, 1 + (!folio_test_anon(folio) ||
>>> -                                          folio_test_swapcache(folio)));
>>> +        page_ref_unfreeze(page_tail,
>>> +                1 + ((!folio_test_anon(folio) || folio_test_swapcache(folio)) ?
>>> +                             folio_nr_pages(page_folio(page_tail)) : 0));
>>>
>>>          if (folio_test_young(folio))
>>>                  folio_set_young(new_folio);
>>> @@ -2868,7 +2876,7 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
>>>  }
>>>
>>>  static void __split_huge_page(struct page *page, struct list_head *list,
>>> -                pgoff_t end)
>>> +                pgoff_t end, unsigned int new_order)
>>>  {
>>>          struct folio *folio = page_folio(page);
>>>          struct page *head = &folio->page;
>>> @@ -2877,10 +2885,11 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>>>          unsigned long offset = 0;
>>>          unsigned int nr = thp_nr_pages(head);
>>>          int i, nr_dropped = 0;
>>> +        unsigned int new_nr = 1 << new_order;
>>>          int order = folio_order(folio);
>>>
>>>          /* complete memcg works before add pages to LRU */
>>> -        split_page_memcg(head, order, 0);
>>> +        split_page_memcg(head, order, new_order);
>>>
>>>          if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
>>>                  offset = swp_offset(folio->swap);
>>> @@ -2893,8 +2902,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>>>
>>>          ClearPageHasHWPoisoned(head);
>>>
>>> -        for (i = nr - 1; i >= 1; i--) {
>>> -                __split_huge_page_tail(folio, i, lruvec, list);
>>> +        for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
>>> +                __split_huge_page_tail(folio, i, lruvec, list, new_order);
>>>                  /* Some pages can be beyond EOF: drop them from page cache */
>>>                  if (head[i].index >= end) {
>>>                          struct folio *tail = page_folio(head + i);
>>> @@ -2910,29 +2919,41 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>>>                          __xa_store(&head->mapping->i_pages, head[i].index,
>>>                                          head + i, 0);
>>>                  } else if (swap_cache) {
>>> +                        /*
>>> +                         * split anonymous THPs (including swapped out ones) to
>>> +                         * non-zero order not supported
>>> +                         */
>>> +                        VM_WARN_ONCE(new_order,
>>> +                                "Split swap-cached anon folio to non-0 order not supported");
>>
>> Why isn't it supported? Even if it's not supported, is this level the right
>> place to enforce these kinds of policy decisions?
>> I wonder if we should be leaving that to the higher level to decide?
>
> Is the swap-out small-size THP without splitting merged? This needs that
> patchset.

No, not yet. I have to respin it. It's on my todo list. I'm not sure I
understand the dependency though?

> You are right that a warning here is not appropriate. I will fail the
> splitting if the folio is swapcached and going to be split into >0 order.
>
>>>                          __xa_store(&swap_cache->i_pages, offset + i,
>>>                                          head + i, 0);
>>>                  }
>>>          }
>>>
>
>
> --
> Best Regards,
> Yan, Zi
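
P.S. For what it's worth, by "fail the splitting" I was imagining the checks
would just live near the top of split_huge_page_to_list_to_order(), before any
state is touched; a rough sketch of the idea (untested, exact error codes are
my guess):

        if (new_order) {
                /* _deferred_list lives in the second tail page, so order-1 has no room for it */
                if (new_order == 1)
                        return -EINVAL;
                /* splitting a swap-cached folio to non-0 order is not supported */
                if (folio_test_swapcache(folio))
                        return -EINVAL;
        }
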