Subject: Re: [PATCH v1 01/10] mm: Expose clear_huge_page() unconditionally
From: Ryan Roberts <ryan.roberts@arm.com>
Date: Wed, 28 Jun 2023 11:56:50 +0100
To: Yu Zhao
Cc: Andrew Morton, "Matthew Wilcox (Oracle)", "Kirill A. Shutemov",
 Yin Fengwei, David Hildenbrand, Catalin Marinas, Will Deacon,
 Geert Uytterhoeven, Christian Borntraeger, Sven Schnelle,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 "H. Peter Anvin", linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
 linux-s390@vger.kernel.org
References: <20230626171430.3167004-1-ryan.roberts@arm.com>
 <20230626171430.3167004-2-ryan.roberts@arm.com>
 <2ff8ccf6-bf36-48b2-7dc2-e6c0d962f8b7@arm.com>
 <91e3364f-1d1b-f959-636b-4f60bf5a577b@arm.com>
On 27/06/2023 19:26, Yu Zhao wrote:
> On Tue, Jun 27, 2023 at 3:41 AM Ryan Roberts wrote:
>>
>> On 27/06/2023 09:29, Yu Zhao wrote:
>>> On Tue, Jun 27, 2023 at 1:21 AM Ryan Roberts wrote:
>>>>
>>>> On 27/06/2023 02:55, Yu Zhao wrote:
>>>>> On Mon, Jun 26, 2023 at 11:14 AM Ryan Roberts wrote:
>>>>>>
>>>>>> In preparation for extending vma_alloc_zeroed_movable_folio() to
>>>>>> allocate an arbitrary order folio, expose clear_huge_page()
>>>>>> unconditionally, so that it can be used to zero the allocated folio
>>>>>> in the generic implementation of vma_alloc_zeroed_movable_folio().
>>>>>>
>>>>>> Signed-off-by: Ryan Roberts
>>>>>> ---
>>>>>>  include/linux/mm.h | 3 ++-
>>>>>>  mm/memory.c        | 2 +-
>>>>>>  2 files changed, 3 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>>>>> index 7f1741bd870a..7e3bf45e6491 100644
>>>>>> --- a/include/linux/mm.h
>>>>>> +++ b/include/linux/mm.h
>>>>>> @@ -3684,10 +3684,11 @@ enum mf_action_page_type {
>>>>>>   */
>>>>>>  extern const struct attribute_group memory_failure_attr_group;
>>>>>>
>>>>>> -#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
>>>>>>  extern void clear_huge_page(struct page *page,
>>>>>>                              unsigned long addr_hint,
>>>>>>                              unsigned int pages_per_huge_page);
>>>>>> +
>>>>>> +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
>>>>>
>>>>> We might not want to depend on THP eventually. Right now, we still
>>>>> have to, unless splitting is optional, which seems to contradict
>>>>> 06/10. (deferred_split_folio() is a nop without THP.)
>>>>
>>>> Yes, I agree - for large anon folios to work, we depend on THP. But I
>>>> don't think that helps us here.
>>>>
>>>> In the next patch, I give vma_alloc_zeroed_movable_folio() an extra
>>>> `order` parameter. So the generic/default version of the function now
>>>> needs a way to clear a compound page.
>>>>
>>>> I guess I could do something like:
>>>>
>>>> static inline
>>>> struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>>>>                                 unsigned long vaddr, gfp_t gfp, int order)
>>>> {
>>>>         struct folio *folio;
>>>>
>>>>         folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE | gfp,
>>>>                                 order, vma, vaddr, false);
>>>>         if (folio) {
>>>> #ifdef CONFIG_LARGE_FOLIO
>>>>                 clear_huge_page(&folio->page, vaddr, 1U << order);
>>>> #else
>>>>                 BUG_ON(order != 0);
>>>>                 clear_user_highpage(&folio->page, vaddr);
>>>> #endif
>>>>         }
>>>>
>>>>         return folio;
>>>> }
>>>>
>>>> But that's pretty messy, and there's no reason why other users might
>>>> not come along that pass order != 0 and will be surprised by the
>>>> BUG_ON.
>>>
>>> #ifdef CONFIG_LARGE_ANON_FOLIO // depends on CONFIG_TRANSPARENT_HUGE_PAGE
>>> struct folio *alloc_anon_folio(struct vm_area_struct *vma,
>>>                                unsigned long vaddr, int order)
>>> {
>>>         // how do_huge_pmd_anonymous_page() allocs and clears
>>>         vma_alloc_folio(..., *true*);
>>
>> This controls the mem allocation policy (see mempolicy.c::vma_alloc_folio()),
>> not clearing. Clearing is done in __do_huge_pmd_anonymous_page():
>>
>>         clear_huge_page(page, vmf->address, HPAGE_PMD_NR);
>
> Sorry for rushing this previously. This is what I meant. The #ifdef
> makes it safe to use clear_huge_page() without 01/10. I highlighted
> the last parameter to vma_alloc_folio() only because it's different
> from what you chose (not implying it clears the folio).
>
>>> }
>>> #else
>>> #define alloc_anon_folio(vma, addr, order) \
>>>         vma_alloc_zeroed_movable_folio(vma, addr)
>>> #endif
>>
>> Sorry, I don't get this at all... If you are suggesting to bypass
>> vma_alloc_zeroed_movable_folio() entirely for the LARGE_ANON_FOLIO case
>
> Correct.
>
>> I don't think that works, because the arch code adds its own gfp flags
>> there. For example, arm64 adds __GFP_ZEROTAGS for VM_MTE VMAs.
>
> I think it's the opposite: it should be safer to reuse the THP code
> because:
>
> 1.
> It's an existing case that has been working for PMD_ORDER folios
> mapped by PTEs, and it's an arch-independent API which would be
> easier to review.
> 2. Using vma_alloc_zeroed_movable_folio() for large folios is a *new*
> case. It's an arch-*dependent* API, and I have no idea what VM_MTE
> does (or should do) to large folios; I don't plan to answer that for
> now.

I've done some archaeology on this now, and convinced myself that your
suggestion is a good one - sorry for doubting it! If you are interested,
here are the details:

Only arm64 and ia64 do something non-standard in
vma_alloc_zeroed_movable_folio().

ia64 flushes the dcache for the folio - but given it does not support
THP, this is not a problem for the THP path.

arm64 adds the __GFP_ZEROTAGS flag, which means that the MTE tags will
be zeroed at the same time as the page itself. This is a perf
optimization - if it's not performed, the tags are instead zeroed at
set_pte_at(), which is how this works for the THP path.

So on that basis, I agree we can use your proposed alloc_anon_folio()
approach. arm64 will lose the MTE optimization, but that can be added
back later if needed. So there is no need to unconditionally expose
clear_huge_page(), and no need to modify all the arch
vma_alloc_zeroed_movable_folio() implementations.

Thanks,
Ryan

>
>> Perhaps we can do away with an arch-owned
>> vma_alloc_zeroed_movable_folio() and replace it with a new
>> arch_get_zeroed_movable_gfp_flags(), then have alloc_anon_folio()
>> add in those flags?
>>
>> But I still think the cleanest, simplest change is just to
>> unconditionally expose clear_huge_page() as I've done it.
>
> The fundamental choice there as I see it is whether the first step
> of large anon folios should lean toward the THP code base or the
> base page code base (I'm a big fan of the answer "Neither -- we
> should create something entirely new instead").
> My POV is that the THP code base would allow us to move faster, since
> it's proven to work for a very similar case (PMD_ORDER folios mapped
> by PTEs).