Subject: Re: [PATCH v9 04/10] mm: thp: Support allocation of anonymous multi-size THP
From: Ryan Roberts <ryan.roberts@arm.com>
Date: Tue, 12 Dec 2023 15:38:34 +0000
To: David Hildenbrand, Andrew Morton, Matthew Wilcox, Yin Fengwei, Yu Zhao,
 Catalin Marinas, Anshuman Khandual, Yang Shi, "Huang, Ying", Zi Yan,
 Luis Chamberlain, Itaru Kitayama, "Kirill A. Shutemov", John Hubbard,
 David Rientjes, Vlastimil Babka, Hugh Dickins, Kefeng Wang,
 Barry Song <21cnbao@gmail.com>, Alistair Popple
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org
In-Reply-To: <2bebcf33-e8b7-468d-86cc-31d6eb355b66@redhat.com>
References: <20231207161211.2374093-1-ryan.roberts@arm.com>
 <20231207161211.2374093-5-ryan.roberts@arm.com>
 <2bebcf33-e8b7-468d-86cc-31d6eb355b66@redhat.com>

On 12/12/2023 15:02, David Hildenbrand wrote:
> On 07.12.23 17:12, Ryan Roberts wrote:
>> Introduce the logic to allow THP to be configured (through the new sysfs
>> interface we just added) to allocate large folios to back anonymous
>> memory, which are larger than the base page size but smaller than
>> PMD-size. We call this new THP extension "multi-size THP" (mTHP).
>>
>> mTHP continues to be PTE-mapped, but in many cases can still provide
>> similar benefits to traditional PMD-sized THP: page faults are
>> significantly reduced (by a factor of e.g. 4, 8, 16, etc. depending on
>> the configured order), but latency spikes are much less prominent
>> because the size of each page isn't as huge as the PMD-sized variant and
>> there is less memory to clear in each page fault. The number of per-page
>> operations (e.g. ref counting, rmap management, lru list management) is
>> also significantly reduced since those ops now become per-folio.
>
> I'll note that with always-pte-mapped-thp it will be much easier to support
> incremental page clearing (e.g., zero only parts of the folio and map the
> remainder in a PROT_NONE-like fashion, whereby we'll zero on the next page
> fault). With a PMD-sized THP, you have to eventually place/rip out page
> tables to achieve that.
But then you lose the benefit of the reduced number of page faults; fewer
page faults give a big speed-up for workloads with lots of short-lived
processes, like compiling. But yes, I agree this could be an interesting
future optimization for some workloads.

>
>> Some architectures also employ TLB compression mechanisms to squeeze
>> more entries in when a set of PTEs is virtually and physically
>> contiguous and appropriately aligned. In this case, TLB misses will
>> occur less often.
>>
>> The new behaviour is disabled by default, but can be enabled at runtime
>> by writing to /sys/kernel/mm/transparent_hugepage/hugepage-XXkb/enabled
>> (see documentation in previous commit). The long term aim is to change
>> the default to include suitable lower orders, but there are some risks
>> around internal fragmentation that need to be better understood first.
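(Side note for anyone wanting to experiment with this: enabling a size at
runtime is just a sysfs write. A minimal userspace sketch follows; the
hugepage-64kB name below is only an example, since the set of hugepage-XXkb
entries depends on the architecture and base page size, and the accepted
values are described in the documentation added in the previous commit:)

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* "64kB" is illustrative; pick a directory that exists on your system. */
	const char *path =
		"/sys/kernel/mm/transparent_hugepage/hugepage-64kB/enabled";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Write "always" to enable this size; "never" disables it again. */
	if (write(fd, "always", strlen("always")) < 0)
		perror("write");
	close(fd);
	return 0;
}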
>>
>> Tested-by: Kefeng Wang
>> Tested-by: John Hubbard
>> Signed-off-by: Ryan Roberts
>> ---
>>   include/linux/huge_mm.h |   6 ++-
>>   mm/memory.c             | 111 ++++++++++++++++++++++++++++++++++++----
>>   2 files changed, 106 insertions(+), 11 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index 609c153bae57..fa7a38a30fc6 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -68,9 +68,11 @@ extern struct kobj_attribute shmem_enabled_attr;
>>   #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
>
> [...]
>
>> +
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> +static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>> +{
>> +    struct vm_area_struct *vma = vmf->vma;
>> +    unsigned long orders;
>> +    struct folio *folio;
>> +    unsigned long addr;
>> +    pte_t *pte;
>> +    gfp_t gfp;
>> +    int order;
>> +
>> +    /*
>> +     * If uffd is active for the vma we need per-page fault fidelity to
>> +     * maintain the uffd semantics.
>> +     */
>> +    if (unlikely(userfaultfd_armed(vma)))
>> +        goto fallback;
>> +
>> +    /*
>> +     * Get a list of all the (large) orders below PMD_ORDER that are enabled
>> +     * for this vma. Then filter out the orders that can't be allocated over
>> +     * the faulting address and still be fully contained in the vma.
>> +     */
>> +    orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true,
>> +                      BIT(PMD_ORDER) - 1);
>> +    orders = thp_vma_suitable_orders(vma, vmf->address, orders);
>> +
>> +    if (!orders)
>> +        goto fallback;
>> +
>> +    pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
>> +    if (!pte)
>> +        return ERR_PTR(-EAGAIN);
>> +
>> +    /*
>> +     * Find the highest order where the aligned range is completely
>> +     * pte_none(). Note that all remaining orders will be completely
>> +     * pte_none().
>> +     */
>> +    order = highest_order(orders);
>> +    while (orders) {
>> +        addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
>> +        if (pte_range_none(pte + pte_index(addr), 1 << order))
>> +            break;
>> +        order = next_order(&orders, order);
>> +    }
>> +
>> +    pte_unmap(pte);
>> +
>> +    /* Try allocating the highest of the remaining orders. */
>> +    gfp = vma_thp_gfp_mask(vma);
>> +    while (orders) {
>> +        addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
>> +        folio = vma_alloc_folio(gfp, order, vma, addr, true);
>> +        if (folio) {
>> +            clear_huge_page(&folio->page, vmf->address, 1 << order);
>> +            return folio;
>> +        }
>> +        order = next_order(&orders, order);
>> +    }
>> +
>> +fallback:
>> +    return vma_alloc_zeroed_movable_folio(vma, vmf->address);
>> +}
>> +#else
>> +#define alloc_anon_folio(vmf) \
>> +        vma_alloc_zeroed_movable_folio((vmf)->vma, (vmf)->address)
>> +#endif
>
> A neater alternative might be
>
> static struct folio *alloc_anon_folio(struct vm_fault *vmf)
> {
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>     /* magic */
> fallback:
> #endif
>     return vma_alloc_zeroed_movable_folio((vmf)->vma, (vmf)->address);
> }

I guess beauty lies in the eye of the beholder... I don't find it much
neater personally :). But happy to make the change if you insist; what's
the process now that it's in mm-unstable? Just send a patch to Andrew for
squashing?

>
> [...]
>
> Acked-by: David Hildenbrand
>
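P.S. For anyone reading along and wondering where highest_order() and
next_order() come from: they are small helpers added to huge_mm.h earlier
in this series. Below is a standalone userspace sketch of the bitmap walk
that both loops above rely on; the builtin-based highest_order() here is
just a stand-in for the kernel's fls_long()-based version.

#include <stdio.h>

#define BIT(n) (1UL << (n))

/* Index of the highest set bit (the kernel uses fls_long(orders) - 1). */
static int highest_order(unsigned long orders)
{
	return (int)(sizeof(orders) * 8) - 1 - __builtin_clzl(orders);
}

/* Retire the order we just tried and move to the next highest one. */
static int next_order(unsigned long *orders, int prev)
{
	*orders &= ~BIT(prev);
	return *orders ? highest_order(*orders) : -1;
}

int main(void)
{
	/* e.g. orders 2, 3 and 4 enabled: 16K, 32K and 64K folios on 4K pages */
	unsigned long orders = BIT(4) | BIT(3) | BIT(2);
	int order = highest_order(orders);

	while (orders) {
		/* alloc_anon_folio() would attempt an allocation at this order */
		printf("try order %d (folio size %lu KiB)\n", order, 4UL << order);
		order = next_order(&orders, order);
	}
	return 0;
}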