From: Barry Song <21cnbao@gmail.com>
Date: Tue, 5 Dec 2023 09:15:34 +0800
Subject: Re: [PATCH v8 04/10] mm: thp: Support allocation of anonymous multi-size THP
To: Ryan Roberts
Cc: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand, Yu Zhao,
	Catalin Marinas, Anshuman Khandual, Yang Shi, "Huang, Ying", Zi Yan,
	Luis Chamberlain, Itaru Kitayama, "Kirill A. Shutemov", John Hubbard,
	David Rientjes, Vlastimil Babka, Hugh Dickins, Kefeng Wang,
	Alistair Popple, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
In-Reply-To: <20231204102027.57185-5-ryan.roberts@arm.com>
References: <20231204102027.57185-1-ryan.roberts@arm.com> <20231204102027.57185-5-ryan.roberts@arm.com>
On Mon, Dec 4, 2023 at 6:21 PM Ryan Roberts wrote:
>
> Introduce the logic to allow THP to be configured (through the new sysfs
> interface we just added) to allocate large folios to back anonymous
> memory, which are larger than the base page size but smaller than
> PMD-size. We call this new THP extension "multi-size THP" (mTHP).
>
> mTHP continues to be PTE-mapped, but in many cases can still provide
> similar benefits to traditional PMD-sized THP: Page faults are
> significantly reduced (by a factor of e.g. 4, 8, 16, etc. depending on
> the configured order), but latency spikes are much less prominent
> because the size of each page isn't as huge as the PMD-sized variant and
> there is less memory to clear in each page fault. The number of per-page
> operations (e.g. ref counting, rmap management, lru list management) are
> also significantly reduced since those ops now become per-folio.
>
> Some architectures also employ TLB compression mechanisms to squeeze
> more entries in when a set of PTEs are virtually and physically
> contiguous and appropriately aligned. In this case, TLB misses will
> occur less often.
>
> The new behaviour is disabled by default, but can be enabled at runtime
> by writing to /sys/kernel/mm/transparent_hugepage/hugepage-XXkb/enabled
> (see documentation in previous commit). The long term aim is to change
> the default to include suitable lower orders, but there are some risks
> around internal fragmentation that need to be better understood first.
>
> Signed-off-by: Ryan Roberts
> ---
>  include/linux/huge_mm.h |   6 ++-
>  mm/memory.c             | 106 ++++++++++++++++++++++++++++++++++++----
>  2 files changed, 101 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index bd0eadd3befb..91a53b9835a4 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -68,9 +68,11 @@ extern struct kobj_attribute shmem_enabled_attr;
>  #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
>
>  /*
> - * Mask of all large folio orders supported for anonymous THP.
> + * Mask of all large folio orders supported for anonymous THP; all orders up to
> + * and including PMD_ORDER, except order-0 (which is not "huge") and order-1
> + * (which is a limitation of the THP implementation).
>   */
> -#define THP_ORDERS_ALL_ANON	BIT(PMD_ORDER)
> +#define THP_ORDERS_ALL_ANON	((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))
>
>  /*
>   * Mask of all large folio orders supported for file THP.
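(Side note for anyone else reading along: assuming the common case of 4K base
pages, i.e. PMD_ORDER == 9, the new mask works out as below. Illustration
only, not part of the patch:

	unsigned long anon_orders = (BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1));
	/* == 0x3fc: every order from 2 (16K) up to 9 (2M); orders 0 and 1 excluded */

so anonymous THP is no longer restricted to PMD_ORDER alone.)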
> diff --git a/mm/memory.c b/mm/memory.c
> index 3ceeb0f45bf5..bf7e93813018 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4125,6 +4125,84 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>         return ret;
>  }
>
> +static bool pte_range_none(pte_t *pte, int nr_pages)
> +{
> +       int i;
> +
> +       for (i = 0; i < nr_pages; i++) {
> +               if (!pte_none(ptep_get_lockless(pte + i)))
> +                       return false;
> +       }
> +
> +       return true;
> +}
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +static struct folio *alloc_anon_folio(struct vm_fault *vmf)
> +{
> +       gfp_t gfp;
> +       pte_t *pte;
> +       unsigned long addr;
> +       struct folio *folio;
> +       struct vm_area_struct *vma = vmf->vma;
> +       unsigned long orders;
> +       int order;
> +
> +       /*
> +        * If uffd is active for the vma we need per-page fault fidelity to
> +        * maintain the uffd semantics.
> +        */
> +       if (userfaultfd_armed(vma))
> +               goto fallback;
> +
> +       /*
> +        * Get a list of all the (large) orders below PMD_ORDER that are enabled
> +        * for this vma. Then filter out the orders that can't be allocated over
> +        * the faulting address and still be fully contained in the vma.
> +        */
> +       orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true,
> +                                         BIT(PMD_ORDER) - 1);
> +       orders = thp_vma_suitable_orders(vma, vmf->address, orders);
> +
> +       if (!orders)
> +               goto fallback;
> +
> +       pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
> +       if (!pte)
> +               return ERR_PTR(-EAGAIN);
> +
> +       order = first_order(orders);
> +       while (orders) {
> +               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> +               vmf->pte = pte + pte_index(addr);
> +               if (pte_range_none(vmf->pte, 1 << order))
> +                       break;
> +               order = next_order(&orders, order);
> +       }
> +
> +       vmf->pte = NULL;
> +       pte_unmap(pte);
> +
> +       gfp = vma_thp_gfp_mask(vma);
> +
> +       while (orders) {
> +               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> +               folio = vma_alloc_folio(gfp, order, vma, addr, true);
> +               if (folio) {
> +                       clear_huge_page(&folio->page, addr, 1 << order);

Minor: do we really have to clear the whole huge page explicitly here? Could
we instead let post_alloc_hook() do the zeroing for us, by passing
__GFP_ZERO/__GFP_ZEROTAGS, the way vma_alloc_zeroed_movable_folio() does?

struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
					     unsigned long vaddr)
{
	gfp_t flags = GFP_HIGHUSER_MOVABLE | __GFP_ZERO;

	/*
	 * If the page is mapped with PROT_MTE, initialise the tags at the
	 * point of allocation and page zeroing as this is usually faster than
	 * separate DC ZVA and STGM.
	 */
	if (vma->vm_flags & VM_MTE)
		flags |= __GFP_ZEROTAGS;

	return vma_alloc_folio(flags, 0, vma, vaddr, false);
}
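Something along these lines is roughly what I have in mind for the allocation
loop in alloc_anon_folio() -- an untested sketch only, and I haven't checked
how vma_thp_gfp_mask() interacts with the MTE/__GFP_ZEROTAGS path for large
folios:

	gfp = vma_thp_gfp_mask(vma) | __GFP_ZERO;
	if (vma->vm_flags & VM_MTE)
		gfp |= __GFP_ZEROTAGS;

	while (orders) {
		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
		/* post_alloc_hook() zeroes the pages, so no clear_huge_page() */
		folio = vma_alloc_folio(gfp, order, vma, addr, true);
		if (folio)
			return folio;
		order = next_order(&orders, order);
	}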
> +                       return folio;
> +               }
> +               order = next_order(&orders, order);
> +       }
> +
> +fallback:
> +       return vma_alloc_zeroed_movable_folio(vma, vmf->address);
> +}
> +#else
> +#define alloc_anon_folio(vmf) \
> +               vma_alloc_zeroed_movable_folio((vmf)->vma, (vmf)->address)
> +#endif
> +
>  /*
>   * We enter with non-exclusive mmap_lock (to exclude vma changes,
>   * but allow concurrent faults), and pte mapped but not yet locked.
> @@ -4132,6 +4210,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>   */
>  static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>  {
> +       int i;
> +       int nr_pages = 1;
> +       unsigned long addr = vmf->address;
>         bool uffd_wp = vmf_orig_pte_uffd_wp(vmf);
>         struct vm_area_struct *vma = vmf->vma;
>         struct folio *folio;
> @@ -4176,10 +4257,15 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>         /* Allocate our own private page. */
>         if (unlikely(anon_vma_prepare(vma)))
>                 goto oom;
> -       folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
> +       folio = alloc_anon_folio(vmf);
> +       if (IS_ERR(folio))
> +               return 0;
>         if (!folio)
>                 goto oom;
>
> +       nr_pages = folio_nr_pages(folio);
> +       addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE);
> +
>         if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL))
>                 goto oom_free_page;
>         folio_throttle_swaprate(folio, GFP_KERNEL);
> @@ -4196,12 +4282,13 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>         if (vma->vm_flags & VM_WRITE)
>                 entry = pte_mkwrite(pte_mkdirty(entry), vma);
>
> -       vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
> -                                      &vmf->ptl);
> +       vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
>         if (!vmf->pte)
>                 goto release;
> -       if (vmf_pte_changed(vmf)) {
> -               update_mmu_tlb(vma, vmf->address, vmf->pte);
> +       if ((nr_pages == 1 && vmf_pte_changed(vmf)) ||
> +           (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages))) {
> +               for (i = 0; i < nr_pages; i++)
> +                       update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i);
>                 goto release;
>         }
>
> @@ -4216,16 +4303,17 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>                 return handle_userfault(vmf, VM_UFFD_MISSING);
>         }
>
> -       inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
> -       folio_add_new_anon_rmap(folio, vma, vmf->address);
> +       folio_ref_add(folio, nr_pages - 1);
> +       add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
> +       folio_add_new_anon_rmap(folio, vma, addr);
>         folio_add_lru_vma(folio, vma);
>  setpte:
>         if (uffd_wp)
>                 entry = pte_mkuffd_wp(entry);
> -       set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
> +       set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr_pages);
>
>         /* No need to invalidate - it was non-present before */
> -       update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
> +       update_mmu_cache_range(vmf, vma, addr, vmf->pte, nr_pages);
>  unlock:
>         if (vmf->pte)
>                 pte_unmap_unlock(vmf->pte, vmf->ptl);
> --
> 2.25.1
>