From: Yu Zhao <yuzhao@google.com>
Date: Tue, 1 Aug 2023 00:18:20 -0600
Subject: Re: [PATCH v4 2/5] mm: LARGE_ANON_FOLIO for improved performance
To: Ryan Roberts
Cc: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand, Catalin Marinas, Will Deacon, Anshuman Khandual, Yang Shi, "Huang, Ying", Zi Yan, Luis Chamberlain, Itaru Kitayama, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
In-Reply-To: <20230726095146.2826796-3-ryan.roberts@arm.com>
References: <20230726095146.2826796-1-ryan.roberts@arm.com> <20230726095146.2826796-3-ryan.roberts@arm.com>

On Wed, Jul 26, 2023 at 3:52 AM Ryan Roberts wrote:
>
> Introduce the LARGE_ANON_FOLIO feature, which allows anonymous memory to
> be allocated in large folios of a determined order. All pages of the
> large folio are pte-mapped during the same page fault, significantly
> reducing the number of page faults. The number of per-page operations
> (e.g. ref counting, rmap management, lru list management) is also
> significantly reduced since those ops now become per-folio.
>
> The new behaviour is hidden behind the new LARGE_ANON_FOLIO Kconfig,
> which defaults to disabled for now; the long-term aim is for this to
> default to enabled, but there are some risks around internal
> fragmentation that need to be better understood first.
>
> When enabled, the folio order is determined as follows: for a vma,
> process or system that has explicitly disabled THP, we continue to
> allocate order-0. THP is most likely disabled to avoid any possible
> internal fragmentation, so we honour that request.
>
> Otherwise, the return value of arch_wants_pte_order() is used. For vmas
> that have not explicitly opted in to using transparent hugepages (e.g.
> where thp=madvise and the vma does not have MADV_HUGEPAGE),
> arch_wants_pte_order() is limited to 64K (or PAGE_SIZE, whichever is
> bigger). This allows for a performance boost without requiring any
> explicit opt-in from the workload while limiting internal fragmentation.
>
> If the preferred order can't be used (e.g. because the folio would
> breach the bounds of the vma, or because ptes in the region are already
> mapped) then we fall back to a suitable lower order; first
> PAGE_ALLOC_COSTLY_ORDER, then order-0.
>
> arch_wants_pte_order() can be overridden by the architecture if desired.
> Some architectures (e.g. arm64) can coalesce TLB entries if a contiguous
> set of ptes maps physically contiguous, naturally aligned memory, so
> this mechanism allows the architecture to optimize as required.
>
> Here we add the default implementation of arch_wants_pte_order(), used
> when the architecture does not define it, which returns -1, implying
> that the HW has no preference. In this case, mm will choose its own
> default order.
>
> Signed-off-by: Ryan Roberts
> ---
>  include/linux/pgtable.h |  13 ++++
>  mm/Kconfig              |  10 +++
>  mm/memory.c             | 166 ++++++++++++++++++++++++++++++++++++----
>  3 files changed, 172 insertions(+), 17 deletions(-)
>
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 5063b482e34f..2a1d83775837 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -313,6 +313,19 @@ static inline bool arch_has_hw_pte_young(void)
>  }
>  #endif
>
> +#ifndef arch_wants_pte_order
> +/*
> + * Returns preferred folio order for pte-mapped memory. Must be in range [0,
> + * PMD_SHIFT-PAGE_SHIFT) and must not be order-1 since THP requires large
> + * folios to be at least order-2. Negative value implies that the HW has no
> + * preference and mm will choose its own default order.
> + */
> +static inline int arch_wants_pte_order(void)
> +{
> +	return -1;
> +}
> +#endif
> +
>  #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
>  static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>  					unsigned long address,
>
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 09130434e30d..fa61ea160447 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -1238,4 +1238,14 @@ config LOCK_MM_AND_FIND_VMA
>
>  source "mm/damon/Kconfig"
>
> +config LARGE_ANON_FOLIO
> +	bool "Allocate large folios for anonymous memory"
> +	depends on TRANSPARENT_HUGEPAGE
> +	default n
> +	help
> +	  Use large (bigger than order-0) folios to back anonymous memory where
> +	  possible, even for pte-mapped memory. This reduces the number of page
> +	  faults, as well as other per-page overheads, to improve performance
> +	  for many workloads.
> +
>  endmenu
> diff --git a/mm/memory.c b/mm/memory.c
> index 01f39e8144ef..64c3f242c49a 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4050,6 +4050,127 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	return ret;
>  }
>
> +static bool vmf_pte_range_changed(struct vm_fault *vmf, int nr_pages)
> +{
> +	int i;
> +
> +	if (nr_pages == 1)
> +		return vmf_pte_changed(vmf);
> +
> +	for (i = 0; i < nr_pages; i++) {
> +		if (!pte_none(ptep_get_lockless(vmf->pte + i)))
> +			return true;
> +	}
> +
> +	return false;
> +}
> +
> +#ifdef CONFIG_LARGE_ANON_FOLIO
> +#define ANON_FOLIO_MAX_ORDER_UNHINTED \
> +		(ilog2(max_t(unsigned long, SZ_64K, PAGE_SIZE)) - PAGE_SHIFT)
> +
> +static int anon_folio_order(struct vm_area_struct *vma)
> +{
> +	int order;
> +
> +	/*
> +	 * If THP is explicitly disabled for either the vma, the process or the
> +	 * system, then this is very likely intended to limit internal
> +	 * fragmentation; in this case, don't attempt to allocate a large
> +	 * anonymous folio.
> +	 *
> +	 * Else, if the vma is eligible for thp, allocate a large folio of the
> +	 * size preferred by the arch. Or if the arch requested a very small
> +	 * size or didn't request a size, then use PAGE_ALLOC_COSTLY_ORDER,
> +	 * which still meets the arch's requirements but means we still take
> +	 * advantage of SW optimizations (e.g. fewer page faults).
> +	 *
> +	 * Finally, if thp is enabled but the vma isn't eligible, take the
> +	 * arch-preferred size and limit it to ANON_FOLIO_MAX_ORDER_UNHINTED.
> +	 * This ensures workloads that have not explicitly opted in still
> +	 * benefit, while capping the potential for internal fragmentation.
> +	 */
> +
> +	if ((vma->vm_flags & VM_NOHUGEPAGE) ||
> +	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags) ||
> +	    !hugepage_flags_enabled())
> +		order = 0;
> +	else {
> +		order = max(arch_wants_pte_order(), PAGE_ALLOC_COSTLY_ORDER);
> +
> +		if (!hugepage_vma_check(vma, vma->vm_flags, false, true, true))
> +			order = min(order, ANON_FOLIO_MAX_ORDER_UNHINTED);
> +	}
> +
> +	return order;
> +}
> +
> +static int alloc_anon_folio(struct vm_fault *vmf, struct folio **folio)
> +{
> +	int i;
> +	gfp_t gfp;
> +	pte_t *pte;
> +	unsigned long addr;
> +	struct vm_area_struct *vma = vmf->vma;
> +	int prefer = anon_folio_order(vma);
> +	int orders[] = {
> +		prefer,
> +		prefer > PAGE_ALLOC_COSTLY_ORDER ? PAGE_ALLOC_COSTLY_ORDER : 0,
> +		0,
> +	};
> +
> +	*folio = NULL;
> +
> +	if (vmf_orig_pte_uffd_wp(vmf))
> +		goto fallback;

I think we need to s/vmf_orig_pte_uffd_wp/userfaultfd_armed/ here;
otherwise UFFD would miss vmas registered with VM_UFFD_MISSING or
VM_UFFD_MINOR.
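
For clarity, a minimal sketch of the substitution being suggested (not a
final patch; it assumes the vma local variable and the fallback label
from alloc_anon_folio() quoted above):

	/*
	 * userfaultfd_armed() tests VM_UFFD_MISSING, VM_UFFD_WP and
	 * VM_UFFD_MINOR on the vma, whereas vmf_orig_pte_uffd_wp() only
	 * tests the uffd-wp bit of the original pte, so vmas registered
	 * for missing/minor faults would otherwise still get a large folio.
	 */
	if (userfaultfd_armed(vma))
		goto fallback;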