From mboxrd@z Thu Jan  1 00:00:00 1970
From: Yu Zhao <yuzhao@google.com>
Date: Wed, 2 Aug 2023 15:05:21 -0600
Subject: Re: [PATCH v4 2/5] mm: LARGE_ANON_FOLIO for improved performance
To: Ryan Roberts
Cc: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand,
 Catalin Marinas, Will Deacon, Anshuman Khandual, Yang Shi,
 "Huang, Ying", Zi Yan, Luis Chamberlain, Itaru Kitayama,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org
References: <20230726095146.2826796-1-ryan.roberts@arm.com>
 <20230726095146.2826796-3-ryan.roberts@arm.com>

On Wed, Aug 2, 2023 at 3:33 AM Ryan Roberts wrote:
>
> On 01/08/2023 07:18, Yu Zhao wrote:
> > On Wed, Jul 26, 2023 at 3:52 AM Ryan Roberts wrote:
> >>
> >> Introduce LARGE_ANON_FOLIO feature, which allows anonymous memory to be
> >> allocated in large folios of a determined order.
> >> All pages of the large
> >> folio are pte-mapped during the same page fault, significantly reducing
> >> the number of page faults. The number of per-page operations (e.g. ref
> >> counting, rmap management, lru list management) is also significantly
> >> reduced since those ops now become per-folio.
> >>
> >> The new behaviour is hidden behind the new LARGE_ANON_FOLIO Kconfig,
> >> which defaults to disabled for now; the long term aim is for this to
> >> default to enabled, but there are some risks around internal
> >> fragmentation that need to be better understood first.
> >>
> >> When enabled, the folio order is determined as follows: for a vma,
> >> process or system that has explicitly disabled THP, we continue to
> >> allocate order-0. THP is most likely disabled to avoid any possible
> >> internal fragmentation, so we honour that request.
> >>
> >> Otherwise, the return value of arch_wants_pte_order() is used. For vmas
> >> that have not explicitly opted in to use transparent hugepages (e.g.
> >> where thp=madvise and the vma does not have MADV_HUGEPAGE),
> >> arch_wants_pte_order() is limited to 64K (or PAGE_SIZE, whichever is
> >> bigger). This allows for a performance boost without requiring any
> >> explicit opt-in from the workload while limiting internal
> >> fragmentation.
> >>
> >> If the preferred order can't be used (e.g. because the folio would
> >> breach the bounds of the vma, or because ptes in the region are already
> >> mapped) then we fall back to a suitable lower order; first
> >> PAGE_ALLOC_COSTLY_ORDER, then order-0.
> >>
> >> arch_wants_pte_order() can be overridden by the architecture if desired.
> >> Some architectures (e.g. arm64) can coalesce TLB entries if a contiguous
> >> set of ptes map physically contiguous, naturally aligned memory, so this
> >> mechanism allows the architecture to optimize as required.
> >>
> >> Here we add the default implementation of arch_wants_pte_order(), used
> >> when the architecture does not define it, which returns -1, implying
> >> that the HW has no preference. In this case, mm will choose its own
> >> default order.
> >>
> >> Signed-off-by: Ryan Roberts
> >> ---
> >>  include/linux/pgtable.h |  13 ++++
> >>  mm/Kconfig              |  10 +++
> >>  mm/memory.c             | 166 ++++++++++++++++++++++++++++++++++++----
> >>  3 files changed, 172 insertions(+), 17 deletions(-)
> >>
> >> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> >> index 5063b482e34f..2a1d83775837 100644
> >> --- a/include/linux/pgtable.h
> >> +++ b/include/linux/pgtable.h
> >> @@ -313,6 +313,19 @@ static inline bool arch_has_hw_pte_young(void)
> >>  }
> >>  #endif
> >>
> >> +#ifndef arch_wants_pte_order
> >> +/*
> >> + * Returns preferred folio order for pte-mapped memory. Must be in range [0,
> >> + * PMD_SHIFT-PAGE_SHIFT) and must not be order-1 since THP requires large folios
> >> + * to be at least order-2. Negative value implies that the HW has no preference
> >> + * and mm will choose its own default order.
> >> + */
> >> +static inline int arch_wants_pte_order(void)
> >> +{
> >> +	return -1;
> >> +}
> >> +#endif
> >> +
> >>  #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
> >>  static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
> >>  					unsigned long address,
> >> diff --git a/mm/Kconfig b/mm/Kconfig
> >> index 09130434e30d..fa61ea160447 100644
> >> --- a/mm/Kconfig
> >> +++ b/mm/Kconfig
> >> @@ -1238,4 +1238,14 @@ config LOCK_MM_AND_FIND_VMA
> >>
> >>  source "mm/damon/Kconfig"
> >>
> >> +config LARGE_ANON_FOLIO
> >> +	bool "Allocate large folios for anonymous memory"
> >> +	depends on TRANSPARENT_HUGEPAGE
> >> +	default n
> >> +	help
> >> +	  Use large (bigger than order-0) folios to back anonymous memory where
> >> +	  possible, even for pte-mapped memory.
> >> +	  This reduces the number of page
> >> +	  faults, as well as other per-page overheads, to improve performance for
> >> +	  many workloads.
> >> +
> >>  endmenu
> >> diff --git a/mm/memory.c b/mm/memory.c
> >> index 01f39e8144ef..64c3f242c49a 100644
> >> --- a/mm/memory.c
> >> +++ b/mm/memory.c
> >> @@ -4050,6 +4050,127 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >>  	return ret;
> >>  }
> >>
> >> +static bool vmf_pte_range_changed(struct vm_fault *vmf, int nr_pages)
> >> +{
> >> +	int i;
> >> +
> >> +	if (nr_pages == 1)
> >> +		return vmf_pte_changed(vmf);
> >> +
> >> +	for (i = 0; i < nr_pages; i++) {
> >> +		if (!pte_none(ptep_get_lockless(vmf->pte + i)))
> >> +			return true;
> >> +	}
> >> +
> >> +	return false;
> >> +}
> >> +
> >> +#ifdef CONFIG_LARGE_ANON_FOLIO
> >> +#define ANON_FOLIO_MAX_ORDER_UNHINTED \
> >> +		(ilog2(max_t(unsigned long, SZ_64K, PAGE_SIZE)) - PAGE_SHIFT)
> >> +
> >> +static int anon_folio_order(struct vm_area_struct *vma)
> >> +{
> >> +	int order;
> >> +
> >> +	/*
> >> +	 * If THP is explicitly disabled for either the vma, the process or the
> >> +	 * system, then this is very likely intended to limit internal
> >> +	 * fragmentation; in this case, don't attempt to allocate a large
> >> +	 * anonymous folio.
> >> +	 *
> >> +	 * Else, if the vma is eligible for thp, allocate a large folio of the
> >> +	 * size preferred by the arch. Or if the arch requested a very small
> >> +	 * size or didn't request a size, then use PAGE_ALLOC_COSTLY_ORDER,
> >> +	 * which still meets the arch's requirements but means we still take
> >> +	 * advantage of SW optimizations (e.g. fewer page faults).
> >> +	 *
> >> +	 * Finally if thp is enabled but the vma isn't eligible, take the
> >> +	 * arch-preferred size and limit it to ANON_FOLIO_MAX_ORDER_UNHINTED.
> >> +	 * This ensures workloads that have not explicitly opted in take benefit
> >> +	 * while capping the potential for internal fragmentation.
> >> +	 */
> >> +
> >> +	if ((vma->vm_flags & VM_NOHUGEPAGE) ||
> >> +	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags) ||
> >> +	    !hugepage_flags_enabled())
> >> +		order = 0;
> >> +	else {
> >> +		order = max(arch_wants_pte_order(), PAGE_ALLOC_COSTLY_ORDER);
> >> +
> >> +		if (!hugepage_vma_check(vma, vma->vm_flags, false, true, true))
> >> +			order = min(order, ANON_FOLIO_MAX_ORDER_UNHINTED);
> >> +	}
> >> +
> >> +	return order;
> >> +}
> >> +
> >> +static int alloc_anon_folio(struct vm_fault *vmf, struct folio **folio)
> >> +{
> >> +	int i;
> >> +	gfp_t gfp;
> >> +	pte_t *pte;
> >> +	unsigned long addr;
> >> +	struct vm_area_struct *vma = vmf->vma;
> >> +	int prefer = anon_folio_order(vma);
> >> +	int orders[] = {
> >> +		prefer,
> >> +		prefer > PAGE_ALLOC_COSTLY_ORDER ? PAGE_ALLOC_COSTLY_ORDER : 0,
> >> +		0,
> >> +	};
> >> +
> >> +	*folio = NULL;
> >> +
> >> +	if (vmf_orig_pte_uffd_wp(vmf))
> >> +		goto fallback;
> >
> > I think we need to s/vmf_orig_pte_uffd_wp/userfaultfd_armed/ here;
> > otherwise UFFD would miss VM_UFFD_MISSING/MINOR.
>
> I don't think this is the case. As far as I can see, VM_UFFD_MINOR only
> applies to shmem and hugetlb.

Correct, but we don't have a helper to check against (VM_UFFD_WP |
VM_UFFD_MISSING). Reusing userfaultfd_armed() seems cleaner to me, and
even future-proof.

> VM_UFFD_MISSING is checked under the PTL and, if set on the vma, the
> fault is handled without mapping the folio that was just allocated:
>
>         /* Deliver the page fault to userland, check inside PT lock */
>         if (userfaultfd_missing(vma)) {
>                 pte_unmap_unlock(vmf->pte, vmf->ptl);
>                 folio_put(folio);
>                 return handle_userfault(vmf, VM_UFFD_MISSING);
>         }
>
> So we are racing to allocate a large folio; if the vma later turns out to
> have MISSING handling registered, we drop the folio and handle it, else
> we map the large folio.

Yes, and then we have allocated a large folio (with great effort if under
heavy memory pressure) for nothing.
> The vmf_orig_pte_uffd_wp() *is* required because we need to individually
> check each PTE for the uffd_wp bit and fix it up.

This is not correct: we cannot see a WP PTE before we see VM_UFFD_WP, so
checking VM_UFFD_WP is perfectly safe. The reason we might want to check
individual PTEs is that WP can be applied to a subrange of a VMA that has
VM_UFFD_WP, which I don't think is the common case or worth considering
here. But if you want to keep it, that's fine with me. Without some
comments, though, the next person might find these two checks confusing,
if you plan to add both.

> So I think the code is correct, but perhaps it is safer/simpler to always
> avoid allocating a large folio if the vma is registered for uffd in the
> way you suggest? I don't know enough about uffd to form a strong opinion
> either way.

Yes, it's not about correctness. Just a second thought about not
allocating large folios unnecessarily when possible.