From: Yu Zhao
Date: Mon, 26 Jun 2023 23:29:49 -0600
Subject: Re: [PATCH v1 03/10] mm: Introduce try_vma_alloc_movable_folio()
To: Ryan Roberts
Cc: Andrew Morton, "Matthew Wilcox (Oracle)", "Kirill A. Shutemov",
 Yin Fengwei, David Hildenbrand, Catalin Marinas, Will Deacon,
 Geert Uytterhoeven, Christian Borntraeger, Sven Schnelle,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 "H. Peter Anvin", linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
 linux-s390@vger.kernel.org
References: <20230626171430.3167004-1-ryan.roberts@arm.com>
 <20230626171430.3167004-4-ryan.roberts@arm.com>
Content-Type: text/plain; charset="UTF-8"

On Mon, Jun
26, 2023 at 8:34 PM Yu Zhao wrote:
>
> On Mon, Jun 26, 2023 at 11:14 AM Ryan Roberts wrote:
> >
> > Opportunistically attempt to allocate high-order folios in highmem,
> > optionally zeroed. Retry with lower orders all the way to order-0,
> > until success. Of note, order-1 allocations are skipped, since a
> > large folio must be at least order-2 to work with the THP machinery.
> > The user must check what they got with folio_order().
> >
> > This will be used to opportunistically allocate large folios for
> > anonymous memory with a sensible fallback under memory pressure.
> >
> > For attempts to allocate non-0 orders, we set __GFP_NORETRY to prevent
> > high latency due to reclaim, instead preferring to just try for a lower
> > order. The same approach is used by the readahead code when allocating
> > large folios.
> >
> > Signed-off-by: Ryan Roberts
> > ---
> >  mm/memory.c | 33 +++++++++++++++++++++++++++++++++
> >  1 file changed, 33 insertions(+)
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 367bbbb29d91..53896d46e686 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -3001,6 +3001,39 @@ static vm_fault_t fault_dirty_shared_page(struct vm_fault *vmf)
> >         return 0;
> >  }
> >
> > +static inline struct folio *vma_alloc_movable_folio(struct vm_area_struct *vma,
> > +               unsigned long vaddr, int order, bool zeroed)
> > +{
> > +       gfp_t gfp = order > 0 ? __GFP_NORETRY | __GFP_NOWARN : 0;
> > +
> > +       if (zeroed)
> > +               return vma_alloc_zeroed_movable_folio(vma, vaddr, gfp, order);
> > +       else
> > +               return vma_alloc_folio(GFP_HIGHUSER_MOVABLE | gfp, order, vma,
> > +                               vaddr, false);
> > +}
> > +
> > +/*
> > + * Opportunistically attempt to allocate high-order folios, retrying with
> > + * lower orders all the way to order-0, until success. order-1 allocations
> > + * are skipped since a folio must be at least order-2 to work with the THP
> > + * machinery. The user must check what they got with folio_order(). vaddr
> > + * can be any virtual address that will be mapped by the allocated folio.
> > + */
> > +static struct folio *try_vma_alloc_movable_folio(struct vm_area_struct *vma,
> > +               unsigned long vaddr, int order, bool zeroed)
> > +{
> > +       struct folio *folio;
> > +
> > +       for (; order > 1; order--) {
> > +               folio = vma_alloc_movable_folio(vma, vaddr, order, zeroed);
> > +               if (folio)
> > +                       return folio;
> > +       }
> > +
> > +       return vma_alloc_movable_folio(vma, vaddr, 0, zeroed);
> > +}
>
> I'd drop this patch. Instead, in do_anonymous_page():
>
>         if (IS_ENABLED(CONFIG_ARCH_WANTS_PTE_ORDER))
>                 folio = vma_alloc_zeroed_movable_folio(vma, addr,
>                                 CONFIG_ARCH_WANTS_PTE_ORDER);
>
>         if (!folio)
>                 folio = vma_alloc_zeroed_movable_folio(vma, addr, 0);

I meant a runtime function arch_wants_pte_order(). (Its default
implementation would return 0.)