From: Yu Zhao <yuzhao@google.com>
Date: Mon, 26 Jun 2023 20:34:18 -0600
Subject: Re: [PATCH v1 03/10] mm: Introduce try_vma_alloc_movable_folio()
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: Andrew Morton, "Matthew Wilcox (Oracle)", "Kirill A. Shutemov",
 Yin Fengwei, David Hildenbrand, Catalin Marinas, Will Deacon,
 Geert Uytterhoeven, Christian Borntraeger, Sven Schnelle,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 "H. Peter Anvin", linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
 linux-s390@vger.kernel.org
In-Reply-To: <20230626171430.3167004-4-ryan.roberts@arm.com>
References: <20230626171430.3167004-1-ryan.roberts@arm.com>
 <20230626171430.3167004-4-ryan.roberts@arm.com>
On Mon, Jun 26, 2023 at 11:14 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> Opportunistically attempt to allocate high-order folios in highmem,
> optionally zeroed. Retry with lower orders all the way to order-0, until
> success. Note that order-1 allocations are skipped, since a large folio
> must be at least order-2 to work with the THP machinery. The user must
> check what they got with folio_order().
>
> This will be used to opportunistically allocate large folios for
> anonymous memory with a sensible fallback under memory pressure.
>
> For attempts to allocate non-0 orders, we set __GFP_NORETRY to prevent
> high latency due to reclaim, instead preferring to just try for a lower
> order. The same approach is used by the readahead code when allocating
> large folios.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>  mm/memory.c | 33 +++++++++++++++++++++++++++++++++
>  1 file changed, 33 insertions(+)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 367bbbb29d91..53896d46e686 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3001,6 +3001,39 @@ static vm_fault_t fault_dirty_shared_page(struct vm_fault *vmf)
>         return 0;
>  }
>
> +static inline struct folio *vma_alloc_movable_folio(struct vm_area_struct *vma,
> +               unsigned long vaddr, int order, bool zeroed)
> +{
> +       gfp_t gfp = order > 0 ? __GFP_NORETRY | __GFP_NOWARN : 0;
> +
> +       if (zeroed)
> +               return vma_alloc_zeroed_movable_folio(vma, vaddr, gfp, order);
> +       else
> +               return vma_alloc_folio(GFP_HIGHUSER_MOVABLE | gfp, order, vma,
> +                               vaddr, false);
> +}
> +
> +/*
> + * Opportunistically attempt to allocate high-order folios, retrying with lower
> + * orders all the way to order-0, until success. order-1 allocations are skipped
> + * since a folio must be at least order-2 to work with the THP machinery. The
> + * user must check what they got with folio_order(). vaddr can be any virtual
> + * address that will be mapped by the allocated folio.
> + */
> +static struct folio *try_vma_alloc_movable_folio(struct vm_area_struct *vma,
> +               unsigned long vaddr, int order, bool zeroed)
> +{
> +       struct folio *folio;
> +
> +       for (; order > 1; order--) {
> +               folio = vma_alloc_movable_folio(vma, vaddr, order, zeroed);
> +               if (folio)
> +                       return folio;
> +       }
> +
> +       return vma_alloc_movable_folio(vma, vaddr, 0, zeroed);
> +}

I'd drop this patch. Instead, in do_anonymous_page():

  if (IS_ENABLED(CONFIG_ARCH_WANTS_PTE_ORDER))
          folio = vma_alloc_zeroed_movable_folio(vma, addr,
                          CONFIG_ARCH_WANTS_PTE_ORDER);

  if (!folio)
          folio = vma_alloc_zeroed_movable_folio(vma, addr, 0);
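
To make the retry policy in the patch concrete outside the kernel tree,
here is a minimal standalone sketch (userspace C; alloc_order and
try_alloc are hypothetical stand-ins, not kernel APIs) of the same
order-stepping loop, including the order-1 skip and the final order-0
attempt:

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL

/* Stand-in for the kernel allocator; fails above a fake fragmentation
 * threshold so the fallback path is exercised. */
static void *alloc_order(int order)
{
	if (order > 2)		/* pretend high orders are unavailable */
		return NULL;
	return malloc(PAGE_SIZE << order);
}

/* Mirrors try_vma_alloc_movable_folio(): walk orders downward, skipping
 * order-1, then make one last order-0 attempt. */
static void *try_alloc(int order, int *got_order)
{
	void *p;

	for (; order > 1; order--) {	/* order-1 is skipped, as in the patch */
		p = alloc_order(order);
		if (p) {
			*got_order = order;
			return p;
		}
	}
	*got_order = 0;
	return alloc_order(0);		/* final order-0 attempt */
}

int main(void)
{
	int got;
	void *p = try_alloc(4, &got);

	/* The caller must check what it actually got, as with folio_order(). */
	printf("asked for order 4, got order %d (%lu bytes)\n",
	       got, p ? PAGE_SIZE << got : 0UL);
	free(p);
	return 0;
}

Running this prints "asked for order 4, got order 2 (16384 bytes)",
illustrating why callers re-derive the size from folio_order() rather
than trusting the requested order.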