From: Yu Zhao <yuzhao@google.com>
Date: Mon, 17 Jul 2023 13:31:30 -0600
Subject: Re: [PATCH v3 3/4] mm: FLEXIBLE_THP for improved performance
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: Hugh Dickins, Matthew Wilcox, Andrew Morton, "Kirill A. Shutemov",
 Yin Fengwei, David Hildenbrand, Catalin Marinas, Will Deacon,
 Anshuman Khandual, Yang Shi, "Huang, Ying", Zi Yan, Luis Chamberlain,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
In-Reply-To: <5df787a0-8e69-2472-cdd6-f96a3f7dfaaf@arm.com>
References: <20230714160407.4142030-1-ryan.roberts@arm.com>
 <20230714161733.4144503-3-ryan.roberts@arm.com>
 <432490d1-8d1e-1742-295a-d6e60a054ab6@arm.com>
 <5df787a0-8e69-2472-cdd6-f96a3f7dfaaf@arm.com>

On Mon, Jul
17, 2023 at 7:36 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> >>>> +static int alloc_anon_folio(struct vm_fault *vmf, struct folio **folio)
> >>>> +{
> >>>> +	int i;
> >>>> +	gfp_t gfp;
> >>>> +	pte_t *pte;
> >>>> +	unsigned long addr;
> >>>> +	struct vm_area_struct *vma = vmf->vma;
> >>>> +	int prefer = anon_folio_order(vma);
> >>>> +	int orders[] = {
> >>>> +		prefer,
> >>>> +		prefer > PAGE_ALLOC_COSTLY_ORDER ? PAGE_ALLOC_COSTLY_ORDER : 0,
> >>>> +		0,
> >>>> +	};
> >>>> +
> >>>> +	*folio = NULL;
> >>>> +
> >>>> +	if (vmf_orig_pte_uffd_wp(vmf))
> >>>> +		goto fallback;
> >>>> +
> >>>> +	for (i = 0; orders[i]; i++) {
> >>>> +		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]);
> >>>> +		if (addr >= vma->vm_start &&
> >>>> +		    addr + (PAGE_SIZE << orders[i]) <= vma->vm_end)
> >>>> +			break;
> >>>> +	}
> >>>> +
> >>>> +	if (!orders[i])
> >>>> +		goto fallback;
> >>>> +
> >>>> +	pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
> >>>> +	if (!pte)
> >>>> +		return -EAGAIN;
> >>>
> >>> It would be a bug if this happens. So probably -EINVAL?
> >>
> >> Not sure what you mean? Hugh Dickins' series that went into v6.5-rc1 makes it
> >> possible for pte_offset_map() to fail (if I understood correctly) and we have to
> >> handle this. The intent is that we will return from the fault without making any
> >> change, then we will refault and try again.
> >
> > Thanks for checking that -- it's very relevant. One detail is that
> > that series doesn't affect anon. IOW, collapsing PTEs into a PMD can't
> > happen while we are holding mmap_lock for read here, and therefore,
> > the race that could cause pte_offset_map() on shmem/file PTEs to fail
> > doesn't apply here.
>
> But Hugh's patches have changed do_anonymous_page() to handle failure from
> pte_offset_map_lock(). So I was just following that pattern. If this really
> can't happen, then I'd rather WARN/BUG on it, and simplify alloc_anon_folio()'s
> prototype to just return a `struct folio *` (and if it's NULL that means ENOMEM).
>
> Hugh, perhaps you can comment?
>
> As an aside, it was my understanding from LWN that we are now using a per-VMA
> lock, so presumably we don't hold mmap_lock for read here? Or perhaps that only
> applies to file-backed memory?

For anon under mmap_lock for read:
1. pte_offset_map[_lock]() fails when a parallel PF changes the PMD from
none to leaf.
2. Changing the PMD from non-leaf to leaf is a bug. See the comments in
the "else" branch in handle_pte_fault().

So for do_anonymous_page(), there is only one case in which
pte_offset_map[_lock]() can fail. For the code above, that case was
ruled out by vmf_orig_pte_uffd_wp().

Checking the return value from pte_offset_map[_lock]() is good
practice. What I'm saying is that -EAGAIN would mislead people into
thinking that, in our case, !pte is legitimate, hence the suggestion of
replacing it with -EINVAL. No BUG_ON(), please. As I've previously
mentioned, it's against Documentation/process/coding-style.rst.

> > +Hugh Dickins for further consultation if you need it.
> >
> >>>> +
> >>>> +	for (; orders[i]; i++) {
> >>>> +		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]);
> >>>> +		vmf->pte = pte + pte_index(addr);
> >>>> +		if (!vmf_pte_range_changed(vmf, 1 << orders[i]))
> >>>> +			break;
> >>>> +	}
> >>>> +
> >>>> +	vmf->pte = NULL;
> >>>> +	pte_unmap(pte);
> >>>> +
> >>>> +	gfp = vma_thp_gfp_mask(vma);
> >>>> +
> >>>> +	for (; orders[i]; i++) {
> >>>> +		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]);
> >>>> +		*folio = vma_alloc_folio(gfp, orders[i], vma, addr, true);
> >>>> +		if (*folio) {
> >>>> +			clear_huge_page(&(*folio)->page, addr, 1 << orders[i]);
> >>>> +			return 0;
> >>>> +		}
> >>>> +	}
> >>>> +
> >>>> +fallback:
> >>>> +	*folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
> >>>> +	return *folio ? 0 : -ENOMEM;
> >>>> +}
> >>>> +#else
> >>>> +static inline int alloc_anon_folio(struct vm_fault *vmf, struct folio **folio)
> >>>
> >>> Drop "inline" (it doesn't do anything in .c).
> >>
> >> There are 38 instances of inline in memory.c alone, so it looks like a well-used
> >> convention, even if the compiler may choose to ignore it. Perhaps you can educate
> >> me; what's the benefit of dropping it?
> >
> > I'll let Willy and Andrew educate both of us :)
> >

+Matthew Wilcox +Andrew Morton please. Thank you.

> >
> >>> The rest looks good to me.
> >>
> >> Great - just in case it wasn't obvious, I decided not to overwrite vmf->address
> >> with the aligned version, as you suggested,
> >
> > Yes, I've noticed. Not overwriting has its own merits for sure.
> >
> >> for 2 reasons: 1) address is const
> >> in the struct, so I would have had to change that; 2) there is a uffd path that
> >> can be taken after the vmf->address fixup would have occurred, and that path
> >> consumes the member, so it would have had to be un-fixed-up, making it
> >> messier than the way I opted for.
> >>
> >> Thanks for the quick review as always!
>