Subject: Re: [PATCH v3 3/4] mm: FLEXIBLE_THP for improved performance
From: Ryan Roberts <ryan.roberts@arm.com>
Date: Mon, 17 Jul 2023 14:36:03 +0100
Message-ID: <5df787a0-8e69-2472-cdd6-f96a3f7dfaaf@arm.com>
To: Yu Zhao, Hugh Dickins, Matthew Wilcox, Andrew Morton
Shutemov" , Yin Fengwei , David Hildenbrand , Catalin Marinas , Will Deacon , Anshuman Khandual , Yang Shi , "Huang, Ying" , Zi Yan , Luis Chamberlain , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org References: <20230714160407.4142030-1-ryan.roberts@arm.com> <20230714161733.4144503-3-ryan.roberts@arm.com> <432490d1-8d1e-1742-295a-d6e60a054ab6@arm.com> From: Ryan Roberts In-Reply-To: Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit X-Stat-Signature: usxfbd6isu4iumbj6sjn3k6wskodhais X-Rspamd-Server: rspam10 X-Rspamd-Queue-Id: 8A689160010 X-Rspam-User: X-HE-Tag: 1689600968-645092 X-HE-Meta: U2FsdGVkX1/nb0GAxKqYMQCFsLdUtwhZW1aN85apc23WHtv/qsVF0q17lkjlKY2SVdj5zkUXBgH38feL/9qR3K9tUb8xaK+QhCYq2n5TFO3C0pVvEnbb+tmY61R5ZZTNYWVNLdKePDD5Byb1AP+bewA0MbYKls/6RoVmtcAO5VRCj1uQq1dvR32rbYFllUINKPRczfLbwHSdoTF0WEtRdRv+zeCfWFTMFJAmVu7fiJTaPxQpwdot1SEQjF3lpSoMy4JXysS4L2EIRN/KATdrM0I/jzPvFMd6+f7Ff4eurwD9wAG3bXJFr7eQi3eCavkxVajKJ42K2oh4cht+JJv1Q/ya6GKP5VucHIMxdvR+fPOAiqfa454GKeQLAm12bKnOEsdGuAJ3TrZIzadcL+yJYiTZW4PC+KiqV1IaB6zWeLGWONt1zQQgfqGGYk5NawibX2MJuXJx80OgVbD4aOt22Peeexu25C9yejYV3FYRD+jfLG9k+4VgylbIWgJaAb4PaHz3OC5hGNCq2s+jXv+bHDoKOCfa5n9C22AdoX/YMLrFnv0O+6G7xVkGpU7bODI/CnUJXQxbu3vQxeWXLaBwdKDov4ZFK9ET6AvcmAY+lq7Bv2gH/h4yF639o8co8Yv4vSi1NXB/xOqHhJs4sdC/7SRp4Y0CGoR0kw9723K/3lYuZEW0u4bwQccyGWTzj4Z3TW0aa6ZbuSAm0O6xsI23IDxQCUFKPlkSUh5hVrrHwKs4MHu5hlPwaZ5iym8RjuGvSSNaEQc2hvlK5oY//ea7j+IVpKIfZK/S2Vj+S3URvW6aVE2JX9KgHJO/nq9/euthKVOKwbUh1Qb+VWTvPOFjXqH3Ou9U65mfWpO4Na50sc2P65hXM5+EjTkN+w/RScstSxQe318gmPvjxD4Lz9esYRJr3pjoARpoKBVyYrbE95rdsDC6wES58w9fU05H7PcVs1jrXtdCc2Vm2LC6knG qts1Opak tZyHYQ2DSes5CQA2/ZvfCCQ0ZkSoq/nkgJWbUQhcp7pTFHNo6WgYG746kMAocVkh5h+aaKTexdcPWjCbhB4CKeDhTkz5XsDTRhvOg2Ji7dJYP/IhIlOn+Qi89eUmUy2S9S09tZVyhMy0YHgTmBuKWM4P98/IWKc1ZpbZw X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: >>>> +static int alloc_anon_folio(struct vm_fault *vmf, struct folio **folio) >>>> +{ >>>> + int i; >>>> + gfp_t gfp; >>>> + pte_t *pte; >>>> + unsigned long addr; >>>> + struct vm_area_struct *vma = vmf->vma; >>>> + int prefer = anon_folio_order(vma); >>>> + int orders[] = { >>>> + prefer, >>>> + prefer > PAGE_ALLOC_COSTLY_ORDER ? PAGE_ALLOC_COSTLY_ORDER : 0, >>>> + 0, >>>> + }; >>>> + >>>> + *folio = NULL; >>>> + >>>> + if (vmf_orig_pte_uffd_wp(vmf)) >>>> + goto fallback; >>>> + >>>> + for (i = 0; orders[i]; i++) { >>>> + addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]); >>>> + if (addr >= vma->vm_start && >>>> + addr + (PAGE_SIZE << orders[i]) <= vma->vm_end) >>>> + break; >>>> + } >>>> + >>>> + if (!orders[i]) >>>> + goto fallback; >>>> + >>>> + pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK); >>>> + if (!pte) >>>> + return -EAGAIN; >>> >>> It would be a bug if this happens. So probably -EINVAL? >> >> Not sure what you mean? Hugh Dickins' series that went into v6.5-rc1 makes it >> possible for pte_offset_map() to fail (if I understood correctly) and we have to >> handle this. The intent is that we will return from the fault without making any >> change, then we will refault and try again. > > Thanks for checking that -- it's very relevant. One detail is that > that series doesn't affect anon. IOW, collapsing PTEs into a PMD can't > happen while we are holding mmap_lock for read here, and therefore, > the race that could cause pte_offset_map() on shmem/file PTEs to fail > doesn't apply here. 
But Hugh's patches have changed do_anonymous_page() to handle failure from
pte_offset_map_lock(). So I was just following that pattern. If this really
can't happen, then I'd rather WARN/BUG on it, and simplify
alloc_anon_folio()'s prototype to just return a `struct folio *` (and if it's
NULL, that means ENOMEM); see the rough sketch in the P.S. below. Hugh,
perhaps you can comment?

As an aside, it was my understanding from LWN that we are now using a per-VMA
lock, so presumably we don't hold mmap_lock for read here? Or perhaps that
only applies to file-backed memory?

> +Hugh Dickins for further consultation if you need it.

>>>> +
>>>> +	for (; orders[i]; i++) {
>>>> +		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]);
>>>> +		vmf->pte = pte + pte_index(addr);
>>>> +		if (!vmf_pte_range_changed(vmf, 1 << orders[i]))
>>>> +			break;
>>>> +	}
>>>> +
>>>> +	vmf->pte = NULL;
>>>> +	pte_unmap(pte);
>>>> +
>>>> +	gfp = vma_thp_gfp_mask(vma);
>>>> +
>>>> +	for (; orders[i]; i++) {
>>>> +		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]);
>>>> +		*folio = vma_alloc_folio(gfp, orders[i], vma, addr, true);
>>>> +		if (*folio) {
>>>> +			clear_huge_page(&(*folio)->page, addr, 1 << orders[i]);
>>>> +			return 0;
>>>> +		}
>>>> +	}
>>>> +
>>>> +fallback:
>>>> +	*folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
>>>> +	return *folio ? 0 : -ENOMEM;
>>>> +}
>>>> +#else
>>>> +static inline int alloc_anon_folio(struct vm_fault *vmf, struct folio **folio)
>>>
>>> Drop "inline" (it doesn't do anything in .c).
>>
>> There are 38 instances of inline in memory.c alone, so it looks like a
>> well-used convention, even if the compiler may choose to ignore it. Perhaps
>> you can educate me; what's the benefit of dropping it?
>
> I'll let Willy and Andrew educate both of us :)
>
> +Matthew Wilcox +Andrew Morton please. Thank you.
>
>>> The rest looks good to me.
>>
>> Great - just in case it wasn't obvious, I decided not to overwrite
>> vmf->address with the aligned version, as you suggested
>
> Yes, I've noticed. Not overwriting has its own merits for sure.
>
>> for two reasons: 1) address is const in the struct, so I would have had to
>> change that; 2) there is a uffd path that can be taken after the
>> vmf->address fixup would have occurred, and that path consumes the member,
>> so it would have had to be un-fixed-up, making it messier than the approach
>> I opted for.
>>
>> Thanks for the quick review as always!
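P.S. Here is a rough, untested sketch of the simplified alloc_anon_folio()
prototype I mention above, just to illustrate the shape I have in mind; the
elided parts and all names are as in the patch:

static struct folio *alloc_anon_folio(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	pte_t *pte;

	/* ... uffd-wp check and order selection exactly as in the patch ... */

	pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
	if (WARN_ON_ONCE(!pte))
		goto fallback;	/* treat as a bug rather than returning -EAGAIN */

	/*
	 * ... vmf_pte_range_changed() scan and vma_alloc_folio() attempts,
	 * returning the large folio directly on success ...
	 */

fallback:
	/* a NULL return tells the caller ENOMEM */
	return vma_alloc_zeroed_movable_folio(vma, vmf->address);
}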