From: Ryan Roberts <ryan.roberts@arm.com>
To: Kefeng Wang, Andrew Morton
Cc: David Hildenbrand, willy@infradead.org, Barry Song, Hugh Dickins, linux-mm@kvack.org
Subject: Re: [PATCH] mm: set hugepage to false when anon mthp allocation
Date: Wed, 9 Oct 2024 11:44:24 +0100
References: <20240910140625.175700-1-wangkefeng.wang@huawei.com>
 <9b4fa269-8e1f-4f14-a737-b6b0697d83b5@huawei.com>

On 09/10/2024 10:15, Kefeng Wang wrote:
>
> On 2024/9/13 18:36, Kefeng Wang wrote:
>> Hi All,
>>
>> On 2024/9/10 22:18, Kefeng Wang wrote:
>>>
>>>
>>> On 2024/9/10 22:06, Kefeng Wang wrote:
>>>> When the hugepage parameter is true in vma_alloc_folio(), it indicates
>>>> that we only try allocation on the preferred node if possible for PMD_ORDER,
>>>
>>> Should remove "for PMD_ORDER". I mean that it was used for PMD_ORDER, but for
>>> other high orders it will reduce the success rate of allocation without
>>> ddc1a5cbc05d.
>>>
>>>
>>>> but it could lead to lots of failures for large folio allocation. Luckily,
>>>> the hugepage parameter has been deprecated since commit ddc1a5cbc05d
>>>> ("mempolicy: alloc_pages_mpol() for NUMA policy without vma"), so it has no
>>>> effect on runtime behavior.
>>>>
>>>> Signed-off-by: Kefeng Wang
>>>> ---
>>>>
>>>> Found the issue when backporting mthp to an internal kernel without
>>>> ddc1a5cbc05d; for mainline there is no issue. No clue why the hugepage
>>>> parameter was retained. Maybe just kill the parameter for mainline?
>>
>>
>> Any comments? Should we fix it in alloc_anon_folio() or remove the hugepage
>> parameter from vma_alloc_folio()? Thanks.
>
> * vma_alloc_folio - Allocate a folio for a VMA.
> @hugepage: Unused (was: For hugepages try only preferred node if possible).
>
> Since hugepage won't be used in vma_alloc_folio(), maybe just delete this
> parameter?

Sorry for the radio silence.

Given the parameter is no longer used, I think it would be cleaner to just
remove it.

It was set to true here on purpose, though; the aim was to follow the pattern
set by PMD-sized THP, which also sets it to true. The argument was that the
benefit of having a huge page would be outweighed by the cost of accessing it
on a remote node. Now that the parameter is deprecated, do you know if the
policy is still enforced by other means?
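For illustration, the cleanup being suggested would look roughly like the
sketch below (untested, just to show the shape of the change; it assumes the
prototype stays in include/linux/gfp.h and that only the unused flag is
dropped):

    /* Untested sketch: vma_alloc_folio() without the unused @hugepage flag. */
    struct folio *vma_alloc_folio(gfp_t gfp, int order,
                                  struct vm_area_struct *vma,
                                  unsigned long addr);

    /* The call site in alloc_anon_folio() would then simply read: */
    folio = vma_alloc_folio(gfp, order, vma, addr);

All other callers would need the same one-argument trim, of course.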
Thanks,
Ryan

>
>>
>>>>
>>>>   mm/memory.c | 2 +-
>>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index b84443e689a8..89a15858348a 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -4479,7 +4479,7 @@ static struct folio *alloc_anon_folio(struct vm_fault
>>>> *vmf)
>>>>       gfp = vma_thp_gfp_mask(vma);
>>>>       while (orders) {
>>>>           addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
>>>> -        folio = vma_alloc_folio(gfp, order, vma, addr, true);
>>>> +        folio = vma_alloc_folio(gfp, order, vma, addr, false);
>>>>           if (folio) {
>>>>               if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
>>>>                   count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>>>
>