Message-ID: <7834d1ef-bb82-41ed-9453-b49790ee8b5b@arm.com>
Date: Tue, 17 Dec 2024 14:36:59 +0530
Subject: Re: [RFC PATCH 02/12] khugepaged: Generalize alloc_charge_folio()
To: Ryan Roberts, akpm@linux-foundation.org, david@redhat.com,
 willy@infradead.org, kirill.shutemov@linux.intel.com
Cc: anshuman.khandual@arm.com, catalin.marinas@arm.com, cl@gentwo.org,
 vbabka@suse.cz, mhocko@suse.com, apopple@nvidia.com,
 dave.hansen@linux.intel.com, will@kernel.org, baohua@kernel.org,
 jack@suse.cz, srivatsa@csail.mit.edu,
 haowenchao22@gmail.com, hughd@google.com, aneesh.kumar@kernel.org,
 yang@os.amperecomputing.com, peterx@redhat.com, ioworker0@gmail.com,
 wangkefeng.wang@huawei.com, ziy@nvidia.com, jglisse@google.com,
 surenb@google.com, vishal.moola@gmail.com, zokeefe@google.com,
 zhengqi.arch@bytedance.com, jhubbard@nvidia.com, 21cnbao@gmail.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20241216165105.56185-1-dev.jain@arm.com>
 <20241216165105.56185-3-dev.jain@arm.com>
 <46939efe-cf79-4dcd-9e47-9340956feb03@arm.com>
From: Dev Jain
In-Reply-To: <46939efe-cf79-4dcd-9e47-9340956feb03@arm.com>

On 17/12/24 12:23 pm, Ryan Roberts wrote:
> On 16/12/2024 16:50, Dev Jain wrote:
>> Pass order to alloc_charge_folio() and update mTHP statistics.
>>
>> Signed-off-by: Dev Jain
>> ---
>>  include/linux/huge_mm.h |  2 ++
>>  mm/huge_memory.c        |  4 ++++
>>  mm/khugepaged.c         | 13 +++++++++----
>>  3 files changed, 15 insertions(+), 4 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index 93e509b6c00e..8b6d0fed99b3 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -119,6 +119,8 @@ enum mthp_stat_item {
>>  	MTHP_STAT_ANON_FAULT_ALLOC,
>>  	MTHP_STAT_ANON_FAULT_FALLBACK,
>>  	MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
>> +	MTHP_STAT_ANON_COLLAPSE_ALLOC,
>> +	MTHP_STAT_ANON_COLLAPSE_ALLOC_FAILED,
>>  	MTHP_STAT_ZSWPOUT,
>>  	MTHP_STAT_SWPIN,
>>  	MTHP_STAT_SWPIN_FALLBACK,
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 2da5520bfe24..2e582fad4c77 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -615,6 +615,8 @@ static struct kobj_attribute _name##_attr = __ATTR_RO(_name)
>>  DEFINE_MTHP_STAT_ATTR(anon_fault_alloc, MTHP_STAT_ANON_FAULT_ALLOC);
>>  DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
>>  DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>> +DEFINE_MTHP_STAT_ATTR(anon_collapse_alloc, MTHP_STAT_ANON_COLLAPSE_ALLOC);
>> +DEFINE_MTHP_STAT_ATTR(anon_collapse_alloc_failed, MTHP_STAT_ANON_COLLAPSE_ALLOC_FAILED);
>>  DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
>>  DEFINE_MTHP_STAT_ATTR(swpin, MTHP_STAT_SWPIN);
>>  DEFINE_MTHP_STAT_ATTR(swpin_fallback, MTHP_STAT_SWPIN_FALLBACK);
>> @@ -636,6 +638,8 @@ static struct attribute *anon_stats_attrs[] = {
>>  	&anon_fault_alloc_attr.attr,
>>  	&anon_fault_fallback_attr.attr,
>>  	&anon_fault_fallback_charge_attr.attr,
>> +	&anon_collapse_alloc_attr.attr,
>> +	&anon_collapse_alloc_failed_attr.attr,
>>  #ifndef CONFIG_SHMEM
>>  	&zswpout_attr.attr,
>>  	&swpin_attr.attr,
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 95643e6e5f31..02cd424b8e48 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -1073,21 +1073,26 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
>>  }
>>
>>  static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
>> -			      struct collapse_control *cc)
>> +			      int order, struct collapse_control *cc)
>>  {
>>  	gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
>>  		     GFP_TRANSHUGE);
>>  	int node = hpage_collapse_find_target_node(cc);
>>  	struct folio *folio;
>>
>> -	folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
>> +	folio = __folio_alloc(gfp, order, node, &cc->alloc_nmask);
>>  	if (!folio) {
>>  		*foliop = NULL;
>>  		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
>> +		if (order != HPAGE_PMD_ORDER)
>> +			count_mthp_stat(order, MTHP_STAT_ANON_COLLAPSE_ALLOC_FAILED);
>
> Bug? We should be calling count_mthp_stat() for all orders, but only calling
> count_vm_event(THP_*) for PMD_ORDER, as per pattern laid out by other mTHP stats.

Ah okay, I'll follow that pattern; a rough sketch of how I read it is at the
bottom of this mail.

>
> The aim is for existing THP stats (which are implicitly only counting PMD-sized
> THP) to continue only to count PMD-sized THP. It's a userspace ABI and we were
> scared of the potential to break things if we changed the existing counters'
> semantics.
>
>>  		return SCAN_ALLOC_HUGE_PAGE_FAIL;
>>  	}
>>
>>  	count_vm_event(THP_COLLAPSE_ALLOC);
>> +	if (order != HPAGE_PMD_ORDER)
>> +		count_mthp_stat(order, MTHP_STAT_ANON_COLLAPSE_ALLOC);
>
> Same problem.
>
> Also, I agree with Baolin that we don't want "anon" in the title. This is a
> generic path used for file-backed memory.
> So once you fix the bug, the new stats
> will also be counting the file-backed memory too (although for now, only for
> PMD_ORDER).

Sure.

>> +
>>  	if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
>>  		folio_put(folio);
>>  		*foliop = NULL;
>> @@ -1124,7 +1129,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>>  	 */
>>  	mmap_read_unlock(mm);
>>
>> -	result = alloc_charge_folio(&folio, mm, cc);
>> +	result = alloc_charge_folio(&folio, mm, order, cc);
>
> Where is order coming from? I'm guessing that's added later, so this patch won't
> compile on its own? Perhaps HPAGE_PMD_ORDER for now?

Okay yes, this won't compile on its own. I'll ensure sequential buildability
next time.

>
>>  	if (result != SCAN_SUCCEED)
>>  		goto out_nolock;
>>
>> @@ -1850,7 +1855,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
>>  	VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
>>  	VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
>>
>> -	result = alloc_charge_folio(&new_folio, mm, cc);
>> +	result = alloc_charge_folio(&new_folio, mm, HPAGE_PMD_ORDER, cc);
>>  	if (result != SCAN_SUCCEED)
>>  		goto out;
>>
>
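For reference, here is a rough sketch of the counting pattern described above,
as I understand it: bump the per-order count_mthp_stat() for every order, and
keep the legacy THP_COLLAPSE_* vmstat events for PMD_ORDER only, mirroring the
anon fault path. This is only a sketch, not the next version of the patch: the
MTHP_STAT_ANON_COLLAPSE_* names are the ones from this RFC (they will likely
lose the "anon" part as suggested), and the tail of the function is abbreviated.

static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
                              int order, struct collapse_control *cc)
{
        gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
                     GFP_TRANSHUGE);
        int node = hpage_collapse_find_target_node(cc);
        struct folio *folio;

        folio = __folio_alloc(gfp, order, node, &cc->alloc_nmask);
        if (!folio) {
                *foliop = NULL;
                /* Legacy vmstat event keeps its PMD-only meaning... */
                if (order == HPAGE_PMD_ORDER)
                        count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
                /* ...while the per-order mTHP counter is bumped for every order. */
                count_mthp_stat(order, MTHP_STAT_ANON_COLLAPSE_ALLOC_FAILED);
                return SCAN_ALLOC_HUGE_PAGE_FAIL;
        }

        if (order == HPAGE_PMD_ORDER)
                count_vm_event(THP_COLLAPSE_ALLOC);
        count_mthp_stat(order, MTHP_STAT_ANON_COLLAPSE_ALLOC);

        if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
                folio_put(folio);
                *foliop = NULL;
                return SCAN_CGROUP_CHARGE_FAIL;
        }

        /* (memcg event accounting etc. as in the existing code, omitted here) */
        *foliop = folio;
        return SCAN_SUCCEED;
}

With that, thp_collapse_alloc and thp_collapse_alloc_failed in /proc/vmstat
should keep counting only PMD-sized collapses, while the new per-order counters
show up under /sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats/
next to anon_fault_alloc and friends.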