From: Wenchao Hao <haowenchao22@gmail.com>
Date: Fri, 29 Nov 2024 19:49:22 +0800
Subject: Re: [PATCH v2] mm: add per-order mTHP swap-in fallback/fallback_charge counters
To: Lance Yang, Barry Song <21cnbao@gmail.com>
Cc: Jonathan Corbet, Andrew Morton, David Hildenbrand, Ryan Roberts,
 Baolin Wang, Usama Arif, Matthew Wilcox, Peter Xu,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Message-ID: <6dbd2d37-91ca-4566-af4a-7b4153d2001c@gmail.com>
References: <20241122161443.34667-1-haowenchao22@gmail.com>
 <24ea047a-7294-4e7a-bf51-66b7f79f5085@gmail.com>
On 2024/11/24 15:28, Lance Yang wrote:
> On Sun, Nov 24, 2024 at 3:11 PM Barry Song <21cnbao@gmail.com> wrote:
>>
>> On Sun, Nov 24, 2024 at 2:56 PM Lance Yang wrote:
>>>
>>> On Sat, Nov 23, 2024 at 9:17 PM Wenchao Hao wrote:
>>>>
>>>> On 2024/11/23 19:52, Lance Yang wrote:
>>>>> On Sat, Nov 23, 2024 at 6:27 PM Barry Song <21cnbao@gmail.com> wrote:
>>>>>>
>>>>>> On Sat, Nov 23, 2024 at 10:36 AM Lance Yang wrote:
>>>>>>>
>>>>>>> Hi Wenchao,
>>>>>>>
>>>>>>> On Sat, Nov 23, 2024 at 12:14 AM Wenchao Hao wrote:
>>>>>>>>
>>>>>>>> Currently, large folio swap-in is supported, but we lack a method to
>>>>>>>> analyze their success ratio. Similar to anon_fault_fallback, we introduce
>>>>>>>> per-order mTHP swpin_fallback and swpin_fallback_charge counters for
>>>>>>>> calculating their success ratio. The new counters are located at:
>>>>>>>>
>>>>>>>> /sys/kernel/mm/transparent_hugepage/hugepages-<size>/stats/
>>>>>>>>         swpin_fallback
>>>>>>>>         swpin_fallback_charge
>>>>>>>>
>>>>>>>> Signed-off-by: Wenchao Hao
>>>>>>>> ---
>>>>>>>> V2:
>>>>>>>>   Introduce swapin_fallback_charge, which increments if it fails to
>>>>>>>>   charge a huge page to memory despite successful allocation.
>>>>>>>>
>>>>>>>>  Documentation/admin-guide/mm/transhuge.rst | 10 ++++++++++
>>>>>>>>  include/linux/huge_mm.h                    |  2 ++
>>>>>>>>  mm/huge_memory.c                           |  6 ++++++
>>>>>>>>  mm/memory.c                                |  2 ++
>>>>>>>>  4 files changed, 20 insertions(+)
>>>>>>>>
>>>>>>>> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
>>>>>>>> index 5034915f4e8e..9c07612281b5 100644
>>>>>>>> --- a/Documentation/admin-guide/mm/transhuge.rst
>>>>>>>> +++ b/Documentation/admin-guide/mm/transhuge.rst
>>>>>>>> @@ -561,6 +561,16 @@ swpin
>>>>>>>>         is incremented every time a huge page is swapped in from a non-zswap
>>>>>>>>         swap device in one piece.
>>>>>>>>
>>>>>>>
>>>>>>> Would the following be better?
>>>>>>>
>>>>>>> +swpin_fallback
>>>>>>> +       is incremented if a huge page swapin fails to allocate or charge
>>>>>>> +       it and instead falls back to using small pages.
>>>>>>>
>>>>>>> +swpin_fallback_charge
>>>>>>> +       is incremented if a huge page swapin fails to charge it and instead
>>>>>>> +       falls back to using small pages even though the allocation was
>>>>>>> +       successful.
>>>>>>
>>>>>> much better, but it is better to align with "huge pages with
>>>>>> lower orders or small pages", not necessarily small pages:
>>>>>>
>>>>>> anon_fault_fallback
>>>>>>         is incremented if a page fault fails to allocate or charge
>>>>>>         a huge page and instead falls back to using huge pages with
>>>>>>         lower orders or small pages.
>>>>>>
>>>>>> anon_fault_fallback_charge
>>>>>>         is incremented if a page fault fails to charge a huge page and
>>>>>>         instead falls back to using huge pages with lower orders or
>>>>>>         small pages even though the allocation was successful.
>>>>>
>>>>> Right, I clearly overlooked that ;)
>>>>>
>>>>
>>>> Hi Lance and Barry,
>>>>
>>>> Do you think the following expression is clear? Compared to my original
>>>> version, I’ve removed the word “huge” from the first line, and it now
>>>> looks almost identical to anon_fault_fallback/anon_fault_fallback_charge.
>>>
>>> Well, that's fine with me. And let's see Barry's opinion as well ;)
>>
>> I still prefer Lance's version. The fallback path in it only needs to
>> be adjusted to include huge pages with lower orders. In contrast,
>> Wenchao's version feels less natural to me because "page swapin"
>> sounds quite odd - we often hear "page fault," but we have never
>> encountered "page swapin."
>
> Yeah, it makes sense to me ~
>
>>
>> So I mean:
>>
>> swpin_fallback
>>         is incremented if swapin fails to allocate or charge a huge
>>         page and instead falls back to using huge pages with lower
>>         orders or small pages.
>>
>> swpin_fallback_charge
>>         is incremented if swapin fails to charge a huge page and instead
>>         falls back to using huge pages with lower orders or small
>>         pages even though the allocation was successful.
>
> IHMO, much better and clearer than before ;)
>

Hi,

Thank you both very much for your valuable suggestions. I am only now
able to respond to your emails due to a network issue. I will make the
revisions based on your feedback and send the third version of the
patch. Should I include a "Reviewed-by" or any other tags?

Thanks again,
Wenchao

> Thank,
> Lance
>
>>
>>>
>>> Thanks,
>>> Lance
>>>
>>>>
>>>> swpin_fallback
>>>>         is incremented if a page swapin fails to allocate or charge
>>>>         a huge page and instead falls back to using huge pages with
>>>>         lower orders or small pages.
>>>>
>>>> swpin_fallback_charge
>>>>         is incremented if a page swapin fails to charge a huge page and
>>>>         instead falls back to using huge pages with lower orders or
>>>>         small pages even though the allocation was successful.
>>>>
>>>> Thanks,
>>>> Wencaho
>>>>
>>>>> Thanks,
>>>>> Lance
>>>>>
>>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Lance
>>>>>>>
>>>>>>>> +swpin_fallback
>>>>>>>> +       is incremented if a huge page swapin fails to allocate or charge
>>>>>>>> +       a huge page and instead falls back to using huge pages with
>>>>>>>> +       lower orders or small pages.
>>>>>>>> +
>>>>>>>> +swpin_fallback_charge
>>>>>>>> +       is incremented if a page swapin fails to charge a huge page and
>>>>>>>> +       instead falls back to using huge pages with lower orders or
>>>>>>>> +       small pages even though the allocation was successful.
>>>>>>>> +
>>>>>>>>  swpout
>>>>>>>>         is incremented every time a huge page is swapped out to a non-zswap
>>>>>>>>         swap device in one piece without splitting.
>>>>>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>>>>>>> index b94c2e8ee918..93e509b6c00e 100644
>>>>>>>> --- a/include/linux/huge_mm.h
>>>>>>>> +++ b/include/linux/huge_mm.h
>>>>>>>> @@ -121,6 +121,8 @@ enum mthp_stat_item {
>>>>>>>>         MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
>>>>>>>>         MTHP_STAT_ZSWPOUT,
>>>>>>>>         MTHP_STAT_SWPIN,
>>>>>>>> +       MTHP_STAT_SWPIN_FALLBACK,
>>>>>>>> +       MTHP_STAT_SWPIN_FALLBACK_CHARGE,
>>>>>>>>         MTHP_STAT_SWPOUT,
>>>>>>>>         MTHP_STAT_SWPOUT_FALLBACK,
>>>>>>>>         MTHP_STAT_SHMEM_ALLOC,
>>>>>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>>>>>> index ee335d96fc39..46749dded1c9 100644
>>>>>>>> --- a/mm/huge_memory.c
>>>>>>>> +++ b/mm/huge_memory.c
>>>>>>>> @@ -617,6 +617,8 @@ DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
>>>>>>>>  DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>>>>>>>>  DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
>>>>>>>>  DEFINE_MTHP_STAT_ATTR(swpin, MTHP_STAT_SWPIN);
>>>>>>>> +DEFINE_MTHP_STAT_ATTR(swpin_fallback, MTHP_STAT_SWPIN_FALLBACK);
>>>>>>>> +DEFINE_MTHP_STAT_ATTR(swpin_fallback_charge, MTHP_STAT_SWPIN_FALLBACK_CHARGE);
>>>>>>>>  DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT);
>>>>>>>>  DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK);
>>>>>>>>  #ifdef CONFIG_SHMEM
>>>>>>>> @@ -637,6 +639,8 @@ static struct attribute *anon_stats_attrs[] = {
>>>>>>>>  #ifndef CONFIG_SHMEM
>>>>>>>>         &zswpout_attr.attr,
>>>>>>>>         &swpin_attr.attr,
>>>>>>>> +       &swpin_fallback_attr.attr,
>>>>>>>> +       &swpin_fallback_charge_attr.attr,
>>>>>>>>         &swpout_attr.attr,
>>>>>>>>         &swpout_fallback_attr.attr,
>>>>>>>>  #endif
>>>>>>>> @@ -669,6 +673,8 @@ static struct attribute *any_stats_attrs[] = {
>>>>>>>>  #ifdef CONFIG_SHMEM
>>>>>>>>         &zswpout_attr.attr,
>>>>>>>>         &swpin_attr.attr,
>>>>>>>> +       &swpin_fallback_attr.attr,
>>>>>>>> +       &swpin_fallback_charge_attr.attr,
>>>>>>>>         &swpout_attr.attr,
>>>>>>>>         &swpout_fallback_attr.attr,
>>>>>>>>  #endif
>>>>>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>>>>>> index 209885a4134f..774dfd309cfe 100644
>>>>>>>> --- a/mm/memory.c
>>>>>>>> +++ b/mm/memory.c
>>>>>>>> @@ -4189,8 +4189,10 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
>>>>>>>>                         if (!mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
>>>>>>>>                                                             gfp, entry))
>>>>>>>>                                 return folio;
>>>>>>>> +                       count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK_CHARGE);
>>>>>>>>                         folio_put(folio);
>>>>>>>>                 }
>>>>>>>> +               count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK);
>>>>>>>>                 order = next_order(&orders, order);
>>>>>>>>         }
>>>>>>>>
>>>>>>>> --
>>>>>>>> 2.45.0
>>>>>>>>
>>>>>>
>>
>> Thanks
>> Barry
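[Editor's note: for readers consuming the counters discussed above, the arithmetic for a "success ratio" can be sketched as below. This is a userspace sketch based only on the semantics described in this thread (fallback = allocation or charge failure; fallback_charge = charge-only failure); the helper name and the interpretation per order are assumptions, not part of the patch.]

```python
def swpin_stats(swpin, swpin_fallback, swpin_fallback_charge):
    """Summarize mTHP swap-in outcomes for one order.

    swpin                 -- huge pages swapped in whole
    swpin_fallback        -- fallbacks (allocation OR charge failed)
    swpin_fallback_charge -- fallbacks where only the memcg charge failed
    """
    attempts = swpin + swpin_fallback
    # Charge failures are a subset of all fallbacks, so the
    # allocation-only failures are the difference.
    alloc_failures = swpin_fallback - swpin_fallback_charge
    success_ratio = swpin / attempts if attempts else 0.0
    return {
        "attempts": attempts,
        "alloc_failures": alloc_failures,
        "charge_failures": swpin_fallback_charge,
        "success_ratio": success_ratio,
    }

# Example: 60 whole huge-page swap-ins, 40 fallbacks, of which 10
# failed only at the charge step.
print(swpin_stats(60, 40, 10))
```

In practice the inputs would be read from the per-order sysfs files the patch adds (swpin, swpin_fallback, swpin_fallback_charge under each hugepages-*/stats directory).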