Message-ID: <24ea047a-7294-4e7a-bf51-66b7f79f5085@gmail.com>
Date: Sat, 23 Nov 2024 21:17:32 +0800
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2] mm: add per-order mTHP swap-in fallback/fallback_charge counters
From: Wenchao Hao <haowenchao22@gmail.com>
To: Lance Yang, Barry Song <21cnbao@gmail.com>
Cc: Jonathan Corbet, Andrew Morton, David Hildenbrand, Ryan Roberts, Baolin Wang, Usama Arif, Matthew Wilcox, Peter Xu, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
References: <20241122161443.34667-1-haowenchao22@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
On 2024/11/23 19:52, Lance Yang wrote:
> On Sat, Nov 23, 2024 at 6:27 PM Barry Song <21cnbao@gmail.com> wrote:
>>
>> On Sat, Nov 23, 2024 at 10:36 AM Lance Yang wrote:
>>>
>>> Hi Wenchao,
>>>
>>> On Sat, Nov 23, 2024 at 12:14 AM Wenchao Hao wrote:
>>>>
>>>> Currently, large folio swap-in is supported, but we lack a method to
>>>> analyze their success ratio. Similar to anon_fault_fallback, we introduce
>>>> per-order mTHP swpin_fallback and swpin_fallback_charge counters for
>>>> calculating their success ratio. The new counters are located at:
>>>>
>>>> /sys/kernel/mm/transparent_hugepage/hugepages-<size>/stats/
>>>>         swpin_fallback
>>>>         swpin_fallback_charge
>>>>
>>>> Signed-off-by: Wenchao Hao <haowenchao22@gmail.com>
>>>> ---
>>>> V2:
>>>>   Introduce swapin_fallback_charge, which increments if it fails to
>>>>   charge a huge page to memory despite successful allocation.
>>>>
>>>>  Documentation/admin-guide/mm/transhuge.rst | 10 ++++++++++
>>>>  include/linux/huge_mm.h                    |  2 ++
>>>>  mm/huge_memory.c                           |  6 ++++++
>>>>  mm/memory.c                                |  2 ++
>>>>  4 files changed, 20 insertions(+)
>>>>
>>>> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
>>>> index 5034915f4e8e..9c07612281b5 100644
>>>> --- a/Documentation/admin-guide/mm/transhuge.rst
>>>> +++ b/Documentation/admin-guide/mm/transhuge.rst
>>>> @@ -561,6 +561,16 @@ swpin
>>>>         is incremented every time a huge page is swapped in from a non-zswap
>>>>         swap device in one piece.
>>>>
>>>
>>> Would the following be better?
>>>
>>> +swpin_fallback
>>> +       is incremented if a huge page swapin fails to allocate or charge
>>> +       it and instead falls back to using small pages.
>>>
>>> +swpin_fallback_charge
>>> +       is incremented if a huge page swapin fails to charge it and instead
>>> +       falls back to using small pages even though the allocation was
>>> +       successful.
>>
>> much better, but it is better to align with "huge pages with
>> lower orders or small pages", not necessarily small pages:
>>
>> anon_fault_fallback
>>         is incremented if a page fault fails to allocate or charge
>>         a huge page and instead falls back to using huge pages with
>>         lower orders or small pages.
>>
>> anon_fault_fallback_charge
>>         is incremented if a page fault fails to charge a huge page and
>>         instead falls back to using huge pages with lower orders or
>>         small pages even though the allocation was successful.
>
> Right, I clearly overlooked that ;)
>

Hi Lance and Barry,

Do you think the following expression is clear? Compared to my original
version, I've removed the word "huge" from the first line, and it now
looks almost identical to anon_fault_fallback/anon_fault_fallback_charge.

swpin_fallback
        is incremented if a page swapin fails to allocate or charge a
        huge page and instead falls back to using huge pages with
        lower orders or small pages.

swpin_fallback_charge
        is incremented if a page swapin fails to charge a huge page and
        instead falls back to using huge pages with lower orders or
        small pages even though the allocation was successful.

Thanks,
Wenchao

> Thanks,
> Lance
>
>>
>>>
>>> Thanks,
>>> Lance
>>>
>>>> +swpin_fallback
>>>> +       is incremented if a huge page swapin fails to allocate or charge
>>>> +       a huge page and instead falls back to using huge pages with
>>>> +       lower orders or small pages.
>>>> +
>>>> +swpin_fallback_charge
>>>> +       is incremented if a page swapin fails to charge a huge page and
>>>> +       instead falls back to using huge pages with lower orders or
>>>> +       small pages even though the allocation was successful.
>>>> +
>>>>  swpout
>>>>         is incremented every time a huge page is swapped out to a non-zswap
>>>>         swap device in one piece without splitting.
>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>>> index b94c2e8ee918..93e509b6c00e 100644
>>>> --- a/include/linux/huge_mm.h
>>>> +++ b/include/linux/huge_mm.h
>>>> @@ -121,6 +121,8 @@ enum mthp_stat_item {
>>>>         MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
>>>>         MTHP_STAT_ZSWPOUT,
>>>>         MTHP_STAT_SWPIN,
>>>> +       MTHP_STAT_SWPIN_FALLBACK,
>>>> +       MTHP_STAT_SWPIN_FALLBACK_CHARGE,
>>>>         MTHP_STAT_SWPOUT,
>>>>         MTHP_STAT_SWPOUT_FALLBACK,
>>>>         MTHP_STAT_SHMEM_ALLOC,
>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>> index ee335d96fc39..46749dded1c9 100644
>>>> --- a/mm/huge_memory.c
>>>> +++ b/mm/huge_memory.c
>>>> @@ -617,6 +617,8 @@ DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
>>>>  DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>>>>  DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
>>>>  DEFINE_MTHP_STAT_ATTR(swpin, MTHP_STAT_SWPIN);
>>>> +DEFINE_MTHP_STAT_ATTR(swpin_fallback, MTHP_STAT_SWPIN_FALLBACK);
>>>> +DEFINE_MTHP_STAT_ATTR(swpin_fallback_charge, MTHP_STAT_SWPIN_FALLBACK_CHARGE);
>>>>  DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT);
>>>>  DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK);
>>>>  #ifdef CONFIG_SHMEM
>>>> @@ -637,6 +639,8 @@ static struct attribute *anon_stats_attrs[] = {
>>>>  #ifndef CONFIG_SHMEM
>>>>         &zswpout_attr.attr,
>>>>         &swpin_attr.attr,
>>>> +       &swpin_fallback_attr.attr,
>>>> +       &swpin_fallback_charge_attr.attr,
>>>>         &swpout_attr.attr,
>>>>         &swpout_fallback_attr.attr,
>>>>  #endif
>>>> @@ -669,6 +673,8 @@ static struct attribute *any_stats_attrs[] = {
>>>>  #ifdef CONFIG_SHMEM
>>>>         &zswpout_attr.attr,
>>>>         &swpin_attr.attr,
>>>> +       &swpin_fallback_attr.attr,
>>>> +       &swpin_fallback_charge_attr.attr,
>>>>         &swpout_attr.attr,
>>>>         &swpout_fallback_attr.attr,
>>>>  #endif
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index 209885a4134f..774dfd309cfe 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -4189,8 +4189,10 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
>>>>                         if (!mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
>>>>                                                             gfp, entry))
>>>>                                 return folio;
>>>> +                       count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK_CHARGE);
>>>>                         folio_put(folio);
>>>>                 }
>>>> +               count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK);
>>>>                 order = next_order(&orders, order);
>>>>         }
>>>>
>>>> --
>>>> 2.45.0
>>>>
>>
>> Thanks
>> Barry
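P.S. For anyone wanting to use these counters once merged: the "success
ratio" the changelog mentions can be derived as swpin / (swpin +
swpin_fallback) per order. A minimal sketch (the swpin_ratio helper name is
mine, and on a live system the two inputs would be read from the sysfs files
quoted above, e.g. hugepages-2048kB/stats/swpin):

```shell
#!/bin/sh
# Sketch: compute an mTHP swap-in success ratio (percent) from the two
# counters discussed in this thread: swpin (huge pages swapped in whole)
# and swpin_fallback (attempts that fell back to lower orders/small pages).
swpin_ratio() {
    swpin=$1
    fallback=$2
    total=$((swpin + fallback))
    # Avoid division by zero when no swap-in attempts were recorded.
    [ "$total" -eq 0 ] && { echo 0; return; }
    echo $((100 * swpin / total))
}

# Sample values: 90 successful huge-page swap-ins, 10 fallbacks.
swpin_ratio 90 10   # prints 90
```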