From: Lance Yang <ioworker0@gmail.com>
Date: Sun, 24 Nov 2024 14:55:29 +0800
Subject: Re: [PATCH v2] mm: add per-order mTHP swap-in fallback/fallback_charge counters
In-Reply-To: <24ea047a-7294-4e7a-bf51-66b7f79f5085@gmail.com>
References: <20241122161443.34667-1-haowenchao22@gmail.com>
 <24ea047a-7294-4e7a-bf51-66b7f79f5085@gmail.com>
To: Wenchao Hao
Cc: Barry Song <21cnbao@gmail.com>, Jonathan Corbet, Andrew Morton,
 David Hildenbrand, Ryan Roberts, Baolin Wang, Usama Arif,
 Matthew Wilcox, Peter Xu, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org

On Sat, Nov 23, 2024 at 9:17 PM Wenchao Hao wrote:
>
> On 2024/11/23 19:52, Lance Yang wrote:
> > On Sat, Nov 23, 2024 at 6:27 PM Barry Song <21cnbao@gmail.com> wrote:
> >>
> >> On Sat, Nov 23, 2024 at 10:36 AM Lance Yang wrote:
> >>>
> >>> Hi Wenchao,
> >>>
> >>> On Sat, Nov 23, 2024 at 12:14 AM Wenchao Hao wrote:
> >>>>
> >>>> Currently, large folio swap-in is supported,
but we lack a method to
> >>>> analyze their success ratio. Similar to anon_fault_fallback, we introduce
> >>>> per-order mTHP swpin_fallback and swpin_fallback_charge counters for
> >>>> calculating their success ratio. The new counters are located at:
> >>>>
> >>>> /sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats/
> >>>>         swpin_fallback
> >>>>         swpin_fallback_charge
> >>>>
> >>>> Signed-off-by: Wenchao Hao
> >>>> ---
> >>>> V2:
> >>>>   Introduce swapin_fallback_charge, which increments if it fails to
> >>>>   charge a huge page to memory despite successful allocation.
> >>>>
> >>>>  Documentation/admin-guide/mm/transhuge.rst | 10 ++++++++++
> >>>>  include/linux/huge_mm.h                    |  2 ++
> >>>>  mm/huge_memory.c                           |  6 ++++++
> >>>>  mm/memory.c                                |  2 ++
> >>>>  4 files changed, 20 insertions(+)
> >>>>
> >>>> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> >>>> index 5034915f4e8e..9c07612281b5 100644
> >>>> --- a/Documentation/admin-guide/mm/transhuge.rst
> >>>> +++ b/Documentation/admin-guide/mm/transhuge.rst
> >>>> @@ -561,6 +561,16 @@ swpin
> >>>>         is incremented every time a huge page is swapped in from a non-zswap
> >>>>         swap device in one piece.
> >>>>
> >>>
> >>> Would the following be better?
> >>>
> >>> +swpin_fallback
> >>> +       is incremented if a huge page swapin fails to allocate or charge
> >>> +       it and instead falls back to using small pages.
> >>>
> >>> +swpin_fallback_charge
> >>> +       is incremented if a huge page swapin fails to charge it and instead
> >>> +       falls back to using small pages even though the allocation was
> >>> +       successful.
> >>
> >> much better, but it is better to align with "huge pages with
> >> lower orders or small pages", not necessarily small pages:
> >>
> >> anon_fault_fallback
> >>         is incremented if a page fault fails to allocate or charge
> >>         a huge page and instead falls back to using huge pages with
> >>         lower orders or small pages.
> >>
> >> anon_fault_fallback_charge
> >>         is incremented if a page fault fails to charge a huge page and
> >>         instead falls back to using huge pages with lower orders or
> >>         small pages even though the allocation was successful.
> >
> > Right, I clearly overlooked that ;)
> >
>
> Hi Lance and Barry,
>
> Do you think the following expression is clear? Compared to my original
> version, I've removed the word "huge" from the first line, and it now
> looks almost identical to anon_fault_fallback/anon_fault_fallback_charge.

Well, that's fine with me. And let's see Barry's opinion as well ;)

Thanks,
Lance

> swpin_fallback
>         is incremented if a page swapin fails to allocate or charge
>         a huge page and instead falls back to using huge pages with
>         lower orders or small pages.
>
> swpin_fallback_charge
>         is incremented if a page swapin fails to charge a huge page and
>         instead falls back to using huge pages with lower orders or
>         small pages even though the allocation was successful.
>
> Thanks,
> Wenchao
>
> > Thanks,
> > Lance
> >
> >>
> >>>
> >>> Thanks,
> >>> Lance
> >>>
> >>>> +swpin_fallback
> >>>> +       is incremented if a huge page swapin fails to allocate or charge
> >>>> +       a huge page and instead falls back to using huge pages with
> >>>> +       lower orders or small pages.
> >>>> +
> >>>> +swpin_fallback_charge
> >>>> +       is incremented if a page swapin fails to charge a huge page and
> >>>> +       instead falls back to using huge pages with lower orders or
> >>>> +       small pages even though the allocation was successful.
> >>>> +
> >>>>  swpout
> >>>>         is incremented every time a huge page is swapped out to a non-zswap
> >>>>         swap device in one piece without splitting.
> >>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> >>>> index b94c2e8ee918..93e509b6c00e 100644
> >>>> --- a/include/linux/huge_mm.h
> >>>> +++ b/include/linux/huge_mm.h
> >>>> @@ -121,6 +121,8 @@ enum mthp_stat_item {
> >>>>         MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
> >>>>         MTHP_STAT_ZSWPOUT,
> >>>>         MTHP_STAT_SWPIN,
> >>>> +       MTHP_STAT_SWPIN_FALLBACK,
> >>>> +       MTHP_STAT_SWPIN_FALLBACK_CHARGE,
> >>>>         MTHP_STAT_SWPOUT,
> >>>>         MTHP_STAT_SWPOUT_FALLBACK,
> >>>>         MTHP_STAT_SHMEM_ALLOC,
> >>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >>>> index ee335d96fc39..46749dded1c9 100644
> >>>> --- a/mm/huge_memory.c
> >>>> +++ b/mm/huge_memory.c
> >>>> @@ -617,6 +617,8 @@ DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
> >>>>  DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
> >>>>  DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
> >>>>  DEFINE_MTHP_STAT_ATTR(swpin, MTHP_STAT_SWPIN);
> >>>> +DEFINE_MTHP_STAT_ATTR(swpin_fallback, MTHP_STAT_SWPIN_FALLBACK);
> >>>> +DEFINE_MTHP_STAT_ATTR(swpin_fallback_charge, MTHP_STAT_SWPIN_FALLBACK_CHARGE);
> >>>>  DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT);
> >>>>  DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK);
> >>>>  #ifdef CONFIG_SHMEM
> >>>> @@ -637,6 +639,8 @@ static struct attribute *anon_stats_attrs[] = {
> >>>>  #ifndef CONFIG_SHMEM
> >>>>         &zswpout_attr.attr,
> >>>>         &swpin_attr.attr,
> >>>> +       &swpin_fallback_attr.attr,
> >>>> +       &swpin_fallback_charge_attr.attr,
> >>>>         &swpout_attr.attr,
> >>>>         &swpout_fallback_attr.attr,
> >>>>  #endif
> >>>> @@ -669,6 +673,8 @@ static struct attribute *any_stats_attrs[] = {
> >>>>  #ifdef CONFIG_SHMEM
> >>>>         &zswpout_attr.attr,
> >>>>         &swpin_attr.attr,
> >>>> +       &swpin_fallback_attr.attr,
> >>>> +       &swpin_fallback_charge_attr.attr,
> >>>>         &swpout_attr.attr,
> >>>>         &swpout_fallback_attr.attr,
> >>>>  #endif
> >>>> diff --git a/mm/memory.c b/mm/memory.c
> >>>> index 209885a4134f..774dfd309cfe 100644
> >>>> --- a/mm/memory.c
> >>>> +++ b/mm/memory.c
> >>>> @@ -4189,8 +4189,10 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
> >>>>                         if (!mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
> >>>>                                                             gfp, entry))
> >>>>                                 return folio;
> >>>> +                       count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK_CHARGE);
> >>>>                         folio_put(folio);
> >>>>                 }
> >>>> +               count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK);
> >>>>                 order = next_order(&orders, order);
> >>>>         }
> >>>>
> >>>> --
> >>>> 2.45.0
> >>>>
> >>
> >> Thanks
> >> Barry
>
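P.S. For anyone who wants to consume the new counters, the "success ratio"
the changelog refers to can be computed from the sysfs files. A minimal
sketch (the `ratio` helper name is mine, and the hugepages-64kB path assumes
a kernel with this patch applied and 64kB mTHP enabled):

```shell
# ratio <swpin> <swpin_fallback>: fraction of swap-ins served at full order.
# A swap-in either lands as one large folio (swpin) or falls back to a
# lower order (swpin_fallback); swpin_fallback_charge counts the subset
# of fallbacks where allocation succeeded but the memcg charge failed.
ratio() {
    awk -v ok="$1" -v fb="$2" \
        'BEGIN { if (ok + fb > 0) printf "%.2f\n", ok / (ok + fb); else print "n/a" }'
}

# On a live system the inputs would come from sysfs, e.g.:
#   s=/sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats
#   ratio "$(cat "$s/swpin")" "$(cat "$s/swpin_fallback")"

ratio 90 10   # prints 0.90
ratio 0 0     # prints n/a (no swap-in activity yet)
```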