From: Barry Song
Date: Sat, 23 Nov 2024 18:25:53 +0800
Subject: Re: [PATCH v2] mm: add per-order mTHP swap-in fallback/fallback_charge counters
To: Lance Yang
Cc: Wenchao Hao, Jonathan Corbet, Andrew Morton, David Hildenbrand, Ryan Roberts, Baolin Wang, Usama Arif, Matthew Wilcox, Peter Xu, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
On Sat, Nov 23, 2024 at 10:36 AM Lance Yang wrote:
>
> Hi Wenchao,
>
> On Sat, Nov 23, 2024 at 12:14 AM Wenchao Hao wrote:
> >
> > Currently, large folio swap-in is supported, but we lack a method to
> > analyze their success ratio. Similar to anon_fault_fallback, we introduce
> > per-order mTHP swpin_fallback and swpin_fallback_charge counters for
> > calculating their success ratio.
> > The new counters are located at:
> >
> > /sys/kernel/mm/transparent_hugepage/hugepages-<size>/stats/
> > 	swpin_fallback
> > 	swpin_fallback_charge
> >
> > Signed-off-by: Wenchao Hao
> > ---
> > V2:
> > Introduce swapin_fallback_charge, which increments if it fails to
> > charge a huge page to memory despite successful allocation.
> >
> >  Documentation/admin-guide/mm/transhuge.rst | 10 ++++++++++
> >  include/linux/huge_mm.h                    |  2 ++
> >  mm/huge_memory.c                           |  6 ++++++
> >  mm/memory.c                                |  2 ++
> >  4 files changed, 20 insertions(+)
> >
> > diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> > index 5034915f4e8e..9c07612281b5 100644
> > --- a/Documentation/admin-guide/mm/transhuge.rst
> > +++ b/Documentation/admin-guide/mm/transhuge.rst
> > @@ -561,6 +561,16 @@ swpin
> >         is incremented every time a huge page is swapped in from a non-zswap
> >         swap device in one piece.
> >
>
> Would the following be better?
>
> +swpin_fallback
> +       is incremented if a huge page swapin fails to allocate or charge
> +       it and instead falls back to using small pages.
>
> +swpin_fallback_charge
> +       is incremented if a huge page swapin fails to charge it and instead
> +       falls back to using small pages even though the allocation was
> +       successful.

much better, but it is better to align with "huge pages with
lower orders or small pages."

anon_fault_fallback
        is incremented if a page fault fails to allocate or charge
        a huge page and instead falls back to using huge pages with
        lower orders or small pages.

anon_fault_fallback_charge
        is incremented if a page fault fails to charge a huge page and
        instead falls back to using huge pages with lower orders or
        small pages even though the allocation was successful.

>
> Thanks,
> Lance
>
> > +swpin_fallback
> > +       is incremented if a huge page swapin fails to allocate or charge
> > +       a huge page and instead falls back to using huge pages with
> > +       lower orders or small pages.
> > +
> > +swpin_fallback_charge
> > +       is incremented if a page swapin fails to charge a huge page and
> > +       instead falls back to using huge pages with lower orders or
> > +       small pages even though the allocation was successful.
> > +
> >  swpout
> >         is incremented every time a huge page is swapped out to a non-zswap
> >         swap device in one piece without splitting.
> >
> > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > index b94c2e8ee918..93e509b6c00e 100644
> > --- a/include/linux/huge_mm.h
> > +++ b/include/linux/huge_mm.h
> > @@ -121,6 +121,8 @@ enum mthp_stat_item {
> >         MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
> >         MTHP_STAT_ZSWPOUT,
> >         MTHP_STAT_SWPIN,
> > +       MTHP_STAT_SWPIN_FALLBACK,
> > +       MTHP_STAT_SWPIN_FALLBACK_CHARGE,
> >         MTHP_STAT_SWPOUT,
> >         MTHP_STAT_SWPOUT_FALLBACK,
> >         MTHP_STAT_SHMEM_ALLOC,
> >
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index ee335d96fc39..46749dded1c9 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -617,6 +617,8 @@ DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
> >  DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
> >  DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
> >  DEFINE_MTHP_STAT_ATTR(swpin, MTHP_STAT_SWPIN);
> > +DEFINE_MTHP_STAT_ATTR(swpin_fallback, MTHP_STAT_SWPIN_FALLBACK);
> > +DEFINE_MTHP_STAT_ATTR(swpin_fallback_charge, MTHP_STAT_SWPIN_FALLBACK_CHARGE);
> >  DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT);
> >  DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK);
> >  #ifdef CONFIG_SHMEM
> >
> > @@ -637,6 +639,8 @@ static struct attribute *anon_stats_attrs[] = {
> >  #ifndef CONFIG_SHMEM
> >         &zswpout_attr.attr,
> >         &swpin_attr.attr,
> > +       &swpin_fallback_attr.attr,
> > +       &swpin_fallback_charge_attr.attr,
> >         &swpout_attr.attr,
> >         &swpout_fallback_attr.attr,
> >  #endif
> >
> > @@ -669,6 +673,8 @@ static struct attribute *any_stats_attrs[] = {
> >  #ifdef CONFIG_SHMEM
> >         &zswpout_attr.attr,
> >         &swpin_attr.attr,
> > +       &swpin_fallback_attr.attr,
> > +       &swpin_fallback_charge_attr.attr,
> >         &swpout_attr.attr,
> >         &swpout_fallback_attr.attr,
> >  #endif
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 209885a4134f..774dfd309cfe 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -4189,8 +4189,10 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
> >                         if (!mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
> >                                                             gfp, entry))
> >                                 return folio;
> > +                       count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK_CHARGE);
> >                         folio_put(folio);
> >                 }
> > +               count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK);
> >                 order = next_order(&orders, order);
> >         }
> >
> > --
> > 2.45.0
> >

Thanks
Barry