From: Nhat Pham <nphamcs@gmail.com>
Date: Thu, 28 Sep 2023 17:33:14 -0700
Subject: Re: [PATCH v2 1/2] hugetlb: memcg: account hugetlb-backed memory in memory controller
To: Frank van der Linden
Cc: akpm@linux-foundation.org, riel@surriel.com, hannes@cmpxchg.org,
    mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
    muchun.song@linux.dev, tj@kernel.org, lizefan.x@bytedance.com,
    shuah@kernel.org, mike.kravetz@oracle.com, yosryahmed@google.com,
    linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org
References: <20230928005723.1709119-1-nphamcs@gmail.com>
    <20230928005723.1709119-2-nphamcs@gmail.com>

On Thu, Sep 28, 2023 at 3:59 PM Frank van der Linden wrote:
>
> On Wed, Sep 27, 2023 at 5:57 PM Nhat Pham wrote:
>>
>> Currently, hugetlb memory usage is not accounted for in the memory
>> controller, which could lead to memory overprotection for cgroups with
>> hugetlb-backed memory. This has been observed in our production system.
>>
>> This patch rectifies this issue by charging the memcg when the hugetlb
>> folio is allocated, and uncharging when the folio is freed (analogous to
>> the hugetlb controller).
>>
>> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
>> ---
>>  Documentation/admin-guide/cgroup-v2.rst |  9 ++++++
>>  fs/hugetlbfs/inode.c                    |  2 +-
>>  include/linux/cgroup-defs.h             |  5 +++
>>  include/linux/hugetlb.h                 |  6 ++--
>>  include/linux/memcontrol.h              |  8 +++++
>>  kernel/cgroup/cgroup.c                  | 15 ++++++++-
>>  mm/hugetlb.c                            | 23 ++++++++++----
>>  mm/memcontrol.c                         | 41 +++++++++++++++++++++++++
>>  8 files changed, 99 insertions(+), 10 deletions(-)
>>
>> diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
>> index 622a7f28db1f..e6267b8cbd1d 100644
>> --- a/Documentation/admin-guide/cgroup-v2.rst
>> +++ b/Documentation/admin-guide/cgroup-v2.rst
>> @@ -210,6 +210,15 @@ cgroup v2 currently supports the following mount options.
>>          relying on the original semantics (e.g. specifying bogusly
>>          high 'bypass' protection values at higher tree levels).
>>
>> +  memory_hugetlb_accounting
>> +        Count hugetlb memory usage towards the cgroup's overall
>> +        memory usage for the memory controller. This is a new behavior
>> +        that could regress existing setups, so it must be explicitly
>> +        opted in with this mount option. Note that hugetlb pages
>> +        allocated while this option is not selected will not be
>> +        tracked by the memory controller (even if cgroup v2 is
>> +        remounted later on).
>> +
>>
>>  Organizing Processes and Threads
>>  --------------------------------
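
(As an aside, since this is a plain cgroup2 mount option, it can be flipped
on with an ordinary remount. A minimal sketch in C, assuming the default
cgroup2 mount point at /sys/fs/cgroup - roughly equivalent to
"mount -o remount,memory_hugetlb_accounting none /sys/fs/cgroup". This is
just an illustration, not part of the patch; note that a remount re-applies
the whole option set, so any other cgroup2 mount options you rely on should
be re-specified as well.)

/*
 * Illustration only: enable hugetlb accounting on an already mounted
 * cgroup2 hierarchy.  Per the documentation caveat above, hugetlb
 * folios allocated before the remount stay uncharged.
 */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
        if (mount("none", "/sys/fs/cgroup", "cgroup2", MS_REMOUNT,
                  "memory_hugetlb_accounting")) {
                perror("remount cgroup2");
                return 1;
        }
        return 0;
}
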
>> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
>> index 60fce26ff937..034967319955 100644
>> --- a/fs/hugetlbfs/inode.c
>> +++ b/fs/hugetlbfs/inode.c
>> @@ -902,7 +902,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
>>                  * to keep reservation accounting consistent.
>>                  */
>>                 hugetlb_set_vma_policy(&pseudo_vma, inode, index);
>> -               folio = alloc_hugetlb_folio(&pseudo_vma, addr, 0);
>> +               folio = alloc_hugetlb_folio(&pseudo_vma, addr, 0, true);
>>                 hugetlb_drop_vma_policy(&pseudo_vma);
>>                 if (IS_ERR(folio)) {
>>                         mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>> diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
>> index f1b3151ac30b..8641f4320c98 100644
>> --- a/include/linux/cgroup-defs.h
>> +++ b/include/linux/cgroup-defs.h
>> @@ -115,6 +115,11 @@ enum {
>>          * Enable recursive subtree protection
>>          */
>>         CGRP_ROOT_MEMORY_RECURSIVE_PROT = (1 << 18),
>> +
>> +       /*
>> +        * Enable hugetlb accounting for the memory controller.
>> +        */
>> +       CGRP_ROOT_MEMORY_HUGETLB_ACCOUNTING = (1 << 19),
>>  };
>>
>>  /* cftype->flags */
>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>> index a30686e649f7..9b73db1605a2 100644
>> --- a/include/linux/hugetlb.h
>> +++ b/include/linux/hugetlb.h
>> @@ -713,7 +713,8 @@ struct huge_bootmem_page {
>>
>>  int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
>>  struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>> -                               unsigned long addr, int avoid_reserve);
>> +                               unsigned long addr, int avoid_reserve,
>> +                               bool restore_reserve_on_memcg_failure);
>>  struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
>>                                 nodemask_t *nmask, gfp_t gfp_mask);
>>  struct folio *alloc_hugetlb_folio_vma(struct hstate *h, struct vm_area_struct *vma,
>> @@ -1016,7 +1017,8 @@ static inline int isolate_or_dissolve_huge_page(struct page *page,
>>
>>  static inline struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>>                                                 unsigned long addr,
>> -                                               int avoid_reserve)
>> +                                               int avoid_reserve,
>> +                                               bool restore_reserve_on_memcg_failure)
>>  {
>>         return NULL;
>>  }
>> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
>> index e0cfab58ab71..8094679c99dd 100644
>> --- a/include/linux/memcontrol.h
>> +++ b/include/linux/memcontrol.h
>> @@ -677,6 +677,8 @@ static inline int mem_cgroup_charge(struct folio *folio, struct mm_struct *mm,
>>         return __mem_cgroup_charge(folio, mm, gfp);
>>  }
>>
>> +int mem_cgroup_hugetlb_charge_folio(struct folio *folio, gfp_t gfp);
>> +
>>  int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
>>                                    gfp_t gfp, swp_entry_t entry);
>>  void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
>> @@ -1251,6 +1253,12 @@ static inline int mem_cgroup_charge(struct folio *folio,
>>         return 0;
>>  }
>>
>> +static inline int mem_cgroup_hugetlb_charge_folio(struct folio *folio,
>> +               gfp_t gfp)
>> +{
>> +       return 0;
>> +}
>> +
>>  static inline int mem_cgroup_swapin_charge_folio(struct folio *folio,
>>                         struct mm_struct *mm, gfp_t gfp, swp_entry_t entry)
>>  {
>> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
>> index 1fb7f562289d..f11488b18ceb 100644
>> --- a/kernel/cgroup/cgroup.c
>> +++ b/kernel/cgroup/cgroup.c
>> @@ -1902,6 +1902,7 @@ enum cgroup2_param {
>>         Opt_favordynmods,
>>         Opt_memory_localevents,
>>         Opt_memory_recursiveprot,
>> +       Opt_memory_hugetlb_accounting,
>>         nr__cgroup2_params
>>  };
>>
>> @@ -1910,6 +1911,7 @@ static const struct fs_parameter_spec cgroup2_fs_parameters[] = {
>>         fsparam_flag("favordynmods",            Opt_favordynmods),
>>         fsparam_flag("memory_localevents",      Opt_memory_localevents),
>>         fsparam_flag("memory_recursiveprot",    Opt_memory_recursiveprot),
>> +       fsparam_flag("memory_hugetlb_accounting", Opt_memory_hugetlb_accounting),
>>         {}
>>  };
>>
>> @@ -1936,6 +1938,9 @@ static int cgroup2_parse_param(struct fs_context *fc, struct fs_parameter *param
>>         case Opt_memory_recursiveprot:
>>                 ctx->flags |= CGRP_ROOT_MEMORY_RECURSIVE_PROT;
>>                 return 0;
>> +       case Opt_memory_hugetlb_accounting:
>> +               ctx->flags |= CGRP_ROOT_MEMORY_HUGETLB_ACCOUNTING;
>> +               return 0;
>>         }
>>         return -EINVAL;
>>  }
>>
>> @@ -1960,6 +1965,11 @@ static void apply_cgroup_root_flags(unsigned int root_flags)
>>                         cgrp_dfl_root.flags |= CGRP_ROOT_MEMORY_RECURSIVE_PROT;
>>                 else
>>                         cgrp_dfl_root.flags &= ~CGRP_ROOT_MEMORY_RECURSIVE_PROT;
>> +
>> +               if (root_flags & CGRP_ROOT_MEMORY_HUGETLB_ACCOUNTING)
>> +                       cgrp_dfl_root.flags |= CGRP_ROOT_MEMORY_HUGETLB_ACCOUNTING;
>> +               else
>> +                       cgrp_dfl_root.flags &= ~CGRP_ROOT_MEMORY_HUGETLB_ACCOUNTING;
>>         }
>>  }
>>
>> @@ -1973,6 +1983,8 @@ static int cgroup_show_options(struct seq_file *seq, struct kernfs_root *kf_root
>>                 seq_puts(seq, ",memory_localevents");
>>         if (cgrp_dfl_root.flags & CGRP_ROOT_MEMORY_RECURSIVE_PROT)
>>                 seq_puts(seq, ",memory_recursiveprot");
>> +       if (cgrp_dfl_root.flags & CGRP_ROOT_MEMORY_HUGETLB_ACCOUNTING)
>> +               seq_puts(seq, ",memory_hugetlb_accounting");
>>         return 0;
>>  }
>>
>> @@ -7050,7 +7062,8 @@ static ssize_t features_show(struct kobject *kobj, struct kobj_attribute *attr,
>>                         "nsdelegate\n"
>>                         "favordynmods\n"
>>                         "memory_localevents\n"
>> -                       "memory_recursiveprot\n");
>> +                       "memory_recursiveprot\n"
>> +                       "memory_hugetlb_accounting\n");
>>  }
>>  static struct kobj_attribute cgroup_features_attr = __ATTR_RO(features);
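
(The features file above also makes the new option discoverable from
userspace before attempting a mount. A small probe, purely illustrative and
not part of the patch - it just looks for the string that the features_show()
hunk adds:)

/*
 * Illustration only: check /sys/kernel/cgroup/features for
 * "memory_hugetlb_accounting" to see whether the running kernel
 * supports the new mount option.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
        char buf[256];
        size_t n;
        FILE *f = fopen("/sys/kernel/cgroup/features", "r");

        if (!f) {
                perror("/sys/kernel/cgroup/features");
                return 1;
        }
        n = fread(buf, 1, sizeof(buf) - 1, f);
        buf[n] = '\0';
        fclose(f);

        puts(strstr(buf, "memory_hugetlb_accounting") ?
             "memory_hugetlb_accounting: supported" :
             "memory_hugetlb_accounting: not supported");
        return 0;
}
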
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index de220e3ff8be..ff88ea4df11a 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -1902,6 +1902,7 @@ void free_huge_folio(struct folio *folio)
>>                                      pages_per_huge_page(h), folio);
>>         hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h),
>>                                            pages_per_huge_page(h), folio);
>> +       mem_cgroup_uncharge(folio);
>>         if (restore_reserve)
>>                 h->resv_huge_pages++;
>>
>> @@ -3004,7 +3005,8 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
>>  }
>>
>>  struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>> -                                 unsigned long addr, int avoid_reserve)
>> +                                 unsigned long addr, int avoid_reserve,
>> +                                 bool restore_reserve_on_memcg_failure)
>>  {
>>         struct hugepage_subpool *spool = subpool_vma(vma);
>>         struct hstate *h = hstate_vma(vma);
>> @@ -3119,6 +3121,15 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>>                 hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h),
>>                                 pages_per_huge_page(h), folio);
>>         }
>> +
>> +       /* undo allocation if memory controller disallows it. */
>> +       if (mem_cgroup_hugetlb_charge_folio(folio, GFP_KERNEL)) {
>> +               if (restore_reserve_on_memcg_failure)
>> +                       restore_reserve_on_error(h, vma, addr, folio);
>> +               folio_put(folio);
>> +               return ERR_PTR(-ENOMEM);
>> +       }
>> +
>>         return folio;
>>
>>  out_uncharge_cgroup:
>> @@ -5179,7 +5190,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
>>                                 spin_unlock(src_ptl);
>>                                 spin_unlock(dst_ptl);
>>                                 /* Do not use reserve as it's private owned */
>> -                               new_folio = alloc_hugetlb_folio(dst_vma, addr, 1);
>> +                               new_folio = alloc_hugetlb_folio(dst_vma, addr, 1, false);
>>                                 if (IS_ERR(new_folio)) {
>>                                         folio_put(pte_folio);
>>                                         ret = PTR_ERR(new_folio);
>> @@ -5656,7 +5667,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
>>          * be acquired again before returning to the caller, as expected.
>>          */
>>         spin_unlock(ptl);
>> -       new_folio = alloc_hugetlb_folio(vma, haddr, outside_reserve);
>> +       new_folio = alloc_hugetlb_folio(vma, haddr, outside_reserve, true);
>>
>>         if (IS_ERR(new_folio)) {
>>                 /*
>> @@ -5930,7 +5941,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>>                                                         VM_UFFD_MISSING);
>>                 }
>>
>> -               folio = alloc_hugetlb_folio(vma, haddr, 0);
>> +               folio = alloc_hugetlb_folio(vma, haddr, 0, true);
>>                 if (IS_ERR(folio)) {
>>                         /*
>>                          * Returning error will result in faulting task being
>> @@ -6352,7 +6363,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
>>                         goto out;
>>                 }
>>
>> -               folio = alloc_hugetlb_folio(dst_vma, dst_addr, 0);
>> +               folio = alloc_hugetlb_folio(dst_vma, dst_addr, 0, true);
>>                 if (IS_ERR(folio)) {
>>                         ret = -ENOMEM;
>>                         goto out;
>> @@ -6394,7 +6405,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
>>                         goto out;
>>                 }
>>
>> -               folio = alloc_hugetlb_folio(dst_vma, dst_addr, 0);
>> +               folio = alloc_hugetlb_folio(dst_vma, dst_addr, 0, false);
>>                 if (IS_ERR(folio)) {
>>                         folio_put(*foliop);
>>                         ret = -ENOMEM;
>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> index d1a322a75172..d5dfc9b36acb 100644
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -7050,6 +7050,47 @@ int __mem_cgroup_charge(struct folio *folio, struct mm_struct *mm, gfp_t gfp)
>>         return ret;
>>  }
>>
>> +static struct mem_cgroup *get_mem_cgroup_from_current(void)
>> +{
>> +       struct mem_cgroup *memcg;
>> +
>> +again:
>> +       rcu_read_lock();
>> +       memcg = mem_cgroup_from_task(current);
>> +       if (!css_tryget(&memcg->css)) {
>> +               rcu_read_unlock();
>> +               goto again;
>> +       }
>> +       rcu_read_unlock();
>> +       return memcg;
>> +}
>> +
>> +/**
>> + * mem_cgroup_hugetlb_charge_folio - Charge a newly allocated hugetlb folio.
>> + * @folio: folio to charge.
>> + * @gfp: reclaim mode
>> + *
>> + * This function charges an allocated hugetlb folio to the memcg of the
>> + * current task.
>> + *
>> + * Returns 0 on success. Otherwise, an error code is returned.
>> + */
>> +int mem_cgroup_hugetlb_charge_folio(struct folio *folio, gfp_t gfp)
>> +{
>> +       struct mem_cgroup *memcg;
>> +       int ret;
>> +
>> +       if (mem_cgroup_disabled() ||
>> +           !(cgrp_dfl_root.flags & CGRP_ROOT_MEMORY_HUGETLB_ACCOUNTING))
>> +               return 0;
>> +
>> +       memcg = get_mem_cgroup_from_current();
>> +       ret = charge_memcg(folio, memcg, gfp);
>> +       mem_cgroup_put(memcg);
>> +
>> +       return ret;
>> +}
>> +
>>  /**
>>   * mem_cgroup_swapin_charge_folio - Charge a newly allocated folio for swapin.
>>   * @folio: folio to charge.
>> --
>> 2.34.1
>>
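
(Adding a quick end-to-end illustration here in case it helps review - this
is not part of the patch. With the hierarchy mounted with
memory_hugetlb_accounting, the charge taken in alloc_hugetlb_folio() shows
up as ordinary memcg usage. The sketch assumes 2MB hugepages are reserved
and uses a made-up CGROUP_PATH placeholder for wherever the calling task's
cgroup happens to live.)

/*
 * Illustration only: map one 2 MiB hugetlb page and watch the charge land
 * in the calling cgroup's memory.current.  Requires the cgroup2 hierarchy
 * to be mounted with memory_hugetlb_accounting and at least one hugepage
 * reserved via /proc/sys/vm/nr_hugepages.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define CGROUP_PATH "/sys/fs/cgroup/test"   /* hypothetical cgroup */
#define HUGEPAGE_SZ (2UL * 1024 * 1024)

static long read_memory_current(void)
{
        long val = -1;
        FILE *f = fopen(CGROUP_PATH "/memory.current", "r");

        if (f) {
                if (fscanf(f, "%ld", &val) != 1)
                        val = -1;
                fclose(f);
        }
        return val;
}

int main(void)
{
        long before = read_memory_current();
        void *p = mmap(NULL, HUGEPAGE_SZ, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

        if (p == MAP_FAILED) {
                perror("mmap(MAP_HUGETLB)");
                return 1;
        }
        /* Fault the page in: this is the path that reaches alloc_hugetlb_folio() */
        memset(p, 0, HUGEPAGE_SZ);
        printf("memory.current: %ld -> %ld\n", before, read_memory_current());

        munmap(p, HUGEPAGE_SZ);
        return 0;
}
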
>
> With the mount option added, I'm fine with this. There are reasons to
> want and reasons not to want this, so everybody's happy!

And the default is no accounting, so this should be safe in terms of
impact!

>
> Out of curiosity: is anyone aware of any code that may behave badly when
> folio_memcg(hugetlb_folio) != NULL, not expecting it?

Good point. My understanding of the memory controller mechanism is that it
should be fine - we're just essentially storing some memcg metadata in the
struct folio, and then charging values towards the memcg counters. I don't
think we fiddle with anything else in the folio itself that could be
ruinous?

I also did my best to trace the code paths that go through
alloc_hugetlb_folio and free_huge_folio (the places where charging and
uncharging happens) to make sure no funny business is going on, and it
seems a lot of these paths have special, dedicated handling for hugetlb
folios. The usual pattern is checking if the folio is a hugetlb one first,
so we're unlikely to even call folio_memcg on a hugetlb folio in existing
code in the first place.
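
Roughly the shape I mean - an illustrative sketch only, with made-up
handle_*() helpers that don't exist anywhere in the tree:

/*
 * Illustrative sketch, not actual kernel code: generic folio walkers tend
 * to branch on folio_test_hugetlb() before they reach any memcg-aware
 * handling, so existing code should rarely look at folio_memcg() for a
 * hugetlb folio at all.
 */
static void handle_folio(struct folio *folio)
{
        if (folio_test_hugetlb(folio)) {
                /* hugetlb-specific path; folio_memcg() is never consulted */
                handle_hugetlb_folio(folio);
                return;
        }

        /* only non-hugetlb folios reach the memcg-aware path */
        if (folio_memcg(folio))
                handle_memcg_folio(folio);
}
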
But if anyone knows something I missed, please let me know! And feel free
to loop more people in if there's anyone I missed in the cc list :)

>
> - Frank