From: Zi Yan <zi.yan@sent.com>
To: Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org
Cc: Roman Gushchin <roman.gushchin@linux.dev>,
Shuah Khan <shuah@kernel.org>, Yang Shi <shy828301@gmail.com>,
Miaohe Lin <linmiaohe@huawei.com>,
Hugh Dickins <hughd@google.com>,
"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
linux-kselftest@vger.kernel.org, Zi Yan <ziy@nvidia.com>
Subject: [RFC PATCH 1/5] mm: memcg: make memcg huge page split support any order split.
Date: Mon, 21 Mar 2022 10:21:24 -0400
Message-ID: <20220321142128.2471199-2-zi.yan@sent.com>
In-Reply-To: <20220321142128.2471199-1-zi.yan@sent.com>
From: Zi Yan <ziy@nvidia.com>
split_page_memcg() sets memcg information for the pages after a huge page
split. Add a new parameter, new_order, to tell the order of the pages
produced by the split; it is always 0 for now. This prepares for upcoming
changes that will support splitting a huge page to any lower order.
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
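Note below the fold (not part of the commit message): a minimal sketch of
how the new split_page_memcg() signature is meant to be used. The non-zero
new_order call is hypothetical until the later patches in this series land,
and the page count assumes a 2MB THP with 4KB base pages.

	/* today's behaviour: split an order-9 THP into 512 order-0 pages */
	split_page_memcg(head, 512, 0);

	/* future (hypothetical) caller: split it into order-2 pages instead */
	split_page_memcg(head, 512, 2);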
include/linux/memcontrol.h | 2 +-
mm/huge_memory.c | 2 +-
mm/memcontrol.c | 10 +++++-----
mm/page_alloc.c | 2 +-
4 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 89b14729d59f..e71189454bf0 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1116,7 +1116,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
rcu_read_unlock();
}
-void split_page_memcg(struct page *head, unsigned int nr);
+void split_page_memcg(struct page *head, unsigned int nr, unsigned int new_order);
unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
gfp_t gfp_mask,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2fe38212e07c..640040c386f0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2371,7 +2371,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
int i;
/* complete memcg works before add pages to LRU */
- split_page_memcg(head, nr);
+ split_page_memcg(head, nr, 0);
if (PageAnon(head) && PageSwapCache(head)) {
swp_entry_t entry = { .val = page_private(head) };
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 43b2a22ce812..e7da413ac174 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3262,22 +3262,22 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
/*
* Because page_memcg(head) is not set on tails, set it now.
*/
-void split_page_memcg(struct page *head, unsigned int nr)
+void split_page_memcg(struct page *head, unsigned int nr, unsigned int new_order)
{
struct folio *folio = page_folio(head);
struct mem_cgroup *memcg = folio_memcg(folio);
- int i;
+ int i, new_nr = 1 << new_order;
if (mem_cgroup_disabled() || !memcg)
return;
- for (i = 1; i < nr; i++)
+ for (i = new_nr; i < nr; i += new_nr)
folio_page(folio, i)->memcg_data = folio->memcg_data;
if (folio_memcg_kmem(folio))
- obj_cgroup_get_many(__folio_objcg(folio), nr - 1);
+ obj_cgroup_get_many(__folio_objcg(folio), nr / new_nr - 1);
else
- css_get_many(&memcg->css, nr - 1);
+ css_get_many(&memcg->css, nr / new_nr - 1);
}
#ifdef CONFIG_MEMCG_SWAP
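[Illustrative walk-through of the accounting above, not part of the diff;
new_order = 2 is a made-up value used only to keep the numbers concrete.]

	/*
	 * nr = 512, new_order = 2  =>  new_nr = 1 << 2 = 4.
	 * The loop visits i = 4, 8, ..., 508: the first page of each
	 * new folio except the original head, so 127 pages get
	 * memcg_data copied from the head.
	 * nr / new_nr - 1 = 512 / 4 - 1 = 127 extra css/objcg
	 * references are taken, one per new folio beyond the head.
	 * new_order = 0 gives new_nr = 1 and matches the old code:
	 * 511 tail pages and 511 extra references.
	 */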
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f648decfe39d..d982919b9e51 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3515,7 +3515,7 @@ void split_page(struct page *page, unsigned int order)
for (i = 1; i < (1 << order); i++)
set_page_refcounted(page + i);
split_page_owner(page, 1 << order);
- split_page_memcg(page, 1 << order);
+ split_page_memcg(page, 1 << order, 0);
}
EXPORT_SYMBOL_GPL(split_page);
--
2.35.1
Thread overview: 24+ messages
2022-03-21 14:21 [RFC PATCH 0/5] Split a huge page to any lower order pages Zi Yan
2022-03-21 14:21 ` Zi Yan [this message]
2022-03-21 18:57 ` [RFC PATCH 1/5] mm: memcg: make memcg huge page split support any order split Roman Gushchin
2022-03-21 19:07 ` Zi Yan
2022-03-21 19:54 ` Matthew Wilcox
2022-03-21 20:26 ` Zi Yan
2022-03-21 14:21 ` [RFC PATCH 2/5] mm: page_owner: add support for splitting to any order in split page_owner Zi Yan
2022-03-21 19:02 ` Roman Gushchin
2022-03-21 19:08 ` Zi Yan
2022-03-21 14:21 ` [RFC PATCH 3/5] mm: thp: split huge page to any lower order pages Zi Yan
2022-03-21 22:18 ` Roman Gushchin
2022-03-22 14:21 ` Zi Yan
2022-03-22 3:21 ` Miaohe Lin
2022-03-22 14:30 ` Zi Yan
2022-03-23 2:31 ` Miaohe Lin
2022-03-23 22:10 ` Zi Yan
2022-03-24 2:02 ` Miaohe Lin
2022-03-22 20:57 ` Yang Shi
2022-03-21 14:21 ` [RFC PATCH 4/5] mm: truncate: split huge page cache page to a non-zero order if possible Zi Yan
2022-03-21 22:32 ` Roman Gushchin
2022-03-22 14:19 ` Zi Yan
2022-03-23 6:40 ` [mm] 2757cee2d6: UBSAN:shift-out-of-bounds_in_include/linux/log2.h kernel test robot
2022-03-21 14:21 ` [RFC PATCH 5/5] mm: huge_memory: enable debugfs to split huge pages to any order Zi Yan
2022-03-21 22:23 ` Roman Gushchin