From: Ralph Campbell
Date: Thu, 12 Nov 2020 09:58:31 -0800
Subject: Re: [RFC PATCH 2/6] mm: memcg: make memcg huge page split support any order split.
To: Zi Yan, Matthew Wilcox
CC: Kirill A. Shutemov, Roman Gushchin, Andrew Morton, Yang Shi, Michal Hocko, John Hubbard, David Nellans
Message-ID: <021b000f-dfc9-59fc-77e4-fdeaee1c108e@nvidia.com>
In-Reply-To: <20201111204008.21332-3-zi.yan@sent.com>
References: <20201111204008.21332-1-zi.yan@sent.com> <20201111204008.21332-3-zi.yan@sent.com>
On 11/11/20 12:40 PM, Zi Yan wrote:
> From: Zi Yan
>
> It reads thp_nr_pages and splits to provided new_nr. It prepares for
> upcoming changes to support split huge page to any lower order.
>
> Signed-off-by: Zi Yan

Looks OK to me.
Reviewed-by: Ralph Campbell

> ---
>  include/linux/memcontrol.h | 5 +++--
>  mm/huge_memory.c           | 2 +-
>  mm/memcontrol.c            | 4 ++--
>  3 files changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 0f4dd7829fb2..b3bac79ceed6 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1105,7 +1105,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
>  }
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -void mem_cgroup_split_huge_fixup(struct page *head);
> +void mem_cgroup_split_huge_fixup(struct page *head, unsigned int new_nr);
>  #endif
>
>  #else /* CONFIG_MEMCG */
> @@ -1451,7 +1451,8 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
>  	return 0;
>  }
>
> -static inline void mem_cgroup_split_huge_fixup(struct page *head)
> +static inline void mem_cgroup_split_huge_fixup(struct page *head,
> +					       unsigned int new_nr)
>  {
>  }
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index c4fead5ead31..f599f5b9bf7f 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2429,7 +2429,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>  	lruvec = mem_cgroup_page_lruvec(head, pgdat);
>
>  	/* complete memcg works before add pages to LRU */
> -	mem_cgroup_split_huge_fixup(head);
> +	mem_cgroup_split_huge_fixup(head, 1);
>
>  	if (PageAnon(head) && PageSwapCache(head)) {
>  		swp_entry_t entry = { .val = page_private(head) };
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 33f632689cee..e9705ba6bbcc 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3247,7 +3247,7 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
>   * Because tail pages are not marked as "used", set it. We're under
>   * pgdat->lru_lock and migration entries setup in all page mappings.
>   */
> -void mem_cgroup_split_huge_fixup(struct page *head)
> +void mem_cgroup_split_huge_fixup(struct page *head, unsigned int new_nr)
>  {
>  	struct mem_cgroup *memcg = page_memcg(head);
>  	int i;
> @@ -3255,7 +3255,7 @@ void mem_cgroup_split_huge_fixup(struct page *head)
>  	if (mem_cgroup_disabled())
>  		return;
>
> -	for (i = 1; i < thp_nr_pages(head); i++) {
> +	for (i = new_nr; i < thp_nr_pages(head); i += new_nr) {
>  		css_get(&memcg->css);
>  		head[i].memcg_data = (unsigned long)memcg;
>  	}
>