Date: Thu, 29 Dec 2022 20:36:13 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Kefeng Wang
Cc: Andrew Morton, SeongJae Park, damon@lists.linux.dev, linux-mm@kvack.org,
	vishal.moola@gmail.com, david@redhat.com
Subject: Re: [PATCH -next v3 4/7] mm/damon/paddr: convert damon_pa_*() to use folios
References: <20221228113413.10329-1-wangkefeng.wang@huawei.com>
	<20221228113413.10329-5-wangkefeng.wang@huawei.com>
In-Reply-To: <20221228113413.10329-5-wangkefeng.wang@huawei.com>

On Wed, Dec 28, 2022 at 07:34:10PM +0800, Kefeng Wang wrote:
> -	memcg = page_memcg_check(page);
> +	memcg = page_memcg_check(folio_page(folio, 0));

I doubly don't like this.  First, it should have been &folio->page.
Second, we should have a folio_memcg_check().  The only reason we don't
is that I hadn't needed one before now.  Try adding this patch on first.
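For illustration, a minimal sketch (not part of the series or of the patch
below) of what the quoted DAMON call site could look like once
folio_memcg_check() exists; the helper name and the comparison against a
target memcg are assumptions made up for this example:

	#include <linux/memcontrol.h>
	#include <linux/mm.h>

	/* Hypothetical stand-in for the call site quoted above. */
	static bool damon_folio_owned_by(struct folio *folio,
					 struct mem_cgroup *target)
	{
		struct mem_cgroup *memcg;

		/*
		 * Rather than page_memcg_check(folio_page(folio, 0)) or
		 * page_memcg_check(&folio->page), use the folio API directly.
		 */
		memcg = folio_memcg_check(folio);

		return memcg == target;
	}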
--- 8< ---

From 5fca3ae2278b72d96d99fad5c433cd429a11989d Mon Sep 17 00:00:00 2001
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Date: Thu, 29 Dec 2022 12:59:41 -0500
Subject: [PATCH] memcg: Add folio_memcg_check()

Convert page_memcg_check() into folio_memcg_check() and add a
page_memcg_check() wrapper.  The behaviour of page_memcg_check() is
unchanged; tail pages always had a NULL ->memcg_data.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/memcontrol.h | 40 ++++++++++++++++++++++++--------------
 mm/memcontrol.c            |  6 +++---
 2 files changed, 29 insertions(+), 17 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index d3c8203cab6c..a2ebb4e2da63 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -466,34 +466,34 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 }
 
 /*
- * page_memcg_check - get the memory cgroup associated with a page
- * @page: a pointer to the page struct
+ * folio_memcg_check - Get the memory cgroup associated with a folio.
+ * @folio: Pointer to the folio.
  *
- * Returns a pointer to the memory cgroup associated with the page,
- * or NULL. This function unlike page_memcg() can take any page
- * as an argument. It has to be used in cases when it's not known if a page
+ * Returns a pointer to the memory cgroup associated with the folio,
+ * or NULL. This function unlike folio_memcg() can take any folio
+ * as an argument. It has to be used in cases when it's not known if a folio
  * has an associated memory cgroup pointer or an object cgroups vector or
  * an object cgroup.
  *
- * For a non-kmem page any of the following ensures page and memcg binding
+ * For a non-kmem folio any of the following ensures folio and memcg binding
  * stability:
  *
- * - the page lock
+ * - the folio lock
  * - LRU isolation
- * - lock_page_memcg()
+ * - lock_folio_memcg()
  * - exclusive reference
  * - mem_cgroup_trylock_pages()
  *
- * For a kmem page a caller should hold an rcu read lock to protect memcg
- * associated with a kmem page from being released.
+ * For a kmem folio a caller should hold an rcu read lock to protect memcg
+ * associated with a kmem folio from being released.
  */
-static inline struct mem_cgroup *page_memcg_check(struct page *page)
+static inline struct mem_cgroup *folio_memcg_check(struct folio *folio)
 {
 	/*
-	 * Because page->memcg_data might be changed asynchronously
-	 * for slab pages, READ_ONCE() should be used here.
+	 * Because folio->memcg_data might be changed asynchronously
+	 * for slabs, READ_ONCE() should be used here.
 	 */
-	unsigned long memcg_data = READ_ONCE(page->memcg_data);
+	unsigned long memcg_data = READ_ONCE(folio->memcg_data);
 
 	if (memcg_data & MEMCG_DATA_OBJCGS)
 		return NULL;
@@ -508,6 +508,13 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
 	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 }
 
+static inline struct mem_cgroup *page_memcg_check(struct page *page)
+{
+	if (PageTail(page))
+		return NULL;
+	return folio_memcg_check((struct folio *)page);
+}
+
 static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
 {
 	struct mem_cgroup *memcg;
@@ -1165,6 +1172,11 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 	return NULL;
 }
 
+static inline struct mem_cgroup *folio_memcg_check(struct folio *folio)
+{
+	return NULL;
+}
+
 static inline struct mem_cgroup *page_memcg_check(struct page *page)
 {
 	return NULL;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 92f319ef6c99..259bc0a48d16 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2939,13 +2939,13 @@ struct mem_cgroup *mem_cgroup_from_obj_folio(struct folio *folio, void *p)
 	}
 
 	/*
-	 * page_memcg_check() is used here, because in theory we can encounter
+	 * folio_memcg_check() is used here, because in theory we can encounter
 	 * a folio where the slab flag has been cleared already, but
 	 * slab->memcg_data has not been freed yet
-	 * page_memcg_check(page) will guarantee that a proper memory
+	 * folio_memcg_check() will guarantee that a proper memory
 	 * cgroup pointer or NULL will be returned.
 	 */
-	return page_memcg_check(folio_page(folio, 0));
+	return folio_memcg_check(folio);
 }
 
 /*
-- 
2.35.1
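To make the commit message's "behaviour unchanged" claim concrete, here is a
sketch of the equivalences a caller could sanity-check after applying the
patch above; the function below is illustrative only and not part of the
patch:

	#include <linux/memcontrol.h>
	#include <linux/mm.h>

	/* Illustrative only: not part of the patch above. */
	static void check_memcg_lookup_equivalence(struct folio *folio)
	{
		/* The three spellings agree for a head page ... */
		WARN_ON(folio_memcg_check(folio) !=
			page_memcg_check(&folio->page));
		WARN_ON(folio_memcg_check(folio) !=
			page_memcg_check(folio_page(folio, 0)));

		/*
		 * ... and a tail page of a large folio still yields NULL,
		 * as it did before, since tail pages never carry their own
		 * ->memcg_data.
		 */
		if (folio_test_large(folio))
			WARN_ON(page_memcg_check(folio_page(folio, 1)));
	}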