From: "Huang, Ying" <ying.huang@intel.com>
To: Kefeng Wang
Cc: Andrew Morton, Mike Rapoport, David Hildenbrand, Oscar Salvador,
	"Rafael J. Wysocki", Pavel Machek, Len Brown, Luis Chamberlain,
	Kees Cook, Iurii Zaikin
Subject: Re: [PATCH 03/12] mm: page_alloc: move set_zone_contiguous() into mm_init.c
References: <20230508071200.123962-1-wangkefeng.wang@huawei.com>
	<20230508071200.123962-4-wangkefeng.wang@huawei.com>
Date: Mon, 08 May 2023 15:12:08 +0800
In-Reply-To: <20230508071200.123962-4-wangkefeng.wang@huawei.com> (Kefeng Wang's message of "Mon, 8 May 2023 15:11:51 +0800")
Message-ID: <87jzxj9u0n.fsf@yhuang6-desk2.ccr.corp.intel.com>

Kefeng Wang writes:

> set_zone_contiguous() is only used in mm init/hotplug, and
> clear_zone_contiguous() only used in hotplug, move them from
> page_alloc.c to the more appropriate file.
>
> Signed-off-by: Kefeng Wang
> ---
>  include/linux/memory_hotplug.h |  3 --
>  mm/internal.h                  |  7 +++
>  mm/mm_init.c                   | 74 +++++++++++++++++++++++++++++++
>  mm/page_alloc.c                | 79 ----------------------------------
>  4 files changed, 81 insertions(+), 82 deletions(-)
>
> diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
> index 9fcbf5706595..04bc286eed42 100644
> --- a/include/linux/memory_hotplug.h
> +++ b/include/linux/memory_hotplug.h
> @@ -326,9 +326,6 @@ static inline int remove_memory(u64 start, u64 size)
>  static inline void __remove_memory(u64 start, u64 size) {}
>  #endif /* CONFIG_MEMORY_HOTREMOVE */
>
> -extern void set_zone_contiguous(struct zone *zone);
> -extern void clear_zone_contiguous(struct zone *zone);
> -
>  #ifdef CONFIG_MEMORY_HOTPLUG
>  extern void __ref free_area_init_core_hotplug(struct pglist_data *pgdat);
>  extern int __add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags);
> diff --git a/mm/internal.h b/mm/internal.h
> index e28442c0858a..9482862b28cc 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -371,6 +371,13 @@ static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
>  	return __pageblock_pfn_to_page(start_pfn, end_pfn, zone);
>  }
>
> +void set_zone_contiguous(struct zone *zone);
> +
> +static inline void clear_zone_contiguous(struct zone *zone)
> +{
> +	zone->contiguous = false;
> +}
> +
>  extern int __isolate_free_page(struct page *page, unsigned int order);
>  extern void __putback_isolated_page(struct page *page, unsigned int order,
>  			int mt);
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 15201887f8e0..1f30b9e16577 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -2330,6 +2330,80 @@ void __init init_cma_reserved_pageblock(struct page *page)
>  }
>  #endif
>
> +/*
> + * Check that the whole (or subset of) a pageblock given by the interval of
> + * [start_pfn, end_pfn) is valid and within the same zone, before scanning it
> + * with the migration of free compaction scanner.
> + *
> + * Return struct page pointer of start_pfn, or NULL if checks were not passed.
> + *
> + * It's possible on some configurations to have a setup like node0 node1 node0
> + * i.e. it's possible that all pages within a zones range of pages do not
> + * belong to a single zone. We assume that a border between node0 and node1
> + * can occur within a single pageblock, but not a node0 node1 node0
> + * interleaving within a single pageblock. It is therefore sufficient to check
> + * the first and last page of a pageblock and avoid checking each individual
> + * page in a pageblock.
> + *
> + * Note: the function may return non-NULL struct page even for a page block
> + * which contains a memory hole (i.e. there is no physical memory for a subset
> + * of the pfn range). For example, if the pageblock order is MAX_ORDER, which
> + * will fall into 2 sub-sections, and the end pfn of the pageblock may be hole
> + * even though the start pfn is online and valid. This should be safe most of
> + * the time because struct pages are still initialized via init_unavailable_range()
> + * and pfn walkers shouldn't touch any physical memory range for which they do
> + * not recognize any specific metadata in struct pages.
> + */
> +struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
> +					unsigned long end_pfn, struct zone *zone)

__pageblock_pfn_to_page() is also called by the compaction code (e.g.,
isolate_freepages_range() -> pageblock_pfn_to_page() ->
__pageblock_pfn_to_page()).

So it is used not only during initialization and hotplug?
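For reference, a rough sketch of that run-time path (simplified and partly
from memory, not the exact mm/compaction.c code; scan_zone_pageblocks()
below is just a made-up stand-in for isolate_freepages_range() and the
other per-pageblock scan loops in compaction):

/* mm/internal.h wrapper quoted above: fast path once set_zone_contiguous()
 * has proved the zone has no holes, slow path otherwise.  The
 * zone->contiguous check is from memory; the tail call is visible in the
 * hunk context above. */
static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
				unsigned long end_pfn, struct zone *zone)
{
	if (zone->contiguous)
		return pfn_to_page(start_pfn);

	return __pageblock_pfn_to_page(start_pfn, end_pfn, zone);
}

/* Made-up stand-in for the compaction scanners: walk a pfn range one
 * pageblock at a time and skip blocks that are not fully valid, online
 * and within this zone. */
static void scan_zone_pageblocks(struct zone *zone, unsigned long pfn,
				 unsigned long end_pfn)
{
	unsigned long block_end_pfn = pageblock_end_pfn(pfn);

	for (; pfn < end_pfn; pfn = block_end_pfn,
			      block_end_pfn += pageblock_nr_pages) {
		struct page *page;

		block_end_pfn = min(block_end_pfn, end_pfn);
		page = pageblock_pfn_to_page(pfn, block_end_pfn, zone);
		if (!page)
			continue;	/* unusable pageblock, skip it */

		/* ... isolate free pages / migration candidates here ... */
	}
}

That is, zone->contiguous is consumed by compaction at run time, not only
during init/hotplug (the hotplug-side pattern itself is sketched at the
end of this mail).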
Best Regards,
Huang, Ying

> +{
> +	struct page *start_page;
> +	struct page *end_page;
> +
> +	/* end_pfn is one past the range we are checking */
> +	end_pfn--;
> +
> +	if (!pfn_valid(end_pfn))
> +		return NULL;
> +
> +	start_page = pfn_to_online_page(start_pfn);
> +	if (!start_page)
> +		return NULL;
> +
> +	if (page_zone(start_page) != zone)
> +		return NULL;
> +
> +	end_page = pfn_to_page(end_pfn);
> +
> +	/* This gives a shorter code than deriving page_zone(end_page) */
> +	if (page_zone_id(start_page) != page_zone_id(end_page))
> +		return NULL;
> +
> +	return start_page;
> +}
> +
> +void set_zone_contiguous(struct zone *zone)
> +{
> +	unsigned long block_start_pfn = zone->zone_start_pfn;
> +	unsigned long block_end_pfn;
> +
> +	block_end_pfn = pageblock_end_pfn(block_start_pfn);
> +	for (; block_start_pfn < zone_end_pfn(zone);
> +			block_start_pfn = block_end_pfn,
> +			block_end_pfn += pageblock_nr_pages) {
> +
> +		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
> +
> +		if (!__pageblock_pfn_to_page(block_start_pfn,
> +					     block_end_pfn, zone))
> +			return;
> +		cond_resched();
> +	}
> +
> +	/* We confirm that there is no hole */
> +	zone->contiguous = true;
> +}
> +
>  void __init page_alloc_init_late(void)
>  {
>  	struct zone *zone;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4f094ba7c8fb..fe7c1ee5becd 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1480,85 +1480,6 @@ void __free_pages_core(struct page *page, unsigned int order)
>  	__free_pages_ok(page, order, FPI_TO_TAIL);
>  }
>
> -/*
> - * Check that the whole (or subset of) a pageblock given by the interval of
> - * [start_pfn, end_pfn) is valid and within the same zone, before scanning it
> - * with the migration of free compaction scanner.
> - *
> - * Return struct page pointer of start_pfn, or NULL if checks were not passed.
> - *
> - * It's possible on some configurations to have a setup like node0 node1 node0
> - * i.e. it's possible that all pages within a zones range of pages do not
> - * belong to a single zone. We assume that a border between node0 and node1
> - * can occur within a single pageblock, but not a node0 node1 node0
> - * interleaving within a single pageblock. It is therefore sufficient to check
> - * the first and last page of a pageblock and avoid checking each individual
> - * page in a pageblock.
> - *
> - * Note: the function may return non-NULL struct page even for a page block
> - * which contains a memory hole (i.e. there is no physical memory for a subset
> - * of the pfn range). For example, if the pageblock order is MAX_ORDER, which
> - * will fall into 2 sub-sections, and the end pfn of the pageblock may be hole
> - * even though the start pfn is online and valid. This should be safe most of
> - * the time because struct pages are still initialized via init_unavailable_range()
> - * and pfn walkers shouldn't touch any physical memory range for which they do
> - * not recognize any specific metadata in struct pages.
> - */
> -struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
> -					unsigned long end_pfn, struct zone *zone)
> -{
> -	struct page *start_page;
> -	struct page *end_page;
> -
> -	/* end_pfn is one past the range we are checking */
> -	end_pfn--;
> -
> -	if (!pfn_valid(end_pfn))
> -		return NULL;
> -
> -	start_page = pfn_to_online_page(start_pfn);
> -	if (!start_page)
> -		return NULL;
> -
> -	if (page_zone(start_page) != zone)
> -		return NULL;
> -
> -	end_page = pfn_to_page(end_pfn);
> -
> -	/* This gives a shorter code than deriving page_zone(end_page) */
> -	if (page_zone_id(start_page) != page_zone_id(end_page))
> -		return NULL;
> -
> -	return start_page;
> -}
> -
> -void set_zone_contiguous(struct zone *zone)
> -{
> -	unsigned long block_start_pfn = zone->zone_start_pfn;
> -	unsigned long block_end_pfn;
> -
> -	block_end_pfn = pageblock_end_pfn(block_start_pfn);
> -	for (; block_start_pfn < zone_end_pfn(zone);
> -			block_start_pfn = block_end_pfn,
> -			block_end_pfn += pageblock_nr_pages) {
> -
> -		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
> -
> -		if (!__pageblock_pfn_to_page(block_start_pfn,
> -					     block_end_pfn, zone))
> -			return;
> -		cond_resched();
> -	}
> -
> -	/* We confirm that there is no hole */
> -	zone->contiguous = true;
> -}
> -
> -void clear_zone_contiguous(struct zone *zone)
> -{
> -	zone->contiguous = false;
> -}
> -
>  /*
>   * The order of subdivision here is critical for the IO subsystem.
>   * Please do not alter this order without good reasons and regression
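For completeness, the init/hotplug usage that the changelog refers to
follows the pattern below -- a sketch from memory of mm/memory_hotplug.c,
with hotplug_resize_zone() as a made-up name standing in for
move_pfn_range_to_zone()/remove_pfn_range_from_zone():

/* Sketch only: hotplug drops the "contiguous" hint while a zone's pfn
 * range is being changed and re-derives it afterwards (details elided). */
static void hotplug_resize_zone(struct zone *zone, unsigned long start_pfn,
				unsigned long nr_pages)
{
	clear_zone_contiguous(zone);

	/* ... resize zone/node spans and initialize the new memmap ... */

	set_zone_contiguous(zone);
}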