Date: Fri, 19 Jul 2024 19:16:47 +0100
From: Jonathan Cameron <jonathan.cameron@huawei.com>
To: Mike Rapoport
CC: linux-mm@kvack.org, Alexander Gordeev, Andreas Larsson, Andrew Morton,
 Arnd Bergmann, Borislav Petkov, Catalin Marinas, Christophe Leroy,
 Dan Williams, Dave Hansen, David Hildenbrand, David S. Miller,
 Greg Kroah-Hartman, Heiko Carstens, Huacai Chen, Ingo Molnar, Jiaxun Yang,
 John Paul Adrian Glaubitz, Michael Ellerman, Palmer Dabbelt,
 Rafael J. Wysocki, Rob Herring, Thomas Bogendoerfer, Thomas Gleixner,
 Vasily Gorbik, Will Deacon
Subject: Re: [PATCH 12/17] mm: introduce numa_memblks
Message-ID: <20240719191647.000072f6@Huawei.com>
In-Reply-To: <20240716111346.3676969-13-rppt@kernel.org>
References: <20240716111346.3676969-1-rppt@kernel.org>
 <20240716111346.3676969-13-rppt@kernel.org>
Organization: Huawei Technologies Research and Development (UK) Ltd.
On Tue, 16 Jul 2024 14:13:41 +0300
Mike Rapoport wrote:

> From: "Mike Rapoport (Microsoft)"
>
> Move code dealing with numa_memblks from arch/x86 to mm/ and add Kconfig
> options to let x86 select it in its Kconfig.
>
> This code will be later reused by arch_numa.
>
> No functional changes.
>
> Signed-off-by: Mike Rapoport (Microsoft)

Hi Mike,

My only real concern here is that there are a few places where the lifted
code makes changes to memblocks that are x86-only today.  I need to do some
more digging to work out whether those are safe in all cases.

Jonathan

> +/**
> + * numa_cleanup_meminfo - Cleanup a numa_meminfo
> + * @mi: numa_meminfo to clean up
> + *
> + * Sanitize @mi by merging and removing unnecessary memblks.  Also check for
> + * conflicts and clear unused memblks.
> + *
> + * RETURNS:
> + * 0 on success, -errno on failure.
> + */
> +int __init numa_cleanup_meminfo(struct numa_meminfo *mi)
> +{
> +	const u64 low = 0;

Given it is always zero, why not just use that value inline?
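(As an aside, the clamp-and-drop rule in the trim loop below is easy to
model in isolation.  A minimal userspace sketch, with a hypothetical
`range` type standing in for struct numa_memblk and no handling of the
reserved-range move -- not the kernel code itself:)

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for struct numa_memblk: a [start, end) range. */
struct range {
	uint64_t start;
	uint64_t end;
};

/*
 * Clamp a block to the [low, high) limits, as the trim loop does.
 * Returns 0 if the block is still non-empty, -1 if it became empty
 * and should be removed from the meminfo.
 */
static int trim_block(struct range *r, uint64_t low, uint64_t high)
{
	if (r->start < low)
		r->start = low;
	if (r->end > high)
		r->end = high;	/* the kernel preserves the tail separately */
	return r->start >= r->end ? -1 : 0;
}
```

So a block straddling `high` is truncated, and a block entirely above it
collapses to empty and gets dropped.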
> +	const u64 high = PFN_PHYS(max_pfn);
> +	int i, j, k;
> +
> +	/* first, trim all entries */
> +	for (i = 0; i < mi->nr_blks; i++) {
> +		struct numa_memblk *bi = &mi->blk[i];
> +
> +		/* move / save reserved memory ranges */
> +		if (!memblock_overlaps_region(&memblock.memory,
> +					      bi->start, bi->end - bi->start)) {
> +			numa_move_tail_memblk(&numa_reserved_meminfo, i--, mi);
> +			continue;
> +		}
> +
> +		/* make sure all non-reserved blocks are inside the limits */
> +		bi->start = max(bi->start, low);
> +
> +		/* preserve info for non-RAM areas above 'max_pfn': */
> +		if (bi->end > high) {
> +			numa_add_memblk_to(bi->nid, high, bi->end,
> +					   &numa_reserved_meminfo);
> +			bi->end = high;
> +		}
> +
> +		/* and there's no empty block */
> +		if (bi->start >= bi->end)
> +			numa_remove_memblk_from(i--, mi);
> +	}
> +
> +	/* merge neighboring / overlapping entries */
> +	for (i = 0; i < mi->nr_blks; i++) {
> +		struct numa_memblk *bi = &mi->blk[i];
> +
> +		for (j = i + 1; j < mi->nr_blks; j++) {
> +			struct numa_memblk *bj = &mi->blk[j];
> +			u64 start, end;
> +
> +			/*
> +			 * See whether there are overlapping blocks.  Whine
> +			 * about but allow overlaps of the same nid.  They
> +			 * will be merged below.
> +			 */
> +			if (bi->end > bj->start && bi->start < bj->end) {
> +				if (bi->nid != bj->nid) {
> +					pr_err("node %d [mem %#010Lx-%#010Lx] overlaps with node %d [mem %#010Lx-%#010Lx]\n",
> +					       bi->nid, bi->start, bi->end - 1,
> +					       bj->nid, bj->start, bj->end - 1);
> +					return -EINVAL;
> +				}
> +				pr_warn("Warning: node %d [mem %#010Lx-%#010Lx] overlaps with itself [mem %#010Lx-%#010Lx]\n",
> +					bi->nid, bi->start, bi->end - 1,
> +					bj->start, bj->end - 1);
> +			}
> +
> +			/*
> +			 * Join together blocks on the same node, holes
> +			 * between which don't overlap with memory on other
> +			 * nodes.
> +			 */
> +			if (bi->nid != bj->nid)
> +				continue;
> +			start = min(bi->start, bj->start);
> +			end = max(bi->end, bj->end);
> +			for (k = 0; k < mi->nr_blks; k++) {
> +				struct numa_memblk *bk = &mi->blk[k];
> +
> +				if (bi->nid == bk->nid)
> +					continue;
> +				if (start < bk->end && end > bk->start)
> +					break;
> +			}
> +			if (k < mi->nr_blks)
> +				continue;
> +			pr_info("NUMA: Node %d [mem %#010Lx-%#010Lx] + [mem %#010Lx-%#010Lx] -> [mem %#010Lx-%#010Lx]\n",
> +				bi->nid, bi->start, bi->end - 1, bj->start,
> +				bj->end - 1, start, end - 1);
> +			bi->start = start;
> +			bi->end = end;
> +			numa_remove_memblk_from(j--, mi);
> +		}
> +	}
> +
> +	/* clear unused ones */
> +	for (i = mi->nr_blks; i < ARRAY_SIZE(mi->blk); i++) {
> +		mi->blk[i].start = mi->blk[i].end = 0;
> +		mi->blk[i].nid = NUMA_NO_NODE;
> +	}
> +
> +	return 0;
> +}

...

> +/*
> + * Mark all currently memblock-reserved physical memory (which covers the
> + * kernel's own memory ranges) as hot-unswappable.
> + */
> +static void __init numa_clear_kernel_node_hotplug(void)

This will be a behaviour change for non-x86 architectures.  It 'should' be
fine, but I'm not 100% sure.

> +{
> +	nodemask_t reserved_nodemask = NODE_MASK_NONE;
> +	struct memblock_region *mb_region;
> +	int i;
> +
> +	/*
> +	 * We have to do some preprocessing of memblock regions, to
> +	 * make them suitable for reservation.
> +	 *
> +	 * At this time, all memory regions reserved by memblock are
> +	 * used by the kernel, but those regions are not split up
> +	 * along node boundaries yet, and don't necessarily have their
> +	 * node ID set yet either.
> +	 *
> +	 * So iterate over all memory known to the x86 architecture,

This comment needs an update at least, given the code is not x86-specific
any more.

> +	 * and use those ranges to set the nid in memblock.reserved.
> +	 * This will split up the memblock regions along node
> +	 * boundaries and will set the node IDs as well.
> +	 */
> +	for (i = 0; i < numa_meminfo.nr_blks; i++) {
> +		struct numa_memblk *mb = numa_meminfo.blk + i;
> +		int ret;
> +
> +		ret = memblock_set_node(mb->start, mb->end - mb->start,
> +					&memblock.reserved, mb->nid);
> +		WARN_ON_ONCE(ret);
> +	}
> +
> +	/*
> +	 * Now go over all reserved memblock regions, to construct a
> +	 * node mask of all kernel reserved memory areas.
> +	 *
> +	 * [ Note, when booting with mem=nn[kMG] or in a kdump kernel,
> +	 *   numa_meminfo might not include all memblock.reserved
> +	 *   memory ranges, because quirks such as trim_snb_memory()
> +	 *   reserve specific pages for Sandy Bridge graphics. ]
> +	 */
> +	for_each_reserved_mem_region(mb_region) {
> +		int nid = memblock_get_region_node(mb_region);
> +
> +		if (nid != MAX_NUMNODES)
> +			node_set(nid, reserved_nodemask);
> +	}
> +
> +	/*
> +	 * Finally, clear the MEMBLOCK_HOTPLUG flag for all memory
> +	 * belonging to the reserved node mask.
> +	 *
> +	 * Note that this will include memory regions that reside
> +	 * on nodes that contain kernel memory - entire nodes
> +	 * become hot-unpluggable:
> +	 */
> +	for (i = 0; i < numa_meminfo.nr_blks; i++) {
> +		struct numa_memblk *mb = numa_meminfo.blk + i;
> +
> +		if (!node_isset(mb->nid, reserved_nodemask))
> +			continue;
> +
> +		memblock_clear_hotplug(mb->start, mb->end - mb->start);
> +	}
> +}
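For reference, the two passes above boil down to "collect the nodes that
own any reserved (kernel) memory, then strip the hotplug flag from every
block on those nodes".  A minimal userspace sketch of that shape, using a
hypothetical `blk` type and a plain 64-bit node mask instead of
nodemask_t, and ignoring the memblock_set_node() preprocessing:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical miniature of the kernel structures: node ids < 64. */
struct blk {
	uint64_t start, end;
	int nid;
	int hotplug;		/* stands in for the MEMBLOCK_HOTPLUG flag */
};

/* Pass 1: build a mask of the nodes that own a reserved block. */
static uint64_t reserved_nodes(const struct blk *rsv, size_t n)
{
	uint64_t mask = 0;
	size_t i;

	for (i = 0; i < n; i++)
		mask |= 1ULL << rsv[i].nid;
	return mask;
}

/*
 * Pass 2: clear the hotplug flag on every block of those nodes --
 * whole nodes holding kernel memory become hot-unpluggable.
 */
static void clear_hotplug(struct blk *blks, size_t n, uint64_t mask)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (mask & (1ULL << blks[i].nid))
			blks[i].hotplug = 0;
}
```

Note the per-node granularity: one reserved page on a node is enough to
make every block on that node hot-unpluggable, which is why lifting this
out of arch/x86 changes behaviour for other architectures.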