Date: Fri, 2 Feb 2024 09:17:42 +0100
From: Michal Hocko <mhocko@suse.com>
To: Baolin Wang
Cc: akpm@linux-foundation.org, muchun.song@linux.dev, osalvador@suse.de,
	david@redhat.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH] mm: hugetlb: remove __GFP_THISNODE flag when
	dissolving the old hugetlb
Message-ID:
References: <6f26ce22d2fcd523418a085f2c588fe0776d46e7.1706794035.git.baolin.wang@linux.alibaba.com>
	<3f31cd89-f349-4f9e-bc29-35f29f489633@linux.alibaba.com>
In-Reply-To: <3f31cd89-f349-4f9e-bc29-35f29f489633@linux.alibaba.com>

On Fri 02-02-24 09:35:58, Baolin Wang wrote:
> 
> 
> On 2/1/2024 11:27 PM, Michal Hocko wrote:
> > On Thu 01-02-24 21:31:13, Baolin Wang wrote:
> > > Since commit 369fa227c219 ("mm: make alloc_contig_range handle free
> > > hugetlb pages"), alloc_contig_range() can handle free hugetlb pages by
> > > allocating a fresh hugetlb page and replacing the old one in the free
> > > hugepage pool.
> > > 
> > > However, our customers can still see alloc_contig_range() fail when it
> > > encounters a free hugetlb page. The reason is that there is little
> > > memory left on the old hugetlb page's node, so a fresh hugetlb page
> > > cannot be allocated on that node in isolate_or_dissolve_huge_page(),
> > > which sets the __GFP_THISNODE flag. This makes sense to some degree.
> > > 
> > > Later, commit ae37c7ff79f1 ("mm: make alloc_contig_range handle
> > > in-use hugetlb pages") handled in-use hugetlb pages by isolating them
> > > and migrating them in __alloc_contig_migrate_range(), but that path
> > > allows falling back to other NUMA nodes when allocating a new hugetlb
> > > page in alloc_migration_target().
> > > 
> > > This introduces an inconsistency between the handling of free and
> > > in-use hugetlb pages. Considering that CMA allocation and memory
> > > hotplug, which rely on alloc_contig_range(), are important in some
> > > scenarios, and to keep hugetlb handling consistent, we should remove
> > > the __GFP_THISNODE flag in isolate_or_dissolve_huge_page() to allow
> > > falling back to other NUMA nodes, which solves the alloc_contig_range()
> > > failure in our case.
> > 
> > I do agree that the inconsistency is not really good, but I am not sure
> > dropping __GFP_THISNODE is the right way forward. Breaking pre-allocated
> > per-node pools might result in unexpected failures when node-bound
> > workloads don't get what is assumed to be available. Keep in mind that
> > our user APIs allow pre-allocating per-node pools separately.
> 
> Yes, I agree, that is also what I was concerned about. But sometimes users
> don't care about the distribution of per-node hugetlb pages; instead they
> are more concerned about the success of CMA allocation or memory hotplug.

Yes, sometimes the exact per-node distribution is not really important. But
the kernel has no way of knowing that right now, and we have to make a
conservative guess here.

> > The in-use hugetlb case is very similar. While a temporarily misplaced
> > page doesn't really look terrible, once that hugetlb page is released
> > back into the pool we are back to the case above. Either we make sure
> > that the node affinity is restored later on, or it shouldn't be migrated
> > to a different node at all.
> 
> Agree. So how about the following changes?
> (1) disallow falling back to other nodes when handling in-use hugetlb
> pages, which ensures consistent behavior in handling hugetlb.

I can see two cases here: alloc_contig_range(), which is an internal kernel
user, and memory offlining. The former shouldn't break the per-node hugetlb
pool reservations; the latter might not have any other choice (the whole
node could go offline, which resembles breaking CPU affinity when the CPU
is gone).

Now, I can see how a hugetlb page sitting inside a CMA region breaks CMA
users' expectations, but hugetlb migration already tries hard to allocate a
replacement hugetlb page, so the system must be under heavy memory pressure
if that fails, right? Is it possible that the hugetlb reservation is just
overshot here? Maybe the memory is just terribly fragmented? Could you be
more specific about the numbers in your failure case?
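For reference, the asymmetry discussed above comes down to the gfp mask
used on the two paths. Below is a simplified sketch of the relevant lines
(paraphrased from mm/hugetlb.c and mm/migrate.c around the kernels under
discussion; function bodies are elided and helper names/signatures may
differ slightly between versions):

	/*
	 * Free hugetlb page in the range: isolate_or_dissolve_huge_page()
	 * replaces it via alloc_and_dissolve_hugetlb_folio(), which pins
	 * the replacement allocation to the old page's node. The RFC
	 * proposes dropping __GFP_THISNODE from this mask.
	 */
	static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
			struct folio *old_folio, struct list_head *list)
	{
		gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
		int nid = folio_nid(old_folio);
		/*
		 * ... allocate a fresh hugetlb folio on nid only and swap
		 * it into the free pool in place of old_folio ...
		 */
	}

	/*
	 * In-use hugetlb page in the range: __alloc_contig_migrate_range()
	 * migrates it through alloc_migration_target(), which passes a
	 * nodemask and may therefore fall back to other nodes.
	 */
	struct folio *alloc_migration_target(struct folio *src,
			unsigned long private)
	{
		/* ... mtc, nid and gfp_mask come from the elided setup ... */
		if (folio_test_hugetlb(src))
			return alloc_hugetlb_folio_nodemask(folio_hstate(src),
					nid, mtc->nmask, gfp_mask);
		/* ... */
	}
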
> (2) introduce a new sysctl (perhaps named "hugetlb_allow_fallback_nodes")
> to let users control whether fallback is allowed, which can solve the CMA
> or memory hotplug failures that users are more concerned about.

I do not think this is a good idea. The policy might be different on each
node and this would get messy pretty quickly. If anything, we could try to
detect a dedicated per-node pool allocation instead. It is quite likely
that if the admin preallocates the pool without any memory policy, then the
exact distribution of pages doesn't play a huge role.

-- 
Michal Hocko
SUSE Labs