Subject: Re: [RFC PATCH v2] mm: support multi-size THP numa balancing
Date: Thu, 21 Mar 2024 15:12:43 +0800
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: "Huang, Ying"
Cc: akpm@linux-foundation.org, david@redhat.com, mgorman@techsingularity.net,
 wangkefeng.wang@huawei.com, jhubbard@nvidia.com, 21cnbao@gmail.com,
 ryan.roberts@arm.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <903bf13fc3e68b8dc1f256570d78b55b2dd9c96f.1710493587.git.baolin.wang@linux.alibaba.com>
 <871q88vzc4.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87sf0mvg1c.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <87sf0mvg1c.fsf@yhuang6-desk2.ccr.corp.intel.com>

(Sorry for the late reply.)

On 2024/3/19 15:26, Huang, Ying wrote:
> Baolin Wang writes:
>
>> On 2024/3/18 14:16, Huang, Ying wrote:
>>> Baolin Wang writes:
>>>
>>>> Anonymous page allocation already supports multi-size THP (mTHP),
>>>> but numa balancing still prohibits mTHP migration even when the
>>>> mapping is exclusive, which is unreasonable. Thus let's support
>>>> exclusive mTHP numa balancing first.
>>>>
>>>> Allow scanning mTHP:
>>>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data
>>>> section pages") skips NUMA migration of shared CoW pages to avoid
>>>> migrating shared data segments. In addition, commit 80d47f5de5e3
>>>> ("mm: don't try to NUMA-migrate COW pages that have other uses")
>>>> changed to use page_count() to avoid migrating GUP pages, which will
>>>> also skip mTHP numa scanning.
>>>> Theoretically, we can use folio_maybe_dma_pinned() to detect GUP
>>>> pages; although there is still a GUP race window, that issue seems
>>>> to have been resolved by commit 80d47f5de5e3. Meanwhile, use
>>>> folio_estimated_sharers() to skip shared CoW pages, though this is
>>>> not a precise sharer count. To check whether a folio is shared,
>>>> ideally we want to make sure every page is mapped by the same
>>>> process, but doing that seems expensive, and the estimated mapcount
>>>> seems to work when running the autonuma benchmark.
>>>>
>>>> Allow migrating mTHP:
>>>> As mentioned in the previous thread[1], large folios are more
>>>> susceptible to false-sharing issues, leading to pages ping-ponging
>>>> back and forth during numa balancing, which is currently hard to
>>>> resolve. Therefore, as a start for mTHP numa balancing, only
>>>> exclusive mappings are allowed to perform numa migration, to avoid
>>>> the false-sharing issues with large folios. Similarly, use the
>>>> estimated mapcount to skip shared mappings, which seems to work in
>>>> most cases (?), and we have already used folio_estimated_sharers()
>>>> to skip shared mappings in migrate_misplaced_folio() for numa
>>>> balancing, with no real complaints so far.
>>>
>>> IIUC, folio_estimated_sharers() cannot identify multi-thread
>>> applications. If some mTHP is shared by multiple threads in one
>>
>> Right.
>>
>>> process, how to deal with that?
>>
>> IMHO, it seems should_numa_migrate_memory() already does something to
>> help?
>>
>> ......
>> 	if (!cpupid_pid_unset(last_cpupid) &&
>> 	    cpupid_to_nid(last_cpupid) != dst_nid)
>> 		return false;
>>
>> 	/* Always allow migrate on private faults */
>> 	if (cpupid_match_pid(p, last_cpupid))
>> 		return true;
>> ......
>>
>> If the node of the CPU that last accessed the mTHP differs from the
>> current node, that means there is some contention for the mTHP among
>> threads, so migration will not be allowed.
>
> Yes. The two-stage filter in should_numa_migrate_memory() helps to
> some degree.
>
> But the situation is somewhat different after your change. Previously,
> in one round of NUMA balancing page table scanning, the number of hint
> page faults for one process and one folio was 1. After your change,
> the number may become folio_nr_pages(). So we need to evaluate the

Yes, this follows the same strategy as THP.

> original algorithm in the new situation and revise it. For example,
> use an N-stage filter for mTHP.

Yes, let me try whether an N-stage filter for mTHP can help.

>
> Anyway, the NUMA balancing algorithm adjustment needs to be based on
> test results.
>
> Another possibility is to emulate the original behavior as much as
> possible, to try to reuse the original algorithm. For example, we can
> restore all PTE maps upon the first hint page fault of a folio. Then
> the behavior is almost the same as the original PMD-mapped THP.
> Personally, I prefer to use this as the first step. Then, try to
> adjust the algorithm to take advantage of more information available.

OK, sounds reasonable, I will try.

>>
>> If contention for the mTHP among threads is light or the access
>> pattern is relatively stable, then we can allow migration?
>>
>>> For example, I think that we should avoid migrating on the first
>>> fault for mTHP in should_numa_migrate_memory().
>
> I am referring to the following code in should_numa_migrate_memory().
>
> 	/*
> 	 * Allow first faults or private faults to migrate immediately early in
> 	 * the lifetime of a task. The magic number 4 is based on waiting for
> 	 * two full passes of the "multi-stage node selection" test that is
> 	 * executed below.
> 	 */
> 	if ((p->numa_preferred_nid == NUMA_NO_NODE || p->numa_scan_seq <= 4) &&
> 	    (cpupid_pid_unset(last_cpupid) || cpupid_match_pid(p, last_cpupid)))
> 		return true;
>
> But, after thinking about this again, I realized that the original
> PMD-mapped THP may be migrated on the first fault sometimes. So this
> isn't a new problem. We may "optimize" it, but it needn't be part of
> this series.

Makes sense. Thanks for your input.
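
To make the exclusivity check discussed above concrete, here is a
minimal sketch; the helper name and its exact placement in the hint
fault path are hypothetical, not the actual patch:

	#include <linux/mm.h>

	/*
	 * Sketch only: skip NUMA migration for folios that are GUP-pinned
	 * or appear shared. folio_estimated_sharers() samples one page's
	 * mapcount, so it is an estimate, not a precise sharer count.
	 */
	static bool mthp_numa_migration_allowed(struct folio *folio)
	{
		/* Theoretically detect GUP pages, as discussed above. */
		if (folio_maybe_dma_pinned(folio))
			return false;

		/* Only exclusive mappings may migrate, to avoid false sharing. */
		if (folio_estimated_sharers(folio) > 1)
			return false;

		return true;
	}

The two-stage cpupid filter in should_numa_migrate_memory() would still
apply on top of such a check, which is what handles the multi-thread
contention case discussed above.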