Message-ID: <81d1cd03-f3dc-4549-b5b1-2dc4e4614ffe@linux.alibaba.com>
Date: Mon, 1 Apr 2024 17:43:40 +0800
Subject: Re: [PATCH v2 2/2] mm: support multi-size THP numa balancing
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: "Huang, Ying"
Cc: akpm@linux-foundation.org, david@redhat.com, mgorman@techsingularity.net,
 wangkefeng.wang@huawei.com, jhubbard@nvidia.com, 21cnbao@gmail.com,
 ryan.roberts@arm.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <87sf05kd8j.fsf@yhuang6-desk2.ccr.corp.intel.com>
References: <87sf05kd8j.fsf@yhuang6-desk2.ccr.corp.intel.com>
On 2024/4/1 10:50, Huang, Ying wrote:
> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
>
>> Now the anonymous page allocation already supports multi-size THP (mTHP),
>> but NUMA balancing still prohibits mTHP migration even when the folio is
>> exclusively mapped, which is unreasonable.
>>
>> Allow scanning mTHP:
>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
>> pages") skips NUMA migration of shared CoW pages to avoid migrating shared
>> data segments. In addition, commit 80d47f5de5e3 ("mm: don't try to
>> NUMA-migrate COW pages that have other uses") changed to using page_count()
>> to avoid migrating GUP pages, which also skips mTHP NUMA scanning.
>> Theoretically, we could use folio_maybe_dma_pinned() to detect GUP pages;
>> although a GUP race would remain, that issue appears to have been resolved
>> by commit 80d47f5de5e3. Meanwhile, use folio_likely_mapped_shared() to
>> skip shared CoW pages, even though it is not a precise sharer count. To
>> check whether a folio is shared, ideally we would verify that every page
>> is mapped by the same process, but doing so seems expensive, and the
>> estimated mapcount works well when running the autonuma benchmark.
>>
>> Allow migrating mTHP:
>> As mentioned in the previous thread[1], large folios (including THP) are
>> more susceptible to false sharing among threads than 4K base pages,
>> leading to pages ping-ponging back and forth during NUMA balancing, which
>> is currently not easy to resolve.
>> Therefore, as a start to support mTHP NUMA balancing, we can follow the
>> PMD-mapped THP strategy; that is, we can reuse the 2-stage filter in
>> should_numa_migrate_memory() to check whether the mTHP is being heavily
>> contended among threads (by checking the CPU id and pid of the last
>> access) to avoid false sharing to some degree. Thus, we restore all PTE
>> maps upon the first hint page fault of a large folio, following the
>> PMD-mapped THP strategy. In the future, we can continue to optimize the
>> NUMA balancing algorithm to avoid false sharing with large folios as
>> much as possible.
>>
>> Performance data:
>> Machine environment: 2 nodes, 128 cores Intel(R) Xeon(R) Platinum
>> Base: 2024-03-25 mm-unstable branch
>> Enable mTHP to run autonuma-benchmark
>>
>> mTHP:16K
>> Base                 Patched
>> numa01               numa01
>> 224.70               143.48
>> numa01_THREAD_ALLOC  numa01_THREAD_ALLOC
>> 118.05               47.43
>> numa02               numa02
>> 13.45                9.29
>> numa02_SMT           numa02_SMT
>> 14.80                7.50
>>
>> mTHP:64K
>> Base                 Patched
>> numa01               numa01
>> 216.15               114.40
>> numa01_THREAD_ALLOC  numa01_THREAD_ALLOC
>> 115.35               47.41
>> numa02               numa02
>> 13.24                9.25
>> numa02_SMT           numa02_SMT
>> 14.67                7.34
>>
>> mTHP:128K
>> Base                 Patched
>> numa01               numa01
>> 205.13               144.45
>> numa01_THREAD_ALLOC  numa01_THREAD_ALLOC
>> 112.93               41.88
>> numa02               numa02
>> 13.16                9.18
>> numa02_SMT           numa02_SMT
>> 14.81                7.49
>>
>> [1] https://lore.kernel.org/all/20231117100745.fnpijbk4xgmals3k@techsingularity.net/
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>>  mm/memory.c   | 57 +++++++++++++++++++++++++++++++++++++++++++--------
>>  mm/mprotect.c |  3 ++-
>>  2 files changed, 51 insertions(+), 9 deletions(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index c30fb4b95e15..2aca19e4fbd8 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -5068,16 +5068,56 @@ static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_str
>>  	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
>>  }
>>
>> +static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
>> +		struct folio *folio, pte_t fault_pte, bool ignore_writable)
>> +{
>> +	int nr = pte_pfn(fault_pte) - folio_pfn(folio);
>> +	unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
>> +	unsigned long end = min(vmf->address + (folio_nr_pages(folio) - nr) * PAGE_SIZE, vma->vm_end);
>> +	pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
>> +	bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);

> We call vma_wants_manual_pte_write_upgrade() in do_numa_page() already.
> It seems that we can make "ignore_writable = true" if
> "vma_wants_manual_pte_write_upgrade() == false" in do_numa_page() to
> remove one call.

Following the original logic, we should also call pte_mkwrite() for the
new mapping when pte_write() is true while
vma_wants_manual_pte_write_upgrade() is false. But I can add a new boolean
parameter to numa_rebuild_large_mapping() to eliminate the duplicate call.

> Otherwise, the patchset LGTM, feel free to add
>
> Reviewed-by: "Huang, Ying"
>
> in the future versions.

Thanks for your valuable input!