d="scan'208";a="24077163" Received: from orviesa008.jf.intel.com ([10.64.159.148]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 26 Mar 2024 19:06:49 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.07,157,1708416000"; d="scan'208";a="16797573" Received: from yhuang6-desk2.sh.intel.com (HELO yhuang6-desk2.ccr.corp.intel.com) ([10.238.208.55]) by orviesa008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 26 Mar 2024 19:06:46 -0700 From: "Huang, Ying" To: Baolin Wang Cc: akpm@linux-foundation.org, , , , , <21cnbao@gmail.com>, , , Subject: Re: [PATCH 2/2] mm: support multi-size THP numa balancing In-Reply-To: (Baolin Wang's message of "Tue, 26 Mar 2024 19:51:25 +0800") References: Date: Wed, 27 Mar 2024 10:04:52 +0800 Message-ID: <87cyrgo2ez.fsf@yhuang6-desk2.ccr.corp.intel.com> User-Agent: Gnus/5.13 (Gnus v5.13) MIME-Version: 1.0 Content-Type: text/plain; charset=ascii X-Rspamd-Queue-Id: 5A7671A0003 X-Rspam-User: X-Rspamd-Server: rspam02 X-Stat-Signature: aut15xekn6g3uyo9ns5jubo6tocantg1 X-HE-Tag: 1711505211-631675 X-HE-Meta: U2FsdGVkX1+UkQ45jfPa/lPv/kXRYmbmQcsVMtnwvOFClLODEPUVERQqrwtpJgOk2cVCOMcwhdKcVvjfeaomB9z4YxLPye6/zVkGtOOQR3TkEjKGdjdKuAPJqGxPCySnH9MQTEnWc7VZTdCcNy6ac/FPlAp+6DeEiKwq93E0wKN3YONe/j9L9/4wqo/LDaywfbVSG8jJSHMI19bkG04AyMUqcqfHcTthYLfiJGbYGISO05VmXJnX+5/5m+mMHeGYiQuaQEPcsXX6++n0oLIWrt22Y/+Rvt4qpK6nxjOA9XbXpP33/K2Df0iKZ14bG/nT8rqeEiCSMFG6U+6793RLL/tOefjWrhMtKmWq5L9u5HYLAfxk+RGChkB1+AvMK4yFeXqrNm2+9B4azgf6fP/TborcSZFlZOt0oZFi0FmFu46pKBy25bUCFX72FiS2SGfPUyDKcxCY+YIvW24gOT52E2m/b09kKJyrubQHR5PHJcp224W81x/M1U/Q4Vehm5T+HSEx/KdBHX+EhSciSYjYP0Sw6JyHveGj0Un1G1RCOV790Yy3SKYXAeSpDY3taIvrg9QXKNME/kGxyqvRLUCjOWMYuAYPlh8qxQcM7eKxZ9t06ggvaYGtwV6oezPRrHlWE52AwgzgP5+SGN8yHQwXNipWv4g8Tcg8OCqRvwooYFT54UbzEhCITtplYC32YpEaMoYVri0s3rvNhPT+FJZ7Xu47d3eBRErWUVincrjdb4drgUvtRTXwaNPV+baUAW1JN2B45owPM4tGFeVoU9iKR1N6Rf89u09ua+NjimAwM6a2HLFBJKg5KnpaxpPedavnwSdQlwMWs70jI8W2QNKENCMXRXTgsvdScfpxftr06mXAuhbSx3AOCD6kMTf3hohC68DChW7mszH/17Yxe08aRS9xBMUVtvVqzxsFq11l5fiC/9j45AULFeh2oK+Yu/esmjUhhLa9K2p01Axg49r r7/1REhu OuApekUxBdpmaJe+KAq5TG5EO6EHxfyqKIDFlLJe+nqKWVjsWuYHMACq6sLwN3fw/noXVJQgKhNoE6oqhwG0o9WWdgDtvP6mMDjvTsPp67qerS7f3kA+Ea9+ArGFn6ISwJDC8nIQSbINcSDEwcf3EAgzYiwgtiuIdhpuz+e0AKb0jY3hDA8nfPSPlKdhxxqp/TPgZwMxMiZIiYKrLu75MGxbH3SDD7F1eTFKeFfS4NAm2ICaHo7wqxDCEJ4qQ7zZ2XdRdF6mkPIkU2C3/dPtpBjgORYGrUNKk7YLG X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Baolin Wang writes: > Now the anonymous page allocation already supports multi-size THP (mTHP), > but the numa balancing still prohibits mTHP migration even though it is an > exclusive mapping, which is unreasonable. > > Allow scanning mTHP: > Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section > pages") skips shared CoW pages' NUMA page migration to avoid shared data > segment migration. In addition, commit 80d47f5de5e3 ("mm: don't try to > NUMA-migrate COW pages that have other uses") change to use page_count() > to avoid GUP pages migration, that will also skip the mTHP numa scaning. > Theoretically, we can use folio_maybe_dma_pinned() to detect the GUP > issue, although there is still a GUP race, the issue seems to have been > resolved by commit 80d47f5de5e3. Meanwhile, use the folio_likely_mapped_shared() > to skip shared CoW pages though this is not a precise sharers count. 
> Allow migrating mTHP:
> As mentioned in the previous thread[1], large folios (including THP)
> are more susceptible to false-sharing issues among threads than 4K
> base pages, leading to pages ping-ponging back and forth during NUMA
> balancing, which is currently not easy to resolve. Therefore, as a
> start for supporting mTHP NUMA balancing, we can follow the
> PMD-mapped THP's strategy: reuse the 2-stage filter in
> should_numa_migrate_memory() to check whether the mTHP is being
> heavily contended among threads (by checking the CPU id and pid of
> the last access), to avoid false sharing to some degree. Thus, we
> restore all PTE maps upon the first hint page fault of a large folio,
> following the PMD-mapped THP's strategy. In the future, we can
> continue to optimize the NUMA balancing algorithm to avoid the
> false-sharing issue with large folios as much as possible.
>
> Performance data:
> Machine environment: 2 nodes, 128 cores Intel(R) Xeon(R) Platinum
> Base: 2024-03-25 mm-unstable branch
> Enable mTHP to run autonuma-benchmark
>
> mTHP:16K
>                          Base    Patched
> numa01                 224.70     137.23
> numa01_THREAD_ALLOC    118.05      50.57
> numa02                  13.45       9.30
> numa02_SMT              14.80       7.43
>
> mTHP:64K
>                          Base    Patched
> numa01                 216.15     135.20
> numa01_THREAD_ALLOC    115.35      46.93
> numa02                  13.24       9.24
> numa02_SMT              14.67       7.31
>
> mTHP:128K
>                          Base    Patched
> numa01                 205.13     140.41
> numa01_THREAD_ALLOC    112.93      44.78
> numa02                  13.16       9.19
> numa02_SMT              14.81       7.39
>
> [1] https://lore.kernel.org/all/20231117100745.fnpijbk4xgmals3k@techsingularity.net/
>
> Signed-off-by: Baolin Wang
> ---
>  mm/memory.c   | 56 +++++++++++++++++++++++++++++++++++++++++++--------
>  mm/mprotect.c |  3 ++-
>  2 files changed, 50 insertions(+), 9 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index c30fb4b95e15..36191a9c799c 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5068,16 +5068,55 @@ static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_str
>  		update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
>  }
>
> +static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
> +				       struct folio *folio, pte_t fault_pte, bool ignore_writable)
> +{
> +	int nr = pte_pfn(fault_pte) - folio_pfn(folio);
> +	unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
> +	unsigned long end = min(start + folio_nr_pages(folio) * PAGE_SIZE, vma->vm_end);

If start is in the middle of the folio, it's possible for end to go
beyond the end of the folio: when the folio's head pages map below
vma->vm_start, start is clamped up to vm_start, so start +
folio_nr_pages(folio) * PAGE_SIZE extends past the folio's last page.
So it should be something like below?

  unsigned long end = min(vmf->address + (folio_nr_pages(folio) - nr) * PAGE_SIZE, vma->vm_end);
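For concreteness, a tiny self-contained sketch (userspace C with
made-up addresses, not kernel code) that shows the original end running
past the folio while the suggested form stays within it:

#include <stdio.h>

#define PAGE_SIZE 0x1000UL

int main(void)
{
	/*
	 * Hypothetical layout: a 4-page folio mapped at 0x200000-0x204000,
	 * with vma->vm_start at 0x201000 so the folio's head page lies
	 * below the VMA. The hint fault hits page index 2 of the folio.
	 * (Clamping of end against vma->vm_end is omitted for brevity.)
	 */
	unsigned long vm_start = 0x201000;
	unsigned long fault_addr = 0x202000;	/* vmf->address */
	unsigned long nr = 2;			/* faulting page's index in the folio */
	unsigned long nr_pages = 4;		/* folio_nr_pages(folio) */

	unsigned long start = fault_addr - nr * PAGE_SIZE;	/* 0x200000 */
	if (start < vm_start)
		start = vm_start;		/* clamped up to 0x201000 */

	/* Patch version: relative to the (possibly clamped) start. */
	unsigned long end_old = start + nr_pages * PAGE_SIZE;
	/* Suggested version: relative to the fault address. */
	unsigned long end_new = fault_addr + (nr_pages - nr) * PAGE_SIZE;

	printf("folio end: %#lx\n", 0x200000UL + nr_pages * PAGE_SIZE);
	printf("end_old:   %#lx (one page past the folio)\n", end_old);
	printf("end_new:   %#lx (exactly the folio end)\n", end_new);
	return 0;
}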
> +	pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
> +	bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
> +	unsigned long addr;
> +
> +	/* Restore all PTEs' mapping of the large folio */
> +	for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
> +		pte_t pte, old_pte;
> +		pte_t ptent = ptep_get(start_ptep);
> +		bool writable = false;
> +
> +		if (!pte_present(ptent) || !pte_protnone(ptent))
> +			continue;
> +
> +		if (vm_normal_folio(vma, addr, ptent) != folio)
> +			continue;
> +
> +		if (!ignore_writable) {
> +			writable = pte_write(ptent);
> +			if (!writable && pte_write_upgrade &&
> +			    can_change_pte_writable(vma, addr, ptent))
> +				writable = true;
> +		}
> +
> +		old_pte = ptep_modify_prot_start(vma, addr, start_ptep);
> +		pte = pte_modify(old_pte, vma->vm_page_prot);
> +		pte = pte_mkyoung(pte);
> +		if (writable)
> +			pte = pte_mkwrite(pte, vma);
> +		ptep_modify_prot_commit(vma, addr, start_ptep, old_pte, pte);
> +		update_mmu_cache_range(vmf, vma, addr, start_ptep, 1);

Can this be batched for the whole folio, instead of being done one PTE
at a time?

> +	}
> +}
> +

[snip]

--
Best Regards,
Huang, Ying