Message-ID: <0baa443a-7872-4ded-94c6-06af88a6a943@linux.alibaba.com>
Date: Thu, 28 Mar 2024 19:34:34 +0800
Subject: Re: [PATCH 2/2] mm: support multi-size THP numa balancing
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: David Hildenbrand, akpm@linux-foundation.org
Cc: mgorman@techsingularity.net, wangkefeng.wang@huawei.com,
 jhubbard@nvidia.com, ying.huang@intel.com, 21cnbao@gmail.com,
 ryan.roberts@arm.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org

On 2024/3/28 17:25, David Hildenbrand wrote:
> On 26.03.24 12:51, Baolin Wang wrote:
>> Anonymous page allocation already supports multi-size THP (mTHP), but
>> NUMA balancing still prohibits mTHP migration even when the mapping is
>> exclusive, which is unreasonable.
>>
>> Allow scanning mTHP:
>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
>> pages") skips NUMA migration of shared CoW pages to avoid migrating the
>> shared data segment. In addition, commit 80d47f5de5e3 ("mm: don't try to
>> NUMA-migrate COW pages that have other uses") changed to use page_count()
>> to avoid migrating GUP pages, which also skips mTHP NUMA scanning.
>> Theoretically we could use folio_maybe_dma_pinned() to detect the GUP
>> case; although there is still a GUP race, that issue seems to have been
>> resolved by commit 80d47f5de5e3. Meanwhile, use folio_likely_mapped_shared()
>> to skip shared CoW pages, even though it is not a precise sharer count.
>> To check whether the folio is shared, ideally we would want to make sure
>> every page is mapped by the same process, but doing so seems expensive;
>> the estimated mapcount works well when running the autonuma benchmark.
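
As a rough illustration of the two skip checks described above, a minimal
kernel-style sketch might look like the following. The helper name
numa_can_migrate_folio() is hypothetical; folio_likely_mapped_shared() and
folio_maybe_dma_pinned() are the helpers named in the text, but this is
not the actual patch code:

	/*
	 * Hypothetical sketch: decide whether NUMA balancing may migrate
	 * this folio. Skip shared CoW mappings (an estimated, not precise,
	 * sharer count) and folios that may be DMA-pinned via GUP.
	 */
	static bool numa_can_migrate_folio(struct folio *folio)
	{
		if (folio_likely_mapped_shared(folio))
			return false;

		if (folio_maybe_dma_pinned(folio))
			return false;

		return true;
	}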
>>
>> Allow migrating mTHP:
>> As mentioned in the previous thread[1], large folios (including THP) are
>> more susceptible to false-sharing issues among threads than 4K base
>> pages, leading to pages ping-ponging back and forth during NUMA
>> balancing, which is currently not easy to resolve. Therefore, as a start
>> to support mTHP NUMA balancing, we can follow the PMD-mapped THP
>> strategy, which means we can reuse the 2-stage filter in
>> should_numa_migrate_memory() to check whether the mTHP is being heavily
>> contended among threads (by checking the CPU id and pid of the last
>> access) to avoid false sharing to some degree. Thus, we restore all PTE
>> maps upon the first hint page fault of a large folio, following the
>> PMD-mapped THP strategy. In the future, we can continue to optimize the
>> NUMA balancing algorithm to avoid the false-sharing issue with large
>> folios as much as possible.
>>
>> Performance data:
>> Machine environment: 2 nodes, 128 cores Intel(R) Xeon(R) Platinum
>> Base: 2024-03-25 mm-unstable branch
>> Enable mTHP to run autonuma-benchmark
>>
>> mTHP:16K
>> Base				Patched
>> numa01				numa01
>> 224.70				137.23
>> numa01_THREAD_ALLOC		numa01_THREAD_ALLOC
>> 118.05				50.57
>> numa02				numa02
>> 13.45				9.30
>> numa02_SMT			numa02_SMT
>> 14.80				7.43
>>
>> mTHP:64K
>> Base				Patched
>> numa01				numa01
>> 216.15				135.20
>> numa01_THREAD_ALLOC		numa01_THREAD_ALLOC
>> 115.35				46.93
>> numa02				numa02
>> 13.24				9.24
>> numa02_SMT			numa02_SMT
>> 14.67				7.31
>>
>> mTHP:128K
>> Base				Patched
>> numa01				numa01
>> 205.13				140.41
>> numa01_THREAD_ALLOC		numa01_THREAD_ALLOC
>> 112.93				44.78
>> numa02				numa02
>> 13.16				9.19
>> numa02_SMT			numa02_SMT
>> 14.81				7.39
>>
>> [1] https://lore.kernel.org/all/20231117100745.fnpijbk4xgmals3k@techsingularity.net/
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>>  mm/memory.c   | 56 +++++++++++++++++++++++++++++++++++++++++++--------
>>  mm/mprotect.c |  3 ++-
>>  2 files changed, 50 insertions(+), 9 deletions(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index c30fb4b95e15..36191a9c799c 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -5068,16 +5068,55 @@ static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_str
>>  	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
>>  }
>>
>> +static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
>> +				       struct folio *folio, pte_t fault_pte, bool ignore_writable)
>> +{
>> +	int nr = pte_pfn(fault_pte) - folio_pfn(folio);
>> +	unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
>> +	unsigned long end = min(start + folio_nr_pages(folio) * PAGE_SIZE, vma->vm_end);
>> +	pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
>> +	bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
>> +	unsigned long addr;
>> +
>> +	/* Restore all PTEs' mapping of the large folio */
>> +	for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
>> +		pte_t pte, old_pte;
>> +		pte_t ptent = ptep_get(start_ptep);
>> +		bool writable = false;
>> +
>> +		if (!pte_present(ptent) || !pte_protnone(ptent))
>> +			continue;
>> +
>> +		if (vm_normal_folio(vma, addr, ptent) != folio)
>> +			continue;
>> +
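
To make the address arithmetic in the hunk above concrete, here is a small
worked example with hypothetical values (editorial illustration, not part
of the patch), for a 64K folio (16 pages) where the fault hits the 4th
page of the folio:

	/* Hypothetical values, mirroring numa_rebuild_large_mapping(). */
	unsigned long fault_addr = 0x7f0000003000UL;	/* vmf->address */
	int nr = 3;		/* fault page's index within the folio */
	unsigned long start = fault_addr - nr * PAGE_SIZE; /* 0x7f0000000000 */
	unsigned long end = start + 16 * PAGE_SIZE;	   /* 0x7f0000010000 */
	/* start_ptep is rewound by 3 entries, so the loop walks all 16 PTEs. */

The max()/min() clamps in the real code additionally bound the range to
[vma->vm_start, vma->vm_end) for folios whose mapping straddles a VMA
boundary (e.g. after a partial mprotect() or munmap()).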
> Should you be using folio_pte_batch() in the caller to collect all
> applicable PTEs and then only have a function that batch-changes a given
> nr of PTEs?
>
> (just like we are now batching other stuff)

folio_pte_batch() seems unsuitable for NUMA balancing, since we do not
care about the other PTE bits here, only about the protnone bit.

And after more thought, I think I can drop the vm_normal_folio()
validation, since all the PTEs are ensured to be within the range of the
folio size.
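
A minimal sketch of the point being made here (editorial; the helper name
is hypothetical, while ptep_get(), pte_present() and pte_protnone() are
real kernel accessors): the NUMA restore path needs to test exactly one
property per PTE, whereas folio_pte_batch() also checks pfn contiguity and
other protection bits when deciding whether neighbouring entries can be
batched, which is why it is a poor fit.

	/*
	 * Hypothetical sketch: walk the PTEs covering a large folio and
	 * pick out only the present prot-none (NUMA hint) entries. Every
	 * other PTE bit is irrelevant to this decision.
	 */
	static void restore_numa_hint_ptes(pte_t *ptep, unsigned long addr,
					   unsigned long end)
	{
		for (; addr != end; ptep++, addr += PAGE_SIZE) {
			pte_t ptent = ptep_get(ptep);

			if (!pte_present(ptent) || !pte_protnone(ptent))
				continue;

			/* ... restore this PTE's original protection ... */
		}
	}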