From: Baolin Wang <baolin.wang@linux.alibaba.com>
Date: Mon, 18 Mar 2024 18:31:10 +0800
Subject: Re: [RFC PATCH v2] mm: support multi-size THP numa balancing
To: David Hildenbrand, "Huang, Ying"
Cc: akpm@linux-foundation.org, mgorman@techsingularity.net, wangkefeng.wang@huawei.com, jhubbard@nvidia.com, 21cnbao@gmail.com, ryan.roberts@arm.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <8e13bce5-e353-4258-9891-97158b8ccd84@redhat.com>
References: <903bf13fc3e68b8dc1f256570d78b55b2dd9c96f.1710493587.git.baolin.wang@linux.alibaba.com> <871q88vzc4.fsf@yhuang6-desk2.ccr.corp.intel.com> <3bf2c3e1-44fd-4bc8-a97b-9da7b606aec0@linux.alibaba.com> <8e13bce5-e353-4258-9891-97158b8ccd84@redhat.com>

On 2024/3/18 18:15, David Hildenbrand wrote:
> On 18.03.24 11:13, Baolin Wang wrote:
>>
>> On 2024/3/18 17:48, David Hildenbrand wrote:
>>> On 18.03.24 10:42, Baolin Wang wrote:
>>>>
>>>> On 2024/3/18 14:16, Huang, Ying wrote:
>>>>> Baolin Wang writes:
>>>>>
>>>>>> Now the anonymous page allocation already supports multi-size THP
>>>>>> (mTHP), but NUMA balancing still prohibits mTHP migration even
>>>>>> though it is an exclusive mapping, which is unreasonable. Thus
>>>>>> let's support exclusive mTHP NUMA balancing as a first step.
>>>>>>
>>>>>> Allow scanning mTHP:
>>>>>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data
>>>>>> section pages") skips NUMA migration of shared CoW pages to avoid
>>>>>> migrating shared data segments.
>>>>>> In addition, commit 80d47f5de5e3 ("mm: don't try to NUMA-migrate
>>>>>> COW pages that have other uses") changed the check to page_count()
>>>>>> to avoid migrating GUP pages, which also skips mTHP NUMA scanning.
>>>>>> Theoretically, we could use folio_maybe_dma_pinned() to detect the
>>>>>> GUP issue; although there is still a GUP race, that issue seems to
>>>>>> have been resolved by commit 80d47f5de5e3. Meanwhile, use
>>>>>> folio_estimated_sharers() to skip shared CoW pages, though this is
>>>>>> not a precise sharers count. To check whether the folio is shared,
>>>>>> ideally we want to make sure every page is mapped by the same
>>>>>> process, but doing that seems expensive, and using the estimated
>>>>>> mapcount seems to work when running the autonuma benchmark.
>>>>>>
>>>>>> Allow migrating mTHP:
>>>>>> As mentioned in the previous thread[1], large folios are more
>>>>>> susceptible to false sharing issues, leading to pages ping-ponging
>>>>>> back and forth during NUMA balancing, which is currently hard to
>>>>>> resolve. Therefore, as a start to support mTHP NUMA balancing,
>>>>>> only exclusive mappings are allowed to perform NUMA migration, to
>>>>>> avoid the false sharing issues with large folios. Similarly, use
>>>>>> the estimated mapcount to skip shared mappings, which seems to
>>>>>> work in most cases (?); we already use folio_estimated_sharers()
>>>>>> to skip shared mappings in migrate_misplaced_folio() for NUMA
>>>>>> balancing, with no real complaints so far.
>>>>>
>>>>> IIUC, folio_estimated_sharers() cannot identify multi-thread
>>>>> applications. If some mTHP is shared by multiple threads in one
>>>>
>>>> Right.
>>>>
>>>
>>> Wasn't this "false sharing" previously raised/described by Mel in
>>> this context?
>>
>> Yes, I got confused with the process's false sharing.
>>
>>>>> process, how to deal with that?
>>>>
>>>> IMHO, it seems should_numa_migrate_memory() already does something
>>>> to help?
>>>>
>>>> ......
>>>>      if (!cpupid_pid_unset(last_cpupid) &&
>>>>                  cpupid_to_nid(last_cpupid) != dst_nid)
>>>>          return false;
>>>>
>>>>      /* Always allow migrate on private faults */
>>>>      if (cpupid_match_pid(p, last_cpupid))
>>>>          return true;
>>>> ......
>>>>
>>>> If the node of the CPU that accessed the mTHP last time differs from
>>>> this time, that means there is some contention for that mTHP among
>>>> threads, so migration will not be allowed.
>>>>
>>>> If the contention for the mTHP among threads is light, or the access
>>>> pattern is relatively stable, then we can allow migration?
>>>>
>>>>> For example, I think that we should avoid migrating on the first
>>>>> fault for mTHP in should_numa_migrate_memory().
>>>>>
>>>>> More thoughts? Can we add a field in struct folio for mTHP to count
>>>>> hint page faults from the same node?
>>>>
>>>> IIUC, we do not need to add a new field to struct folio; it seems we
>>>> can reuse the ->_flags_2a field. But how would we use it? If there
>>>> are multiple consecutive NUMA faults from the same node, then allow
>>>> migration?
>>>
>>> _flags_2a cannot be used. You could place something after
>>> _deferred_list
>>
>> Could you be more explicit? I didn't see _flags_2 currently being
>> used, did I miss something?
>
> Yes, that we use it implicitly via page->flags on subpages (for
> example, some flags are still per-subpage and not per-folio).

Yes, thanks for reminding :)
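
To make the "exclusive mapping" rule above concrete, here is a rough
sketch of the kind of candidate check being discussed, assuming a
6.8-era kernel where folio_estimated_sharers() still exists; the helper
name mthp_numa_migratable() is invented for illustration and is not
taken from the actual patch:

#include <linux/mm.h>

/*
 * Sketch only: gate NUMA migration of an anonymous mTHP folio on it
 * being (estimated) exclusive and not pinned. The real patch wires
 * these checks into the existing NUMA-balancing paths instead of
 * adding a helper like this.
 */
static bool mthp_numa_migratable(struct folio *folio)
{
	/* Only anonymous large folios are of interest here. */
	if (!folio_test_anon(folio) || !folio_test_large(folio))
		return false;

	/* Skip folios that may be pinned by GUP users. */
	if (folio_maybe_dma_pinned(folio))
		return false;

	/*
	 * Skip shared CoW mappings. folio_estimated_sharers() only
	 * samples the first page's mapcount, so this is an estimate,
	 * not a precise sharer count.
	 */
	if (folio_estimated_sharers(folio) != 1)
		return false;

	return true;
}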
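
And a minimal, self-contained sketch of the per-folio fault counter
floated above; in a real patch the state would live in struct folio
(e.g. after _deferred_list, per David's note), and the names and
threshold here are made up:

/* Illustration only: names, layout, and threshold are invented. */
struct mthp_numa_state {
	int nid;		/* node of the last NUMA hint fault */
	unsigned int count;	/* consecutive hint faults from that node */
};

#define MTHP_NUMA_FAULT_THRESHOLD	2

/* Allow migration only after repeated faults from the same node. */
static bool mthp_numa_fault_stable(struct mthp_numa_state *s, int src_nid)
{
	if (s->nid != src_nid) {
		/* Access came from a new node; reset and wait. */
		s->nid = src_nid;
		s->count = 1;
		return false;
	}
	return ++s->count >= MTHP_NUMA_FAULT_THRESHOLD;
}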