From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: kernel test robot <oliver.sang@intel.com>,
oe-lkp@lists.linux.dev, lkp@intel.com,
linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@redhat.com>,
John Hubbard <jhubbard@nvidia.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
Mel Gorman <mgorman@techsingularity.net>,
Ryan Roberts <ryan.roberts@arm.com>,
linux-mm@kvack.org, feng.tang@intel.com, fengwei.yin@intel.com
Subject: Re: [linus:master] [mm] d2136d749d: vm-scalability.throughput -7.1% regression
Date: Thu, 20 Jun 2024 19:13:49 +0800 [thread overview]
Message-ID: <fab6f79f-3fb5-471a-8995-7f683a199769@linux.alibaba.com> (raw)
In-Reply-To: <87bk3w2he5.fsf@yhuang6-desk2.ccr.corp.intel.com>
On 2024/6/20 15:38, Huang, Ying wrote:
> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
>
>> On 2024/6/20 10:39, kernel test robot wrote:
>>> Hello,
>>> kernel test robot noticed a -7.1% regression of
>>> vm-scalability.throughput on:
>>> commit: d2136d749d76af980b3accd72704eea4eab625bd ("mm: support
>>> multi-size THP numa balancing")
>>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>>> [still regression on linus/master
>>> 92e5605a199efbaee59fb19e15d6cc2103a04ec2]
>>> testcase: vm-scalability
>>> test machine: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory
>>> parameters:
>>> runtime: 300s
>>> size: 512G
>>> test: anon-cow-rand-hugetlb
>>> cpufreq_governor: performance
>>
>> Thanks for reporting. IIUC, NUMA balancing will not scan hugetlb VMAs,
>> so I'm not sure how this patch affects the performance of hugetlb CoW,
>> but let me try to reproduce it.
>>
>>
>>> If you fix the issue in a separate patch/commit (i.e. not just a new version of
>>> the same patch/commit), kindly add following tags
>>> | Reported-by: kernel test robot <oliver.sang@intel.com>
>>> | Closes: https://lore.kernel.org/oe-lkp/202406201010.a1344783-oliver.sang@intel.com
>>> Details are as below:
>>> -------------------------------------------------------------------------------------------------->
>>> The kernel config and materials to reproduce are available at:
>>> https://download.01.org/0day-ci/archive/20240620/202406201010.a1344783-oliver.sang@intel.com
>>> =========================================================================================
>>> compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
>>> gcc-13/performance/x86_64-rhel-8.3/debian-12-x86_64-20240206.cgz/300s/512G/lkp-icl-2sp2/anon-cow-rand-hugetlb/vm-scalability
>>> commit:
>>> 6b0ed7b3c7 ("mm: factor out the numa mapping rebuilding into a new helper")
>>> d2136d749d ("mm: support multi-size THP numa balancing")
>>> 6b0ed7b3c77547d2 d2136d749d76af980b3accd7270
>>> ---------------- ---------------------------
>>> %stddev %change %stddev
>>> \ | \
>>> 12.02 -1.3 10.72 ± 4% mpstat.cpu.all.sys%
>>> 1228757 +3.0% 1265679 proc-vmstat.pgfault
>
> Also from other proc-vmstat stats,
>
> 21770 ± 36% +6.1% 23098 ± 28% proc-vmstat.numa_hint_faults
> 6168 ± 107% +48.8% 9180 ± 18% proc-vmstat.numa_hint_faults_local
> 154537 ± 15% +23.5% 190883 ± 17% proc-vmstat.numa_pte_updates
>
> After your patch, more hint page faults occur, which I think is expected.
>
> Then, tasks may be moved between sockets because of that, so that some
> hugetlb page access becomes remote?
After trying to reproduce this case, I also see more hint page faults
occurring. I think that is caused by changing
"folio_ref_count(folio) != 1" to "folio_likely_mapped_shared(folio)",
which results in scanning more exclusive pages, so this is expected
from the previous discussion.
Yes, I think your analysis is correct, some hugetlb page accesses become
remote due to task migration.