From: "Huang, Ying"
To: Kefeng Wang
Cc: David Hildenbrand, Andrew Morton, Matthew Wilcox, "Muchun Song", Zi Yan
Subject: Re: [PATCH v2 1/2] mm: use aligned address in clear_gigantic_page()
In-Reply-To: <64f1c69d-3706-41c5-a29f-929413e3dfa2@huawei.com> (Kefeng Wang's message of "Wed, 30 Oct 2024 13:05:33 +0800")
References: <20241026054307.3896926-1-wangkefeng.wang@huawei.com> <54f5f3ee-8442-4c49-ab4e-c46e8db73576@huawei.com> <4219a788-52ad-4d80-82e6-35a64c980d50@redhat.com> <127d4a00-29cc-4b45-aa96-eea4e0adaed2@huawei.com> <9b06805b-4f4f-4b37-861f-681e3ab9d470@huawei.com> <113d3cb9-0391-48ab-9389-f2fd1773ab73@redhat.com> <878qu6wgcm.fsf@yhuang6-desk2.ccr.corp.intel.com> <87sese9sy9.fsf@yhuang6-desk2.ccr.corp.intel.com> <64f1c69d-3706-41c5-a29f-929413e3dfa2@huawei.com>
Date: Fri, 01 Nov 2024 14:18:54 +0800
Message-ID: <87r07v8oj5.fsf@yhuang6-desk2.ccr.corp.intel.com>

Kefeng Wang writes:

> On 2024/10/30 11:21, Huang, Ying wrote:
>> Kefeng Wang writes:
>>
>>> On 2024/10/30 9:04, Huang, Ying wrote:
>>>> David Hildenbrand writes:
>>>>
>>>>> On 29.10.24 14:04, Kefeng Wang wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> That should all be cleaned up ... process_huge_page() likely
>>>>>>>>>>>>> shouldn't
>>>>>>>>>>>>
>>>>>>>>>>>> Yes, let's fix the bug first,
>>>>>>>>>>>>
>>>>>>>>>>>>> be even consuming "nr_pages".
>>>>>>>>>>>>
>>>>>>>>>>>> Not sure about this part, it uses nr_pages as the end and
>>>>>>>>>>>> calculates the 'base'.
>>>>>>>>>>>
>>>>>>>>>>> It should be using folio_nr_pages().
>>>>>>>>>>
>>>>>>>>>> But process_huge_page() has no explicit folio argument. I'd like to
>>>>>>>>>> move the aligned address calculation into folio_zero_user() and
>>>>>>>>>> copy_user_large_folio() (which will be renamed to folio_copy_user())
>>>>>>>>>> in the following cleanup patches, or should I do it in the fix
>>>>>>>>>> patches?
>>>>>>>>>
>>>>>>>>> First, why does folio_zero_user() call process_huge_page() for *a small
>>>>>>>>> folio*? Because we like our code to be extra complicated to understand?
>>>>>>>>> Or am I missing something important?
>>>>>>>>
>>>>>>>> folio_zero_user() was used for PMD-sized THP and HugeTLB before, and
>>>>>>>> after anon mTHP support it is used for order-2 to PMD-order THP
>>>>>>>> and HugeTLB, so it won't process a small folio if I understand correctly.
>>>>>>>
>>>>>>> And unfortunately neither the documentation nor the function name
>>>>>>> expresses that :(
>>>>>>>
>>>>>>> I'm happy to review any patches that improve the situation here :)
>>>>>>>
>>>>>> Actually, could we drop process_huge_page() totally? From my
>>>>>> testcase[1], process_huge_page() is not better than clearing/copying
>>>>>> pages from start to end, and sequential clearing/copying may be more
>>>>>> beneficial to hardware prefetching. Is there a way to let LKP run
>>>>>> tests to check the performance? Since process_huge_page() was
>>>>>> submitted by Ying, what's your opinion?
>>>> I don't think that it's a good idea to revert the commit without
>>>> studying and root causing the issues. I can work together with you on
>>>> that. If we have solid and well explained data to prove
>>>> process_huge_page() isn't beneficial, we can revert the commit.
>>>
>>>
>>> Take 'fallocate 20G' as an example, before:
>>>
>>> Performance counter stats for 'taskset -c 10 fallocate -l 20G
>>> /mnt/hugetlbfs/test':
>> IIUC, fallocate will zero pages, but will not touch them at all,
>> right?
>> If so, no cache benefit from clearing the referenced page last.
>
>
> Yes, for this case, only clear page.
>>
>>>          3,118.94 msec task-clock               #    0.999 CPUs utilized
>>>                30      context-switches         #    0.010 K/sec
>>>                 1      cpu-migrations           #    0.000 K/sec
>>>               136      page-faults              #    0.044 K/sec
>>>     8,092,075,873      cycles                   #    2.594 GHz                      (92.82%)
>>>     1,624,587,663      instructions             #    0.20  insn per cycle           (92.83%)
>>>       395,341,850      branches                 #  126.755 M/sec                    (92.82%)
>>>         3,872,302      branch-misses            #    0.98% of all branches          (92.83%)
>>>     1,398,066,701      L1-dcache-loads          #  448.251 M/sec                    (92.82%)
>>>        58,124,626      L1-dcache-load-misses    #    4.16% of all L1-dcache accesses    (92.82%)
>>>         1,032,527      LLC-loads                #    0.331 M/sec                    (92.82%)
>>>           498,684      LLC-load-misses          #   48.30% of all LL-cache accesses     (92.84%)
>>>       473,689,004      L1-icache-loads          #  151.875 M/sec                    (92.82%)
>>>           356,721      L1-icache-load-misses    #    0.08% of all L1-icache accesses    (92.85%)
>>>     1,947,644,987      dTLB-loads               #  624.458 M/sec                    (92.95%)
>>>            10,185      dTLB-load-misses         #    0.00% of all dTLB cache accesses   (92.96%)
>>>       474,622,896      iTLB-loads               #  152.174 M/sec                    (92.95%)
>>>                94      iTLB-load-misses         #    0.00% of all iTLB cache accesses   (85.69%)
>>>
>>>       3.122844830 seconds time elapsed
>>>
>>>       0.000000000 seconds user
>>>       3.107259000 seconds sys
>>>
>>> and after (clear from start to end):
>>>
>>> Performance counter stats for 'taskset -c 10 fallocate -l 20G
>>> /mnt/hugetlbfs/test':
>>>
>>>          1,135.53 msec task-clock               #    0.999 CPUs utilized
>>>                10      context-switches         #    0.009 K/sec
>>>                 1      cpu-migrations           #    0.001 K/sec
>>>               137      page-faults              #    0.121 K/sec
>>>     2,946,673,587      cycles                   #    2.595 GHz                      (92.67%)
>>>     1,620,704,205      instructions             #    0.55  insn per cycle           (92.61%)
>>>       394,595,772      branches                 #  347.499 M/sec                    (92.60%)
>>>           130,756      branch-misses            #    0.03% of all branches          (92.84%)
>>>     1,396,726,689      L1-dcache-loads          # 1230.022 M/sec                    (92.96%)
>>>           338,344      L1-dcache-load-misses    #    0.02% of all L1-dcache accesses    (92.95%)
>>>           111,737      LLC-loads                #    0.098 M/sec                    (92.96%)
>>>            67,486      LLC-load-misses          #   60.40% of all LL-cache accesses     (92.96%)
>>>       418,198,663      L1-icache-loads          #  368.285 M/sec                    (92.96%)
>>>           173,764      L1-icache-load-misses    #    0.04% of all L1-icache accesses    (92.96%)
>>>     2,203,364,632      dTLB-loads               # 1940.385 M/sec                    (92.96%)
>>>            17,195      dTLB-load-misses         #    0.00% of all dTLB cache accesses   (92.95%)
>>>       418,198,365      iTLB-loads               #  368.285 M/sec                    (92.96%)
>>>                79      iTLB-load-misses         #    0.00% of all iTLB cache accesses   (85.34%)
>>>
>>>       1.137015760 seconds time elapsed
>>>
>>>       0.000000000 seconds user
>>>       1.131266000 seconds sys
>>>
>>> The IPC improved a lot, with fewer LLC-loads and more L1-dcache-loads,
>>> but this depends on the implementation of the microarchitecture.
>> Anyway, we need to avoid (or at least reduce) the pure memory clearing
>> performance regression. Have you double checked whether
>> process_huge_page() is inlined? A perf-profile result can be used to
>> check this too.
>>
>
> Yes, I'm sure that process_huge_page() is inlined.
>
>> When you say from start to end, do you mean to use clear_gigantic_page()
>> directly, or to change process_huge_page() to clear pages from start to
>> end?
>>
>
> Using clear_gigantic_page() and changing process_huge_page() to clear
> pages from start to end are both good for performance with sequential
> clearing, but there is no random-access test so far.
>
>>> 1) Will run some random tests to check the difference in performance as
>>> David suggested.
>>>
>>> 2) Hope LKP can run more tests since it is very useful (more test
>>> sets and different machines).
>> I'm starting to use LKP to test.
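
To make the comparison concrete, below is a minimal userspace sketch of
the two clearing orders under discussion. It is illustrative only, not
the actual mm/memory.c implementation; SUBPAGE_SIZE, clear_sequential()
and clear_target_last() are made-up names for this sketch, and the real
process_huge_page() uses a more elaborate access order that converges on
the target subpage from both ends.

	/*
	 * Userspace sketch, not kernel code: contrast plain sequential
	 * clearing with a "clear the target subpage last" order, a
	 * simplified version of the idea behind process_huge_page().
	 */
	#include <stdlib.h>
	#include <string.h>

	#define SUBPAGE_SIZE	4096UL

	/* Sequential order: clear subpage 0, 1, 2, ... in one pass. */
	static void clear_sequential(char *base, unsigned long nr_subpages)
	{
		unsigned long i;

		for (i = 0; i < nr_subpages; i++)
			memset(base + i * SUBPAGE_SIZE, 0, SUBPAGE_SIZE);
	}

	/*
	 * Target-last order: clear every other subpage first and the
	 * subpage containing the faulting address last, so its cache
	 * lines are the most recently written when the application
	 * touches them.
	 */
	static void clear_target_last(char *base, unsigned long nr_subpages,
				      unsigned long target)
	{
		unsigned long i;

		for (i = 0; i < nr_subpages; i++)
			if (i != target)
				memset(base + i * SUBPAGE_SIZE, 0, SUBPAGE_SIZE);
		memset(base + target * SUBPAGE_SIZE, 0, SUBPAGE_SIZE);
	}

	int main(void)
	{
		unsigned long nr = 512;	/* one 2MB "huge page" of 4KB subpages */
		char *buf = aligned_alloc(SUBPAGE_SIZE, nr * SUBPAGE_SIZE);

		if (!buf)
			return 1;
		clear_sequential(buf, nr);
		clear_target_last(buf, nr, nr / 2);
		free(buf);
		return 0;
	}

The point of the target-last order is that the subpage the faulting code
is about to touch is the most recently written one, so its cache lines
are more likely to still be hot. For fallocate, which never touches the
zeroed pages, that ordering buys nothing and only raw clearing
throughput matters.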
https://lore.kernel.org/linux-mm/20200419155856.dtwxomdkyujljdfi@oneplus.com/

Just remembered that we have discussed a similar issue for arm64 before.
Can you take a look at it? There's more discussion and tests/results in
the thread; I think that may be helpful.

--
Best Regards,
Huang, Ying