Subject: Re: [PATCH -v4 0/9] migrate_pages(): batch TLB flushing
From: haoxin <xhao@linux.alibaba.com>
To: Huang Ying, Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Zi Yan, Yang Shi, Baolin Wang, Oscar Salvador, Matthew Wilcox, Bharata B Rao, Alistair Popple, Minchan Kim, Mike Kravetz, Hyeonggon Yoo <42.hyeyoo@gmail.com>
Date: Wed, 8 Feb 2023 14:27:32 +0800
Message-ID: <2b15a6c8-67a4-8d93-09a4-cdc9f09e6b78@linux.alibaba.com>
References: <20230206063313.635011-1-ying.huang@intel.com>


On 2023/2/8 2:21 PM, haoxin wrote:

On my arm64 server with 128 cores and 2 NUMA nodes, I used memhog as the benchmark:

    numactl -m -C 5 memhog -r100000 1G

    Correction: numactl -m 0 -C 5 memhog -r100000 1G

The test results are as below:

 With this patch:

    #time migratepages 8490 0 1

    real 0m1.161s

    user 0m0.000s

    sys 0m1.161s

Without this patch:

    #time migratepages 8460 0 1

    real 0m2.068s

    user 0m0.001s

    sys 0m2.068s

So you can see a migration performance improvement of about +78%.


This is the perf record info.

w/o
+   51.07%     0.09%  migratepages  [kernel.kallsyms]  [k] migrate_folio_extra
+   42.43%     0.04%  migratepages  [kernel.kallsyms]  [k] folio_copy
+   42.34%    42.34%  migratepages  [kernel.kallsyms]  [k] __pi_copy_page
+   33.99%     0.09%  migratepages  [kernel.kallsyms]  [k] rmap_walk_anon
+   32.35%     0.04%  migratepages  [kernel.kallsyms]  [k] try_to_migrate
+   27.78%    27.78%  migratepages  [kernel.kallsyms]  [k] ptep_clear_flush
+    8.19%     6.64%  migratepages  [kernel.kallsyms]  [k] folio_migrate_flags

w/ this patch
+   18.57%     0.13%  migratepages     [kernel.kallsyms]   [k] migrate_pages                                 
+   18.23%     0.07%  migratepages     [kernel.kallsyms]   [k] migrate_pages_batch                          
+   16.29%     0.13%  migratepages     [kernel.kallsyms]   [k] migrate_folio_move                             
+   12.73%     0.10%  migratepages     [kernel.kallsyms]   [k] move_to_new_folio                           
+   12.52%     0.06%  migratepages     [kernel.kallsyms]   [k] migrate_folio_extra

Therefore, this patch helps improve page migration performance.


So you can add:

Tested-by: Xin Hao <xhao@linux.alibaba.com>


On 2023/2/6 2:33 PM, Huang Ying wrote:
From: "Huang, Ying" <ying.huang@intel.com>

Currently, migrate_pages() migrates folios one by one, like the
pseudocode below:

  for each folio
    unmap
    flush TLB
    copy
    restore map

If multiple folios are passed to migrate_pages(), there are
opportunities to batch the TLB flushing and copying.  That is, we can
change the code to something as follows,

  for each folio
    unmap
  for each folio
    flush TLB
  for each folio
    copy
  for each folio
    restore map

The total number of TLB flushing IPIs can be reduced considerably.  And
we may use a hardware accelerator such as DSA to accelerate the
folio copying.

So in this patch, we refactor the migrate_pages() implementation and
implement batched TLB flushing.  Based on this, hardware-accelerated
folio copying can be implemented.

If too many folios are passed to migrate_pages(), the naive batched
implementation may unmap too many folios at the same time.  The
possibility that a task has to wait for the migrated folios to be
mapped again increases, so latency may be hurt.  To deal with this
issue, the maximum number of folios unmapped in a batch is restricted
to no more than HPAGE_PMD_NR pages.  That is, the influence is at the
same level as THP migration.

We use the following test to measure the performance impact of the
patchset,

On a 2-socket Intel server,

 - Run pmbench memory accessing benchmark

 - Run `migratepages` to migrate pages of pmbench between node 0 and
   node 1 back and forth.

With the patch, TLB flushing IPIs are reduced by 99.1% during the test,
and the number of pages migrated successfully per second increases by 291.7%.

This patchset is based on v6.2-rc4.

Changes:

v4:

- Fixed another bug about non-LRU folio migration.  Thanks Hyeonggon!

v3:

- Rebased on v6.2-rc4

- Fixed a bug about non-LRU folio migration.  Thanks Mike!

- Fixed some comments.  Thanks Baolin!

- Collected reviewed-by.

v2:

- Rebased on v6.2-rc3

- Fixed type force cast warning.  Thanks Kees!

- Added more comments and cleaned up the code.  Thanks Andrew, Zi, Alistair, Dan!

- Collected reviewed-by.

from rfc to v1:

- Rebased on v6.2-rc1

- Fix the deadlock issue caused by locking multiple pages synchronously
  per Alistair's comments.  Thanks!

- Fix the autonumabench panic per Rao's comments and fix.  Thanks!

- Other minor fixes per comments. Thanks!

Best Regards,
Huang, Ying