From: "Huang, Ying" <ying.huang@intel.com>
To: Zi Yan
Cc: Andrew Morton, Yang Shi, Baolin Wang, Oscar Salvador, Matthew Wilcox
Subject: Re: [RFC 0/6] migrate_pages(): batch TLB flushing
References: <20220921060616.73086-1-ying.huang@intel.com>
Date: Thu, 22 Sep 2022 09:45:11 +0800
In-Reply-To: (Zi Yan's message of "Wed, 21 Sep 2022 11:47:38 -0400")
Message-ID: <878rmckwrc.fsf@yhuang6-desk2.ccr.corp.intel.com>
Zi Yan writes:

> On 21 Sep 2022, at 2:06, Huang Ying wrote:
>
>> From: "Huang, Ying"
>>
>> Now, migrate_pages() migrates pages one by one, like the following
>> pseudo-code:
>>
>>   for each page
>>     unmap
>>     flush TLB
>>     copy
>>     restore map
>>
>> If multiple pages are passed to migrate_pages(), there are
>> opportunities to batch the TLB flushing and copying. That is, we can
>> change the code to something like:
>>
>>   for each page
>>     unmap
>>   for each page
>>     flush TLB
>>   for each page
>>     copy
>>   for each page
>>     restore map
>>
>> The total number of TLB flushing IPIs can be reduced considerably,
>> and we may use a hardware accelerator such as DSA to accelerate the
>> page copying.
>>
>> So in this patchset, we refactor the migrate_pages() implementation
>> and implement batched TLB flushing. Based on this,
>> hardware-accelerated page copying can be implemented.
>>
>> If too many pages are passed to migrate_pages(), the naive batched
>> implementation may unmap too many pages at the same time. The
>> possibility that a task has to wait for the migrated pages to be
>> mapped again increases, so latency may be hurt.
>> To deal with this issue, the maximum number of pages unmapped in a
>> batch is restricted to no more than HPAGE_PMD_NR. That is, the
>> influence is at the same level as THP migration.
>>
>> We use the following test to measure the performance impact of the
>> patchset.
>>
>> On a 2-socket Intel server,
>>
>> - Run the pmbench memory accessing benchmark.
>>
>> - Run `migratepages` to migrate pages of pmbench between node 0 and
>>   node 1 back and forth.
>>
>> With the patchset, the number of TLB flushing IPIs is reduced by
>> 99.1% during the test, and the number of pages migrated successfully
>> per second increases by 291.7%.
>
> Thank you for the patchset. Batching page migration will definitely
> improve its throughput from my past experiments[1], and starting with
> TLB flushing is a good first step.

Thanks for the pointer; the patch description provides valuable
information for me already!

> BTW, what is the rationale behind the increased page migration
> success rate per second?

From the perf profiling data, in the base kernel,

  migrate_pages.migrate_to_node.do_migrate_pages.kernel_migrate_pages.__x64_sys_migrate_pages: 2.87
  ptep_clear_flush.try_to_migrate_one.rmap_walk_anon.try_to_migrate.__unmap_and_move: 2.39

Because pmbench runs in the system too, the CPU cycles of
migrate_pages() are about 2.87%, while the CPU cycles for TLB flushing
are 2.39%. That is, 2.39/2.87 = 83.3% of the CPU cycles of
migrate_pages() are used for TLB flushing.

After batching the TLB flushing, the perf profiling data becomes,

  migrate_pages.migrate_to_node.do_migrate_pages.kernel_migrate_pages.__x64_sys_migrate_pages: 2.77
  move_to_new_folio.migrate_pages_batch.migrate_pages.migrate_to_node.do_migrate_pages: 1.68
  copy_page.folio_copy.migrate_folio.move_to_new_folio.migrate_pages_batch: 1.21

Now, 1.21/2.77 = 43.7% of the CPU cycles of migrate_pages() are used
for page copying, while

  try_to_migrate_one: 0.23

so the CPU cycles of unmapping and TLB flushing become 0.23/2.77 =
8.3% of migrate_pages().
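As a rough illustration of the restructuring (in plain user-space C,
outside the kernel; all helper names here are hypothetical stand-ins,
not the real mm/migrate.c API), the serial and batched schemes, plus
the HPAGE_PMD_NR cap, can be sketched as:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the migration primitives. */
struct page_stub { int mapped; int copied; };

static int tlb_flush_count;                       /* counts simulated TLB flush IPIs */

static void unmap_one(struct page_stub *p)  { p->mapped = 0; }
static void flush_tlb_ipi(void)             { tlb_flush_count++; }
static void copy_one(struct page_stub *p)   { p->copied = 1; }
static void remap_one(struct page_stub *p)  { p->mapped = 1; }

/* One-by-one migration: one TLB flush IPI per page. */
static void migrate_serial(struct page_stub *pages, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		unmap_one(&pages[i]);
		flush_tlb_ipi();                  /* one IPI per page */
		copy_one(&pages[i]);
		remap_one(&pages[i]);
	}
}

/* Batched migration: unmap all pages first, flush once for the whole
 * batch, then copy and restore the mappings. */
static void migrate_batched(struct page_stub *pages, size_t n)
{
	for (size_t i = 0; i < n; i++)
		unmap_one(&pages[i]);
	flush_tlb_ipi();                          /* single IPI for the batch */
	for (size_t i = 0; i < n; i++)
		copy_one(&pages[i]);
	for (size_t i = 0; i < n; i++)
		remap_one(&pages[i]);
}

/* Cap each batch at HPAGE_PMD_NR pages (512 on x86-64 with 4K pages)
 * so no more pages are left unmapped at once than a THP migration
 * would leave. */
#define HPAGE_PMD_NR_STUB 512
static void migrate_pages_capped(struct page_stub *pages, size_t n)
{
	while (n) {
		size_t batch = n < HPAGE_PMD_NR_STUB ? n : HPAGE_PMD_NR_STUB;
		migrate_batched(pages, batch);
		pages += batch;
		n -= batch;
	}
}
```

For a batch of N pages this turns N flush IPIs into one, which is
where the 99.1% IPI reduction above comes from.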
All in all, after the optimization, we do much less TLB flushing,
which consumed a large fraction of the CPU cycles before the
optimization. So the throughput of migrate_pages() increases greatly.

I will add these data in the next version of the patchset.

Best Regards,
Huang, Ying

>> This patchset is based on v6.0-rc5 and the following patchset,
>>
>> [PATCH -V3 0/8] migrate_pages(): fix several bugs in error path
>> https://lore.kernel.org/lkml/20220817081408.513338-1-ying.huang@intel.com/
>>
>> The migrate_pages() related code is being converted to folios now,
>> so this patchset cannot be applied to the recent akpm/mm-unstable
>> branch. This patchset is meant to check the basic idea. If it is
>> OK, I will rebase the patchset on top of the folio changes.
>>
>> Best Regards,
>> Huang, Ying
>
>
> [1] https://lwn.net/Articles/784925/
>
> --
> Best Regards,
> Yan, Zi