From: Fengguang Wu <fengguang.wu@intel.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Linux Memory Management List <linux-mm@kvack.org>,
kvm@vger.kernel.org, LKML <linux-kernel@vger.kernel.org>,
Fan Du <fan.du@intel.com>, Yao Yuan <yuan.yao@intel.com>,
Peng Dong <dongx.peng@intel.com>,
Huang Ying <ying.huang@intel.com>,
Liu Jingqi <jingqi.liu@intel.com>,
Dong Eddie <eddie.dong@intel.com>,
Dave Hansen <dave.hansen@intel.com>,
Zhang Yi <yi.z.zhang@linux.intel.com>,
Dan Williams <dan.j.williams@intel.com>
Subject: Re: [RFC][PATCH v2 00/21] PMEM NUMA node and hotness accounting/migration
Date: Fri, 28 Dec 2018 17:42:08 +0800 [thread overview]
Message-ID: <20181228094208.7lgxhha34zpqu4db@wfg-t540p.sh.intel.com> (raw)
In-Reply-To: <20181228084105.GQ16738@dhcp22.suse.cz>
On Fri, Dec 28, 2018 at 09:41:05AM +0100, Michal Hocko wrote:
>On Fri 28-12-18 13:08:06, Wu Fengguang wrote:
>[...]
>> Optimization: do hot/cold page tracking and migration
>> =====================================================
>>
>> Since PMEM is slower than DRAM, we need to make sure hot pages go to
>> DRAM and cold pages stay in PMEM, to get the best out of PMEM and DRAM.
>>
>> - DRAM=>PMEM cold page migration
>>
>> It can be done in kernel page reclaim path, near the anonymous page
>> swap out point. Instead of swapping out, we now have the option to
>> migrate cold pages to PMEM NUMA nodes.
>
>OK, this makes sense to me except I am not sure this is something that
>should be pmem specific. Is there any reason why we shouldn't migrate
>pages on memory pressure to other nodes in general? In other words
>rather than paging out we would migrate over to the next node that is
>not under memory pressure. Swapout would be the next level when the
>memory is (almost) fully utilized. That wouldn't be pmem specific.
In the future there could be multiple memory levels with different
performance/size/cost metrics. There is ongoing HMAT work to describe
that. When it's ready, we can switch to the HMAT based general
infrastructure. Then the code will no longer be PMEM specific, but
will do general promotion/demotion migrations between high/low memory
levels. Swapout could then be from the lowest memory level only.
Migration between peer nodes is the obvious simple way and a good
choice for the initial implementation. But yes, it's possible to
migrate to other nodes. For example, it can be combined with NUMA
balancing: if we know a page is mostly accessed by the other socket,
then it'd be best to migrate hot/cold pages directly to that socket.
>> User space may also do it; however, it cannot act on demand when
>> there is memory pressure in DRAM nodes.
>>
>> - PMEM=>DRAM hot page migration
>>
>> While LRU can be good enough for identifying cold pages, frequency
>> based accounting can be more suitable for identifying hot pages.
>>
>> Our design choice is to create a flexible user space daemon to drive
>> the accounting and migration, with necessary kernel supports by this
>> patchset.
>
>We do have numa balancing, why cannot we rely on it? This along with the
>above would allow to have pmem numa nodes (cpuless nodes in fact)
>without any special casing and a natural part of the MM. It would be
>only the matter of the configuration to set the appropriate distance to
>allow reasonable allocation fallback strategy.
Good question. We actually tried reusing the NUMA balancing mechanism
to do page-fault triggered migration. move_pages() only calls
change_prot_numa(). It turns out the two migration types have
different purposes (one for hotness, the other for the home node) and
hence different implementation details. We ended up modifying quite a
few pieces of NUMA balancing logic -- removing rate limiting, changing
the target node logic, etc.
Those look like unnecessary complexities for this post. This v2
patchset mainly fulfills our first milestone goal: a minimal viable
solution that's relatively clean to backport. Even when preparing new
upstreamable versions, it may be good to keep things simple for the
initial upstream inclusion.
>I haven't looked at the implementation yet but if you are proposing a
>special cased zone lists then this is something CDM (Coherent Device
>Memory) was trying to do two years ago and there was quite some
>skepticism in the approach.
It looks like we are pretty different from CDM. :)
We create new NUMA nodes rather than CDM's new ZONE.
The zonelists modification is just to keep PMEM nodes more separated.
Thanks,
Fengguang
Thread overview: 99+ messages
2018-12-26 13:14 Fengguang Wu
2018-12-26 13:14 ` [RFC][PATCH v2 01/21] e820: cheat PMEM as DRAM Fengguang Wu
2018-12-27 3:41 ` Matthew Wilcox
2018-12-27 4:11 ` Fengguang Wu
2018-12-27 5:13 ` Dan Williams
2018-12-27 19:32 ` Yang Shi
2018-12-28 3:27 ` Fengguang Wu
2018-12-26 13:14 ` [RFC][PATCH v2 02/21] acpi/numa: memorize NUMA node type from SRAT table Fengguang Wu
2018-12-26 13:14 ` [RFC][PATCH v2 03/21] x86/numa_emulation: fix fake NUMA in uniform case Fengguang Wu
2018-12-26 13:14 ` [RFC][PATCH v2 04/21] x86/numa_emulation: pass numa node type to fake nodes Fengguang Wu
2018-12-26 13:14 ` [RFC][PATCH v2 05/21] mmzone: new pgdat flags for DRAM and PMEM Fengguang Wu
2018-12-26 13:14 ` [RFC][PATCH v2 06/21] x86,numa: update numa node type Fengguang Wu
2018-12-26 13:14 ` [RFC][PATCH v2 07/21] mm: export node type {pmem|dram} under /sys/bus/node Fengguang Wu
2018-12-26 13:14 ` [RFC][PATCH v2 08/21] mm: introduce and export pgdat peer_node Fengguang Wu
2018-12-27 20:07 ` Christopher Lameter
2018-12-28 2:31 ` Fengguang Wu
2018-12-26 13:14 ` [RFC][PATCH v2 09/21] mm: avoid duplicate peer target node Fengguang Wu
2018-12-26 13:14 ` [RFC][PATCH v2 10/21] mm: build separate zonelist for PMEM and DRAM node Fengguang Wu
2019-01-01 9:14 ` Aneesh Kumar K.V
2019-01-07 9:57 ` Fengguang Wu
2019-01-07 14:09 ` Aneesh Kumar K.V
2018-12-26 13:14 ` [RFC][PATCH v2 11/21] kvm: allocate page table pages from DRAM Fengguang Wu
2019-01-01 9:23 ` Aneesh Kumar K.V
2019-01-02 0:59 ` Yuan Yao
2019-01-02 16:47 ` Dave Hansen
2019-01-07 10:21 ` Fengguang Wu
2018-12-26 13:14 ` [RFC][PATCH v2 12/21] x86/pgtable: " Fengguang Wu
2018-12-26 13:14 ` [RFC][PATCH v2 13/21] x86/pgtable: dont check PMD accessed bit Fengguang Wu
2018-12-26 13:15 ` [RFC][PATCH v2 14/21] kvm: register in mm_struct Fengguang Wu
2019-02-02 6:57 ` Peter Xu
2019-02-02 10:50 ` Fengguang Wu
2019-02-04 10:46 ` Paolo Bonzini
2018-12-26 13:15 ` [RFC][PATCH v2 15/21] ept-idle: EPT walk for virtual machine Fengguang Wu
2018-12-26 13:15 ` [RFC][PATCH v2 16/21] mm-idle: mm_walk for normal task Fengguang Wu
2018-12-26 13:15 ` [RFC][PATCH v2 17/21] proc: introduce /proc/PID/idle_pages Fengguang Wu
2018-12-26 13:15 ` [RFC][PATCH v2 18/21] kvm-ept-idle: enable module Fengguang Wu
2018-12-26 13:15 ` [RFC][PATCH v2 19/21] mm/migrate.c: add move_pages(MPOL_MF_SW_YOUNG) flag Fengguang Wu
2018-12-26 13:15 ` [RFC][PATCH v2 20/21] mm/vmscan.c: migrate anon DRAM pages to PMEM node Fengguang Wu
2018-12-26 13:15 ` [RFC][PATCH v2 21/21] mm/vmscan.c: shrink anon list if can migrate to PMEM Fengguang Wu
2018-12-27 20:31 ` [RFC][PATCH v2 00/21] PMEM NUMA node and hotness accounting/migration Michal Hocko
2018-12-28 5:08 ` Fengguang Wu
2018-12-28 8:41 ` Michal Hocko
2018-12-28 9:42 ` Fengguang Wu [this message]
2018-12-28 12:15 ` Michal Hocko
2018-12-28 13:15 ` Fengguang Wu
2018-12-28 19:46 ` Michal Hocko
2018-12-28 13:31 ` Fengguang Wu
2018-12-28 18:28 ` Yang Shi
2018-12-28 19:52 ` Michal Hocko
2019-01-02 12:21 ` Jonathan Cameron
2019-01-08 14:52 ` Michal Hocko
2019-01-10 15:53 ` Jerome Glisse
2019-01-10 16:42 ` Michal Hocko
2019-01-10 17:42 ` Jerome Glisse
2019-01-10 18:26 ` Jonathan Cameron
2019-01-28 17:42 ` Jonathan Cameron
2019-01-29 2:00 ` Fengguang Wu
2019-01-03 10:57 ` Mel Gorman
2019-01-10 16:25 ` Jerome Glisse
2019-01-10 16:50 ` Michal Hocko
2019-01-10 18:02 ` Jerome Glisse
2019-01-02 18:12 ` Dave Hansen
2019-01-08 14:53 ` Michal Hocko