linux-mm.kvack.org archive mirror
From: "Huang, Ying" <ying.huang@intel.com>
To: Michal Hocko <mhocko@suse.com>
Cc: Bharata B Rao <bharata@amd.com>,
	 Aneesh Kumar K V <aneesh.kumar@linux.ibm.com>,
	 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	 Andrew Morton <akpm@linux-foundation.org>,
	Alistair Popple <apopple@nvidia.com>,
	 Dan Williams <dan.j.williams@intel.com>,
	 Dave Hansen <dave.hansen@intel.com>,
	Davidlohr Bueso <dave@stgolabs.net>,
	 Hesham Almatary <hesham.almatary@huawei.com>,
	 Jagdish Gediya <jvgediya.oss@gmail.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	 Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	 Tim Chen <tim.c.chen@intel.com>,  Wei Xu <weixugc@google.com>,
	 Yang Shi <shy828301@gmail.com>
Subject: Re: [RFC] memory tiering: use small chunk size and more tiers
Date: Wed, 02 Nov 2022 16:28:08 +0800	[thread overview]
Message-ID: <877d0dbw13.fsf@yhuang6-desk2.ccr.corp.intel.com> (raw)
In-Reply-To: <Y2Inot4i4xUGH60O@dhcp22.suse.cz> (Michal Hocko's message of "Wed, 2 Nov 2022 09:17:38 +0100")

Michal Hocko <mhocko@suse.com> writes:

> On Wed 02-11-22 16:02:54, Huang, Ying wrote:
>> Michal Hocko <mhocko@suse.com> writes:
>> 
>> > On Wed 02-11-22 08:39:49, Huang, Ying wrote:
>> >> Michal Hocko <mhocko@suse.com> writes:
>> >> 
>> >> > On Mon 31-10-22 09:33:49, Huang, Ying wrote:
>> >> > [...]
>> >> >> In the upstream implementation, 4 tiers are possible below DRAM.  That's
>> >> >> enough for now.  But in the long run, it may be better to define more.
>> >> >> 100 possible tiers below DRAM may be too extreme.
>> >> >
>> >> > I am just curious. Is any configuration with more than a couple of
>> >> > tiers even manageable? I mean, applications have been struggling even
>> >> > with regular NUMA systems for years, and the vast majority of them
>> >> > are largely NUMA unaware. How are they going to configure for a more
>> >> > complex system when a) there is no resource access control, so
>> >> > whatever you aim for might not be available, and b) in which
>> >> > situations is there going to be demand for only a subset of tiers
>> >> > (GPU memory?)?
>> >> 
>> >> Sorry for the confusion.  I think that there are only several (fewer
>> >> than 10) tiers in a system in practice.  Yes, here I suggested
>> >> defining 100 (10 in the later text) POSSIBLE tiers below DRAM.  My
>> >> intention isn't to manage a system with tens of memory tiers.
>> >> Instead, my intention is to avoid putting 2 memory types into one
>> >> memory tier by accident, by making the abstract distance range of
>> >> each memory tier as small as possible.  The more possible memory
>> >> tiers, the smaller the abstract distance range of each memory tier.
>> >
>> > TBH I do not really understand how tweaking ranges helps anything.
>> > IIUC drivers are free to assign any abstract distance so they will clash
>> > without any higher level coordination.
>> 
>> Yes.  That's possible.  Each memory tier corresponds to one abstract
>> distance range.  The larger the range is, the higher the possibility
>> of clashing.  So I suggest making the abstract distance range smaller
>> to reduce the possibility of clashing.
>
> I am sorry but I really do not understand how the size of the range
> actually addresses a fundamental issue that each driver simply picks
> what it wants. Is there any enumeration defining the basic
> characteristics of each tier? How does a driver developer know which
> tier to assign their driver to?

The smaller range size will not guarantee anything.  It just tries to
improve the default behavior.

The drivers are expected to assign the abstract distance based on the
memory latency/bandwidth, etc.  And the abstract distance range of a
memory tier corresponds to a memory latency/bandwidth range too.  So, if
the size of the abstract distance range is smaller, the possibility for
two types of memory with different latency/bandwidth to clash on
the abstract distance range is lower.

Clashing isn't a total disaster.  We plan to provide a per-memory-type
knob to offset the abstract distance provided by the driver.  Then, we
can move clashing memory types apart if necessary.

Best Regards,
Huang, Ying



Thread overview: 17+ messages
2022-10-27  6:59 Huang Ying
2022-10-27 10:45 ` Aneesh Kumar K V
2022-10-28  3:03   ` Huang, Ying
2022-10-28  5:05     ` Aneesh Kumar K V
2022-10-28  5:46       ` Huang, Ying
2022-10-28  8:04         ` Bharata B Rao
2022-10-28  8:33           ` Huang, Ying
2022-10-28 13:53             ` Bharata B Rao
2022-10-31  1:33               ` Huang, Ying
2022-11-01 14:34                 ` Michal Hocko
2022-11-02  0:39                   ` Huang, Ying
2022-11-02  7:51                     ` Michal Hocko
2022-11-02  8:02                       ` Huang, Ying
2022-11-02  8:17                         ` Michal Hocko
2022-11-02  8:28                           ` Huang, Ying [this message]
2022-11-02  8:39                             ` Michal Hocko
2022-11-02  8:45                               ` Huang, Ying
