From: Michal Hocko <mhocko@suse.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: Feng Tang <feng.tang@intel.com>,
	Aneesh Kumar K V <aneesh.kumar@linux.ibm.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Johannes Weiner <hannes@cmpxchg.org>, Tejun Heo <tj@kernel.org>,
	Zefan Li <lizefan.x@bytedance.com>,
	Waiman Long <longman@redhat.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"cgroups@vger.kernel.org" <cgroups@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"Hansen, Dave" <dave.hansen@intel.com>,
	"Chen, Tim C" <tim.c.chen@intel.com>,
	"Yin, Fengwei" <fengwei.yin@intel.com>
Subject: Re: [PATCH] mm/vmscan: respect cpuset policy during page demotion
Date: Thu, 27 Oct 2022 14:29:49 +0200
Message-ID: <Y1p5vaN1AWhpNWZx@dhcp22.suse.cz>
In-Reply-To: <87fsf9k3yg.fsf@yhuang6-desk2.ccr.corp.intel.com>

On Thu 27-10-22 17:31:35, Huang, Ying wrote:
> Michal Hocko <mhocko@suse.com> writes:
> 
> > On Thu 27-10-22 15:39:00, Huang, Ying wrote:
> >> Michal Hocko <mhocko@suse.com> writes:
> >> 
> >> > On Thu 27-10-22 14:47:22, Huang, Ying wrote:
> >> >> Michal Hocko <mhocko@suse.com> writes:
> >> > [...]
> >> >> > I can imagine workloads which wouldn't like to get their memory
> >> >> > demoted for some reason, but wouldn't it be more practical to state
> >> >> > that explicitly (e.g. via prctl) rather than to configure it
> >> >> > indirectly via cpusets/memory policies?
> >> >> 
> >> >> If my understanding is correct, prctl() configures the process or
> >> >> thread.
> >> >
> >> > Not necessarily. There are properties which are per address space, like
> >> > PR_[GS]ET_THP_DISABLE. This could be very similar.
> >> >
> >> >> How can we get process/thread configuration at demotion time?
> >> >
> >> > As already pointed out in previous emails, you could hook into the
> >> > folio_check_references path, more specifically folio_referenced_one,
> >> > where you already have all that you need - all the VMAs mapping the
> >> > page - and then it is trivial to get the corresponding vm_mm. If at
> >> > least one of them has the flag set then demotion is not allowed
> >> > (essentially the same model as VM_LOCKED).
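
Something along these lines, as a completely untested sketch - the
PR_SET_DEMOTION_DISABLE and MMF_DEMOTION_DISABLED names (and the bit
value) are invented here purely for illustration, modelled on
PR_SET_THP_DISABLE/MMF_DISABLE_THP:

/* include/linux/sched/coredump.h: new per-mm flag, bit picked arbitrarily */
#define MMF_DEMOTION_DISABLED   28

/* kernel/sys.c, in sys_prctl(), mirroring the PR_SET_THP_DISABLE case */
        case PR_SET_DEMOTION_DISABLE:
                if (arg3 || arg4 || arg5)
                        return -EINVAL;
                if (mmap_write_lock_killable(me->mm))
                        return -EINTR;
                if (arg2)
                        set_bit(MMF_DEMOTION_DISABLED, &me->mm->flags);
                else
                        clear_bit(MMF_DEMOTION_DISABLED, &me->mm->flags);
                mmap_write_unlock(me->mm);
                break;

/* mm/rmap.c, inside the page_vma_mapped_walk() loop of
 * folio_referenced_one(), next to the existing VM_LOCKED handling */
        if (test_bit(MMF_DEMOTION_DISABLED, &vma->vm_mm->flags)) {
                /*
                 * One objecting mapping is enough, same model as VM_LOCKED.
                 * Reusing pra->vm_flags here just shows the plumbing; a
                 * dedicated "no demotion" bit would be cleaner.
                 */
                pra->vm_flags |= VM_LOCKED;
                page_vma_mapped_walk_done(&pvmw);
                return false;   /* break the rmap walk */
        }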
> >> 
> >> Got it!  Thanks for the detailed explanation.
> >> 
> >> One bit may not be sufficient.  For example, if we want to avoid or
> >> control cross-socket demotion and still allow demoting to slow memory
> >> nodes in the local socket, we need to specify a node mask that excludes
> >> some NUMA nodes from the demotion targets.
> >
> > Isn't this something to be configured on the demotion topology side? Or
> > do you expect there will be per-process/address-space usecases? I mean
> > different processes running on the same topology, one requesting local
> > demotion while another is fine with the whole demotion topology?
> 
> I think that it's possible for different processes to have different
> requirements.
> 
> - Some processes don't care about where the memory is placed; they
>   prefer local memory and fall back to remote if there is no free space.
>
> - Some processes want to avoid cross-socket traffic, so they bind to the
>   nodes of the local socket.
>
> - Some processes want to avoid using slow memory, so they bind to fast
>   memory nodes only.

Yes, I do understand that. Do you have any specific examples in mind?
[...]
> > If we really need/want to give fine-grained control over the demotion
> > nodemask then we would have to go with the vma->mempolicy interface. In
> > any case a per-process on/off knob sounds like a reasonable first step
> > before we learn more about real usecases.
> 
> Yes.  A per-mm or per-vma property is much better than a per-task
> property.  Another possibility: how about adding a new flag to the
> set_mempolicy() system call to set a per-mm mempolicy?  `numactl` could
> use that by default.

Do you mean a flag to control whether the given policy is applied to a
task or mm?
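
If so, from userspace it could look roughly like the sketch below -
purely illustrative, MPOL_F_WHOLE_MM is an invented flag name with an
arbitrarily picked value:

#include <numa.h>
#include <numaif.h>
#include <stdio.h>

/* Hypothetical mode flag, does not exist today. */
#define MPOL_F_WHOLE_MM         (1 << 12)

static void bind_whole_mm_to_fast_nodes(void)
{
        /* example: nodes 0-1 are the fast (DRAM) tier on this machine */
        struct bitmask *fast = numa_parse_nodestring("0-1");

        if (!fast)
                return;

        /*
         * Without the flag this sets the calling thread's policy only;
         * with the hypothetical MPOL_F_WHOLE_MM it would apply to the
         * whole mm, so reclaim/demotion could honour it for all threads.
         */
        if (set_mempolicy(MPOL_BIND | MPOL_F_WHOLE_MM,
                          fast->maskp, fast->size + 1))
                perror("set_mempolicy");

        numa_free_nodemask(fast);
}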
-- 
Michal Hocko
SUSE Labs

