linux-mm.kvack.org archive mirror
From: Yang Shi <yang.shi@linux.alibaba.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Mel Gorman <mgorman@techsingularity.net>,
	Andrew Morton <akpm@linux-foundation.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: [HELP] How to get task_struct from mm
Date: Fri, 31 May 2019 20:51:05 +0800
Message-ID: <352de468-9091-9866-ccbd-10d80c25ebb4@linux.alibaba.com>
In-Reply-To: <20190530154119.GF6703@dhcp22.suse.cz>



On 5/30/19 11:41 PM, Michal Hocko wrote:
> On Thu 30-05-19 14:57:46, Yang Shi wrote:
>> Hi folks,
>>
>>
>> As we discussed about page demotion for PMEM at LSF/MM, the demotion
>> should respect the mempolicy and allowed mems of the process that the
>> page (anonymous page only for now) belongs to.
> The cpusets memory mask (aka mems_allowed) is indeed tricky and
> somewhat awkward.  It is inherently an address space property and I
> never understood why we have it per _thread_. That just doesn't make
> any sense to me and leads to weird corner cases. What should happen
> if different threads disagree about the allocation affinity while
> working on a shared address space?

I suppose (just my guess) such a restriction should only apply to the
first allocation, just like a memcg charge: whoever allocates first has
their policy applied (see the sketch below).
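
For reference, a heavily trimmed sketch of that "first toucher"
behavior, loosely based on the ~v5.1 anonymous fault path (mm/memory.c;
error handling and pte setup dropped): the page is charged against the
mm of whoever faults it in first, and later sharers inherit that
accounting.

/* Heavily trimmed from the ~v5.1 do_anonymous_page() fault path. */
static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct mem_cgroup *memcg;
	struct page *page;

	page = alloc_zeroed_user_highpage_movable(vma, vmf->address);
	if (!page)
		return VM_FAULT_OOM;

	/* charge against the first toucher's mm: who touches first, pays */
	if (mem_cgroup_try_charge(page, vma->vm_mm, GFP_KERNEL,
				  &memcg, false)) {
		put_page(page);
		return VM_FAULT_OOM;
	}
	/* ... set up the pte, mem_cgroup_commit_charge(), etc. ... */
	return 0;
}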

>   
>> The vma that the page is mapped to can be retrieved from an rmap walk
>> easily, but we need to know the task_struct that the vma belongs to.
>> It looks like there is no such API, and container_of() does not work
>> with a pointer member.
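
(For context, the reason container_of() cannot help here: it is pure
offset arithmetic, so it only recovers a containing struct when the
member is embedded at a fixed offset, whereas task_struct merely holds
a pointer to the mm. A minimal standalone illustration, struct layout
simplified:)

#include <stddef.h>

/* container_of() just subtracts the member's offset inside the type */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct mm_struct;

struct task_struct {
	/* ... */
	struct mm_struct *mm;	/* a pointer, not an embedded member */
	/* ... */
};

/* Given a struct mm_struct *, there is no offset to subtract: the
 * mm_struct is a separate allocation, and many tasks (threads and
 * other CLONE_VM users) may point at the same one, so no generic
 * mm -> task_struct mapping can exist.  mm->owner (CONFIG_MEMCG) is
 * the only reverse link, and the hope is to get rid of it.
 */
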
> I do not think this is a good idea. As you point out in the reply we
> have that for memcgs but we really hope to get rid of mm->owner there
> as well. It is just more tricky there. Moreover such a reverse mapping
> would be incorrect. Just think of disagreeing yet overlapping cpusets
> for different threads mapping the same page.
>
> Is it such a big deal to document that node migration is not
> compatible with cpusets?

Not only cpusets: get_vma_policy() also needs to find the task_struct
from the vma. Currently, get_vma_policy() just uses "current" (see the
sketch below), so it returns the current process's mempolicy whenever
the vma doesn't have one of its own. For the node migration case,
"current" is definitely not correct.
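
A trimmed sketch of that fallback, matching mm/mempolicy.c around v5.1
(refcounting details omitted):

/* If the vma carries no policy of its own, the fallback is the
 * *current* task's policy.  That is fine in the fault path, where
 * current is the faulting task, but wrong in a demotion/rmap context
 * operating on some other process's page.
 */
static struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
					unsigned long addr)
{
	struct mempolicy *pol = __get_vma_policy(vma, addr);

	if (!pol)
		pol = get_task_policy(current);	/* <-- wrong task here */

	return pol;
}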

It looks like there is no easy way to work around this unless we
declare node migration incompatible with both cpusets and mempolicy.
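
For completeness, a sketch of why the rmap walk only gets us as far as
the mm (callback signature per ~v5.1 include/linux/rmap.h; the
demote_rmap_one() name is made up for illustration):

/* Hypothetical demotion callback: rmap_walk() hands us each mapping
 * vma, and vma->vm_mm gives the address space -- but nothing generic
 * links an mm_struct back to the tasks sharing it, which is exactly
 * the missing piece discussed above.
 */
static bool demote_rmap_one(struct page *page, struct vm_area_struct *vma,
			    unsigned long addr, void *arg)
{
	struct mm_struct *mm = vma->vm_mm;	/* address space, not a task */

	/* task mempolicy and mems_allowed live on task_struct, which we
	 * cannot reach from mm without something like mm->owner
	 */
	return true;	/* keep walking the remaining mappings */
}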




Thread overview: 5+ messages
2019-05-30  6:57 Yang Shi
2019-05-30  7:26 ` Yang Shi
2019-05-30 15:41 ` Michal Hocko
2019-05-31 12:51   ` Yang Shi [this message]
2019-05-31 13:56     ` Michal Hocko

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=352de468-9091-9866-ccbd-10d80c25ebb4@linux.alibaba.com \
    --to=yang.shi@linux.alibaba.com \
    --cc=akpm@linux-foundation.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=mgorman@techsingularity.net \
    --cc=mhocko@kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox; see the mirroring instructions
for how to clone and mirror all data and code used for this inbox.