From: Gang Li <ligang.bdlg@bytedance.com>
To: Michal Hocko <mhocko@suse.com>
Cc: "Michal Koutný" <mkoutny@suse.com>,
	"Waiman Long" <longman@redhat.com>,
	cgroups@vger.kernel.org, linux-mm@kvack.org, rientjes@google.com,
	"Zefan Li" <lizefan.x@bytedance.com>,
	linux-kernel@vger.kernel.org
Subject: Re: Re: Re: [PATCH v4] mm: oom: introduce cpuset oom
Date: Tue, 11 Apr 2023 21:17:27 +0800	[thread overview]
Message-ID: <e6ff9768-f41d-553c-e858-1b244a461526@bytedance.com> (raw)
In-Reply-To: <ZDVcwuiu3rWEFiTE@dhcp22.suse.cz>



On 2023/4/11 21:12, Michal Hocko wrote:
> On Tue 11-04-23 21:04:18, Gang Li wrote:
>>
>>
>> On 2023/4/11 20:23, Michal Koutný wrote:
>>> Hello.
>>>
>>> On Tue, Apr 11, 2023 at 02:58:15PM +0800, Gang Li <ligang.bdlg@bytedance.com> wrote:
>>>> +	cpuset_for_each_descendant_pre(cs, pos_css, &top_cpuset) {
>>>> +		if (nodes_equal(cs->mems_allowed, task_cs(current)->mems_allowed)) {
>>>> +			css_task_iter_start(&(cs->css), CSS_TASK_ITER_PROCS, &it);
>>>> +			while (!ret && (task = css_task_iter_next(&it)))
>>>> +				ret = fn(task, arg);
>>>> +			css_task_iter_end(&it);
>>>> +		}
>>>> +	}
>>>> +	rcu_read_unlock();
>>>> +	cpuset_read_unlock();
>>>> +	return ret;
>>>> +}
>>>
>>> I see this traverses all cpusets without the hierarchy actually
>>> mattering that much. Wouldn't the CONSTRAINT_CPUSET better achieved by
>>> globally (or per-memcg) scanning all processes and filtering with:
>>
>> Oh I see, you mean scanning all processes in all cpusets and scanning
>> all processes globally are equivalent.
> 
> Why can't you simply select a process from the cpuset the allocating
> process belongs to? I thought the whole idea was to handle well
> partitioned workloads.
>

Yes I can :) It's much easier.
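
Something like this, I suppose (an untested sketch only, reusing the
iteration and locking of the v4 patch but dropping the
cpuset_for_each_descendant_pre() walk; the function name and callback
signature here are placeholders):

static int cpuset_scan_tasks(int (*fn)(struct task_struct *, void *), void *arg)
{
	struct css_task_iter it;
	struct task_struct *task;
	struct cpuset *cs;
	int ret = 0;

	cpuset_read_lock();
	rcu_read_lock();
	/* Scan only the cpuset of the allocating task. */
	cs = task_cs(current);
	css_task_iter_start(&cs->css, CSS_TASK_ITER_PROCS, &it);
	while (!ret && (task = css_task_iter_next(&it)))
		ret = fn(task, arg);
	css_task_iter_end(&it);
	rcu_read_unlock();
	cpuset_read_unlock();
	return ret;
}

For well partitioned workloads this alone should be enough.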

>>> 	nodes_intersects(current->mems_allowed, p->mems_allowed)
>>
>> Perhaps it would be better to use nodes_equal first, and if no suitable
>> victim is found, fall back to nodes_intersects?
> 
> How can this happen?
> 
>> The NUMA balancing mechanism tends to keep memory on the same NUMA node,
>> and if the selected victim's memory happens to be on a node that does not
>> intersect with the current process's nodes, we still won't be able to
>> free up any memory.
> 
> AFAIR NUMA balancing doesn't touch processes with memory policies.
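
For reference, the nodes_intersects() filtering suggested above would look
roughly like this as a plain global task scan (again an untested sketch
with placeholder names; nodes_equal() would be the stricter variant I
mentioned):

static int oom_scan_intersecting_tasks(int (*fn)(struct task_struct *, void *),
				       void *arg)
{
	struct task_struct *p;
	int ret = 0;

	rcu_read_lock();
	for_each_process(p) {
		/* Skip tasks whose allowed nodes do not overlap ours. */
		if (!nodes_intersects(current->mems_allowed, p->mems_allowed))
			continue;
		ret = fn(p, arg);
		if (ret)
			break;
	}
	rcu_read_unlock();
	return ret;
}

Presumably oom_evaluate_task() would be passed as fn here, as in the other
OOM scan paths.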


