From: Arjan van de Ven <arjan@linux.intel.com>
To: Jonathan Cameron <Jonathan.Cameron@Huawei.com>,
Huang Ying <ying.huang@intel.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
Mel Gorman <mgorman@techsingularity.net>,
Vlastimil Babka <vbabka@suse.cz>,
David Hildenbrand <david@redhat.com>,
Johannes Weiner <jweiner@redhat.com>,
Dave Hansen <dave.hansen@linux.intel.com>,
Michal Hocko <mhocko@suse.com>,
Pavel Tatashin <pasha.tatashin@soleen.com>,
Matthew Wilcox <willy@infradead.org>
Subject: Re: [RFC 0/6] mm: improve page allocator scalability via splitting zones
Date: Thu, 11 May 2023 06:07:07 -0700
Message-ID: <9ebd85b6-61da-c868-240d-0ea99c8e147d@linux.intel.com>
In-Reply-To: <20230511113009.00004821@Huawei.com>
On 5/11/2023 3:30 AM, Jonathan Cameron wrote:
> Hi,
>
> Interesting idea. I'm curious, though, whether this can suffer from
> imbalance problems: due to uneven allocations from particular CPUs,
> could you end up with all page faults happening in one zone and the
> original contention problem coming back? Or am I missing some process
> that would correct that imbalance?
>
> Jonathan
Well, the first line of defense is the per-CPU page lists.
It can well be that a couple of CPUs, all in the same zone, hit some
high-frequency allocation pattern; that by itself isn't the real issue.
Note the "a couple". It only becomes a problem if a high number of CPUs
start hitting this, and by splitting the total into smaller pieces that
is going to be much, much less likely, since the number of CPUs per zone
is simply smaller.
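For context, a minimal userspace sketch of the mechanism being described:
each CPU allocates from its own per-CPU list first and only takes the
shared zone lock when that list needs refilling, so zone-lock contention
grows with how many CPUs miss their local lists at once. Splitting a zone
means fewer CPUs share each lock. The structure names, helper, and batch
size below are illustrative assumptions, not the kernel's actual code.

    /* Standalone sketch (userspace C), not the kernel implementation. */
    #include <pthread.h>

    struct page {
        struct page *next;
    };

    struct zone {
        pthread_spinlock_t lock;   /* the shared, contended resource */
        struct page *free_list;    /* buddy free lists, collapsed to one */
    };

    struct per_cpu_pages {
        int count;
        struct page *list;         /* touched only by the owning CPU */
    };

    /* Move up to 'batch' pages from the zone; caller holds zone->lock. */
    static void refill_pcp(struct zone *z, struct per_cpu_pages *pcp, int batch)
    {
        while (batch-- && z->free_list) {
            struct page *p = z->free_list;
            z->free_list = p->next;
            p->next = pcp->list;
            pcp->list = p;
            pcp->count++;
        }
    }

    static struct page *alloc_page_from(struct zone *z, struct per_cpu_pages *pcp)
    {
        struct page *page;

        if (!pcp->count) {
            /* Slow path: only here is the shared zone lock taken. */
            pthread_spin_lock(&z->lock);
            refill_pcp(z, pcp, 31);    /* batch size chosen arbitrarily */
            pthread_spin_unlock(&z->lock);
        }
        /* Fast path: pop from the local list, no shared lock needed. */
        page = pcp->list;
        if (page) {
            pcp->list = page->next;
            pcp->count--;
        }
        return page;
    }

With more zones per zone type, each zone's lock is shared by fewer CPUs,
so the probability that many CPUs hit the slow path on the same lock at
the same time goes down, which is the scalability argument of the series.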
Thread overview: 21+ messages
2023-05-11 6:56 Huang Ying
2023-05-11 6:56 ` [RFC 1/6] mm: distinguish zone type and zone instance explicitly Huang Ying
2023-05-11 6:56 ` [RFC 2/6] mm: add struct zone_type_struct to describe zone type Huang Ying
2023-05-11 6:56 ` [RFC 3/6] mm: support multiple zone instances per zone type in memory online Huang Ying
2023-05-11 6:56 ` [RFC 4/6] mm: avoid show invalid zone in /proc/zoneinfo Huang Ying
2023-05-11 6:56 ` [RFC 5/6] mm: create multiple zone instances for one zone type based on memory size Huang Ying
2023-05-11 6:56 ` [RFC 6/6] mm: prefer different zone list on different logical CPU Huang Ying
2023-05-11 10:30 ` [RFC 0/6] mm: improve page allocator scalability via splitting zones Jonathan Cameron
2023-05-11 13:07 ` Arjan van de Ven [this message]
2023-05-11 14:23 ` Dave Hansen
2023-05-12 3:08 ` Huang, Ying
2023-05-11 15:05 ` Michal Hocko
2023-05-12 2:55 ` Huang, Ying
2023-05-15 11:14 ` Michal Hocko
2023-05-16 9:38 ` Huang, Ying
2023-05-16 10:30 ` David Hildenbrand
2023-05-17 1:34 ` Huang, Ying
2023-05-17 8:09 ` David Hildenbrand
2023-05-18 8:06 ` Huang, Ying
2023-05-24 12:30 ` Michal Hocko
2023-05-29 1:13 ` Huang, Ying