linux-mm.kvack.org archive mirror
From: Mel Gorman <mgorman@techsingularity.net>
To: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Michal Hocko <mhocko@suse.com>, Vlastimil Babka <vbabka@suse.cz>,
	Thomas Gleixner <tglx@linutronix.de>,
	Borislav Petkov <bp@alien8.de>, Linux-MM <linux-mm@kvack.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 0/3] Recalculate per-cpu page allocator batch and high limits after deferred meminit
Date: Fri, 18 Oct 2019 13:54:49 +0100	[thread overview]
Message-ID: <20191018125449.GJ3321@techsingularity.net> (raw)
In-Reply-To: <20191018115849.GH4065@codeblueprint.co.uk>

On Fri, Oct 18, 2019 at 12:58:49PM +0100, Matt Fleming wrote:
> On Fri, 18 Oct, at 11:56:03AM, Mel Gorman wrote:
> > A private report stated that system CPU usage was excessive on an AMD
> > EPYC 2 machine while building kernels with much longer build times than
> > expected. The issue is partially explained by high zone lock contention
> > due to the per-cpu page allocator batch and high limits being calculated
> > incorrectly. This series addresses a large chunk of the problem. Patch 1
> > is mostly cosmetic but prepares for patch 2, which is the real fix. Patch
> > 3 is definitely cosmetic but was noticed while implementing the fix. Proper
> > details are in the changelog for patch 2.
> > 
> >  include/linux/mm.h |  3 ---
> >  mm/internal.h      |  3 +++
> >  mm/page_alloc.c    | 33 ++++++++++++++++++++-------------
> >  3 files changed, 23 insertions(+), 16 deletions(-)
> 
> Just to confirm, these patches don't fix the issue we're seeing on the
> EPYC 2 machines, but they do return the batch sizes to sensible values.

To be clear, does the patch a) fix *some* of the issue, with something
else also going on that needs to be chased down, or b) have no impact on
build time or system CPU usage on your machine?

-- 
Mel Gorman
SUSE Labs
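
For context on why stale limits hurt: the pcp batch value is derived from a
zone's managed page count, so computing it before deferred meminit finishes
(when only part of the zone has been initialised) gives much smaller batch
and high values than the final zone size warrants. Below is a minimal
userspace sketch of that scaling, loosely modelled on zone_batchsize() in
mm/page_alloc.c; the zone sizes are hypothetical and the "high = 6 * batch"
rule is an assumed pageset default of the era, not a value from the report.

/*
 * Sketch only: approximates how the pcp batch/high limits scale with a
 * zone's managed pages. Not the kernel code itself.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Round down to a power of two; the kernel clamps batch to 2^n - 1. */
static unsigned long rounddown_pow2(unsigned long n)
{
    unsigned long p = 1;

    while (p * 2 <= n)
        p *= 2;
    return p;
}

/* Approximation of zone_batchsize(): ~0.1% of the zone, capped near 1MB. */
static unsigned long batchsize(unsigned long managed_pages)
{
    unsigned long batch = managed_pages / 1024;

    if (batch * PAGE_SIZE > 1024 * 1024)
        batch = (1024 * 1024) / PAGE_SIZE;
    batch /= 4;
    if (batch < 1)
        batch = 1;
    return rounddown_pow2(batch + batch / 2) - 1;
}

int main(void)
{
    /* Hypothetical managed pages: early in boot vs. after deferred init. */
    unsigned long early = 32768UL;     /* ~128MB initialised early      */
    unsigned long final = 16777216UL;  /* ~64GB once init has completed */
    unsigned long b;

    b = batchsize(early);
    printf("early boot: batch=%lu high=%lu\n", b, 6 * b);
    b = batchsize(final);
    printf("after init: batch=%lu high=%lu\n", b, 6 * b);
    return 0;
}

With those assumed sizes the early calculation settles on batch=7/high=42,
while the fully initialised zone would warrant batch=63/high=378; a gap of
that order is what recalculating the limits after deferred meminit completes
is meant to close.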



Thread overview: 16+ messages
2019-10-18 10:56 Mel Gorman
2019-10-18 10:56 ` [PATCH 1/3] mm, pcp: Share common code between memory hotplug and percpu sysctl handler Mel Gorman
2019-10-18 11:57   ` Matt Fleming
2019-10-18 12:51   ` Michal Hocko
2019-10-18 10:56 ` [PATCH 2/3] mm, meminit: Recalculate pcpu batch and high limits after init completes Mel Gorman
2019-10-18 11:57   ` Matt Fleming
2019-10-18 13:01   ` Michal Hocko
2019-10-18 14:09     ` Mel Gorman
2019-10-19  1:40       ` Andrew Morton
2019-10-20  9:32         ` Mel Gorman
2019-10-18 10:56 ` [PATCH 3/3] mm, pcpu: Make zone pcp updates and reset internal to the mm Mel Gorman
2019-10-18 11:57   ` Matt Fleming
2019-10-18 13:02   ` Michal Hocko
2019-10-18 11:58 ` [PATCH 0/3] Recalculate per-cpu page allocator batch and high limits after deferred meminit Matt Fleming
2019-10-18 12:54   ` Mel Gorman [this message]
2019-10-18 14:48     ` Matt Fleming

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the raw message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20191018125449.GJ3321@techsingularity.net \
    --to=mgorman@techsingularity.net \
    --cc=akpm@linux-foundation.org \
    --cc=bp@alien8.de \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=matt@codeblueprint.co.uk \
    --cc=mhocko@suse.com \
    --cc=tglx@linutronix.de \
    --cc=vbabka@suse.cz \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try replying via a mailto: link for this message.

Be sure your reply has a Subject: header at the top and a blank line
before the message body.