linux-mm.kvack.org archive mirror
From: Mel Gorman <mgorman@techsingularity.net>
To: Dave Hansen <dave.hansen@intel.com>
Cc: Aaron Lu <aaron.lu@intel.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Huang Ying <ying.huang@intel.com>,
	Kemi Wang <kemi.wang@intel.com>,
	Tim Chen <tim.c.chen@linux.intel.com>,
	Andi Kleen <ak@linux.intel.com>, Michal Hocko <mhocko@suse.com>,
	Vlastimil Babka <vbabka@suse.cz>
Subject: Re: [PATCH 2/2] free_pcppages_bulk: prefetch buddy while not holding lock
Date: Wed, 24 Jan 2018 18:19:21 +0000	[thread overview]
Message-ID: <20180124181921.vnivr32q72ey7p5i@techsingularity.net> (raw)
In-Reply-To: <148a42d8-8306-2f2f-7f7c-86bc118f8ccd@intel.com>

On Wed, Jan 24, 2018 at 08:57:43AM -0800, Dave Hansen wrote:
> On 01/24/2018 08:43 AM, Mel Gorman wrote:
> > I'm less convinced by this for a microbenchmark. Prefetch has not been a
> > universal win in the past and we cannot be sure that it's a good idea on
> > all architectures or doesn't have other side-effects such as consuming
> > memory bandwidth for data we don't need or evicting cache hot data for
> > buddy information that is not used.
> 
> I had the same reaction.
> 
> But, I think this case is special.  We *always* do buddy merging (well,
> before the next patch in the series is applied) and check an order-0
> page's buddy to try to merge it when it goes into the main allocator.
> So, the cacheline will always come in.
> 
> IOW, I don't think this has the same downsides normally associated with
> prefetch() since the data is always used.
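
For reference, the path being described is the buddy check that
__free_one_page() performs, under zone->lock, for every page returned
to the zone free lists. A simplified sketch (not the exact kernel
source; the real loop also handles guard pages and the merge itself):

	unsigned long buddy_pfn = __find_buddy_pfn(pfn, order);
	struct page *buddy = page + (buddy_pfn - pfn);

	/*
	 * page_is_buddy() reads the buddy's struct page (flags, order),
	 * so the buddy's cacheline is pulled in whether or not a merge
	 * actually happens.
	 */
	if (page_is_buddy(page, buddy, order)) {
		/* merge, then repeat the check one order higher */
	}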

That doesn't side-step the fact that the calculations are done twice in
the free_pcppages_bulk path, and there is no guarantee that one prefetch
in the list of pages being freed will not evict a previous prefetch due
to collisions. At least on the machine I'm writing this from, the
prefetches needed for a standard drain cover 1/16th of the L1D cache, so
some collisions/evictions are possible. We're doing definite work in one
path on the chance that the data will still be cache-resident when the
buddy is recalculated. I suspect that only a microbenchmark doing very
large numbers of frees (or a large munmap or exit) will notice, and the
costs of a large munmap/exit are so high that the prefetch will be a
negligible saving.
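
For concreteness, the change under discussion is roughly the following
helper, called while free_pcppages_bulk() picks pages off the pcp lists
and before zone->lock is taken (a sketch along the lines of the posted
patch, not necessarily its final form; prefetch() is the
<linux/prefetch.h> helper):

	static inline void prefetch_buddy(struct page *page)
	{
		unsigned long pfn = page_to_pfn(page);
		unsigned long buddy_pfn = __find_buddy_pfn(pfn, 0);
		struct page *buddy = page + (buddy_pfn - pfn);

		/*
		 * Warm the buddy's struct page cacheline; the same
		 * buddy_pfn arithmetic is redone later under zone->lock
		 * in __free_one_page().
		 */
		prefetch(buddy);
	}

For scale: assuming a pcp batch of ~31 pages, one 64-byte cacheline per
struct page and a 32KB L1D, a standard drain prefetches roughly 2KB, or
about 1/16th of the cache, which is where collisions with earlier
prefetches start to become plausible.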

-- 
Mel Gorman
SUSE Labs


Thread overview: 19+ messages
2018-01-24  2:30 [PATCH 1/2] free_pcppages_bulk: do not hold lock when picking pages to free Aaron Lu
2018-01-24  2:30 ` [PATCH 2/2] free_pcppages_bulk: prefetch buddy while not holding lock Aaron Lu
2018-01-24 16:43   ` Mel Gorman
2018-01-24 16:57     ` Dave Hansen
2018-01-24 18:19       ` Mel Gorman [this message]
2018-01-24 19:23         ` Dave Hansen
2018-01-24 21:12           ` Mel Gorman
2018-01-25  7:25             ` [PATCH v2 " Aaron Lu
2018-01-24 16:40 ` [PATCH 1/2] free_pcppages_bulk: do not hold lock when picking pages to free Mel Gorman
2018-01-25  7:21   ` [PATCH v2 " Aaron Lu
2018-02-15 12:06     ` Mel Gorman
2018-02-23  1:37       ` Aaron Lu
2018-02-15 12:46     ` Matthew Wilcox
2018-02-15 14:55       ` Mel Gorman
2018-02-23  1:42       ` Aaron Lu
2018-02-05  5:30 ` RFC: eliminate zone->lock contention for will-it-scale/page_fault1 on big server Aaron Lu
2018-02-05  5:31   ` [RFC PATCH 1/2] __free_one_page: skip merge for order-0 page unless compaction is in progress Aaron Lu
2018-02-05 22:17     ` Dave Hansen
2018-02-05  5:32   ` [RFC PATCH 2/2] rmqueue_bulk: avoid touching page structures under zone->lock Aaron Lu
