From: Dave Hansen <dave.hansen@intel.com>
To: Mel Gorman <mgorman@techsingularity.net>, Aaron Lu <aaron.lu@intel.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
Huang Ying <ying.huang@intel.com>,
Kemi Wang <kemi.wang@intel.com>,
Tim Chen <tim.c.chen@linux.intel.com>,
Andi Kleen <ak@linux.intel.com>, Michal Hocko <mhocko@suse.com>,
Vlastimil Babka <vbabka@suse.cz>
Subject: Re: [PATCH 2/2] free_pcppages_bulk: prefetch buddy while not holding lock
Date: Wed, 24 Jan 2018 08:57:43 -0800
Message-ID: <148a42d8-8306-2f2f-7f7c-86bc118f8ccd@intel.com>
In-Reply-To: <20180124164344.lca63gjn7mefuiac@techsingularity.net>
On 01/24/2018 08:43 AM, Mel Gorman wrote:
> I'm less convinced by this for a microbenchmark. Prefetch has not been a
> universal win in the past and we cannot be sure that it's a good idea on
> all architectures or doesn't have other side-effects such as consuming
> memory bandwidth for data we don't need or evicting cache hot data for
> buddy information that is not used.
I had the same reaction.
But I think this case is special. We *always* do buddy merging (at least
until the next patch in the series is applied): when an order-0 page goes
back into the main allocator, we check its buddy to try to merge with it.
So, the buddy's cacheline will always come in.
IOW, I don't think this has the same downsides normally associated with
prefetch() since the data is always used.
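
To make that concrete, here is a minimal sketch (not the exact patch under
review): while free_pcppages_bulk() is still picking pages off the per-cpu
lists without zone->lock held, compute the order-0 buddy the same way
__free_one_page() later will, and prefetch its struct page. The helper name
prefetch_buddy() is illustrative; the buddy math (pfn ^ 1 for order 0)
matches mainline's __find_buddy_pfn().

/*
 * Sketch only -- assumes kernel context (mm/page_alloc.c), with
 * page_to_pfn() from <linux/mm.h> and prefetch() from <linux/prefetch.h>.
 */
static inline void prefetch_buddy(struct page *page)
{
	unsigned long pfn = page_to_pfn(page);
	unsigned long buddy_pfn = pfn ^ (1UL << 0);	/* order-0 buddy */

	/*
	 * __free_one_page() will read the buddy's struct page under
	 * zone->lock; warm that cacheline now, while the lock is not held.
	 */
	prefetch(page + (buddy_pfn - pfn));
}

This would be called from free_pcppages_bulk() right after a page is moved
onto the local to-free list, before zone->lock is taken for the actual
merging. Because the order-0 free path checks the buddy unconditionally,
the prefetched line is data that is guaranteed to be touched, which is what
distinguishes this from a purely speculative prefetch.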
Thread overview: 19+ messages
2018-01-24 2:30 [PATCH 1/2] free_pcppages_bulk: do not hold lock when picking pages to free Aaron Lu
2018-01-24 2:30 ` [PATCH 2/2] free_pcppages_bulk: prefetch buddy while not holding lock Aaron Lu
2018-01-24 16:43 ` Mel Gorman
2018-01-24 16:57 ` Dave Hansen [this message]
2018-01-24 18:19 ` Mel Gorman
2018-01-24 19:23 ` Dave Hansen
2018-01-24 21:12 ` Mel Gorman
2018-01-25 7:25 ` [PATCH v2 " Aaron Lu
2018-01-24 16:40 ` [PATCH 1/2] free_pcppages_bulk: do not hold lock when picking pages to free Mel Gorman
2018-01-25 7:21 ` [PATCH v2 " Aaron Lu
2018-02-15 12:06 ` Mel Gorman
2018-02-23 1:37 ` Aaron Lu
2018-02-15 12:46 ` Matthew Wilcox
2018-02-15 14:55 ` Mel Gorman
2018-02-23 1:42 ` Aaron Lu
2018-02-05 5:30 ` RFC: eliminate zone->lock contention for will-it-scale/page_fault1 on big server Aaron Lu
2018-02-05 5:31 ` [RFC PATCH 1/2] __free_one_page: skip merge for order-0 page unless compaction is in progress Aaron Lu
2018-02-05 22:17 ` Dave Hansen
2018-02-05 5:32 ` [RFC PATCH 2/2] rmqueue_bulk: avoid touching page structures under zone->lock Aaron Lu