From: "Figo.zhang"
Date: Tue, 20 Mar 2018 15:58:51 -0700
Subject: Re: [RFC PATCH v2 2/4] mm/__free_one_page: skip merge for order-0 page unless compaction failed
In-Reply-To: <20180320085452.24641-3-aaron.lu@intel.com>
References: <20180320085452.24641-1-aaron.lu@intel.com> <20180320085452.24641-3-aaron.lu@intel.com>
To: Aaron Lu <aaron.lu@intel.com>
Cc: Linux MM, LKML, Andrew Morton, Huang Ying, Dave Hansen, Kemi Wang,
 Tim Chen, Andi Kleen, Michal Hocko, Vlastimil Babka, Mel Gorman,
 Matthew Wilcox, Daniel Jordan

2018-03-20 1:54 GMT-07:00 Aaron Lu <aaron.lu@intel.com>:

> Running the will-it-scale/page_fault1 process mode workload on a 2-socket
> Intel Skylake server showed severe contention on zone->lock: as much as
> about 80% of CPU cycles (42% on the allocation path and 35% on the free
> path) are burnt spinning. With perf, the most time-consuming part inside
> that lock on the free path is cache misses on page structures, mostly on
> the to-be-freed page's buddy due to merging.
>
> One way to avoid this overhead is not to do any merging at all for
> order-0 pages. With this approach, lock contention for zone->lock on the
> free path dropped to 1.1%, but the allocation side still has lock
> contention as high as 42%. Meanwhile, the dropped lock contention on the
> free side doesn't translate into a performance increase; instead, it is
> consumed by increased contention on the per-node lru_lock (which rose
> from 5% to 37%), and the final performance dropped slightly, by about 1%.
>
> Though performance dropped a little, this almost eliminates zone lock
> contention on the free path, and it is the foundation for the next patch,
> which eliminates zone lock contention on the allocation path.
>
> A new documentation file called "struct_page_field" is added to explain
> the newly reused field in "struct page".
>
> Suggested-by: Dave Hansen <dave.hansen@intel.com>
> Signed-off-by: Aaron Lu <aaron.lu@intel.com>
> ---
>  Documentation/vm/struct_page_field |  5 +++
>  include/linux/mm_types.h           |  1 +
>  mm/compaction.c                    | 13 +++++-
>  mm/internal.h                      | 27 ++++++++++++
>  mm/page_alloc.c                    | 89 +++++++++++++++++++++++++++++++++-----
>  5 files changed, 122 insertions(+), 13 deletions(-)
>  create mode 100644 Documentation/vm/struct_page_field
>
> diff --git a/Documentation/vm/struct_page_field b/Documentation/vm/struct_page_field
> new file mode 100644
> index 000000000000..1ab6c19ccc7a
> --- /dev/null
> +++ b/Documentation/vm/struct_page_field
> @@ -0,0 +1,5 @@
> +buddy_merge_skipped:
> +Used to indicate this page skipped merging when added to buddy. This
> +field only makes sense if the page is in Buddy and is order zero.
> +It's a bug if any higher order pages in Buddy have this field set.
> +Shares space with index.
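
A minimal standalone sketch (illustrative only, not the kernel's struct page;
the struct is trimmed to the two relevant fields) of the overlay described
above: buddy_merge_skipped shares storage with index, so writing one clobbers
the other, which is why the flag is only meaningful while the page sits on a
buddy free list and must be cleared before index is reused.

  #include <stdbool.h>
  #include <stdio.h>

  /*
   * Trimmed illustration of the union overlay: buddy_merge_skipped
   * and index occupy the same storage, so setting the flag corrupts
   * whatever index held, and vice versa.
   */
  struct page_like {
          union {
                  unsigned long index;      /* offset within mapping */
                  bool buddy_merge_skipped; /* order-0 buddy pages only */
          };
  };

  int main(void)
  {
          struct page_like p = { .index = 123456 };

          p.buddy_merge_skipped = true;             /* clobbers index */
          printf("index now reads %lu\n", p.index); /* no longer 123456 */
          return 0;
  }
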
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index fd1af6b9591d..7edc4e102a8e 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -91,6 +91,7 @@ struct page {
>                 pgoff_t index;          /* Our offset within mapping. */
>                 void *freelist;         /* sl[aou]b first free object */
>                 /* page_deferred_list().prev    -- second tail page */
> +               bool buddy_merge_skipped; /* skipped merging when added to buddy */
>         };
>
>         union {
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 2c8999d027ab..fb9031fdca41 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -776,8 +776,19 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>                  * potential isolation targets.
>                  */
>                 if (PageBuddy(page)) {
> -                       unsigned long freepage_order = page_order_unsafe(page);
> +                       unsigned long freepage_order;
>
> +                       /*
> +                        * If this is a merge_skipped page, do merge now
> +                        * since high-order pages are needed. zone lock
> +                        * isn't taken for the merge_skipped check so the
> +                        * check could be wrong but the worst case is we
> +                        * lose a merge opportunity.
> +                        */
> +                       if (page_merge_was_skipped(page))
> +                               try_to_merge_page(page);
> +
> +                       freepage_order = page_order_unsafe(page);
>                         /*
>                          * Without lock, we cannot be sure that what we got is
>                          * a valid page order. Consider only values in the

When system memory is very low, the allocator goes through many failed
attempts before it reaches __alloc_pages_direct_compact() and finally gets
an opportunity to run your try_to_merge_page(). Is that the best timing for
doing the order-0 merging here?

> diff --git a/mm/internal.h b/mm/internal.h
> index e6bd35182dae..2bfbaae2d835 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -538,4 +538,31 @@ static inline bool is_migrate_highatomic_page(struct page *page)
>  }
>
>  void setup_zone_pageset(struct zone *zone);
> +
> +static inline bool page_merge_was_skipped(struct page *page)
> +{
> +       return page->buddy_merge_skipped;
> +}
> +
> +void try_to_merge_page(struct page *page);
> +
> +#ifdef CONFIG_COMPACTION
> +static inline bool can_skip_merge(struct zone *zone, int order)
> +{
> +       /* Compaction has failed in this zone, we shouldn't skip merging */
> +       if (zone->compact_considered)
> +               return false;
> +
> +       /* Only consider no_merge for order 0 pages */
> +       if (order)
> +               return false;
> +
> +       return true;
> +}
> +#else /* CONFIG_COMPACTION */
> +static inline bool can_skip_merge(struct zone *zone, int order)
> +{
> +       return false;
> +}
> +#endif /* CONFIG_COMPACTION */
>  #endif /* __MM_INTERNAL_H */
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3cdf1e10d412..eb78014dfbde 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -730,6 +730,16 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
>                                 unsigned int order, int migratetype) {}
>  #endif
>
> +static inline void set_page_merge_skipped(struct page *page)
> +{
> +       page->buddy_merge_skipped = true;
> +}
> +
> +static inline void clear_page_merge_skipped(struct page *page)
> +{
> +       page->buddy_merge_skipped = false;
> +}
> +
>  static inline void set_page_order(struct page *page, unsigned int order)
>  {
>         set_page_private(page, order);
> @@ -739,6 +749,13 @@ static inline void set_page_order(struct page *page, unsigned int order)
>  static inline void add_to_buddy_common(struct page *page, struct zone *zone,
>                                         unsigned int order, int mt)
>  {
> +       /*
> +        * Always clear buddy_merge_skipped when added to buddy because
> +        * buddy_merge_skipped shares space with index and index could
> +        * be used as migratetype for PCP pages.
> +        */
> +       clear_page_merge_skipped(page);
> +
>         set_page_order(page, order);
>         zone->free_area[order].nr_free++;
>  }
> @@ -769,6 +786,7 @@ static inline void remove_from_buddy(struct page *page, struct zone *zone,
>         list_del(&page->lru);
>         zone->free_area[order].nr_free--;
>         rmv_page_order(page);
> +       clear_page_merge_skipped(page);
>  }
>
>  /*
> @@ -839,7 +857,7 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
>   * -- nyc
>   */
>
> -static inline void __free_one_page(struct page *page,
> +static inline void do_merge(struct page *page,
>                 unsigned long pfn,
>                 struct zone *zone, unsigned int order,
>                 int migratetype)
> @@ -851,16 +869,6 @@ static inline void __free_one_page(struct page *page,
>
>         max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
>
> -       VM_BUG_ON(!zone_is_initialized(zone));
> -       VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
> -
> -       VM_BUG_ON(migratetype == -1);
> -       if (likely(!is_migrate_isolate(migratetype)))
> -               __mod_zone_freepage_state(zone, 1 << order, migratetype);
> -
> -       VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
> -       VM_BUG_ON_PAGE(bad_range(zone, page), page);
> -
>  continue_merging:
>         while (order < max_order - 1) {
>                 buddy_pfn = __find_buddy_pfn(pfn, order);
> @@ -933,6 +941,61 @@ static inline void __free_one_page(struct page *page,
>         add_to_buddy_head(page, zone, order, migratetype);
>  }
>
> +void try_to_merge_page(struct page *page)
> +{
> +       unsigned long pfn, buddy_pfn, flags;
> +       struct page *buddy;
> +       struct zone *zone;
> +
> +       /*
> +        * No need to do merging if buddy is not free.
> +        * zone lock isn't taken so this could be wrong but worst case
> +        * is we lose a merge opportunity.
> +        */
> +       pfn = page_to_pfn(page);
> +       buddy_pfn = __find_buddy_pfn(pfn, 0);
> +       buddy = page + (buddy_pfn - pfn);
> +       if (!PageBuddy(buddy))
> +               return;
> +
> +       zone = page_zone(page);
> +       spin_lock_irqsave(&zone->lock, flags);
> +       /* Verify again after taking the lock */
> +       if (likely(PageBuddy(page) && page_merge_was_skipped(page) &&
> +                  PageBuddy(buddy))) {
> +               int mt = get_pageblock_migratetype(page);
> +
> +               remove_from_buddy(page, zone, 0);
> +               do_merge(page, pfn, zone, 0, mt);
> +       }
> +       spin_unlock_irqrestore(&zone->lock, flags);
> +}
> +
> +static inline void __free_one_page(struct page *page,
> +               unsigned long pfn,
> +               struct zone *zone, unsigned int order,
> +               int migratetype)
> +{
> +       VM_BUG_ON(!zone_is_initialized(zone));
> +       VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
> +
> +       VM_BUG_ON(migratetype == -1);
> +       if (likely(!is_migrate_isolate(migratetype)))
> +               __mod_zone_freepage_state(zone, 1 << order, migratetype);
> +
> +       VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
> +       VM_BUG_ON_PAGE(bad_range(zone, page), page);
> +
> +       if (can_skip_merge(zone, order)) {
> +               add_to_buddy_head(page, zone, 0, migratetype);
> +               set_page_merge_skipped(page);
> +               return;
> +       }
> +
> +       do_merge(page, pfn, zone, order, migratetype);
> +}
> +
> +
>  /*
>   * A bad page could be due to a number of fields. Instead of multiple branches,
>   * try and check multiple fields with one check. The caller must do a detailed
> @@ -1183,8 +1246,10 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>                          * can be offset by reduced memory latency later. To
>                          * avoid excessive prefetching due to large count, only
>                          * prefetch buddy for the last pcp->batch nr of pages.
> +                        *
> +                        * If merge can be skipped, no need to prefetch buddy.
>                          */
> -                       if (count > pcp->batch)
> +                       if (can_skip_merge(zone, 0) || count > pcp->batch)
>                                 continue;
>                         pfn = page_to_pfn(page);
>                         buddy_pfn = __find_buddy_pfn(pfn, 0);
> --
> 2.14.3
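
For reference, a minimal standalone sketch (illustrative only, not kernel
code) of the buddy arithmetic behind the __find_buddy_pfn(pfn, 0) calls
quoted above: the buddy of a block at a given pfn and order is found by
flipping the order-th bit of the pfn, so an order-0 page pairs with its
immediate neighbor. This is why try_to_merge_page() can locate the order-0
buddy with plain pointer arithmetic (buddy = page + (buddy_pfn - pfn))
before taking zone->lock.

  #include <assert.h>
  #include <stdio.h>

  /* Mirrors the kernel's buddy lookup: flip the order-th bit of the pfn. */
  static unsigned long find_buddy_pfn(unsigned long pfn, unsigned int order)
  {
          return pfn ^ (1UL << order);
  }

  int main(void)
  {
          /* Order-0 pages pair with their immediate neighbor... */
          assert(find_buddy_pfn(42, 0) == 43);
          assert(find_buddy_pfn(43, 0) == 42);
          /* ...while an order-2 block at pfn 8 merges with the one at pfn 12. */
          assert(find_buddy_pfn(8, 2) == 12);
          printf("buddy arithmetic checks pass\n");
          return 0;
  }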