From: Pavel Tatashin <pasha.tatashin@gmail.com>
To: Alexander Duyck <alexander.duyck@gmail.com>
Cc: alexander.h.duyck@linux.intel.com,
Linux Memory Management List <linux-mm@kvack.org>,
Andrew Morton <akpm@linux-foundation.org>,
Pasha Tatashin <pavel.tatashin@microsoft.com>,
Michal Hocko <mhocko@suse.com>,
dave.jiang@intel.com, LKML <linux-kernel@vger.kernel.org>,
willy@infradead.org, davem@davemloft.net,
yi.z.zhang@linux.intel.com, khalid.aziz@oracle.com,
rppt@linux.vnet.ibm.com, Vlastimil Babka <vbabka@suse.cz>,
sparclinux@vger.kernel.org, dan.j.williams@intel.com,
ldufour@linux.vnet.ibm.com, mgorman@techsingularity.net,
mingo@kernel.org, kirill.shutemov@linux.intel.com
Subject: Re: [mm PATCH v2 1/6] mm: Use mm_zero_struct_page from SPARC on all 64b architectures
Date: Sat, 13 Oct 2018 13:49:06 -0400 [thread overview]
Message-ID: <CAGM2reZpV8OuAe2=VP5fiHJ7yTGAOH35U78BsULbKbXZ9fR9ng@mail.gmail.com> (raw)
In-Reply-To: <CAKgT0Uc7DSEqKqfymyBzDhN-TpWZSxSmS=W_iiY0_A7YY-voSg@mail.gmail.com>
Still, lets get some real data, please provide Intel data before vs
after. I could test on an ARM processor.
Pavel
On Sat, Oct 13, 2018 at 1:18 PM Alexander Duyck
<alexander.duyck@gmail.com> wrote:
>
> Well in the case of x86 the call to memset is expensive as well. In
> most cases it is 16 cycles plus 1 cycle per 16 bytes if I recall
> correctly. So for example in the case of skbuff which was a little
> over 192 bytes I know Jesper Brouer and myself were going back and
> forth with the idea of if we should try to do something similar.
>
> I'm suspecting for the 64b architectures impacted by this change there
> should be little to no negative impact. The main reason for that being
> the fact that the compiler can actually drop some of the writes by
> merging them with the later assignments.
>
> Thanks.
>
> - Alex
>
> On Sat, Oct 13, 2018 at 9:58 AM Pavel Tatashin <pasha.tatashin@gmail.com> wrote:
> >
> > I am worried about this change. I added SPARC optimized
> > mm_zero_struct_page() specifically to SPARC because it has a poor
> > performance with small memset()s, since it uses STBI instructions.
> > However, other architectures might not suffer with small memset()s,
> > and have hardware optimized memset variants for small sizes. Don't
> > forget, this is a leaf routine on most arches, so the function call
> > should be cheap. Also, the macro itself is not very flexible: when
> > size of struct page is changed, it also must be modified (we could add
> > fall throughs though), I would add this macro only to those arches
> > that benefit from this change, in other words, I would like to see
> > performance data.
> >
> > I will review the rest of the patches in this series on Monday.
> >
> > Thank you,
> > Pavel
> > On Thu, Oct 11, 2018 at 6:17 PM Alexander Duyck
> > <alexander.h.duyck@linux.intel.com> wrote:
> > >
> > > This change makes it so that we use the same approach that was already in
> > > use on Sparc on all the archtectures that support a 64b long.
> > >
> > > This is mostly motivated by the fact that 8 to 10 store/move instructions
> > > are likely always going to be faster than having to call into a function
> > > that is not specialized for handling page init.
> > >
> > > An added advantage to doing it this way is that the compiler can get away
> > > with combining writes in the __init_single_page call. As a result the
> > > memset call will be reduced to only about 4 write operations, or at least
> > > that is what I am seeing with GCC 6.2 as the flags, LRU poitners, and
> > > count/mapcount seem to be cancelling out at least 4 of the 8 assignments on
> > > my system.
> > >
> > > One change I had to make to the function was to reduce the minimum page
> > > size to 56 to support some powerpc64 configurations.
> > >
> > > Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> > > ---
> > > arch/sparc/include/asm/pgtable_64.h | 30 ------------------------------
> > > include/linux/mm.h | 34 ++++++++++++++++++++++++++++++++++
> > > 2 files changed, 34 insertions(+), 30 deletions(-)
> > >
> > > diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
> > > index 1393a8ac596b..22500c3be7a9 100644
> > > --- a/arch/sparc/include/asm/pgtable_64.h
> > > +++ b/arch/sparc/include/asm/pgtable_64.h
> > > @@ -231,36 +231,6 @@
> > > extern struct page *mem_map_zero;
> > > #define ZERO_PAGE(vaddr) (mem_map_zero)
> > >
> > > -/* This macro must be updated when the size of struct page grows above 80
> > > - * or reduces below 64.
> > > - * The idea that compiler optimizes out switch() statement, and only
> > > - * leaves clrx instructions
> > > - */
> > > -#define mm_zero_struct_page(pp) do { \
> > > - unsigned long *_pp = (void *)(pp); \
> > > - \
> > > - /* Check that struct page is either 64, 72, or 80 bytes */ \
> > > - BUILD_BUG_ON(sizeof(struct page) & 7); \
> > > - BUILD_BUG_ON(sizeof(struct page) < 64); \
> > > - BUILD_BUG_ON(sizeof(struct page) > 80); \
> > > - \
> > > - switch (sizeof(struct page)) { \
> > > - case 80: \
> > > - _pp[9] = 0; /* fallthrough */ \
> > > - case 72: \
> > > - _pp[8] = 0; /* fallthrough */ \
> > > - default: \
> > > - _pp[7] = 0; \
> > > - _pp[6] = 0; \
> > > - _pp[5] = 0; \
> > > - _pp[4] = 0; \
> > > - _pp[3] = 0; \
> > > - _pp[2] = 0; \
> > > - _pp[1] = 0; \
> > > - _pp[0] = 0; \
> > > - } \
> > > -} while (0)
> > > -
> > > /* PFNs are real physical page numbers. However, mem_map only begins to record
> > > * per-page information starting at pfn_base. This is to handle systems where
> > > * the first physical page in the machine is at some huge physical address,
> > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > index 273d4dbd3883..dee407998366 100644
> > > --- a/include/linux/mm.h
> > > +++ b/include/linux/mm.h
> > > @@ -102,8 +102,42 @@ static inline void set_max_mapnr(unsigned long limit) { }
> > > * zeroing by defining this macro in <asm/pgtable.h>.
> > > */
> > > #ifndef mm_zero_struct_page
> > > +#if BITS_PER_LONG == 64
> > > +/* This function must be updated when the size of struct page grows above 80
> > > + * or reduces below 64. The idea that compiler optimizes out switch()
> > > + * statement, and only leaves move/store instructions
> > > + */
> > > +#define mm_zero_struct_page(pp) __mm_zero_struct_page(pp)
> > > +static inline void __mm_zero_struct_page(struct page *page)
> > > +{
> > > + unsigned long *_pp = (void *)page;
> > > +
> > > + /* Check that struct page is either 56, 64, 72, or 80 bytes */
> > > + BUILD_BUG_ON(sizeof(struct page) & 7);
> > > + BUILD_BUG_ON(sizeof(struct page) < 56);
> > > + BUILD_BUG_ON(sizeof(struct page) > 80);
> > > +
> > > + switch (sizeof(struct page)) {
> > > + case 80:
> > > + _pp[9] = 0; /* fallthrough */
> > > + case 72:
> > > + _pp[8] = 0; /* fallthrough */
> > > + default:
> > > + _pp[7] = 0; /* fallthrough */
> > > + case 56:
> > > + _pp[6] = 0;
> > > + _pp[5] = 0;
> > > + _pp[4] = 0;
> > > + _pp[3] = 0;
> > > + _pp[2] = 0;
> > > + _pp[1] = 0;
> > > + _pp[0] = 0;
> > > + }
> > > +}
> > > +#else
> > > #define mm_zero_struct_page(pp) ((void)memset((pp), 0, sizeof(struct page)))
> > > #endif
> > > +#endif
> > >
> > > /*
> > > * Default maximum number of active map areas, this limits the number of vmas
> > >
> >
next prev parent reply other threads:[~2018-10-13 17:49 UTC|newest]
Thread overview: 13+ messages / expand[flat|nested] mbox.gz Atom feed top
2018-10-11 22:13 [mm PATCH v2 0/6] Deferred page init improvements Alexander Duyck
2018-10-11 22:13 ` [mm PATCH v2 1/6] mm: Use mm_zero_struct_page from SPARC on all 64b architectures Alexander Duyck
2018-10-13 16:58 ` Pavel Tatashin
2018-10-13 17:18 ` Alexander Duyck
2018-10-13 17:49 ` Pavel Tatashin [this message]
2018-10-13 18:46 ` Alexander Duyck
2018-10-11 22:13 ` [mm PATCH v2 2/6] mm: Drop meminit_pfn_in_nid as it is redundant Alexander Duyck
2018-10-11 22:13 ` [mm PATCH v2 3/6] mm: Use memblock/zone specific iterator for handling deferred page init Alexander Duyck
2018-10-11 22:13 ` [mm PATCH v2 4/6] mm: Do not set reserved flag for hotplug memory Alexander Duyck
2018-10-11 22:58 ` Dan Williams
2018-10-11 23:22 ` Alexander Duyck
2018-10-11 22:13 ` [mm PATCH v2 5/6] mm: Move hot-plug specific memory init into separate functions and optimize Alexander Duyck
2018-10-11 22:14 ` [mm PATCH v2 6/6] mm: Use common iterator for deferred_init_pages and deferred_free_pages Alexander Duyck
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to='CAGM2reZpV8OuAe2=VP5fiHJ7yTGAOH35U78BsULbKbXZ9fR9ng@mail.gmail.com' \
--to=pasha.tatashin@gmail.com \
--cc=akpm@linux-foundation.org \
--cc=alexander.duyck@gmail.com \
--cc=alexander.h.duyck@linux.intel.com \
--cc=dan.j.williams@intel.com \
--cc=dave.jiang@intel.com \
--cc=davem@davemloft.net \
--cc=khalid.aziz@oracle.com \
--cc=kirill.shutemov@linux.intel.com \
--cc=ldufour@linux.vnet.ibm.com \
--cc=linux-kernel@vger.kernel.org \
--cc=linux-mm@kvack.org \
--cc=mgorman@techsingularity.net \
--cc=mhocko@suse.com \
--cc=mingo@kernel.org \
--cc=pavel.tatashin@microsoft.com \
--cc=rppt@linux.vnet.ibm.com \
--cc=sparclinux@vger.kernel.org \
--cc=vbabka@suse.cz \
--cc=willy@infradead.org \
--cc=yi.z.zhang@linux.intel.com \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox