From: Aaron Lu <aaron.lu@intel.com>
To: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: Linux Memory Management List <linux-mm@kvack.org>,
"'Kirill A. Shutemov'" <kirill.shutemov@linux.intel.com>,
Dave Hansen <dave.hansen@intel.com>,
Tim Chen <tim.c.chen@linux.intel.com>,
Huang Ying <ying.huang@intel.com>,
Andrew Morton <akpm@linux-foundation.org>,
Vlastimil Babka <vbabka@suse.cz>,
Jerome Marchand <jmarchan@redhat.com>,
Andrea Arcangeli <aarcange@redhat.com>,
Mel Gorman <mgorman@techsingularity.net>,
Ebru Akagunduz <ebru.akagunduz@gmail.com>,
linux-kernel@vger.kernel.org,
"Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Subject: Re: [PATCH] thp: reduce usage of huge zero page's atomic counter
Date: Mon, 29 Aug 2016 22:10:12 +0800
Message-ID: <20160829141011.GA15819@aaronlu.sh.intel.com>
In-Reply-To: <57C43D0E.8060802@linux.vnet.ibm.com>
On Mon, Aug 29, 2016 at 07:17:58PM +0530, Anshuman Khandual wrote:
> On 08/29/2016 02:23 PM, Aaron Lu wrote:
> > On 08/29/2016 04:49 PM, Anshuman Khandual wrote:
> > > On 08/29/2016 12:01 PM, Aaron Lu wrote:
> > > > The global zero page is used to satisfy an anonymous read fault. If
> > > > THP (Transparent HugePage) is enabled, the global huge zero page is
> > > > used instead. The global huge zero page uses an atomic counter for
> > > > reference counting and is allocated/freed dynamically according to
> > > > its counter value.
> > > >
> > > > CPU time spent on that counter will greatly increase if there are
> > > > a lot of processes doing anonymous read faults. This patch proposes a
> > > > way to reduce accesses to the global counter so that the CPU load
> > > > can be reduced accordingly.
> > > >
> > > > To do this, a new mm_struct flag is introduced: MMF_USED_HUGE_ZERO_PAGE.
> > > > With this flag, a process needs to touch the global counter in only
> > > > two cases:
> > > > 1. The first time it uses the global huge zero page;
> > > > 2. When the mm_users count of its mm_struct reaches zero.
> > > >
> > > > Note that right now, the huge zero page is eligible to be freed as soon
> > > > as its last use goes away. With this patch, the page will not be
> > > > eligible to be freed until the exit of the last process that ever
> > > > used it.
> > > >
> > > > And because mm_users is used, kthreads are not eligible to use the huge
> > > > zero page either. Since no kthread is using the huge zero page today,
> > > > there is no difference after applying this patch. But if that is not
> > > > desired, I can change it to trigger when mm_count reaches zero instead.
> > > >
> > > > Case used for the test on Haswell EP:
> > > > usemem -n 72 --readonly -j 0x200000 100G
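
For reference, here is a minimal sketch of the gating described above.
This is illustrative kernel-style code, not the actual patch: the helper
names mm_get_huge_zero_page()/mm_put_huge_zero_page() are assumptions,
and get_huge_zero_page()/put_huge_zero_page() stand in for the existing
helpers, assumed here to take/drop one reference on the global counter.

static struct page *mm_get_huge_zero_page(struct mm_struct *mm)
{
	/* Fast path: this mm already holds a reference to the page,
	 * so the global atomic counter is not touched again. */
	if (test_bit(MMF_USED_HUGE_ZERO_PAGE, &mm->flags))
		return READ_ONCE(huge_zero_page);

	/* First use by this mm: take one reference on the global page. */
	if (!get_huge_zero_page())
		return NULL;

	/* Two faults can race here and both take a reference; the
	 * loser of test_and_set_bit() drops its extra one. */
	if (test_and_set_bit(MMF_USED_HUGE_ZERO_PAGE, &mm->flags))
		put_huge_zero_page();

	return READ_ONCE(huge_zero_page);
}

/* Called once per mm, when its mm_users count drops to zero. */
static void mm_put_huge_zero_page(struct mm_struct *mm)
{
	if (test_bit(MMF_USED_HUGE_ZERO_PAGE, &mm->flags))
		put_huge_zero_page();
}

This way each mm touches the global counter at most twice over its whole
lifetime, instead of on every huge-zero-page read fault.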
> > >
> > > Is this benchmark publicly available? It does not seem to be this one,
> > > https://github.com/gnubert/usemem.git, does it?
> > Sorry, I forgot to attach its link.
> > It's this one:
> > https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git
> >
> > And the above mentioned usemem is:
> > https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/tree/usemem.c
>
> Hey Aaron,
>
> Thanks for pointing that out. I ran a similar test on a POWER8 box using
> 16MB steps (the huge page size is 16MB there) instead of 2MB, but the
> perf profile looked different. The perf command line was as follows on a
> 32-CPU system:
>
> perf record ./usemem -n 256 --readonly -j 0x1000000 100G
>
> But the relative weight of the above-mentioned function came out much
> lower than the roughly 54.03% you reported from your experiment:
>
> 0.07% usemem [kernel.vmlinux] [k] get_huge_zero_page
>
> That seems way off the mark. Can you please confirm your exact perf
> record command line and how many CPUs you have on the system?
Haswell EP has 72 CPUs.
Since the huge page size is 16MB on your system (-j 0x1000000 is a 16MB
step), maybe you can try:
perf record ./usemem -n 32 --readonly -j 0x1000000 800G
Regards,
Aaron
Thread overview: 17+ messages
2016-08-29 6:31 Aaron Lu
2016-08-29 8:49 ` Anshuman Khandual
2016-08-29 8:53 ` Aaron Lu
2016-08-29 13:47 ` Anshuman Khandual
2016-08-29 14:10 ` Aaron Lu [this message]
2016-08-29 22:50 ` Andrew Morton
2016-08-30 3:09 ` Aaron Lu
2016-08-30 3:39 ` Andrew Morton
2016-08-30 4:44 ` Anshuman Khandual
2016-08-30 4:56 ` Andrew Morton
2016-08-30 5:54 ` Aaron Lu
2016-08-30 6:47 ` Anshuman Khandual
2016-08-30 5:51 ` Aaron Lu
2016-08-30 5:14 ` Anshuman Khandual
2016-08-30 5:19 ` Andrew Morton
2016-08-30 15:59 ` Sergey Senozhatsky
2016-08-31 2:08 ` [PATCH v2] " Aaron Lu