From: Punit Agrawal <punit.agrawal@arm.com>
To: Jan Stancek <jstancek@redhat.com>
Cc: linux-mm@kvack.org, lwoodman <lwoodman@redhat.com>,
	Rafael Aquini <aquini@redhat.com>,
	Andrea Arcangeli <aarcange@redhat.com>
Subject: Re: [bug?] mallocstress poor performance with THP on arm64 system
Date: Thu, 15 Feb 2018 18:40:22 +0000	[thread overview]
Message-ID: <87sha23xm1.fsf@e105922-lin.cambridge.arm.com> (raw)
In-Reply-To: <1847959563.1954032.1518649501357.JavaMail.zimbra@redhat.com> (Jan Stancek's message of "Wed, 14 Feb 2018 18:05:01 -0500 (EST)")

Jan Stancek <jstancek@redhat.com> writes:

> Hi,
>
> mallocstress[1] LTP testcase takes ~5+ minutes to complete
> on some arm64 systems (e.g. 4 node, 64 CPU, 256GB RAM):
>  real    7m58.089s
>  user    0m0.513s
>  sys     24m27.041s
>
> But if I turn off THP ("transparent_hugepage=never") it's a lot faster:
>  real    0m4.185s
>  user    0m0.298s
>  sys     0m13.954s
>

From the config fragment below, the kernel is using 64K pages, which
matches up with the 512MB default hugepage size at the PMD level.
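
To spell out where the 512MB comes from: with a 64K granule each
translation table holds 64K / 8 = 8192 entries, so a single PMD entry
maps 8192 * 64KB = 512MB, which is what /proc/meminfo reports as the
hugepage size below.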

With transparent hugepages enabled, the kernel tries to allocate
hugepages on page faults. Each fault taken by the 'mallocstress' test
therefore ends up allocating (and zeroing) a 512MB chunk even though
only the first few bytes are ever used.
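
For illustration, the pattern that hurts here looks roughly like this
(a minimal sketch of the allocation behaviour, not the actual LTP code;
the sizes and the 4096-byte touch are made up for the example):

#include <stdlib.h>
#include <string.h>

/*
 * Sketch only: repeatedly allocate a large buffer and touch just the
 * start of it.  With THP set to "always" and 64K base pages, the
 * largest of these allocations can be backed by 512MB transparent
 * huge pages, each of which is fully zeroed via clear_page() on the
 * first-touch fault even though only a few bytes are ever used.
 */
static void allocate_and_touch(void)
{
	size_t size;

	for (size = 1UL << 20; size <= (1UL << 30); size <<= 1) {
		char *p = malloc(size);

		if (!p)
			break;
		memset(p, 'a', 4096);	/* touch only the first few bytes */
		free(p);
	}
}

int main(void)
{
	allocate_and_touch();
	return 0;
}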

You can change the default transparent hugepage policy to madvise
(either by setting CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y, or at runtime
with "echo madvise > /sys/kernel/mm/transparent_hugepage/enabled").
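
If you do switch the policy to madvise, workloads that genuinely
benefit from THP can still opt in per mapping. A rough example of what
that looks like (hypothetical helper, not taken from the test):

#include <stddef.h>
#include <sys/mman.h>

/*
 * Hypothetical helper: with the "madvise" policy only mappings that
 * explicitly ask for huge pages get them, so mallocstress would stay
 * on 64K pages while opted-in workloads still see THP.
 */
static void *alloc_thp(size_t len)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return NULL;
	madvise(p, len, MADV_HUGEPAGE);	/* advisory; failure is not fatal */
	return p;
}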

The other option is to ignore the 'mallocstress' runtime, as its
allocation pattern is not representative of real workloads; for certain
workloads (e.g., VMs) it can still be useful to boot with
transparent_hugepage=always.

Thanks,
Punit

> Perf suggests, that most time is spent in clear_page().
>
> -   94.25%    94.24%  mallocstress  [kernel.kallsyms]   [k] clear_page
>      94.24% thread_start
>         start_thread
>         alloc_mem
>         allocate_free
>       - malloc
>          - 94.24% _int_malloc
>             - 94.24% sysmalloc
>                  el0_da
>                  do_mem_abort
>                  do_translation_fault
>                  do_page_fault
>                  handle_mm_fault
>                - __handle_mm_fault
>                   - 94.22% do_huge_pmd_anonymous_page
>                      - __do_huge_pmd_anonymous_page
>                         - 94.21% clear_huge_page
>                              clear_page
>
> Percent│
>        │
>        │
>        │    Disassembly of section load0:
>        │
>        │    ffff0000087f0540 <load0>:
>   0.00 │      mrs    x1, dczid_el0
>   0.00 │      and    w1, w1, #0xf
>        │      mov    x2, #0x4                        // #4
>        │      lsl    x1, x2, x1
> 100.00 │10:   dc     zva, x0
>        │      add    x0, x0, x1
>        │      tst    x0, #0xffff
>        │    ↑ b.ne   10
>        │    ← ret
>
> # uname -r
> 4.15.3
>
> # grep HUGE -r .config
> CONFIG_CGROUP_HUGETLB=y
> CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
> CONFIG_HAVE_ARCH_HUGE_VMAP=y
> CONFIG_SYS_SUPPORTS_HUGETLBFS=y
> CONFIG_TRANSPARENT_HUGEPAGE=y
> CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
> # CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
> CONFIG_TRANSPARENT_HUGE_PAGECACHE=y
> CONFIG_HUGETLBFS=y
> CONFIG_HUGETLB_PAGE=y
>
> # grep _PAGE -r .config
> CONFIG_ARM64_PAGE_SHIFT=16
> CONFIG_PAGE_COUNTER=y
> CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
> # CONFIG_ARM64_4K_PAGES is not set
> # CONFIG_ARM64_16K_PAGES is not set
> CONFIG_ARM64_64K_PAGES=y
> CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
> CONFIG_TRANSPARENT_HUGE_PAGECACHE=y
> CONFIG_IDLE_PAGE_TRACKING=y
> CONFIG_PROC_PAGE_MONITOR=y
> CONFIG_HUGETLB_PAGE=y
> CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
> # CONFIG_PAGE_OWNER is not set
> # CONFIG_PAGE_EXTENSION is not set
> # CONFIG_DEBUG_PAGEALLOC is not set
> # CONFIG_PAGE_POISONING is not set
> # CONFIG_DEBUG_PAGE_REF is not set
>
> # cat /proc/meminfo  | grep Huge
> Hugepagesize:     524288 kB

I noticed 512MB - that's a _huge_ hugepage.

The config suggests that the kernel is running with 64k pages.
>
> # numactl -H
> available: 4 nodes (0-3)
> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
> node 0 size: 65308 MB
> node 0 free: 64892 MB
> node 1 cpus: 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
> node 1 size: 65404 MB
> node 1 free: 62804 MB
> node 2 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
> node 2 size: 65404 MB
> node 2 free: 62847 MB
> node 3 cpus: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
> node 3 size: 65402 MB
> node 3 free: 64671 MB
> node distances:
> node   0   1   2   3 
>   0:  10  15  20  20 
>   1:  15  10  20  20 
>   2:  20  20  10  15 
>   3:  20  20  15  10
>
> Regards,
> Jan
>
> [1] https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/mem/mtest07/mallocstress.c
>

