linux-mm.kvack.org archive mirror
From: Michal Hocko <mhocko@suse.com>
To: Prathu Baronia <prathu.baronia@oneplus.com>
Cc: alexander.duyck@gmail.com, chintan.pandya@oneplus.com,
	ying.huang@intel.com, akpm@linux-foundation.com,
	linux-mm@kvack.org, gregkh@linuxfoundation.com,
	gthelen@google.com, jack@suse.cz, ken.lin@oneplus.com,
	gasine.xu@oneplus.com
Subject: Re: [PATCH v2] mm: Optimized hugepage zeroing & copying from user
Date: Tue, 14 Apr 2020 19:03:12 +0200	[thread overview]
Message-ID: <20200414170312.GR4629@dhcp22.suse.cz> (raw)
In-Reply-To: <20200414153829.GA15230@oneplus.com>

On Tue 14-04-20 21:08:32, Prathu Baronia wrote:
> In !HIGHMEM configurations, especially on 64-bit architectures, we don't need a
> temporary mapping of pages, so k(map|unmap)_atomic() amounts to nothing more
> than a set of barrier() calls. For a 2MB hugepage in clear_huge_page() they are
> called 512 times each, i.e. once to map and once to unmap every subpage, which
> adds up to 2048 barrier calls in total. This calls for optimization: simply
> getting the VADDR from the page does the job for us. The same applies to
> copy_user_huge_page().

I still have a hard time seeing why the kmap machinery should introduce
any slowdown here. The data posted while discussing v1 didn't really
show anything outside of the noise.

> With kmap_atomic() out of the picture we can use memset and memcpy for sizes
> larger than 4K. In a simple experiment, getting the VADDR from the page and
> using memset directly, instead of the left-right approach of accessing the
> target subpage, gave us a 64% improvement in time over the current approach.
> 
> With this (v2) patch we observe a 65.85% improvement (under controlled
> conditions) over the current approach.
>
> Currently process_huge_page() iterates over subpages in a left-right manner,
> processing the subpage that was accessed last so as to keep the cache hot
> around the faulting address. This causes a latency issue because, as we
> observed on ARM64, reverse access is much slower than forward access, and both
> are far slower than oneshot access, due to the prefetcher's behaviour. The
> following simple userspace experiment, which allocates 100MB (total_size) of
> pages and writes to it using memset (oneshot), a forward-order loop and a
> reverse-order loop, gave us a good insight:
> 
> --------------------------------------------------------------------------------
> Test code snippet:
> --------------------------------------------------------------------------------
>   /* One shot memset */
>   memset (r, 0xd, total_size);
> 
>   /* traverse in forward order */
>   for (j = 0; j < total_pages; j++)
>     {
>       memset (q + (j * SZ_4K), 0xc, SZ_4K);
>     }
> 
>   /* traverse in reverse order */
>   for (i = 0; i < total_pages; i++)
>     {
>       memset (p + total_size - (i + 1) * SZ_4K, 0xb, SZ_4K);
>     }
> ----------------------------------------------------------------------
> Results:
> ----------------------------------------------------------------------
> Results for the ARM64 target (SM8150, CPU0 & CPU6 online, running at max
> frequency)
> All numbers are the mean of 100 iterations; variation is negligible.

It would be really nice to provide the standard deviation as well.

> ----------------------------------------------------------------------
> - Oneshot : 3389.26 us
> - Forward : 8876.16 us
> - Reverse : 18157.6 us
> ----------------------------------------------------------------------
> 
> ----------------------------------------------------------------------
> Results for x86-64 (Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz, only CPU0 at max
> frequency, DDR also running at max frequency)
> All numbers are the mean of 100 iterations; variation is negligible.
> ----------------------------------------------------------------------
> - Oneshot : 3203.49 us
> - Forward : 5766.46 us
> - Reverse : 5187.86 us
> ----------------------------------------------------------------------
> 
> Hence refactor process_huge_page() to process the hugepage in a oneshot
> manner, using oneshot versions of the clear_huge_page() and
> copy_user_huge_page() routines for !HIGHMEM cases.

> These oneshot routines do the zeroing with memset and the copying with memcpy.
> Extensive testing on ARM64, and some local testing on x86, showed that the
> memset and memcpy routines are highly optimized, and with the above data
> points in hand it made sense to use them directly instead of looping over all
> subpages. The oneshot routines zero and copy with a small offset (kept as 32KB
> by default for now) to keep the cache hot around the faulting address. This
> offset depends on the cache size and hence is kept as a tunable configuration
> option.
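As I read the description, the "oneshot with offset" idea is: clear everything outside a small window around the faulting address with bulk memsets, then clear that window last so it is the hottest region in cache. A minimal userspace sketch of that scheme follows; the function name, the KEEP_HOT constant and the clamping logic are my assumptions based on the paragraph above, not the patch's actual code.

```c
/* Hypothetical sketch of zeroing a 2MB hugepage "oneshot with offset". */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)
#define KEEP_HOT   (32UL * 1024)	/* assumed 32KB window per the description */

static void clear_huge_oneshot(char *page, size_t fault_off)
{
	/* A KEEP_HOT-byte window roughly centred on the faulting offset,
	 * clamped to stay inside the hugepage. */
	size_t start = fault_off > KEEP_HOT / 2 ? fault_off - KEEP_HOT / 2 : 0;
	size_t end = start + KEEP_HOT;

	if (end > HPAGE_SIZE) {
		end = HPAGE_SIZE;
		start = end - KEEP_HOT;
	}

	memset(page, 0, start);				/* before the window */
	memset(page + end, 0, HPAGE_SIZE - end);	/* after the window  */
	memset(page + start, 0, end - start);		/* window last: hot  */
}
```

The point of clearing the window last is that those are the cache lines the faulting thread will touch first on return to userspace; everything else is streamed through in two large memsets.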

No. There is absolutely zero reason to add a config option for this. The
kernel should have all the information to make an educated guess.

Also, before going any further: the patch that introduced the
optimization was c79b57e462b5 ("mm: hugetlb: clear target sub-page last
when clearing huge page"). It is based on an artificial benchmark which,
to my knowledge, doesn't represent any real workload. Your measurements
are based on a different benchmark. Your numbers clearly show that some
assumptions used for the optimization are not architecture neutral.

In such a case I would much rather revert the optimization and only
build an additional complexity based on real workloads.

-- 
Michal Hocko
SUSE Labs



Thread overview: 27+ messages
2020-04-14 15:38 Prathu Baronia
2020-04-14 17:03 ` Michal Hocko [this message]
2020-04-14 17:41   ` Daniel Jordan
     [not found]   ` <20200414184743.GB2097@oneplus.com>
2020-04-14 19:32     ` Alexander Duyck
2020-04-15  3:40       ` Huang, Ying
2020-04-15 11:09         ` Michal Hocko
2020-04-19 12:05       ` Prathu Baronia
2020-04-14 19:40     ` Michal Hocko
2020-04-15  3:27 ` Huang, Ying
2020-04-16  1:21   ` Huang, Ying
2020-04-19 15:58   ` Prathu Baronia
2020-04-20  0:18     ` Huang, Ying
2020-04-21  9:36       ` Prathu Baronia
2020-04-21 10:09         ` Will Deacon
2020-04-21 12:47           ` Vlastimil Babka
2020-04-21 12:48             ` Vlastimil Babka
2020-04-21 13:39               ` Will Deacon
2020-04-21 13:48                 ` Vlastimil Babka
2020-04-21 13:56                   ` Chintan Pandya
2020-04-22  8:18                   ` Will Deacon
2020-04-22 11:19                     ` Will Deacon
2020-04-22 14:38                       ` Prathu Baronia
2020-05-01  8:58                         ` Prathu Baronia
2020-05-05  8:59                           ` Will Deacon
2020-04-21 13:00             ` Michal Hocko
2020-04-21 13:10               ` Will Deacon
2020-04-17  7:48 ` [mm] 134c8b410f: vm-scalability.median -7.9% regression kernel test robot
