From: Linus Torvalds <torvalds@linux-foundation.org>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: kernel test robot <yujie.liu@intel.com>,
	oe-lkp@lists.linux.dev, lkp@intel.com,
	 Andrew Morton <akpm@linux-foundation.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	 Hugh Dickins <hughd@google.com>,
	Nadav Amit <nadav.amit@gmail.com>,
	 Linux Memory Management List <linux-mm@kvack.org>,
	linux-arch@vger.kernel.org, feng.tang@intel.com,
	 zhengjun.xing@linux.intel.com, fengwei.yin@intel.com
Subject: Re: [linux-next:master] [mm] 5df397dec7: will-it-scale.per_thread_ops -53.3% regression
Date: Tue, 6 Dec 2022 10:41:28 -0800
Message-ID: <CAHk-=wjDzVL+r6NmnU--tyEfDYhUB-5m=PQBZTQ2Es8bx7Mz+w@mail.gmail.com>
In-Reply-To: <87ilipffws.fsf@yhuang6-desk2.ccr.corp.intel.com>

On Mon, Dec 5, 2022 at 6:03 PM Huang, Ying <ying.huang@intel.com> wrote:
>
> >
> > I assume that this test is doing a lot of mmap/munmap on dirty shared
> > memory regions (both because of the regression, and because of the
> > name of that test ;)
>
> I have checked the source code of will-it-scale/page_fault3.  Yes, it
> exactly does that.

Heh. I took a look at that test-case, and yeah, it's just doing a
128MB shared mapping, dirtying it one page at a time, and unmapping it
in a loop.

It doesn't even look like a very good benchmark for that, because the
_first_ time around the loop is very different from the rest: it has
to actually create the file extents.

So that benchmark starts out testing something different than what the
steady state is.

But yeah, that's pretty much the worst possible case for this all, and
yes, I suspect it's more about the TLB batching than anything else.

And I think I see the issue. We actually have a reasonably big batch
size most of the time, but this benchmark triggers that dirty shared
page logic on every page, and that in turn means that we stop batching
immediately - even when we only have the initial tiny on-stack batch.

So instead of batching MAX_GATHER_BATCH pages at a time (roughly 500
pages per go), we end up batching just eight pages (MMU_GATHER_BUNDLE)
at a time.

I didn't think of that degenerate case.

Let me think about this a while, but I think I'll have a patch for you
to test once I've dealt with a couple more pull requests.

                  Linus


Thread overview: 9+ messages
2022-12-05  8:59 kernel test robot
2022-12-05 20:43 ` Linus Torvalds
2022-12-06  2:02   ` Huang, Ying
2022-12-06 18:41     ` Linus Torvalds [this message]
     [not found]       ` <CAHk-=whjis-wTZKH20xoBW3=1qyygYoxJORxXx8ZpJbc6KtROw@mail.gmail.com>
2022-12-07  5:39         ` Huang, Ying
2022-12-07  5:54           ` Hugh Dickins
2022-12-07 20:17           ` Linus Torvalds
2022-12-07 22:20             ` Andrew Morton
2022-12-07  2:12   ` Yujie Liu
