From: Byungchul Park <byungchul@sk.com>
To: Dave Hansen <dave.hansen@intel.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
kernel_team@skhynix.com, akpm@linux-foundation.org,
ying.huang@intel.com, namit@vmware.com, xhao@linux.alibaba.com,
mgorman@techsingularity.net, hughd@google.com,
willy@infradead.org, david@redhat.com, peterz@infradead.org,
luto@kernel.org, tglx@linutronix.de, mingo@redhat.com,
bp@alien8.de, dave.hansen@linux.intel.com
Subject: Re: [v4 0/3] Reduce TLB flushes under some specific conditions
Date: Fri, 10 Nov 2023 10:08:52 +0900
Message-ID: <20231110010852.GB72073@system.software.com>
In-Reply-To: <64cb078b-d2e7-417f-8125-b38d423163ce@intel.com>
On Thu, Nov 09, 2023 at 06:26:08AM -0800, Dave Hansen wrote:
> On 11/8/23 20:59, Byungchul Park wrote:
> > Can you believe it? I saw the number of TLB full flushes reduced by
> > about 80% and iTLB misses reduced by about 50%, and the time-wise
> > performance always shows at least a 1% stable improvement with the
> > workload I tested, XSBench. I believe it would help even more with
> > other workloads, including real-world ones. I'd appreciate it if you
> > could let me know if I'm missing something.
>
> I see that you've moved a substantial amount of code out of arch/x86.
> That's great.
>
> But there doesn't appear to be any improvement in the justification or
> performance data. The page flag is also still here, which is heavily
> frowned upon. It's an absolute no-go with this level of justification.
>
> I'd really suggest not sending any more of these out until those issues
> are rectified. I know I definitely won't be reviewing them in this state.
Makes sense. Let me think about it more and improve it.
Byungchul
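The thread above quotes TLB-flush and iTLB-miss reductions without spelling
out how they were gathered. As a minimal measurement sketch (not the poster's
stated method), assuming a perf-enabled kernel and a locally built XSBench
binary whose path is a placeholder:

  # Count iTLB misses and TLB-flush tracepoint hits for one run of the workload.
  perf stat -e iTLB-load-misses,iTLB-loads,tlb:tlb_flush ./XSBench

  # With CONFIG_DEBUG_TLBFLUSH=y on x86, /proc/vmstat also exposes per-kind
  # flush counters that can be diffed before and after a run.
  grep nr_tlb /proc/vmstat

Comparing these counters for runs with and without the series applied is one
way to arrive at numbers like those quoted above.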
Thread overview: 17+ messages
2023-11-09 4:59 [v4 0/3] Reduce TLB flushes under some specific conditions Byungchul Park
2023-11-09 4:59 ` [v4 1/3] mm/rmap: Recognize read-only TLB entries during batched TLB flush Byungchul Park
2023-11-09 20:26 ` kernel test robot
2023-11-09 4:59 ` [v4 2/3] mm: Defer TLB flush by keeping both src and dst folios at migration Byungchul Park
2023-11-09 14:36 ` Matthew Wilcox
2023-11-10 1:29 ` Byungchul Park
2024-01-15 7:55 ` Byungchul Park
2023-11-09 17:09 ` kernel test robot
2023-11-09 19:07 ` kernel test robot
2023-11-09 4:59 ` [v4 3/3] mm: Pause migrc mechanism at high memory pressure Byungchul Park
2023-11-09 5:20 ` [v4 0/3] Reduce TLB flushes under some specific conditions Huang, Ying
2023-11-10 1:32 ` Byungchul Park
2023-11-15 2:57 ` Byungchul Park
2023-11-09 14:26 ` Dave Hansen
2023-11-10 1:08 ` Byungchul Park [this message]
2023-11-15 6:43 ` Byungchul Park
2024-01-15 7:58 ` Byungchul Park