From: Rongwei Wang <rongwei.wrw@gmail.com>
To: "zhangpeng (AS)" <zhangpeng362@huawei.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: akpm@linux-foundation.org, dennisszhou@gmail.com,
shakeelb@google.com, jack@suse.cz, surenb@google.com,
kent.overstreet@linux.dev, mhocko@suse.cz, vbabka@suse.cz,
yuzhao@google.com, yu.ma@intel.com, wangkefeng.wang@huawei.com,
sunnanyong@huawei.com
Subject: Re: [RFC PATCH v2 2/2] mm: convert mm's rss stats to use atomic mode
Date: Sat, 20 Apr 2024 11:13:25 +0800 [thread overview]
Message-ID: <6a3b8095-8f49-47e0-a347-9e4a51806bf8@gmail.com> (raw)
In-Reply-To: <c1c79eb5-4d48-40e5-6f17-f8bc42f2d274@huawei.com>
On 2024/4/19 11:32, zhangpeng (AS) wrote:
> On 2024/4/19 10:30, Rongwei Wang wrote:
>
>> On 2024/4/18 22:20, Peng Zhang wrote:
>>> From: ZhangPeng <zhangpeng362@huawei.com>
>>>
>>> Since commit f1a7941243c1 ("mm: convert mm's rss stats into
>>> percpu_counter"), the rss_stats have been converted into
>>> percpu_counter, which changes the error margin from (nr_threads * 64)
>>> to approximately (nr_cpus ^ 2). However, the new percpu allocation in
>>> mm_init() causes a performance regression on fork/exec/shell. Even
>>> after commit 14ef95be6f55 ("kernel/fork: group allocation/free of
>>> per-cpu counters for mm struct"), the performance of fork/exec/shell
>>> is still poor compared to previous kernel versions.
>>>
>>> To mitigate the performance regression, we delay the allocation of
>>> percpu memory for rss_stats. Therefore, we convert mm's rss stats to
>>> percpu_counter atomic mode. For single-thread processes, rss_stat is in
>>> atomic mode, which reduces the memory consumption and performance
>>> regression caused by using percpu. For multiple-thread processes,
>>> rss_stat is switched to the percpu mode to reduce the error margin.
>>> We convert rss_stats from atomic mode to percpu mode only when the
>>> second thread is created.
>> Hi, Zhang Peng
>>
>> We have also seen this regression in lmbench recently. I have not
>> tested your patch yet, but it looks like it will help a lot.
>> I also notice that this patch does not fix the regression for
>> multi-threaded processes; is that because rss_stat is switched to
>> percpu mode? (If I'm wrong, please correct me.) percpu_counter also
>> seems to have a bad effect in exit_mmap().
>>
>> If so, I'm wondering if we can further improve the exit_mmap() path
>> in the multi-threaded scenario, e.g. by determining which CPUs the
>> process has run on (mm_cpumask()? I'm not sure).
>>
> Hi, Rongwei,
>
> Yes, this patch only fixes the regression for single-thread processes.
> How much of a bad effect does percpu_counter have in exit_mmap()? IMHO,
> the addition
Actually, I'm not sure; I just saw a small percpu-free hotspot in the
exit_mmap() path when comparing 4 cores vs 32 cores.
I can run more tests next.
> of the mm counters is already done in batch mode; maybe I missed something?
>
>>>
>>> In lmbench tests, we see a 2% ~ 4% performance improvement for
>>> lmbench fork_proc/exec_proc/shell_proc and a 6.7% performance
>>> improvement for lmbench page_fault (before batch mode [1]).
>>>
>>> The test results are as follows:
>>>
>>>               base        base+revert        base+this patch
>>> fork_proc     416.3ms     400.0ms  (3.9%)    398.6ms  (4.2%)
>>> exec_proc     2095.9ms    2061.1ms (1.7%)    2047.7ms (2.3%)
>>> shell_proc    3028.2ms    2954.7ms (2.4%)    2961.2ms (2.2%)
>>> page_fault    0.3603ms    0.3358ms (6.8%)    0.3361ms (6.7%)
>> I think the regression will become more obvious with more cores. How
>> many cores does your test machine have?
>>
> Maybe the core count is not a factor in the lmbench performance here.
> Both of my test machines have 96 cores.
>
>> Thanks,
>> -wrw
>>>
>>> [1]
>>> https://lore.kernel.org/all/20240412064751.119015-1-wangkefeng.wang@huawei.com/
>>>
>>> Suggested-by: Jan Kara <jack@suse.cz>
>>> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>> ---
>>> include/linux/mm.h          | 50 +++++++++++++++++++++++++++++++------
>>> include/trace/events/kmem.h |  4 +--
>>> kernel/fork.c               | 18 +++++++------
>>> 3 files changed, 56 insertions(+), 16 deletions(-)
>>>
>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>> index d261e45bb29b..8f1bfbd54697 100644
>>> --- a/include/linux/mm.h
>>> +++ b/include/linux/mm.h
>>> @@ -2631,30 +2631,66 @@ static inline bool get_user_page_fast_only(unsigned long addr,
>>> */
>>> static inline unsigned long get_mm_counter(struct mm_struct *mm, int member)
>>> {
>>> - return percpu_counter_read_positive(&mm->rss_stat[member]);
>>> + struct percpu_counter *fbc = &mm->rss_stat[member];
>>> +
>>> + if (percpu_counter_initialized(fbc))
>>> + return percpu_counter_read_positive(fbc);
>>> +
>>> + return percpu_counter_atomic_read(fbc);
>>> }
>>> void mm_trace_rss_stat(struct mm_struct *mm, int member);
>>> static inline void add_mm_counter(struct mm_struct *mm, int member, long value)
>>> {
>>> - percpu_counter_add(&mm->rss_stat[member], value);
>>> + struct percpu_counter *fbc = &mm->rss_stat[member];
>>> +
>>> + if (percpu_counter_initialized(fbc))
>>> + percpu_counter_add(fbc, value);
>>> + else
>>> + percpu_counter_atomic_add(fbc, value);
>>> mm_trace_rss_stat(mm, member);
>>> }
>>> static inline void inc_mm_counter(struct mm_struct *mm, int member)
>>> {
>>> - percpu_counter_inc(&mm->rss_stat[member]);
>>> -
>>> - mm_trace_rss_stat(mm, member);
>>> + add_mm_counter(mm, member, 1);
>>> }
>>> static inline void dec_mm_counter(struct mm_struct *mm, int member)
>>> {
>>> - percpu_counter_dec(&mm->rss_stat[member]);
>>> + add_mm_counter(mm, member, -1);
>>> +}
>>> - mm_trace_rss_stat(mm, member);
>>> +static inline s64 mm_counter_sum(struct mm_struct *mm, int member)
>>> +{
>>> + struct percpu_counter *fbc = &mm->rss_stat[member];
>>> +
>>> + if (percpu_counter_initialized(fbc))
>>> + return percpu_counter_sum(fbc);
>>> +
>>> + return percpu_counter_atomic_read(fbc);
>>> +}
>>> +
>>> +static inline s64 mm_counter_sum_positive(struct mm_struct *mm, int member)
>>> +{
>>> + struct percpu_counter *fbc = &mm->rss_stat[member];
>>> +
>>> + if (percpu_counter_initialized(fbc))
>>> + return percpu_counter_sum_positive(fbc);
>>> +
>>> + return percpu_counter_atomic_read(fbc);
>>> +}
>>> +
>>> +static inline int mm_counter_switch_to_pcpu_many(struct mm_struct *mm)
>>> +{
>>> + return percpu_counter_switch_to_pcpu_many(mm->rss_stat, NR_MM_COUNTERS);
>>> +}
>>> +
>>> +static inline void mm_counter_destroy_many(struct mm_struct *mm)
>>> +{
>>> + percpu_counter_destroy_many(mm->rss_stat, NR_MM_COUNTERS);
>>> }
>>> /* Optimized variant when folio is already known not to be anon */
>>> diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
>>> index 6e62cc64cd92..a4e40ae6a8c8 100644
>>> --- a/include/trace/events/kmem.h
>>> +++ b/include/trace/events/kmem.h
>>> @@ -399,8 +399,8 @@ TRACE_EVENT(rss_stat,
>>> __entry->mm_id = mm_ptr_to_hash(mm);
>>> __entry->curr = !!(current->mm == mm);
>>> __entry->member = member;
>>> - __entry->size = (percpu_counter_sum_positive(&mm->rss_stat[member])
>>> - << PAGE_SHIFT);
>>> + __entry->size = (mm_counter_sum_positive(mm, member)
>>> + << PAGE_SHIFT);
>>> ),
>>> TP_printk("mm_id=%u curr=%d type=%s size=%ldB",
>>> diff --git a/kernel/fork.c b/kernel/fork.c
>>> index 99076dbe27d8..0214273798c5 100644
>>> --- a/kernel/fork.c
>>> +++ b/kernel/fork.c
>>> @@ -823,7 +823,7 @@ static void check_mm(struct mm_struct *mm)
>>> "Please make sure 'struct resident_page_types[]' is updated as well");
>>> for (i = 0; i < NR_MM_COUNTERS; i++) {
>>> - long x = percpu_counter_sum(&mm->rss_stat[i]);
>>> + long x = mm_counter_sum(mm, i);
>>> if (unlikely(x))
>>> pr_alert("BUG: Bad rss-counter state mm:%p type:%s val:%ld\n",
>>> @@ -1301,16 +1301,10 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
>>> if (mm_alloc_cid(mm))
>>> goto fail_cid;
>>> - if (percpu_counter_init_many(mm->rss_stat, 0, GFP_KERNEL_ACCOUNT,
>>> - NR_MM_COUNTERS))
>>> - goto fail_pcpu;
>>> -
>>> mm->user_ns = get_user_ns(user_ns);
>>> lru_gen_init_mm(mm);
>>> return mm;
>>> -fail_pcpu:
>>> - mm_destroy_cid(mm);
>>> fail_cid:
>>> destroy_context(mm);
>>> fail_nocontext:
>>> @@ -1730,6 +1724,16 @@ static int copy_mm(unsigned long clone_flags, struct task_struct *tsk)
>>> if (!oldmm)
>>> return 0;
>>> + /*
>>> + * For single-thread processes, rss_stat is in atomic mode, which
>>> + * reduces the memory consumption and performance regression caused
>>> + * by using percpu. For multiple-thread processes, rss_stat is
>>> + * switched to the percpu mode to reduce the error margin.
>>> + */
>>> + if (clone_flags & CLONE_THREAD)
>>> + if (mm_counter_switch_to_pcpu_many(oldmm))
>>> + return -ENOMEM;
>>> +
>>> if (clone_flags & CLONE_VM) {
>>> mmget(oldmm);
>>> mm = oldmm;
>>
>>