From: Liu Shixin <liushixin2@huawei.com>
To: Michal Hocko <mhocko@suse.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
huang ying <huang.ying.caritas@gmail.com>,
Aaron Lu <aaron.lu@intel.com>,
Dave Hansen <dave.hansen@intel.com>,
Jesper Dangaard Brouer <brouer@redhat.com>,
Vlastimil Babka <vbabka@suse.cz>, Kemi Wang <kemi.wang@intel.com>,
"Kefeng Wang" <wangkefeng.wang@huawei.com>,
<linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>
Subject: Re: [PATCH -next v2] mm, proc: collect percpu free pages into the free pages
Date: Tue, 23 Aug 2022 20:46:43 +0800 [thread overview]
Message-ID: <6b2977fc-1e4a-f3d4-db24-7c4699e0773f@huawei.com> (raw)
In-Reply-To: <YwSGqtEICW5AlhWr@dhcp22.suse.cz>
On 2022/8/23 15:50, Michal Hocko wrote:
> On Mon 22-08-22 14:12:07, Andrew Morton wrote:
>> On Mon, 22 Aug 2022 11:33:54 +0800 Liu Shixin <liushixin2@huawei.com> wrote:
>>
>>> A page on a pcplist is free but is not counted in the free or
>>> available memory; the pcp count is only shown by show_mem() for now.
>>> Since commit d8a759b57035 ("mm, page_alloc: double zone's batchsize"),
>>> there has been a significant decrease in the displayed free memory.
>>> With a large number of CPUs and zones, the number of pages on the
>>> percpu lists can be very large, so it is better to let the user know
>>> the pcp count.
>>>
>>> On a machine with 3 zones and 72 CPUs, before commit d8a759b57035,
>>> the pcp lists could theoretically hold at most 162MB (3*72*768KB).
>>> After that commit, the lists can hold up to 324MB. In practice, 114MB
>>> has been observed in the idle state after system startup (an increase
>>> of 80MB).
>>>
>> Seems reasonable.
> I have asked in the previous incarnation of the patch but haven't really
> received any answer[1]. Is this a _real_ problem? The absolute amount of
> memory could be perceived as a lot but is this really noticeable wrt
> overall memory on those systems?
This may not be obvious when memory is plentiful. However, our products monitor
memory usage for capacity planning, and this change has triggered warnings there.
We also considered calculating the total number of pcplist pages from /proc/zoneinfo,
but we think it is more appropriate to add that total to the free and available
counts. After all, these are also free pages.
> Also the patch is accounting these pcp caches as free memory but that
> can be misleading as this memory is not readily available for use in
> general. E.g. MemAvailable is documented as:
> An estimate of how much memory is available for starting new
> applications, without swapping.
> but pcp caches are drained only after direct reclaim fails which can
> imply a lot of reclaim and runtime disruption.
Maybe it makes more sense to add it only to free? Or handle it like page cache?
>
> [1] http://lkml.kernel.org/r/YwMv1A1rVNZQuuOo@dhcp22.suse.cz
>
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index 033f1e26d15b..f89928d3ad4e 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -5853,6 +5853,26 @@ static unsigned long nr_free_zone_pages(int offset)
>>> return sum;
>>> }
>>>
>>> +static unsigned long nr_free_zone_pcplist_pages(struct zone *zone)
>>> +{
>>> + unsigned long sum = 0;
>>> + int cpu;
>>> +
>>> + for_each_online_cpu(cpu)
>>> + sum += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
>>> + return sum;
>>> +}
>>> +
>>> +static unsigned long nr_free_pcplist_pages(void)
>>> +{
>>> + unsigned long sum = 0;
>>> + struct zone *zone;
>>> +
>>> + for_each_zone(zone)
>>> + sum += nr_free_zone_pcplist_pages(zone);
>>> + return sum;
>>> +}
>> Prevention of races against zone/node hotplug?
> Memory hotplug doesn't remove nodes nor its zones.
>
Thread overview: 13+ messages
2022-08-22 2:33 [PATCH -next] " Liu Shixin
2022-08-22 3:33 ` [PATCH -next v2] " Liu Shixin
2022-08-22 21:12 ` Andrew Morton
2022-08-22 21:13 ` Andrew Morton
2022-08-23 13:12 ` Liu Shixin
2022-08-23 7:50 ` Michal Hocko
2022-08-23 12:46 ` Liu Shixin [this message]
2022-08-23 13:37 ` Michal Hocko
2022-08-24 10:05 ` Liu Shixin
2022-08-24 10:12 ` Michal Hocko
2023-11-24 17:54 ` Dmytro Maluka
2023-11-25 2:22 ` Kefeng Wang
2023-11-27 8:50 ` Michal Hocko