From: Huan Yang <link@vivo.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: Michal Hocko <mhocko@suse.com>, Tejun Heo <tj@kernel.org>,
Zefan Li <lizefan.x@bytedance.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Jonathan Corbet <corbet@lwn.net>,
Roman Gushchin <roman.gushchin@linux.dev>,
Shakeel Butt <shakeelb@google.com>,
Muchun Song <muchun.song@linux.dev>,
Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@redhat.com>,
Matthew Wilcox <willy@infradead.org>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
Peter Xu <peterx@redhat.com>,
"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
Yosry Ahmed <yosryahmed@google.com>,
Liu Shixin <liushixin2@huawei.com>,
Hugh Dickins <hughd@google.com>,
cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
opensource.kernel@vivo.com
Subject: Re: [RFC 0/4] Introduce unbalance proactive reclaim
Date: Mon, 13 Nov 2023 14:28:20 +0800 [thread overview]
Message-ID: <a09e21a6-6a1e-44ec-9187-600a0a969a45@vivo.com> (raw)
In-Reply-To: <87edgufakm.fsf@yhuang6-desk2.ccr.corp.intel.com>
On 2023/11/13 14:10, Huang, Ying wrote:
> Huan Yang <link@vivo.com> writes:
>
>> On 2023/11/10 20:24, Michal Hocko wrote:
>>> On Fri 10-11-23 11:48:49, Huan Yang wrote:
>>> [...]
>>>> Also, when the application enters the foreground, its startup may be
>>>> slower. Traces also show a lot of block I/O at that point (usually
>>>> 1000+ I/O requests and 200+ms of I/O time). We usually observe very
>>>> little block I/O caused by zram refaults (read: 1698.39MB/s, write:
>>>> 995.109MB/s); zram is usually faster than random disk reads (read:
>>>> 48.1907MB/s, write: 49.1654MB/s). These numbers come from zram-perf,
>>>> which I modified slightly to also test UFS.
>>>>
>>>> Therefore, if the proactive reclamation encounters many file pages,
>>>> the application may become slow when it is opened.
>>> OK, this is interesting information. From the above it seems that
>>> storage-based IO refaults are an order of magnitude more expensive than
>>> swap (zram in this case). That means that memory reclaim should
>>> _in general_ prefer anonymous memory reclaim over refaulted page cache,
>>> right? Or is there any reason why "frozen" applications are any
>>> different in this case?
>> Frozen applications mean that the application process is no longer active,
>> so once its private anonymous pages are swapped out, they will not be
>> refaulted until the application becomes active again.
>>
>> On the contrary, page caches are usually shared. Even if the application
>> that first read a file is no longer active, other processes may still read
>> it. Therefore, it is not reasonable to use the proactive reclaim interface
>> to reclaim page caches without considering memory pressure.
> No. Not all page caches are shared. For example, the page caches used
> for use-once streaming IO. And, they should be reclaimed first.
Yes, but MGLRU already handles this case well and does not require our
intervention.
Moreover, reclaiming clean file pages is very fast, whereas reclaiming
anonymous pages is somewhat slower by comparison.
>
> So, your solution may work well for your specific use cases, but it's
Yes, this approach is not universal.
> not a general solution. Per my understanding, you want to reclaim only
> private pages to avoid impacting the performance of other applications.
> Privately mapped anonymous pages are easy to identify (and I suggest
> that you find a way to avoid reclaiming shared mapped anonymous pages).
Yes, reclaiming shared anonymous pages is not desirable, and they need to
be identified. We will consider how to filter them out in the future; a
rough sketch of one possible check is shown below.
Thanks.
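
The following is a minimal sketch of the kind of filter we have in mind,
assuming the kernel's existing folio_test_anon() and folio_mapcount()
helpers; it only illustrates the idea and is not code from this series.

#include <linux/mm.h>

/*
 * Hypothetical helper, not part of this RFC: skip anonymous folios that
 * are mapped more than once, so that only the frozen application's
 * private anonymous memory is considered for proactive reclaim.
 */
static bool skip_shared_anon(struct folio *folio)
{
	/* Only anonymous folios are of interest for this tendency. */
	if (!folio_test_anon(folio))
		return false;

	/*
	 * A mapcount above one usually means the folio is shared, e.g.
	 * COW-shared after fork(); reclaiming it could cause refaults
	 * in other, still-active processes.
	 */
	return folio_mapcount(folio) > 1;
}

Such a check would sit alongside the existing reclaim filters rather than
replace them.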
> There are some heuristics to identify use-once page caches in the reclaim
> code. Why don't they work for your situation?
As mentioned above, the default reclaim algorithm already handles file
pages well, and we do not need to intervene there.
Direct reclaim or kswapd handles these use-once file pages very quickly
and does not cause lag or other side effects.
Our overall goal is to proactively and reasonably compress unused
anonymous pages according to certain policies, in order to increase
available memory, avoid lag, and prevent applications from being killed.
Therefore, using the proactive reclaim interface, combined with the LRU
algorithm and a reclaim tendency, is a good way to achieve this goal; a
small usage sketch follows.
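
To make the intended usage concrete, here is a minimal user-space sketch
that asks the kernel to reclaim from a frozen application's memcg through
the cgroup v2 memory.reclaim file. The cgroup path and the 512M amount are
made up for illustration, and the reclaim-tendency argument proposed by
this RFC is not shown, since its syntax is defined by the patches rather
than here.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a reclaim request to <cgroup>/memory.reclaim. */
static int proactive_reclaim(const char *cgroup_path, const char *request)
{
	char path[256];
	ssize_t ret;
	int fd;

	snprintf(path, sizeof(path), "%s/memory.reclaim", cgroup_path);

	fd = open(path, O_WRONLY);
	if (fd < 0) {
		perror("open");
		return -1;
	}

	/* e.g. request = "512M" reclaims up to 512 MiB from this memcg. */
	ret = write(fd, request, strlen(request));
	if (ret < 0)
		perror("write");

	close(fd);
	return ret < 0 ? -1 : 0;
}

int main(void)
{
	/* Hypothetical cgroup path for a frozen background application. */
	return proactive_reclaim("/sys/fs/cgroup/frozen/app.example", "512M");
}

A user-space daemon could run something like this periodically for
applications that have been frozen for a while.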
>
> [snip]
>
> --
> Best Regards,
> Huang, Ying
--
Thanks,
Huan Yang