From: Andrew Morton <akpm@linux-foundation.org>
To: xu.xin.sc@gmail.com
Cc: ran.xiaokai@zte.com.cn, yang.yang29@zte.com.cn,
jiang.xuexin@zte.com.cn, imbrenda@linux.ibm.com,
david@redhat.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, xu xin <xu.xin16@zte.com.cn>
Subject: Re: [PATCH v3 0/5] ksm: support tracking KSM-placed zero-pages
Date: Mon, 17 Oct 2022 16:55:41 -0700
Message-ID: <20221017165541.6e2d3cebdc1ba13861ea4b2b@linux-foundation.org>
In-Reply-To: <20221011022006.322158-1-xu.xin16@zte.com.cn>
On Tue, 11 Oct 2022 02:20:06 +0000 xu.xin.sc@gmail.com wrote:
> From: xu xin <xu.xin16@zte.com.cn>
>
> use_zero_pages is beneficial, not just because of the cache colouring
> described in the documentation, but also because it accelerates the
> merging of empty pages when there are plenty of them (pages full of
> zeros), since the cost of page-by-page comparison
> (unstable_tree_search_insert) is avoided.
>
> But there is room for improvement: when use_zero_pages is enabled, all
> empty pages are merged with the kernel zero page instead of with each
> other (as they would be with use_zero_pages disabled). These zero pages
> are then no longer managed or monitored by KSM, which leads to at least
> two issues:
Sorry, but I'm struggling to understand what real value this patchset
offers.
> 1) MADV_UNMERGEABLE and other ways of triggering unsharing will *not*
>    unshare the shared zeropages placed by KSM (which at least
>    contradicts the MADV_UNMERGEABLE documentation); see the link:
> https://lore.kernel.org/lkml/4a3daba6-18f9-d252-697c-197f65578c44@redhat.com/
Is that causing users any real-world problem? If not, just change the
documentation?
> 2) there is no way to know how many of the merged pages are zero pages
>    placed by KSM when use_zero_pages is enabled, so KSM is not fully
>    transparent about the pages it has actually merged.
Why is this a problem?
A full description of the real-world end-user operational benefits of
these changes would help, please.
2022-10-11 2:20 xu.xin.sc
2022-10-11 2:21 ` [PATCH v3 1/5] ksm: abstract the function try_to_get_old_rmap_item xu.xin.sc
2022-10-11 2:22 ` [PATCH v3 2/5] ksm: support unsharing zero pages placed by KSM xu.xin.sc
2022-10-21 10:17 ` David Hildenbrand
2022-10-21 12:54 ` David Hildenbrand
2022-11-09 10:40 ` David Hildenbrand
2022-11-14 3:02 ` xu xin
2022-10-11 2:22 ` [PATCH v3 3/5] ksm: count all " xu.xin.sc
2022-10-11 2:22 ` [PATCH v3 4/5] ksm: count zero pages for each process xu.xin.sc
2022-10-11 2:23 ` [PATCH v3 5/5] ksm: add zero_pages_sharing documentation xu.xin.sc
2022-10-17 23:55 ` Andrew Morton [this message]
2022-10-18 9:00 ` Re: [PATCH v3 0/5] ksm: support tracking KSM-placed zero-pages xu xin
2022-10-18 22:54 ` [PATCH " Andrew Morton
2022-10-21 10:18 ` David Hildenbrand
2022-10-24 3:07 ` xu xin