From: Zi Yan <ziy@nvidia.com>
To: ran xiaokai <ranxiaokai627@163.com>
Cc: 21cnbao@gmail.com, akpm@linux-foundation.org, david@redhat.com,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
mhocko@kernel.org, v-songbaohua@oppo.com, xu.xin16@zte.com.cn,
yang.yang29@zte.com.cn
Subject: Re: [PATCH linux-next] mm: huge_memory: fix misused mapping_large_folio_support() for anon folios
Date: Wed, 05 Jun 2024 07:08:31 -0700 [thread overview]
Message-ID: <D667F08C-0CCE-4D5E-89A3-56674B0893DE@nvidia.com> (raw)
In-Reply-To: <20240605095406.891512-1-ranxiaokai627@163.com>
On 5 Jun 2024, at 2:54, ran xiaokai wrote:
>> On Tue, Jun 4, 2024 at 5:47 PM <xu.xin16@zte.com.cn> wrote:
>>>
>>> From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
>>>
>>> When I ran a large folio split test, the WARNING
>>> "[ 5059.122759][ T166] Cannot split file folio to non-0 order"
>>> was triggered. But my test cases only cover anonymous folios,
>>> while mapping_large_folio_support() is only meaningful for page
>>> cache folios.
>>>
>>> In split_huge_page_to_list_to_order(), the folio passed to
>>> mapping_large_folio_support() may be an anonymous folio, but the
>>> folio_test_anon() check is missing, so splitting an anonymous THP
>>> fails. The same applies to shmem_mapping(), so we'd better add a
>>> check before both. The shmem_mapping() call in __split_huge_page()
>>> is not affected: for anonymous folios the end parameter is set to -1,
>>> so (head[i].index >= end) is always false and shmem_mapping() is
>>> never called.
>>>
>>> Verified with /sys/kernel/debug/split_huge_pages: with this patch,
>>> large anonymous THPs are split successfully and the warning is gone.
>>>
>>> Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
>>> Cc: xu xin <xu.xin16@zte.com.cn>
>>> Cc: Yang Yang <yang.yang29@zte.com.cn>
>>> ---
>>> mm/huge_memory.c | 38 ++++++++++++++++++++------------------
>>> 1 file changed, 20 insertions(+), 18 deletions(-)
>>>
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 317de2afd371..4c9c7e5ea20c 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -3009,31 +3009,33 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>>> if (new_order >= folio_order(folio))
>>> return -EINVAL;
>>>
>>> - /* Cannot split anonymous THP to order-1 */
>>> - if (new_order == 1 && folio_test_anon(folio)) {
>>> - VM_WARN_ONCE(1, "Cannot split to order-1 folio");
>>> - return -EINVAL;
>>> - }
>>> -
>>> if (new_order) {
>>> /* Only swapping a whole PMD-mapped folio is supported */
>>> if (folio_test_swapcache(folio))
>>> return -EINVAL;
>>> - /* Split shmem folio to non-zero order not supported */
>>> - if (shmem_mapping(folio->mapping)) {
>>> - VM_WARN_ONCE(1,
>>> - "Cannot split shmem folio to non-0 order");
>>> - return -EINVAL;
>>> - }
>>> - /* No split if the file system does not support large folio */
>>> - if (!mapping_large_folio_support(folio->mapping)) {
>>> - VM_WARN_ONCE(1,
>>> - "Cannot split file folio to non-0 order");
>>> - return -EINVAL;
>>> +
>>> + if (folio_test_anon(folio)) {
>>> + /* Cannot split anonymous THP to order-1 */
>>> + if (new_order == 1) {
>>> + VM_WARN_ONCE(1, "Cannot split to order-1 folio");
>>> + return -EINVAL;
>>> + }
>>> + } else {
>>> + /* Split shmem folio to non-zero order not supported */
>>> + if (shmem_mapping(folio->mapping)) {
>>> + VM_WARN_ONCE(1,
>>> + "Cannot split shmem folio to non-0 order");
>>> + return -EINVAL;
>>> + }
>>> + /* No split if the file system does not support large folio */
>>> + if (!mapping_large_folio_support(folio->mapping)) {
>>> + VM_WARN_ONCE(1,
>>> + "Cannot split file folio to non-0 order");
>>> + return -EINVAL;
>>> + }
>>
>> Am I missing something? If the file system doesn't support large
>> folios, how could a large folio exist in the first place while its
>> mapping points to a file on such a file system?
>
> I think it is the CONFIG_READ_ONLY_THP_FOR_FS case.
> khugepaged will try to collapse read-only file-backed pages to 2M THP.
Can you add this information to the commit log in your next version?
Best Regards,
Yan, Zi
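For readers following the patch above, the reordered checks can be modelled as a small userspace sketch. The `struct folio_model` and its flags are hypothetical stand-ins for the kernel predicates named in the comments; the real code operates on `struct folio` and `folio->mapping`:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Hypothetical stand-in for struct folio: each field models the
 * kernel predicate named in the comment. */
struct folio_model {
	int order;       /* folio_order() */
	bool anon;       /* folio_test_anon() */
	bool swapcache;  /* folio_test_swapcache() */
	bool shmem;      /* shmem_mapping(folio->mapping) */
	bool large_ok;   /* mapping_large_folio_support(folio->mapping) */
};

/* Mirrors the check ordering after the patch: the mapping-based
 * checks run only when the folio is NOT anonymous. */
static int split_checks(const struct folio_model *f, int new_order)
{
	if (new_order >= f->order)
		return -EINVAL;
	if (new_order) {
		/* Only swapping a whole PMD-mapped folio is supported */
		if (f->swapcache)
			return -EINVAL;
		if (f->anon) {
			/* Cannot split anonymous THP to order-1 */
			if (new_order == 1)
				return -EINVAL;
		} else {
			/* Split shmem folio to non-zero order not supported */
			if (f->shmem)
				return -EINVAL;
			/* No split if the fs lacks large folio support */
			if (!f->large_ok)
				return -EINVAL;
		}
	}
	return 0;
}
```

With the pre-patch ordering, an anonymous folio would have fallen through to the mapping-based checks and tripped the "Cannot split file folio to non-0 order" warning; in this sketch, a non-zero-order split of an anonymous folio now passes while order-1 stays rejected.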
Thread overview: 12+ messages
2024-06-04 5:47 xu.xin16
2024-06-04 7:57 ` David Hildenbrand
2024-06-04 13:52 ` Zi Yan
2024-06-04 13:57 ` Zi Yan
2024-06-05 2:56 ` ran xiaokai
2024-06-05 2:20 ` ran xiaokai
2024-06-05 7:25 ` David Hildenbrand
2024-06-05 8:30 ` ran xiaokai
2024-06-05 8:33 ` David Hildenbrand
2024-06-05 9:06 ` Barry Song
[not found] ` <20240605095406.891512-1-ranxiaokai627@163.com>
2024-06-05 14:08 ` Zi Yan [this message]
[not found] ` <c110eb46-3c9d-40c3-ab16-5bd9f75b6501@redhat.com>
2024-06-06 1:34 ` Barry Song