From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	<linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>,
	David Hildenbrand <david@redhat.com>,
	Gregory Price <gregory.price@memverge.com>
Subject: Re: [PATCH v2 3/7] fs/proc/page: respect folio head-page flag placement
Date: Sat, 11 Nov 2023 17:49:38 +0800	[thread overview]
Message-ID: <438ba640-c205-4034-886e-6a7231f3d210@huawei.com> (raw)
In-Reply-To: <ZU50JT0OVdAh9q5W@casper.infradead.org>



On 2023/11/11 2:19, Matthew Wilcox wrote:
> On Fri, Nov 10, 2023 at 11:33:20AM +0800, Kefeng Wang wrote:
>> kpageflags reads page-flags directly from the page, even when the
>> respective flag is only updated on the headpage of a folio.
>>
>> Since most flags are stored in the head page's flags, make k = folio->flags,
>> and add a new p = page->flags for the per-page flags.
> 
> You'd do better to steal Greg's commit message.

Sure

> 
>> Originally-from: Gregory Price <gregory.price@memverge.com>
>> Suggested-by: Matthew Wilcox <willy@infradead.org>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> 
>> @@ -202,7 +202,7 @@ u64 stable_page_flags(struct page *page)
>>   	u |= kpf_copy_bit(k, KPF_MLOCKED,	PG_mlocked);
>>   
>>   #ifdef CONFIG_MEMORY_FAILURE
>> -	u |= kpf_copy_bit(k, KPF_HWPOISON,	PG_hwpoison);
>> +	u |= kpf_copy_bit(p, KPF_HWPOISON,	PG_hwpoison);
> 
> This is correct.
> 
>> @@ -211,13 +211,13 @@ u64 stable_page_flags(struct page *page)
>>   
>>   	u |= kpf_copy_bit(k, KPF_RESERVED,	PG_reserved);
>>   	u |= kpf_copy_bit(k, KPF_MAPPEDTODISK,	PG_mappedtodisk);
>> -	u |= kpf_copy_bit(k, KPF_PRIVATE,	PG_private);
>> -	u |= kpf_copy_bit(k, KPF_PRIVATE_2,	PG_private_2);
>> -	u |= kpf_copy_bit(k, KPF_OWNER_PRIVATE,	PG_owner_priv_1);
>> +	u |= kpf_copy_bit(p, KPF_PRIVATE,	PG_private);
>> +	u |= kpf_copy_bit(p, KPF_PRIVATE_2,	PG_private_2);
>> +	u |= kpf_copy_bit(p, KPF_OWNER_PRIVATE,	PG_owner_priv_1);
> 
> This is not.  PG_private is not, I believe, set on tail pages.
> Ditto the other two.  If you know differently ... ?

Here k = folio->flags and p = page->flags.
Since PG_private/PG_private_2/PG_owner_priv_1 use the PF_ANY page
policy, I thought they should be checked per-page, so I am confused...
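
For reference, a minimal sketch of the pieces under discussion,
paraphrased from fs/proc/page.c (the exact code in the tree may
differ slightly):

	/* copy bit kbit of kflags into userspace bit ubit */
	static inline u64 kpf_copy_bit(u64 kflags, int ubit, int kbit)
	{
		return ((kflags >> kbit) & 1) << ubit;
	}

	/* in stable_page_flags(): */
	k = folio->flags;	/* head-page (per-folio) flag word */
	p = page->flags;	/* flag word of this exact page */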

Here are the flags checked here and their page policies:

PG_error        PF_NO_TAIL
PG_dirty        PF_HEAD
PG_uptodate     PF_NO_TAIL
PG_writeback    PF_NO_TAIL
PG_lru          PF_HEAD
PG_referenced   PF_HEAD
PG_active       PF_HEAD
PG_reclaim      PF_NO_TAIL
PG_swapbacked   PF_NO_TAIL
PG_unevictable  PF_HEAD
PG_mlocked      PF_NO_TAIL
PG_hwpoison     PF_ANY
PG_uncached     PF_NO_COMPOUND
PG_reserved     PF_NO_COMPOUND
PG_mappedtodisk PF_NO_TAIL
PG_private      PF_ANY
PG_private_2    PF_ANY
PG_owner_priv_1 PF_ANY

The flags above fall into 4 policy types (see the sketch of the
policy macros below):

1) PF_HEAD        - should use k

2) PF_ANY         - should use p

3) PF_NO_TAIL
    - PageXXX() checks the head page's flags, so presumably k should
      be used, but since the code here tests the bit directly rather
      than going through the policy, I wonder whether it should use p?

4) PF_NO_COMPOUND
   - should be checked per-page
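
For context, the four policies expand roughly as follows in
include/linux/page-flags.h (paraphrased from memory; check the tree
for the exact definitions):

	/* sanity check, then hand back the page to test the flag on */
	#define PF_POISONED_CHECK(page) ({				\
			VM_BUG_ON_PGFLAGS(PagePoisoned(page), page);	\
			page; })
	/* test the flag on this exact page */
	#define PF_ANY(page, enforce)	PF_POISONED_CHECK(page)
	/* always test the flag on the head page */
	#define PF_HEAD(page, enforce)	PF_POISONED_CHECK(compound_head(page))
	/* callers must not pass a tail page; test on the head page */
	#define PF_NO_TAIL(page, enforce) ({				\
			VM_BUG_ON_PGFLAGS(enforce && PageTail(page), page); \
			PF_POISONED_CHECK(compound_head(page)); })
	/* callers must not pass any compound page */
	#define PF_NO_COMPOUND(page, enforce) ({			\
			VM_BUG_ON_PGFLAGS(enforce && PageCompound(page), page); \
			PF_POISONED_CHECK(page); })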
> 
>>   #ifdef CONFIG_ARCH_USES_PG_ARCH_X
>> -	u |= kpf_copy_bit(k, KPF_ARCH_2,	PG_arch_2);
>> -	u |= kpf_copy_bit(k, KPF_ARCH_3,	PG_arch_3);
>> +	u |= kpf_copy_bit(p, KPF_ARCH_2,	PG_arch_2);
>> +	u |= kpf_copy_bit(p, KPF_ARCH_3,	PG_arch_3);
>>   #endif
> 
> I also don't think this is correct, but there are many uses of
> PG_arch* and I may have missed something.
> 

And the 3 arch page flags:

PG_arch_1
  - PG_dcache_clean; judging from cachetlb.rst, I think it is
    per-folio, so use k

PG_arch_2
  - only arm64 MTE, PG_mte_tagged
PG_arch_3
  - only arm64 MTE, PG_mte_lock

The latter two PG_arch flags are only used by arm64 MTE and are
per-page flags, so use p (see the sketch below).
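
A rough sketch of why I read them as per-page, paraphrased from
arch/arm64/include/asm/mte.h (details may differ in the tree):

	#define PG_mte_tagged	PG_arch_2	/* page has valid MTE tags */
	#define PG_mte_lock	PG_arch_3	/* serialises tag initialisation */

	static inline void set_page_mte_tagged(struct page *page)
	{
		/* make the tag writes visible before the flag update */
		smp_wmb();
		set_bit(PG_mte_tagged, &page->flags);
	}

Note both bits are set on the individual struct page, not on the
folio's head page.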

Correct me if I am wrong, thanks.


