From: David Hildenbrand <david@redhat.com>
To: Zi Yan <ziy@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	akpm@linux-foundation.org, hannes@cmpxchg.org,
	lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
	npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com,
	vbabka@suse.cz, rppt@kernel.org, surenb@google.com,
	mhocko@suse.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] mm: convert do_set_pmd() to take a folio
Date: Thu, 8 May 2025 09:36:02 +0200
Message-ID: <a5056791-0a3e-40f6-bb83-7f39ef76b346@redhat.com>
In-Reply-To: <A243EBEA-22E7-4F57-9293-177500463B38@nvidia.com>

On 08.05.25 01:46, Zi Yan wrote:
> On 7 May 2025, at 17:24, David Hildenbrand wrote:
> 
>> On 07.05.25 14:10, Matthew Wilcox wrote:
>>> On Wed, May 07, 2025 at 05:26:13PM +0800, Baolin Wang wrote:
>>>> In do_set_pmd(), we always use the folio->page to build PMD mappings for
>>>> the entire folio. Since all callers of do_set_pmd() already hold a stable
>>>> folio, converting do_set_pmd() to take a folio is safe and more straightforward.
>>>
>>> What testing did you do of this?
>>>
>>>> -vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
>>>> +vm_fault_t do_set_pmd(struct vm_fault *vmf, struct folio *folio)
>>>>    {
>>>> -	struct folio *folio = page_folio(page);
>>>>    	struct vm_area_struct *vma = vmf->vma;
>>>>    	bool write = vmf->flags & FAULT_FLAG_WRITE;
>>>>    	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>>>>    	pmd_t entry;
>>>>    	vm_fault_t ret = VM_FAULT_FALLBACK;
>>>> +	struct page *page;
>>>
>>> Because I see nowhere in this patch that you initialise 'page'.
>>>
>>> And that's really the important part.  You seem to be assuming that a
>>> folio will never be larger than PMD size, and I'm not comfortable with
>>> that assumption.  It's a limitation I put in place a few years ago so we
>>> didn't have to find and fix all those assumptions immediately, but I
>>> imagine that some day we'll want to have larger folios.
>>>
>>> So unless you can derive _which_ page in the folio we want to map from
>>> the vmf, NACK this patch.
>>
>> Agreed. Probably folio + idx is our best bet.
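
Purely to illustrate that direction (a hypothetical helper, not the posted
patch; the name and checks below are assumptions), the caller would derive
from the vmf which sub-page should sit at the PMD-aligned address and pass
its index:

/*
 * Sketch only: 'idx' is the sub-page of 'folio' that should land at the
 * PMD-aligned fault address.  Returns the page to map, or NULL to make
 * the caller fall back to PTE mappings.
 */
static struct page *pmd_subpage_to_map(struct folio *folio, unsigned long idx)
{
	/* The PMD-sized chunk must start PMD-aligned within the folio
	 * and must not run past its end. */
	if (!IS_ALIGNED(idx, HPAGE_PMD_NR) ||
	    idx + HPAGE_PMD_NR > folio_nr_pages(folio))
		return NULL;

	return folio_page(folio, idx);
}

A do_set_pmd(vmf, folio, idx) taking such an index could then map the
returned page exactly like it maps folio->page today, without assuming that
a folio is never larger than PMD size.
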
>>
>> Which raises an interesting question: I assume that in the future, when we have a 4 MiB folio on x86-64 that is *misaligned* in VA space with respect to PMDs (e.g., aligned to 1 MiB but not 2 MiB), we could still allow using a PMD for the middle part.
> 
> It might not be possible if the folio comes from the buddy allocator, due to
> how the buddy allocator merges a PFN with its buddy (see __find_buddy_pfn()
> in mm/internal.h): a 4MB folio will always consist of two 2MB-aligned parts.
> In addition, VA and PA need to have the same lower 9+12 bits for a PMD
> mapping, so PMD mappings for a 4MB folio would always be two PMDs. Let me
> know if I missed anything.

The PA side is clear. But is such a mis-alignment between VA and PA
impossible on all architectures? I certainly remember it being impossible
on x86-64 and s390x (the remaining PMD-entry bits are used for something
else).
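
To make the alignment arithmetic concrete (a userspace sketch, not kernel
code; the helper name and constants are made up for illustration):

#include <stdbool.h>
#include <stdio.h>

/* 2 MiB: 512 PTEs x 4 KiB pages, i.e. the low 9 + 12 = 21 address bits. */
#define PMD_SIZE_SKETCH (1UL << 21)

/* A PMD can only map a 2 MiB range whose VA and PA agree in those bits. */
static bool pmd_mappable(unsigned long vaddr, unsigned long paddr)
{
	return ((vaddr ^ paddr) & (PMD_SIZE_SKETCH - 1)) == 0;
}

int main(void)
{
	/* Buddy-allocated 4 MiB folio: its PA is (at least) 2 MiB-aligned. */
	unsigned long pa = 0x40000000UL;
	/* Mapped at a VA that is 1 MiB- but not 2 MiB-aligned. */
	unsigned long va = 0x7f0000100000UL;

	/* Both 2 MiB halves inherit the 1 MiB skew, so neither can be
	 * PMD-mapped; this prints "0 0" and the folio would have to be
	 * mapped with PTEs instead. */
	printf("%d %d\n", pmd_mappable(va, pa),
	       pmd_mappable(va + PMD_SIZE_SKETCH, pa + PMD_SIZE_SKETCH));
	return 0;
}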

-- 
Cheers,

David / dhildenb



Thread overview: 14+ messages
2025-05-07  9:26 [PATCH 1/2] mm: khugepaged: convert set_huge_pmd() " Baolin Wang
2025-05-07  9:26 ` [PATCH 2/2] mm: convert do_set_pmd() " Baolin Wang
2025-05-07 12:10   ` Matthew Wilcox
2025-05-07 12:36     ` Baolin Wang
2025-05-07 16:47       ` Matthew Wilcox
2025-05-08  2:23         ` Baolin Wang
2025-05-08  7:37           ` David Hildenbrand
2025-05-07 21:24     ` David Hildenbrand
2025-05-07 23:46       ` Zi Yan
2025-05-08  7:36         ` David Hildenbrand [this message]
2025-05-08 13:12           ` Matthew Wilcox
2025-05-07 12:04 ` [PATCH 1/2] mm: khugepaged: convert set_huge_pmd() " Matthew Wilcox
2025-05-07 21:19   ` David Hildenbrand
2025-05-08  2:08     ` Baolin Wang
