From: Alistair Popple <apopple@nvidia.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	 nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org,
	 Andrew Morton <akpm@linux-foundation.org>,
	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	 "Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	 Mike Rapoport <rppt@kernel.org>,
	Suren Baghdasaryan <surenb@google.com>,
	 Michal Hocko <mhocko@suse.com>, Zi Yan <ziy@nvidia.com>,
	 Baolin Wang <baolin.wang@linux.alibaba.com>,
	Nico Pache <npache@redhat.com>,
	 Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
	 Dan Williams <dan.j.williams@intel.com>,
	Oscar Salvador <osalvador@suse.de>
Subject: Re: [PATCH v2 2/3] mm/huge_memory: don't mark refcounted folios special in vmf_insert_folio_pmd()
Date: Thu, 12 Jun 2025 12:17:58 +1000
Message-ID: <xdkrref3md2rfc3sou6lta2vcevz6e4ckjd6q67znpipkvxbmw@gftpxkrtlqnx>
In-Reply-To: <20250611120654.545963-3-david@redhat.com>

On Wed, Jun 11, 2025 at 02:06:53PM +0200, David Hildenbrand wrote:
> Marking PMDs that map "normal" refcounted folios as special is
> against our rules documented for vm_normal_page().
> 
> Fortunately, there are not that many pmd_special() checks that can be
> misled, and most vm_normal_page_pmd()/vm_normal_folio_pmd() users that
> would get this wrong right now are rather harmless: e.g., none so far
> decides whether to grab a folio reference based on that check.
> 
> Well, and GUP-fast will fall back to GUP-slow. All in all, no big
> implications so far, it seems.
> 
> Getting this right will become more important as we use
> vm_normal_folio_pmd() in more places.
> 
> Fix it by teaching insert_pfn_pmd() to properly handle folios and
> pfns -- moving refcount/mapcount/etc handling in there, renaming it to
> insert_pmd(), and distinguishing between both cases using a new simple
> "struct folio_or_pfn" structure.
> 
> Use folio_mk_pmd() to create a pmd for a folio cleanly.
> 
> Fixes: 6c88f72691f8 ("mm/huge_memory: add vmf_insert_folio_pmd()")
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/huge_memory.c | 58 ++++++++++++++++++++++++++++++++----------------
>  1 file changed, 39 insertions(+), 19 deletions(-)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 49b98082c5401..7e3e9028873e5 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1372,9 +1372,17 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>  	return __do_huge_pmd_anonymous_page(vmf);
>  }
>  
> -static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
> -		pmd_t *pmd, pfn_t pfn, pgprot_t prot, bool write,
> -		pgtable_t pgtable)
> +struct folio_or_pfn {
> +	union {
> +		struct folio *folio;
> +		pfn_t pfn;
> +	};
> +	bool is_folio;
> +};

I know it's simple, but I'm still not a fan, particularly as these types of
patterns tend to proliferate once introduced. See below for a suggestion.

> +static int insert_pmd(struct vm_area_struct *vma, unsigned long addr,
> +		pmd_t *pmd, struct folio_or_pfn fop, pgprot_t prot,
> +		bool write, pgtable_t pgtable)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
>  	pmd_t entry;
> @@ -1382,8 +1390,11 @@ static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
>  	lockdep_assert_held(pmd_lockptr(mm, pmd));
>  
>  	if (!pmd_none(*pmd)) {
> +		const unsigned long pfn = fop.is_folio ? folio_pfn(fop.folio) :
> +					  pfn_t_to_pfn(fop.pfn);
> +
>  		if (write) {
> -			if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) {
> +			if (pmd_pfn(*pmd) != pfn) {
>  				WARN_ON_ONCE(!is_huge_zero_pmd(*pmd));
>  				return -EEXIST;
>  			}
> @@ -1396,11 +1407,19 @@ static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
>  		return -EEXIST;
>  	}
>  
> -	entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
> -	if (pfn_t_devmap(pfn))
> -		entry = pmd_mkdevmap(entry);
> -	else
> -		entry = pmd_mkspecial(entry);
> +	if (fop.is_folio) {
> +		entry = folio_mk_pmd(fop.folio, vma->vm_page_prot);
> +
> +		folio_get(fop.folio);
> +		folio_add_file_rmap_pmd(fop.folio, &fop.folio->page, vma);
> +		add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
> +	} else {
> +		entry = pmd_mkhuge(pfn_t_pmd(fop.pfn, prot));
> +		if (pfn_t_devmap(fop.pfn))
> +			entry = pmd_mkdevmap(entry);
> +		else
> +			entry = pmd_mkspecial(entry);
> +	}

Could we change insert_pfn_pmd() to insert_pmd_entry() and have callers call
something like pfn_to_pmd_entry() or folio_to_pmd_entry() to create the pmd_t
entry as appropriate? That entry would then be passed to insert_pmd_entry(),
which would do only the bits common to both cases. A rough sketch follows.
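
Something like the below (completely untested, and the helper names are just
placeholders). One wrinkle I haven't thought through: the folio refcount/rmap
updates must only happen once we know the PMD will actually be installed, so
they may need to stay behind the pmd_none() check rather than live in a pure
entry-construction helper:

static pmd_t folio_to_pmd_entry(struct vm_area_struct *vma, struct folio *folio)
{
	pmd_t entry = folio_mk_pmd(folio, vma->vm_page_prot);

	/* Take a reference and account the new file-backed PMD mapping. */
	folio_get(folio);
	folio_add_file_rmap_pmd(folio, &folio->page, vma);
	add_mm_counter(vma->vm_mm, mm_counter_file(folio), HPAGE_PMD_NR);
	return entry;
}

static pmd_t pfn_to_pmd_entry(pfn_t pfn, pgprot_t prot)
{
	pmd_t entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));

	/* Raw pfn mappings are either devmap or special. */
	return pfn_t_devmap(pfn) ? pmd_mkdevmap(entry) : pmd_mkspecial(entry);
}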

>  	if (write) {
>  		entry = pmd_mkyoung(pmd_mkdirty(entry));
>  		entry = maybe_pmd_mkwrite(entry, vma);
> @@ -1431,6 +1450,9 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
>  	unsigned long addr = vmf->address & PMD_MASK;
>  	struct vm_area_struct *vma = vmf->vma;
>  	pgprot_t pgprot = vma->vm_page_prot;
> +	struct folio_or_pfn fop = {
> +		.pfn = pfn,
> +	};
>  	pgtable_t pgtable = NULL;
>  	spinlock_t *ptl;
>  	int error;
> @@ -1458,8 +1480,8 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
>  	pfnmap_setup_cachemode_pfn(pfn_t_to_pfn(pfn), &pgprot);
>  
>  	ptl = pmd_lock(vma->vm_mm, vmf->pmd);
> -	error = insert_pfn_pmd(vma, addr, vmf->pmd, pfn, pgprot, write,
> -			pgtable);
> +	error = insert_pmd(vma, addr, vmf->pmd, fop, pgprot, write,
> +			   pgtable);
>  	spin_unlock(ptl);
>  	if (error && pgtable)
>  		pte_free(vma->vm_mm, pgtable);
> @@ -1474,6 +1496,10 @@ vm_fault_t vmf_insert_folio_pmd(struct vm_fault *vmf, struct folio *folio,
>  	struct vm_area_struct *vma = vmf->vma;
>  	unsigned long addr = vmf->address & PMD_MASK;
>  	struct mm_struct *mm = vma->vm_mm;
> +	struct folio_or_pfn fop = {
> +		.folio = folio,
> +		.is_folio = true,
> +	};
>  	spinlock_t *ptl;
>  	pgtable_t pgtable = NULL;
>  	int error;
> @@ -1491,14 +1517,8 @@ vm_fault_t vmf_insert_folio_pmd(struct vm_fault *vmf, struct folio *folio,
>  	}
>  
>  	ptl = pmd_lock(mm, vmf->pmd);
> -	if (pmd_none(*vmf->pmd)) {
> -		folio_get(folio);
> -		folio_add_file_rmap_pmd(folio, &folio->page, vma);
> -		add_mm_counter(mm, mm_counter_file(folio), HPAGE_PMD_NR);
> -	}
> -	error = insert_pfn_pmd(vma, addr, vmf->pmd,
> -			pfn_to_pfn_t(folio_pfn(folio)), vma->vm_page_prot,
> -			write, pgtable);
> +	error = insert_pmd(vma, addr, vmf->pmd, fop, vma->vm_page_prot,
> +			   write, pgtable);
>  	spin_unlock(ptl);
>  	if (error && pgtable)
>  		pte_free(mm, pgtable);
> -- 
> 2.49.0
> 

