From: Yan Zhao <yan.y.zhao@intel.com>
To: Michael Roth <michael.roth@amd.com>
Cc: <kvm@vger.kernel.org>, <linux-coco@lists.linux.dev>,
	<linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>,
	<jroedel@suse.de>, <thomas.lendacky@amd.com>,
	<pbonzini@redhat.com>, <seanjc@google.com>, <vbabka@suse.cz>,
	<amit.shah@amd.com>, <pratikrajesh.sampat@amd.com>,
	<ashish.kalra@amd.com>, <liam.merwick@oracle.com>,
	<david@redhat.com>, <vannapurve@google.com>,
	<ackerleytng@google.com>, <quic_eberman@quicinc.com>
Subject: Re: [PATCH 5/5] KVM: Add hugepage support for dedicated guest memory
Date: Fri, 14 Mar 2025 17:50:26 +0800
Message-ID: <Z9P74jUTsWo3CHGy@yzhao56-desk.sh.intel.com>
In-Reply-To: <20241212063635.712877-6-michael.roth@amd.com>

> +static struct folio *kvm_gmem_get_huge_folio(struct inode *inode, pgoff_t index,
> +					     unsigned int order)
> +{
> +	pgoff_t npages = 1UL << order;
> +	pgoff_t huge_index = round_down(index, npages);
> +	struct address_space *mapping  = inode->i_mapping;
> +	gfp_t gfp = mapping_gfp_mask(mapping) | __GFP_NOWARN;
> +	loff_t size = i_size_read(inode);
> +	struct folio *folio;
> +
> +	/* Make sure hugepages would be fully-contained by inode */
> +	if ((huge_index + npages) * PAGE_SIZE > size)
> +		return NULL;
> +
> +	if (filemap_range_has_page(mapping, (loff_t)huge_index << PAGE_SHIFT,
> +				   (loff_t)(huge_index + npages - 1) << PAGE_SHIFT))
> +		return NULL;
> +
> +	folio = filemap_alloc_folio(gfp, order);
> +	if (!folio)
> +		return NULL;
Instead of returning NULL here, what about invoking __filemap_get_folio()
directly, as in the sketch further below?

> +	if (filemap_add_folio(mapping, folio, huge_index, gfp)) {
> +		folio_put(folio);
> +		return NULL;
> +	}
> +
> +	return folio;
> +}
> +
>  /*
>   * Returns a locked folio on success.  The caller is responsible for
>   * setting the up-to-date flag before the memory is mapped into the guest.
> @@ -284,8 +314,15 @@ static int kvm_gmem_prepare_folio(struct kvm *kvm, struct file *file,
>   */
>  static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
>  {
> -	/* TODO: Support huge pages. */
> -	return filemap_grab_folio(inode->i_mapping, index);
> +	struct folio *folio = NULL;
> +
> +	if (gmem_2m_enabled)
> +		folio = kvm_gmem_get_huge_folio(inode, index, PMD_ORDER);
> +
> +	if (!folio)
Also need to check IS_ERR(folio) here: if kvm_gmem_get_huge_folio() invokes
__filemap_get_folio() as suggested above, failures come back as an ERR_PTR
rather than NULL, so a bare !folio check would miss them.

> +		folio = filemap_grab_folio(inode->i_mapping, index);
> +
> +	return folio;
>  }
Could we introduce a common helper that calculates max_order by checking for
gfn/index alignment and ensuring that the memory attributes across the range
are uniform?
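
Something along these lines, as an illustrative sketch only -- the helper
name, its signature, and the per-gfn attribute walk are my assumptions, not
anything taken from the patch:

static int kvm_gmem_max_order(struct kvm *kvm, gfn_t gfn, pgoff_t index)
{
	unsigned long npages = 1UL << PMD_ORDER;
	unsigned long attrs;
	unsigned long i;

	/* gfn and index must be aligned identically for a PMD-sized mapping. */
	if ((gfn ^ index) & (npages - 1))
		return 0;

	/*
	 * All gfns backing the huge folio must share the same memory
	 * attributes (e.g. all private or all shared); otherwise fall
	 * back to order 0.
	 */
	attrs = kvm_get_memory_attributes(kvm, gfn);
	for (i = 1; i < npages; i++) {
		if (kvm_get_memory_attributes(kvm, gfn + i) != attrs)
			return 0;
	}

	return PMD_ORDER;
}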

Then we can pass max_order into kvm_gmem_get_folio() and only allocate a huge
folio when it's actually needed:

static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index, int max_order)
{
	struct folio *folio = NULL;

	if (max_order >= PMD_ORDER) {
		fgf_t fgp_flags = FGP_LOCK | FGP_ACCESSED | FGP_CREAT;

		/* Ask the page cache for a PMD-order folio. */
		fgp_flags |= fgf_set_order(1U << (PAGE_SHIFT + PMD_ORDER));
		folio = __filemap_get_folio(inode->i_mapping, index, fgp_flags,
					    mapping_gfp_mask(inode->i_mapping));
	}

	/* __filemap_get_folio() returns an ERR_PTR on failure, not NULL. */
	if (!folio || IS_ERR(folio))
		folio = filemap_grab_folio(inode->i_mapping, index);

	return folio;
}
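
Note that fgf_set_order() takes a size in bytes and folds the resulting folio
order into the FGP flags, so fgf_set_order(1U << (PAGE_SHIFT + PMD_ORDER))
requests a PMD-sized folio (2MB with 4K pages); __filemap_get_folio() with
FGP_CREAT then either finds an existing folio at that index or tries to
allocate one of up to that order, falling back to smaller orders on
allocation failure.

Callers would then do something like the following (again just a sketch,
assuming the hypothetical kvm_gmem_max_order() helper above):

	max_order = kvm_gmem_max_order(kvm, gfn, index);
	folio = kvm_gmem_get_folio(inode, index, max_order);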



Thread overview: 35+ messages
2024-12-12  6:36 [PATCH RFC v1 0/5] KVM: gmem: 2MB THP support and preparedness tracking changes Michael Roth
2024-12-12  6:36 ` [PATCH 1/5] KVM: gmem: Don't rely on __kvm_gmem_get_pfn() for preparedness Michael Roth
2025-01-22 14:39   ` Tom Lendacky
2025-02-20  1:12     ` Michael Roth
2024-12-12  6:36 ` [PATCH 2/5] KVM: gmem: Don't clear pages that have already been prepared Michael Roth
2024-12-12  6:36 ` [PATCH 3/5] KVM: gmem: Hold filemap invalidate lock while allocating/preparing folios Michael Roth
2025-03-14  9:20   ` Yan Zhao
2025-04-07  8:25     ` Yan Zhao
2025-04-23 20:30       ` Ackerley Tng
2025-05-19 17:04         ` Ackerley Tng
2025-05-21  6:46           ` Yan Zhao
2025-06-03  1:05             ` Vishal Annapurve
2025-06-03  1:31               ` Yan Zhao
2025-06-04  6:28                 ` Vishal Annapurve
2025-06-12 12:40                   ` Yan Zhao
2025-06-12 14:43                     ` Vishal Annapurve
2025-07-03  6:29                       ` Yan Zhao
2025-06-13 15:19                     ` Michael Roth
2025-06-13 18:04                     ` Michael Roth
2025-07-03  6:33                       ` Yan Zhao
2024-12-12  6:36 ` [PATCH 4/5] KVM: SEV: Improve handling of large ranges in gmem prepare callback Michael Roth
2024-12-12  6:36 ` [PATCH 5/5] KVM: Add hugepage support for dedicated guest memory Michael Roth
2025-03-14  9:50   ` Yan Zhao [this message]
2024-12-20 11:31 ` [PATCH RFC v1 0/5] KVM: gmem: 2MB THP support and preparedness tracking changes David Hildenbrand
2025-01-07 12:11   ` Shah, Amit
2025-01-22 14:25     ` David Hildenbrand
2025-03-14  9:09       ` Yan Zhao
2025-03-14  9:33         ` David Hildenbrand
2025-03-14 11:19           ` Yan Zhao
2025-03-18  2:24             ` Yan Zhao
2025-03-18 19:13               ` David Hildenbrand
2025-03-19  7:39                 ` Yan Zhao
2025-02-11  1:16 ` Vishal Annapurve
2025-02-20  1:09   ` Michael Roth
2025-03-14  9:16     ` Yan Zhao
