From: Yan Zhao <yan.y.zhao@intel.com>
To: Michael Roth <michael.roth@amd.com>
Cc: Vishal Annapurve <vannapurve@google.com>,
	Ackerley Tng <ackerleytng@google.com>, <kvm@vger.kernel.org>,
	<linux-coco@lists.linux.dev>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>, <jroedel@suse.de>,
	<thomas.lendacky@amd.com>, <pbonzini@redhat.com>,
	<seanjc@google.com>, <vbabka@suse.cz>, <amit.shah@amd.com>,
	<pratikrajesh.sampat@amd.com>, <ashish.kalra@amd.com>,
	<liam.merwick@oracle.com>, <david@redhat.com>,
	<quic_eberman@quicinc.com>
Subject: Re: [PATCH 3/5] KVM: gmem: Hold filemap invalidate lock while allocating/preparing folios
Date: Thu, 3 Jul 2025 14:33:54 +0800
Message-ID: <aGYkUjdvChdZWTXF@yzhao56-desk.sh.intel.com>
In-Reply-To: <20250613180418.bo4vqveigxsq2ouu@amd.com>

On Fri, Jun 13, 2025 at 01:04:18PM -0500, Michael Roth wrote:
> On Thu, Jun 12, 2025 at 08:40:59PM +0800, Yan Zhao wrote:
> > On Tue, Jun 03, 2025 at 11:28:35PM -0700, Vishal Annapurve wrote:
> > > On Mon, Jun 2, 2025 at 6:34 PM Yan Zhao <yan.y.zhao@intel.com> wrote:
> > > >
> > > > On Mon, Jun 02, 2025 at 06:05:32PM -0700, Vishal Annapurve wrote:
> > > > > On Tue, May 20, 2025 at 11:49 PM Yan Zhao <yan.y.zhao@intel.com> wrote:
> > > > > >
> > > > > > On Mon, May 19, 2025 at 10:04:45AM -0700, Ackerley Tng wrote:
> > > > > > > Ackerley Tng <ackerleytng@google.com> writes:
> > > > > > >
> > > > > > > > Yan Zhao <yan.y.zhao@intel.com> writes:
> > > > > > > >
> > > > > > > >> On Fri, Mar 14, 2025 at 05:20:21PM +0800, Yan Zhao wrote:
> > > > > > > >>> This patch causes a host deadlock when booting up a TDX VM even with huge
> > > > > > > >>> pages turned off. I have reverted this patch for now. No further debugging yet.
> > > > > > > >> This is because kvm_gmem_populate() takes the filemap invalidation lock, and
> > > > > > > >> for TDX it further ends up invoking kvm_gmem_get_pfn(), causing a deadlock:
> > > > > > > >>
> > > > > > > >> kvm_gmem_populate
> > > > > > > >>   filemap_invalidate_lock
> > > > > > > >>   post_populate
> > > > > > > >>     tdx_gmem_post_populate
> > > > > > > >>       kvm_tdp_map_page
> > > > > > > >>         kvm_mmu_do_page_fault
> > > > > > > >>           kvm_tdp_page_fault
> > > > > > > >>             kvm_tdp_mmu_page_fault
> > > > > > > >>               kvm_mmu_faultin_pfn
> > > > > > > >>                 __kvm_mmu_faultin_pfn
> > > > > > > >>                   kvm_mmu_faultin_pfn_private
> > > > > > > >>                     kvm_gmem_get_pfn
> > > > > > > >>                       filemap_invalidate_lock_shared
> > > > > > > >>
> > > > > > > >> Though kvm_gmem_populate() could take the shared filemap invalidation lock
> > > > > > > >> instead (avoiding the deadlock), lockdep would still warn "Possible unsafe
> > > > > > > >> locking scenario: ... DEADLOCK" about the recursive shared lock, since commit
> > > > > > > >> e918188611f0 ("locking: More accurate annotations for read_lock()").
> > > > > > > >>
> > > > > > > >
> > > > > > > > Thank you for investigating. This should be fixed in the next revision.
> > > > > > > >
> > > > > > >
> > > > > > > This was not fixed in v2 [1]; I misunderstood this locking issue.
> > > > > > >
> > > > > > > IIUC kvm_gmem_populate() gets a pfn via __kvm_gmem_get_pfn(), then calls
> > > > > > > part of the KVM fault handler to map the pfn into secure EPTs, then
> > > > > > > calls the TDX module for the copy+encrypt.
> > > > > > >
> > > > > > > Regarding this lock, it seems KVM's MMU lock is already held while TDX
> > > > > > > does the copy+encrypt. Why must the filemap_invalidate_lock() also be
> > > > > > > held throughout the process?
> > > > > > If kvm_gmem_populate() does not hold the filemap invalidate lock around all
> > > > > > requested pages, what value should it return after kvm_gmem_punch_hole() zaps
> > > > > > a mapping it just successfully installed?
> > > > > >
> > > > > > TDX currently only holds the read kvm->mmu_lock in tdx_gmem_post_populate() when
> > > > > > CONFIG_KVM_PROVE_MMU is enabled, due to both slots_lock and the filemap
> > > > > > invalidate lock being taken in kvm_gmem_populate().
> > > > >
> > > > > Does TDX need the kvm_gmem_populate path just to ensure SEPT ranges are
> > > > > not zapped during tdh_mem_page_add and tdh_mr_extend operations? Would
> > > > > holding the KVM MMU read lock during these operations be sufficient to
> > > > > avoid this back and forth between the TDX and gmem layers?
> > > > I think the problem here is that, in kvm_gmem_populate(),
> > > > "__kvm_gmem_get_pfn(), post_populate(), and kvm_gmem_mark_prepared()"
> > > > must be wrapped in the filemap invalidate lock (shared or exclusive), right?
> > > >
> > > > Then, in TDX's post_populate() callback, the filemap invalidate lock is taken
> > > > again by kvm_tdp_map_page() -> ... -> kvm_gmem_get_pfn().
> > > 
> > > I am contesting the need for the kvm_gmem_populate path altogether for TDX.
> > > Can you help me understand what problem the kvm_gmem_populate path
> > > solves for TDX?
> > There is a long discussion on the list about this.
> > 
> > Basically, TDX needs three steps for KVM_TDX_INIT_MEM_REGION:
> > 1. Get the PFN.
> > 2. Map the PFN into the mirror page table.
> > 3. Invoke tdh_mem_page_add().
> > Holding the filemap invalidation lock around these three steps ensures that
> > the PFN passed to tdh_mem_page_add() remains valid.
> 
> Since those requirements are already satisfied by kvm_gmem_populate(),
> maybe this issue is more that tdx_gmem_post_populate() makes a separate
> call to kvm_gmem_get_pfn() even though the callback has already been
> handed a stable PFN that is protected by the filemap invalidate lock.
> 
> Maybe some variant of kvm_tdp_map_page()/kvm_mmu_do_page_fault() that
> can be handed the PFN and related fields up-front rather than grabbing
> them later would be a more direct way to solve this? That would give us
> more flexibility on the approaches I mentioned in my other response for
> how to protect shareability state.

I prefer Vishal's proposal over this one.
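
To restate the lock recursion quoted above in a self-contained form, here is
a userspace analogue with illustrative names only -- it is not the actual
gmem code. In the kernel the second acquisition is down_read() on an
rw_semaphore the same task already holds for write, so the task blocks
forever (and lockdep flags it); the glibc rwlock used below returns EDEADLK
instead of hanging.

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_rwlock_t invalidate_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Analogue of kvm_gmem_get_pfn() -> filemap_invalidate_lock_shared(). */
static void get_pfn(void)
{
	int ret = pthread_rwlock_rdlock(&invalidate_lock);

	/* glibc typically returns EDEADLK here; a kernel rwsem would block. */
	if (ret)
		fprintf(stderr, "rdlock: %s\n", strerror(ret));
	else
		pthread_rwlock_unlock(&invalidate_lock);
}

/*
 * Analogue of tdx_gmem_post_populate(): mapping the page via the fault
 * path (kvm_tdp_map_page() -> ... -> kvm_gmem_get_pfn()) re-takes the
 * lock that the populate path already holds.
 */
static void post_populate(void)
{
	get_pfn();
}

/*
 * Analogue of kvm_gmem_populate(): hold the lock across getting the PFN,
 * mapping it, and the tdh_mem_page_add() step.
 */
static void populate(void)
{
	pthread_rwlock_wrlock(&invalidate_lock);
	post_populate();
	pthread_rwlock_unlock(&invalidate_lock);
}

int main(void)
{
	populate();
	return 0;
}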

> This also seems more correct in the sense that the current path triggers:
> 
>   tdx_gmem_post_populate
>     kvm_tdp_mmu_page_fault
>       kvm_gmem_get_pfn
>         kvm_gmem_prepare_folio
> 
> even though kvm_gmem_populate() intentionally avoids calling kvm_gmem_get_pfn()
> in favor of __kvm_gmem_get_pfn(), specifically to avoid triggering the
> preparation hooks, since kvm_gmem_populate() is a special case of preparation
> that needs to be handled separately from the fault-time hooks.
> 
> This probably doesn't affect TDX because TDX doesn't make use of prepare
> hooks, but since it's complicating things here it seems like we should address
> it directly rather than work around it. Maybe it could even be floated as a
> patch directly against kvm/next?
I have posted an RFC for discussion:
https://lore.kernel.org/lkml/20250703062641.3247-1-yan.y.zhao@intel.com/
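
For reference, the kind of variant suggested above -- one that maps a PFN
the caller has already resolved and stabilized under the filemap invalidate
lock, instead of re-entering kvm_gmem_get_pfn() from the fault path -- might
have a shape roughly like the following. The name and parameters are a
hypothetical sketch for discussion only, not necessarily what the RFC
implements:

/*
 * Hypothetical: map a caller-provided, already-stable PFN into the
 * mirror page table without going back into guest_memfd.  The caller
 * (kvm_gmem_populate()) is assumed to hold the filemap invalidate lock,
 * so the PFN cannot be invalidated underneath us.
 */
int kvm_tdp_map_page_pfn(struct kvm_vcpu *vcpu, gpa_t gpa,
			 kvm_pfn_t pfn, u8 max_level, u64 error_code);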

Thanks
Yan


