From: Michael Ellerman
To: Alistair Popple, linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin, Felix Kuehling, Alex Deucher, Christian König,
 "Pan, Xinhui", David Airlie, Daniel Vetter, Ben Skeggs, Karol Herbst,
 Lyude Paul, Ralph Campbell, "Matthew Wilcox (Oracle)", Alex Sierra,
 John Hubbard, linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org,
 amd-gfx@lists.freedesktop.org, nouveau@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, Jason Gunthorpe, Dan Williams,
 Alistair Popple
Subject: Re: [PATCH 1/7] mm/memory.c: Fix race when faulting a device private page
In-Reply-To:
References:
Date: Thu, 29 Sep 2022 10:07:26 +1000
Message-ID: <87fsgbf3gh.fsf@mpe.ellerman.id.au>

Alistair Popple writes:
> When the CPU tries to access a device private page the migrate_to_ram()
> callback associated with the pgmap for the page is called. However no
> reference is taken on the faulting page. Therefore a concurrent
> migration of the device private page can free the page and possibly the
> underlying pgmap. This results in a race which can crash the kernel due
> to the migrate_to_ram() function pointer becoming invalid. It also means
> drivers can't reliably read the zone_device_data field because the page
> may have been freed with memunmap_pages().
>
> Close the race by getting a reference on the page while holding the ptl
> to ensure it has not been freed. Unfortunately the elevated reference
> count will cause the migration required to handle the fault to fail. To
> avoid this failure pass the faulting page into the migrate_vma functions
> so that if an elevated reference count is found it can be checked to see
> if it's expected or not.
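
For context, the fix described above boils down to roughly the following
pattern in the device-private branch of the CPU fault path. This is a
sketch of the idea only, assuming the usual do_swap_page() locals and
labels, not the actual mm/memory.c hunk from this series:

	} else if (is_device_private_entry(entry)) {
		vmf->page = pfn_swap_entry_to_page(entry);
		vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
					       vmf->address, &vmf->ptl);
		if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))
			goto unlock;

		/*
		 * Take a reference while the ptl guarantees the page (and
		 * therefore its pgmap) cannot be freed by a concurrent
		 * migration, so the migrate_to_ram() pointer stays valid.
		 */
		get_page(vmf->page);
		pte_unmap_unlock(vmf->pte, vmf->ptl);
		ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
		put_page(vmf->page);
	}

The reference is dropped again once migrate_to_ram() returns, so the only
visible effect elsewhere is a temporarily elevated refcount, which the
rest of the series teaches the migrate_vma code to expect.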
>
> Signed-off-by: Alistair Popple
> ---
>  arch/powerpc/kvm/book3s_hv_uvmem.c       | 15 ++++++-----
>  drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 17 +++++++------
>  drivers/gpu/drm/amd/amdkfd/kfd_migrate.h |  2 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_svm.c     | 11 +++++---
>  include/linux/migrate.h                  |  8 ++++++-
>  lib/test_hmm.c                           |  7 ++---
>  mm/memory.c                              | 16 +++++++++++-
>  mm/migrate.c                             | 34 ++++++++++++++-----------
>  mm/migrate_device.c                      | 18 +++++++++----
>  9 files changed, 87 insertions(+), 41 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
> index 5980063..d4eacf4 100644
> --- a/arch/powerpc/kvm/book3s_hv_uvmem.c
> +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
> @@ -508,10 +508,10 @@ unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
>  static int __kvmppc_svm_page_out(struct vm_area_struct *vma,
>  		unsigned long start,
>  		unsigned long end, unsigned long page_shift,
> -		struct kvm *kvm, unsigned long gpa)
> +		struct kvm *kvm, unsigned long gpa, struct page *fault_page)
>  {
>  	unsigned long src_pfn, dst_pfn = 0;
> -	struct migrate_vma mig;
> +	struct migrate_vma mig = { 0 };
>  	struct page *dpage, *spage;
>  	struct kvmppc_uvmem_page_pvt *pvt;
>  	unsigned long pfn;
> @@ -525,6 +525,7 @@ static int __kvmppc_svm_page_out(struct vm_area_struct *vma,
>  	mig.dst = &dst_pfn;
>  	mig.pgmap_owner = &kvmppc_uvmem_pgmap;
>  	mig.flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE;
> +	mig.fault_page = fault_page;
>
>  	/* The requested page is already paged-out, nothing to do */
>  	if (!kvmppc_gfn_is_uvmem_pfn(gpa >> page_shift, kvm, NULL))
> @@ -580,12 +581,14 @@ static int __kvmppc_svm_page_out(struct vm_area_struct *vma,
>  static inline int kvmppc_svm_page_out(struct vm_area_struct *vma,
>  		unsigned long start, unsigned long end,
>  		unsigned long page_shift,
> -		struct kvm *kvm, unsigned long gpa)
> +		struct kvm *kvm, unsigned long gpa,
> +		struct page *fault_page)
>  {
>  	int ret;
>
>  	mutex_lock(&kvm->arch.uvmem_lock);
> -	ret = __kvmppc_svm_page_out(vma, start, end, page_shift, kvm, gpa);
> +	ret = __kvmppc_svm_page_out(vma, start, end, page_shift, kvm, gpa,
> +				    fault_page);
>  	mutex_unlock(&kvm->arch.uvmem_lock);
>
>  	return ret;
> @@ -736,7 +739,7 @@ static int kvmppc_svm_page_in(struct vm_area_struct *vma,
>  		bool pagein)
>  {
>  	unsigned long src_pfn, dst_pfn = 0;
> -	struct migrate_vma mig;
> +	struct migrate_vma mig = { 0 };
>  	struct page *spage;
>  	unsigned long pfn;
>  	struct page *dpage;
> @@ -994,7 +997,7 @@ static vm_fault_t kvmppc_uvmem_migrate_to_ram(struct vm_fault *vmf)
>
>  	if (kvmppc_svm_page_out(vmf->vma, vmf->address,
>  				vmf->address + PAGE_SIZE, PAGE_SHIFT,
> -				pvt->kvm, pvt->gpa))
> +				pvt->kvm, pvt->gpa, vmf->page))
>  		return VM_FAULT_SIGBUS;
>  	else
>  		return 0;

I don't have a UV test system, but as-is it doesn't even compile :)

kvmppc_svm_page_out() is called via some paths other than the
migrate_to_ram callback.

I think it's correct to just pass fault_page = NULL when it's not called
from the migrate_to_ram callback?

Incremental diff below.
cheers


diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index d4eacf410956..965c9e9e500b 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -637,7 +637,7 @@ void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *slot,
 			pvt->remove_gfn = true;
 
 			if (__kvmppc_svm_page_out(vma, addr, addr + PAGE_SIZE,
-						  PAGE_SHIFT, kvm, pvt->gpa))
+						  PAGE_SHIFT, kvm, pvt->gpa, NULL))
 				pr_err("Can't page out gpa:0x%lx addr:0x%lx\n",
 				       pvt->gpa, addr);
 		} else {
@@ -1068,7 +1068,7 @@ kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
 	if (!vma || vma->vm_start > start || vma->vm_end < end)
 		goto out;
 
-	if (!kvmppc_svm_page_out(vma, start, end, page_shift, kvm, gpa))
+	if (!kvmppc_svm_page_out(vma, start, end, page_shift, kvm, gpa, NULL))
 		ret = H_SUCCESS;
 out:
 	mmap_read_unlock(kvm->mm);
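
With the callers adjusted as in the incremental diff, only the
migrate_to_ram path (kvmppc_uvmem_migrate_to_ram() passing vmf->page)
supplies a fault_page; the other page-out paths pass NULL, so no extra
reference is expected for them. The migration side then only has to
tolerate exactly one extra reference on the faulting page when deciding
whether a device-private page can be migrated. Conceptually it amounts to
something like the helper below; this is illustrative only, and the name
device_page_safe_to_migrate is made up for the sketch rather than taken
from the series:

/*
 * Illustrative only: a device-private page is normally safe to migrate
 * only when the migration code holds the sole reference to it.  If the
 * page is the one that triggered the CPU fault, the fault handler holds
 * one additional reference, and that extra reference is expected.
 */
static bool device_page_safe_to_migrate(struct page *page,
					struct page *fault_page)
{
	int expected = 1;	/* reference held by the migration code */

	if (page == fault_page)
		expected++;	/* reference taken under the ptl in the fault path */

	return page_ref_count(page) == expected;
}

That way the refcount taken to close the race is not misread as another
user of the page, which is what would otherwise make the migration fail.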