Subject: Re: [PATCH 1/2] mm: Add kernel MMU notifier to manage IOTLB/DEVTLB
From: Bob Liu
Date: Thu, 14 Dec 2017 11:10:28 +0800
To: Lu Baolu, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Alex Williamson, Joerg Roedel, David Woodhouse
Cc: Rik van Riel, Michal Hocko, Dave Jiang, Dave Hansen, x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, Vegard Nossum, Andy Lutomirski, Huang Ying, Matthew Wilcox, Andrew Morton, "Paul E. McKenney", "Kirill A. Shutemov", Kees Cook, "xieyisheng (A)"
References: <1513213366-22594-1-git-send-email-baolu.lu@linux.intel.com> <1513213366-22594-2-git-send-email-baolu.lu@linux.intel.com>
In-Reply-To: <1513213366-22594-2-git-send-email-baolu.lu@linux.intel.com>

On 2017/12/14 9:02, Lu Baolu wrote:
> From: Huang Ying
>
> Shared Virtual Memory (SVM) allows a kernel memory mapping to be
> shared between the CPU and a device which has requested a supervisor
> PASID. Both devices and IOMMU units have TLBs that cache entries
> from the CPU's page tables. We need a chance to flush them at the
> same time as we flush the CPU TLBs.
>
> We already have existing MMU notifiers for userspace updates;
> however, we lack the same thing for kernel page table updates. To

Sorry, I didn't get which situation needs this notification.
Could you please describe the full scenario?

Thanks,
Liubo

> implement the MMU notification mechanism for the kernel address
> space, a kernel MMU notifier chain is defined and will be called
> whenever the CPU TLB is flushed for the kernel address space.
>
> As a consumer of this notifier, the IOMMU SVM implementations will
> register callbacks on this notifier and manage the cache entries
> in both the IOTLB and DevTLB.
>
> Cc: Ashok Raj
> Cc: Dave Hansen
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: "H. Peter Anvin"
> Cc: Andy Lutomirski
> Cc: Rik van Riel
> Cc: Kees Cook
> Cc: Andrew Morton
> Cc: Kirill A. Shutemov
> Cc: Matthew Wilcox
> Cc: Dave Jiang
> Cc: Michal Hocko
> Cc: Paul E. McKenney
> Cc: Vegard Nossum
> Cc: x86@kernel.org
> Cc: linux-mm@kvack.org
>
> Tested-by: CQ Tang
> Signed-off-by: Huang Ying
> Signed-off-by: Lu Baolu
> ---
>  arch/x86/mm/tlb.c            |  2 ++
>  include/linux/mmu_notifier.h | 33 +++++++++++++++++++++++++++++++++
>  mm/mmu_notifier.c            | 27 +++++++++++++++++++++++++++
>  3 files changed, 62 insertions(+)
>
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 3118392cd..5ff104f 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -6,6 +6,7 @@
>  #include
>  #include
>  #include
> +#include <linux/mmu_notifier.h>
>
>  #include
>  #include
> @@ -567,6 +568,7 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
>  		info.end = end;
>  		on_each_cpu(do_kernel_range_flush, &info, 1);
>  	}
> +	kernel_mmu_notifier_invalidate_range(start, end);
>  }
>
>  void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
> diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
> index b25dc9d..44d7c06 100644
> --- a/include/linux/mmu_notifier.h
> +++ b/include/linux/mmu_notifier.h
> @@ -408,6 +408,25 @@ extern void mmu_notifier_call_srcu(struct rcu_head *rcu,
>  				   void (*func)(struct rcu_head *rcu));
>  extern void mmu_notifier_synchronize(void);
>
> +struct kernel_mmu_address_range {
> +	unsigned long start;
> +	unsigned long end;
> +};
> +
> +/*
> + * Before the virtual address range managed by kernel (vmalloc/kmap)
> + * is reused, that is, remapped to the new physical addresses, the
> + * kernel MMU notifier will be called with KERNEL_MMU_INVALIDATE_RANGE
> + * and struct kernel_mmu_address_range as parameters. This is used to
> + * manage the remote TLB.
> + */
> +#define KERNEL_MMU_INVALIDATE_RANGE	1
> +extern int kernel_mmu_notifier_register(struct notifier_block *nb);
> +extern int kernel_mmu_notifier_unregister(struct notifier_block *nb);
> +
> +extern int kernel_mmu_notifier_invalidate_range(unsigned long start,
> +						unsigned long end);
> +
>  #else /* CONFIG_MMU_NOTIFIER */
>
>  static inline int mm_has_notifiers(struct mm_struct *mm)
> @@ -474,6 +493,20 @@ static inline void mmu_notifier_mm_destroy(struct mm_struct *mm)
>  #define pudp_huge_clear_flush_notify pudp_huge_clear_flush
>  #define set_pte_at_notify set_pte_at
>
> +static inline int kernel_mmu_notifier_register(struct notifier_block *nb)
> +{
> +	return 0;
> +}
> +
> +static inline int kernel_mmu_notifier_unregister(struct notifier_block *nb)
> +{
> +	return 0;
> +}
> +
> +static inline void kernel_mmu_notifier_invalidate_range(unsigned long start,
> +							 unsigned long end)
> +{
> +}
>  #endif /* CONFIG_MMU_NOTIFIER */
>
>  #endif /* _LINUX_MMU_NOTIFIER_H */
> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> index 96edb33..52f816a 100644
> --- a/mm/mmu_notifier.c
> +++ b/mm/mmu_notifier.c
> @@ -393,3 +393,30 @@ void mmu_notifier_unregister_no_release(struct mmu_notifier *mn,
>  	mmdrop(mm);
>  }
>  EXPORT_SYMBOL_GPL(mmu_notifier_unregister_no_release);
> +
> +static ATOMIC_NOTIFIER_HEAD(kernel_mmu_notifier_list);
> +
> +int kernel_mmu_notifier_register(struct notifier_block *nb)
> +{
> +	return atomic_notifier_chain_register(&kernel_mmu_notifier_list, nb);
> +}
> +EXPORT_SYMBOL_GPL(kernel_mmu_notifier_register);
> +
> +int kernel_mmu_notifier_unregister(struct notifier_block *nb)
> +{
> +	return atomic_notifier_chain_unregister(&kernel_mmu_notifier_list, nb);
> +}
> +EXPORT_SYMBOL_GPL(kernel_mmu_notifier_unregister);
> +
> +int kernel_mmu_notifier_invalidate_range(unsigned long start,
> +					 unsigned long end)
> +{
> +	struct kernel_mmu_address_range range = {
> +		.start	= start,
> +		.end	= end,
> +	};
> +
> +	return atomic_notifier_call_chain(&kernel_mmu_notifier_list,
> +					  KERNEL_MMU_INVALIDATE_RANGE,
> +					  &range);
> +}
> --
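
For reference, a minimal sketch of what a consumer of this chain could
look like; the handler and my_iommu_flush_range() below are hypothetical
placeholders, not part of the patch:

/* Hypothetical IOMMU SVM driver hooking the proposed kernel MMU chain. */
#include <linux/init.h>
#include <linux/notifier.h>
#include <linux/mmu_notifier.h>

/* Stand-in for whatever IOTLB/DevTLB flush the real driver would issue. */
static void my_iommu_flush_range(unsigned long start, unsigned long end);

static int my_kernel_mmu_notify(struct notifier_block *nb,
				unsigned long action, void *data)
{
	struct kernel_mmu_address_range *range = data;

	/* Drop IOTLB/DevTLB entries covering the invalidated kernel range. */
	if (action == KERNEL_MMU_INVALIDATE_RANGE)
		my_iommu_flush_range(range->start, range->end);

	return NOTIFY_OK;
}

static struct notifier_block my_kernel_mmu_nb = {
	.notifier_call = my_kernel_mmu_notify,
};

static int __init my_svm_init(void)
{
	return kernel_mmu_notifier_register(&my_kernel_mmu_nb);
}

static void __exit my_svm_exit(void)
{
	kernel_mmu_notifier_unregister(&my_kernel_mmu_nb);
}

Since the chain is an ATOMIC_NOTIFIER_HEAD and is invoked from
flush_tlb_kernel_range(), the callback runs in atomic context and must
not sleep.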