Date: Fri, 12 Sep 2025 14:37:41 +0200
From: Alexander Gordeev <agordeev@linux.ibm.com>
To: David Hildenbrand
Cc: Kevin Brodsky, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
        Andreas Larsson, Andrew Morton, Boris Ostrovsky, Borislav Petkov,
        Catalin Marinas, Christophe Leroy, Dave Hansen, "David S. Miller",
        "H. Peter Anvin", Ingo Molnar, Jann Horn, Juergen Gross,
        "Liam R. Howlett", Lorenzo Stoakes, Madhavan Srinivasan,
        Michael Ellerman, Michal Hocko, Mike Rapoport, Nicholas Piggin,
        Peter Zijlstra, Ryan Roberts, Suren Baghdasaryan, Thomas Gleixner,
        Vlastimil Babka, Will Deacon, Yeoreum Yun,
        linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
        sparclinux@vger.kernel.org, xen-devel@lists.xenproject.org,
        Mark Rutland
Subject: Re: [PATCH v2 2/7] mm: introduce local state for lazy_mmu sections
Message-ID: <9ed5441f-cc03-472a-adc6-b9d3ad525664-agordeev@linux.ibm.com>
References: <47ee1df7-1602-4200-af94-475f84ca8d80@arm.com>
 <29383ee2-d6d6-4435-9052-d75a263a5c45@redhat.com>
 <9de08024-adfc-421b-8799-62653468cf63@arm.com>
 <4b4971fd-0445-4d86-8f3a-6ba3d68d15b7@arm.com>
 <4aa28016-5678-4c66-8104-8dcc3fa2f5ce@redhat.com>
 <15d01c8b-5475-442e-9df5-ca37b0d5dc04@arm.com>
 <7953a735-6129-4d22-be65-ce736630d539@redhat.com>
 <781a6450-1c0b-4603-91cf-49f16cd78c28@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Fri, Sep 12, 2025 at 10:55:50AM +0200, David Hildenbrand wrote:

Hi David, Kevin,

> Great, looking forward to seeing this all getting cleaned up and done
> properly for good.

I am currently working on lazy MMU mode for s390, and this nesting
initiative kind of interferes with it. Well, in fact it looks like it
does not, but I am a bit lost in the last couple of iterations ;)

The prerequisite for s390 would be something like the change below.
With that change I can store the context in a per-cpu structure and
use it later in the arch-specific ptep_* primitives. Moreover, with a
further (experimental) rework we could use a custom KASAN sanitizer to
spot wrong direct PTE accesses, as opposed to accesses that go through
the set_pte()/ptep_get() accessors.

I am not quite sure whether this could be derailed by the new lazy MMU
API. At least I do not immediately see any obvious problem. But maybe
you do?
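To illustrate what I mean -- this is only a rough sketch, and the
structure and all names are invented for the example -- the s390 side
could stash the context passed to the new hook roughly like this:

	#include <linux/mm_types.h>
	#include <linux/percpu.h>
	#include <linux/pgtable.h>

	/*
	 * Illustrative only: keep the context handed to the new hook in
	 * a per-CPU structure, so that the arch-specific ptep_*
	 * primitives can pick it up later. This assumes, as today, that
	 * a lazy MMU section does not migrate between CPUs.
	 */
	struct lazy_mmu_pte_ctx {
		struct mm_struct *mm;	/* address space operated on */
		unsigned long addr;	/* start of the range */
		unsigned long end;	/* end of the range */
		pte_t *ptep;		/* first PTE of the range */
	};

	static DEFINE_PER_CPU(struct lazy_mmu_pte_ctx, lazy_mmu_pte_ctx);

	static inline void arch_enter_lazy_mmu_mode_pte(struct mm_struct *mm,
							unsigned long addr,
							unsigned long end,
							pte_t *ptep)
	{
		struct lazy_mmu_pte_ctx *ctx = this_cpu_ptr(&lazy_mmu_pte_ctx);

		ctx->mm = mm;
		ctx->addr = addr;
		ctx->end = end;
		ctx->ptep = ptep;

		arch_enter_lazy_mmu_mode();
	}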
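And to give a flavour of the (experimental) sanitizer idea -- again a
purely hypothetical fragment; kasan_poison_pte_page() and
kasan_unpoison_pte_page() do not exist and only stand in for whatever
poisoning primitives such a sanitizer would provide. PTE pages would
stay poisoned, so a load or store compiled directly against the page
table trips KASAN, while the accessors unpoison around the legitimate
access:

	static inline pte_t ptep_get(pte_t *ptep)
	{
		pte_t pte;

		kasan_unpoison_pte_page(ptep);	/* hypothetical helper */
		pte = READ_ONCE(*ptep);
		kasan_poison_pte_page(ptep);	/* hypothetical helper */

		return pte;
	}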
[PATCH] mm: Make lazy MMU mode context-aware

The lazy MMU mode is assumed to be context-independent, in the sense
that the MMU does not need any additional data while in lazy mode.
Yet the s390 architecture may benefit strongly if it knows the exact
page table entries being changed while in lazy mode.

Introduce arch_enter_lazy_mmu_mode_pte(), which is provided with the
process memory space and the page table being operated on, as the
prerequisite for the s390 optimization. It is expected to be called
only against PTE page tables and never to cross the page table
boundary. There is no change for architectures that do not need any
context.

Signed-off-by: Alexander Gordeev
---
 fs/proc/task_mmu.c      | 2 +-
 include/linux/pgtable.h | 8 ++++++++
 mm/madvise.c            | 8 ++++----
 mm/memory.c             | 8 ++++----
 mm/mprotect.c           | 2 +-
 mm/mremap.c             | 2 +-
 mm/vmalloc.c            | 6 +++---
 7 files changed, 22 insertions(+), 14 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 751479eb128f..02fcd2771b2a 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -2493,7 +2493,7 @@ static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
 		return 0;
 	}
 
-	arch_enter_lazy_mmu_mode();
+	arch_enter_lazy_mmu_mode_pte(vma->vm_mm, start, end, start_pte);
 
 	if ((p->arg.flags & PM_SCAN_WP_MATCHING) && !p->vec_out) {
 		/* Fast path for performing exclusive WP */
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 0b6e1f781d86..16235c198bcb 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -235,6 +235,14 @@ static inline int pmd_dirty(pmd_t pmd)
 #define arch_enter_lazy_mmu_mode()	do {} while (0)
 #define arch_leave_lazy_mmu_mode()	do {} while (0)
 #define arch_flush_lazy_mmu_mode()	do {} while (0)
+
+static inline void arch_enter_lazy_mmu_mode_pte(struct mm_struct *mm,
+						unsigned long addr,
+						unsigned long end,
+						pte_t *ptep)
+{
+	arch_enter_lazy_mmu_mode();
+}
 #endif
 
 #ifndef pte_batch_hint
diff --git a/mm/madvise.c b/mm/madvise.c
index 1d44a35ae85c..d36d4dc42378 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -448,7 +448,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 	if (!start_pte)
 		return 0;
 	flush_tlb_batched_pending(mm);
-	arch_enter_lazy_mmu_mode();
+	arch_enter_lazy_mmu_mode_pte(mm, addr, end, start_pte);
 	for (; addr < end; pte += nr, addr += nr * PAGE_SIZE) {
 		nr = 1;
 		ptent = ptep_get(pte);
@@ -509,7 +509,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 			if (!start_pte)
 				break;
 			flush_tlb_batched_pending(mm);
-			arch_enter_lazy_mmu_mode();
+			arch_enter_lazy_mmu_mode_pte(mm, addr, end, start_pte);
 			if (!err)
 				nr = 0;
 			continue;
@@ -678,7 +678,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 	if (!start_pte)
 		return 0;
 	flush_tlb_batched_pending(mm);
-	arch_enter_lazy_mmu_mode();
+	arch_enter_lazy_mmu_mode_pte(mm, addr, end, start_pte);
 	for (; addr != end; pte += nr, addr += PAGE_SIZE * nr) {
 		nr = 1;
 		ptent = ptep_get(pte);
@@ -743,7 +743,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 			if (!start_pte)
 				break;
 			flush_tlb_batched_pending(mm);
-			arch_enter_lazy_mmu_mode();
+			arch_enter_lazy_mmu_mode_pte(mm, addr, end, pte);
 			if (!err)
 				nr = 0;
 			continue;
diff --git a/mm/memory.c b/mm/memory.c
index b0cda5aab398..93c0b8457eb0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1131,7 +1131,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
 	orig_src_pte = src_pte;
 	orig_dst_pte = dst_pte;
-	arch_enter_lazy_mmu_mode();
+	arch_enter_lazy_mmu_mode_pte(src_mm, addr, end, src_pte);
 
 	do {
 		nr = 1;
@@ -1723,7 +1723,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		return addr;
 
 	flush_tlb_batched_pending(mm);
-	arch_enter_lazy_mmu_mode();
+	arch_enter_lazy_mmu_mode_pte(mm, addr, end, start_pte);
 	do {
 		bool any_skipped = false;
@@ -2707,7 +2707,7 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
 	mapped_pte = pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
 	if (!pte)
 		return -ENOMEM;
-	arch_enter_lazy_mmu_mode();
+	arch_enter_lazy_mmu_mode_pte(mm, addr, end, mapped_pte);
 	do {
 		BUG_ON(!pte_none(ptep_get(pte)));
 		if (!pfn_modify_allowed(pfn, prot)) {
@@ -3024,7 +3024,7 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 		return -EINVAL;
 	}
 
-	arch_enter_lazy_mmu_mode();
+	arch_enter_lazy_mmu_mode_pte(mm, addr, end, mapped_pte);
 
 	if (fn) {
 		do {
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 88608d0dc2c2..919c1dedff87 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -106,7 +106,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 	target_node = numa_node_id();
 
 	flush_tlb_batched_pending(vma->vm_mm);
-	arch_enter_lazy_mmu_mode();
+	arch_enter_lazy_mmu_mode_pte(vma->vm_mm, addr, end, pte);
 	do {
 		oldpte = ptep_get(pte);
 		if (pte_present(oldpte)) {
diff --git a/mm/mremap.c b/mm/mremap.c
index 60f6b8d0d5f0..08b9cb3bb9ef 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -233,7 +233,7 @@ static int move_ptes(struct pagetable_move_control *pmc,
 	if (new_ptl != old_ptl)
 		spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
 	flush_tlb_batched_pending(vma->vm_mm);
-	arch_enter_lazy_mmu_mode();
+	arch_enter_lazy_mmu_mode_pte(mm, old_addr, old_end, old_pte);
 
 	for (; old_addr < old_end; old_pte++, old_addr += PAGE_SIZE,
 				   new_pte++, new_addr += PAGE_SIZE) {
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6dbcdceecae1..29cfc64970a5 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -105,7 +105,7 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	if (!pte)
 		return -ENOMEM;
 
-	arch_enter_lazy_mmu_mode();
+	arch_enter_lazy_mmu_mode_pte(&init_mm, addr, end, pte);
 
 	do {
 		if (unlikely(!pte_none(ptep_get(pte)))) {
@@ -359,7 +359,7 @@ static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	unsigned long size = PAGE_SIZE;
 
 	pte = pte_offset_kernel(pmd, addr);
-	arch_enter_lazy_mmu_mode();
+	arch_enter_lazy_mmu_mode_pte(&init_mm, addr, end, pte);
 
 	do {
 #ifdef CONFIG_HUGETLB_PAGE
@@ -526,7 +526,7 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	if (!pte)
 		return -ENOMEM;
 
-	arch_enter_lazy_mmu_mode();
+	arch_enter_lazy_mmu_mode_pte(&init_mm, addr, end, pte);
 
 	do {
 		struct page *page = pages[*nr];

> David / dhildenb

Thanks!