Date: Fri, 25 Apr 2025 15:31:48 -0400
From: Peter Xu <peterx@redhat.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
	intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	linux-trace-kernel@vger.kernel.org, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	"H. Peter Anvin", Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
	Tvrtko Ursulin, David Airlie, Simona Vetter, Andrew Morton,
	Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
	"Liam R. Howlett", Lorenzo Stoakes, Vlastimil Babka, Jann Horn,
	Pedro Falcato
Howlett" , Lorenzo Stoakes , Vlastimil Babka , Jann Horn , Pedro Falcato Subject: Re: [PATCH v1 02/11] mm: convert track_pfn_insert() to pfnmap_sanitize_pgprot() Message-ID: References: <20250425081715.1341199-1-david@redhat.com> <20250425081715.1341199-3-david@redhat.com> MIME-Version: 1.0 In-Reply-To: <20250425081715.1341199-3-david@redhat.com> X-Mimecast-Spam-Score: 0 X-Mimecast-MFC-PROC-ID: _JyI6HLBofmRAXCiXsY1mqhNvizsTo3VP_rFe_6eddI_1745609514 X-Mimecast-Originator: redhat.com Content-Type: text/plain; charset=utf-8 Content-Disposition: inline X-Stat-Signature: aixwqgfp5tpjbgat9ns434i9z9wxwjtu X-Rspamd-Queue-Id: 97DC640009 X-Rspam-User: X-Rspamd-Server: rspam05 X-HE-Tag: 1745609516-120681 X-HE-Meta: U2FsdGVkX19HUH9p2hFjbFlSfglq696uUWS0m62HF34+HngooP7XXGLztt53VAIH9HybYCCUrh0qyVbiUDhGpZyRXtSbKVC52kcqkx3qLcaAUxkEFcNm8ttPtRkn2RjJMV+6yxxSHPNhJ3ayPLVbc9eyp2c1Ra+Xrd0p/dlzYlkE5rKVjDCmqQLqZGMW7VUPT+0duMQ7DvrWWhIXystw9XxSfJlpy7yRTIs5Tr0KE2qylY/z29DWsNF/7EV9cUOuj8fuwpGZFeKjAkMvATp7WBSyy0ouMP1neND0Bb7e1CZhieXEGNkcxLY/E/VbE/VI3gTzeUjXElM8X+H6HX4BsB4rDduGH72vZrBDb2TQonKGMXUgnUY6R/q3PXh0gYb7v7MRuHpKyv1valF6EvzN2VTdVsJrO/sSu8yHJlOBLSIIrRbglP5ljYEPP0rnPrV5T+BGqiN2iYgER5dvqazyzofUOkuZ0Ih7Y09oYenbLb9HC4dd+zLExWJ4X7dh70HCzm1A0FVMxn480/dxZOwUUFRe6aktPnigqGsAFTkI5Evzn8u8r/zpTiN1awF/Sn47efJ10Z523Oheemhr86V2Rz2BmOlom+MTiRS9mkQEtbehDz49NEyW5FgXh2SNZrzg+4ejiM42mKKU5zxiHe9+NP5qdtWY1v2d4QA1eP/yOSE21VZ1AmDL0p1ld0gR45fmt36jhsesDetOmIU3H6a/yYQPLQoxpE2XC94K/3sWpYgUtzGbpH7PuxkOZBCEnQkeffIf0v17REghvaLlITOvadJCBclc3vTNK67rwY2TrPud/9ymmTrADiGCNDs9Cd3LQZFDCH2lCmlXynvxdi8HU75b1SpeaSO4X5rIIBTZCLYQQUhKEMJWI+KgAFc46dJz3Jgsz0W/8fXdaq4KmXg5sVriWrNEts1NQ/qW/mPLWJUyTJXIrOETx1V0UFK6pp4X6j3gJLSPdGVqDJAu1UT I44KqT5m NASDc03EVl9k6yGQlrvHM/KaI4Ol053DNwSlBQ+7OZOzNaInuHQ8n7hIxkgVTbTCYf/p2xinLlioqOedIx4tSO7CTF4auAU6xuiaLWN9S9yP6cjPgkZZq4Fbfva47soDhEEvk1ZB2kXcbdoUhNVLfmx29LMywc4w32RT+S3Muc/YUVxzo/SzqbRmtadNR1yTdlFOc/MGkihf1fZy55cyr1+uJiTKE4kUtQyjxBNpW72y7MmlrNEiof/3AxtWYDKgqWP+pOdwqTr73nej2k0fdNxLPxQCLfYWLdQwEveYcx08DsGIMs6/yxKgYU7T8OEv/IclOC8ozdX2Nx8xl0kOHN5hSqd6KAps/TsDqPxo9h4pGg1Y4vEss/VCJ7J7BV3AA4031oD7RNAIcSemMpUd8YD8AGjU6S5HLDWvk X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On Fri, Apr 25, 2025 at 10:17:06AM +0200, David Hildenbrand wrote: > ... by factoring it out from track_pfn_remap(). > > For PMDs/PUDs, actually check the full range, and trigger a fallback > if we run into this "different memory types / cachemodes" scenario. The current patch looks like to still pass PAGE_SIZE into the new helper at all track_pfn_insert() call sites, so it seems this comment does not 100% match with the code? Or I may have misread somewhere. Maybe it's still easier to keep the single-pfn lookup to never fail.. more below. > > Add some documentation. > > Will checking each page result in undesired overhead? We'll have to > learn. Not checking each page looks wrong, though. Maybe we could > optimize the lookup internally. 
Maybe it's still easier to keep the single-pfn lookup so it can never
fail.. more below.

> 
> Add some documentation.
> 
> Will checking each page result in undesired overhead? We'll have to
> learn. Not checking each page looks wrong, though. Maybe we could
> optimize the lookup internally.
> 
> Signed-off-by: David Hildenbrand
> ---
>  arch/x86/mm/pat/memtype.c | 24 ++++++++----------------
>  include/linux/pgtable.h   | 28 ++++++++++++++++++++--------
>  mm/huge_memory.c          |  7 +++++--
>  mm/memory.c               |  4 ++--
>  4 files changed, 35 insertions(+), 28 deletions(-)
> 
> diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
> index edec5859651d6..193e33251b18f 100644
> --- a/arch/x86/mm/pat/memtype.c
> +++ b/arch/x86/mm/pat/memtype.c
> @@ -1031,7 +1031,6 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
>  		    unsigned long pfn, unsigned long addr, unsigned long size)
>  {
>  	resource_size_t paddr = (resource_size_t)pfn << PAGE_SHIFT;
> -	enum page_cache_mode pcm;
>  
>  	/* reserve the whole chunk starting from paddr */
>  	if (!vma || (addr == vma->vm_start
> @@ -1044,13 +1043,17 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
>  		return ret;
>  	}
>  
> +	return pfnmap_sanitize_pgprot(pfn, size, prot);
> +}
> +
> +int pfnmap_sanitize_pgprot(unsigned long pfn, unsigned long size, pgprot_t *prot)
> +{
> +	resource_size_t paddr = (resource_size_t)pfn << PAGE_SHIFT;
> +	enum page_cache_mode pcm;
> +
>  	if (!pat_enabled())
>  		return 0;
>  
> -	/*
> -	 * For anything smaller than the vma size we set prot based on the
> -	 * lookup.
> -	 */
>  	pcm = lookup_memtype(paddr);
>  
>  	/* Check memtype for the remaining pages */
> @@ -1065,17 +1068,6 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
>  	return 0;
>  }
>  
> -void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, pfn_t pfn)
> -{
> -	enum page_cache_mode pcm;
> -
> -	if (!pat_enabled())
> -		return;
> -
> -	pcm = lookup_memtype(pfn_t_to_phys(pfn));
> -	pgprot_set_cachemode(prot, pcm);
> -}
> -
>  /*
>   * untrack_pfn is called while unmapping a pfnmap for a region.
>   * untrack can be called for a specific region indicated by pfn and size or
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index b50447ef1c921..91aadfe2515a5 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1500,13 +1500,10 @@ static inline int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
>  	return 0;
>  }
>  
> -/*
> - * track_pfn_insert is called when a _new_ single pfn is established
> - * by vmf_insert_pfn().
> - */
> -static inline void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
> -		pfn_t pfn)
> +static inline int pfnmap_sanitize_pgprot(unsigned long pfn, unsigned long size,
> +		pgprot_t *prot)
>  {
> +	return 0;
>  }
>  
>  /*
> @@ -1556,8 +1553,23 @@ static inline void untrack_pfn_clear(struct vm_area_struct *vma)
>  extern int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
>  			   unsigned long pfn, unsigned long addr,
>  			   unsigned long size);
> -extern void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
> -			     pfn_t pfn);
> +
> +/**
> + * pfnmap_sanitize_pgprot - sanitize the pgprot for a pfn range

Nit: s/sanitize/update|setup|.../?  But maybe you have good reason to
use sanitize.  No strong opinions.

> + * @pfn: the start of the pfn range
> + * @size: the size of the pfn range
> + * @prot: the pgprot to sanitize
> + *
> + * Sanitize the given pgprot for a pfn range, for example, adjusting the
> + * cachemode.
> + *
> + * This function cannot fail for a single page, but can fail for multiple
> + * pages.
> + *
> + * Returns 0 on success and -EINVAL on error.
> + */
> +int pfnmap_sanitize_pgprot(unsigned long pfn, unsigned long size,
> +		pgprot_t *prot);
>  extern int track_pfn_copy(struct vm_area_struct *dst_vma,
>  			  struct vm_area_struct *src_vma, unsigned long *pfn);
>  extern void untrack_pfn_copy(struct vm_area_struct *dst_vma,
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index fdcf0a6049b9f..b8ae5e1493315 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1455,7 +1455,9 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
>  		return VM_FAULT_OOM;
>  	}
>  
> -	track_pfn_insert(vma, &pgprot, pfn);
> +	if (pfnmap_sanitize_pgprot(pfn_t_to_pfn(pfn), PAGE_SIZE, &pgprot))
> +		return VM_FAULT_FALLBACK;

Would "pgtable" leak if this fails?  If it's PAGE_SIZE, IIUC it won't
ever trigger, though.

Maybe we could have a "void pfnmap_sanitize_pgprot_pfn(&pgprot, pfn)" to
replace track_pfn_insert() and never fail?  Dropping the vma ref is
definitely a win already in all cases.
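A completely untested sketch of what I mean; it simply keeps what
track_pfn_insert() does today, with the phys address computed from the
raw pfn the same way track_pfn_remap() does it:

	void pfnmap_sanitize_pgprot_pfn(pgprot_t *prot, unsigned long pfn)
	{
		resource_size_t paddr = (resource_size_t)pfn << PAGE_SHIFT;

		if (!pat_enabled())
			return;

		pgprot_set_cachemode(prot, lookup_memtype(paddr));
	}

Then the pmd/pud paths cannot fail at this point, and "pgtable" needs
no special handling on an error path.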
> +
>  	ptl = pmd_lock(vma->vm_mm, vmf->pmd);
>  	error = insert_pfn_pmd(vma, addr, vmf->pmd, pfn, pgprot, write,
>  			       pgtable);
> @@ -1577,7 +1579,8 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
>  	if (addr < vma->vm_start || addr >= vma->vm_end)
>  		return VM_FAULT_SIGBUS;
>  
> -	track_pfn_insert(vma, &pgprot, pfn);
> +	if (pfnmap_sanitize_pgprot(pfn_t_to_pfn(pfn), PAGE_SIZE, &pgprot))
> +		return VM_FAULT_FALLBACK;
>  
>  	ptl = pud_lock(vma->vm_mm, vmf->pud);
>  	insert_pfn_pud(vma, addr, vmf->pud, pfn, write);
> diff --git a/mm/memory.c b/mm/memory.c
> index 424420349bd3c..c737a8625866a 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2563,7 +2563,7 @@ vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
>  	if (!pfn_modify_allowed(pfn, pgprot))
>  		return VM_FAULT_SIGBUS;
>  
> -	track_pfn_insert(vma, &pgprot, __pfn_to_pfn_t(pfn, PFN_DEV));
> +	pfnmap_sanitize_pgprot(pfn, PAGE_SIZE, &pgprot);
>  
>  	return insert_pfn(vma, addr, __pfn_to_pfn_t(pfn, PFN_DEV), pgprot,
>  			  false);
> @@ -2626,7 +2626,7 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
>  	if (addr < vma->vm_start || addr >= vma->vm_end)
>  		return VM_FAULT_SIGBUS;
>  
> -	track_pfn_insert(vma, &pgprot, pfn);
> +	pfnmap_sanitize_pgprot(pfn_t_to_pfn(pfn), PAGE_SIZE, &pgprot);
>  
>  	if (!pfn_modify_allowed(pfn_t_to_pfn(pfn), pgprot))
>  		return VM_FAULT_SIGBUS;
> -- 
> 2.49.0
> 

-- 
Peter Xu