From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, x86@kernel.org, intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, linux-trace-kernel@vger.kernel.org,
	David Hildenbrand, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
	Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin,
	David Airlie, Simona Vetter, Andrew Morton, Steven Rostedt,
	Masami Hiramatsu, Mathieu Desnoyers,
Howlett" , Lorenzo Stoakes , Vlastimil Babka , Jann Horn , Pedro Falcato , Peter Xu , Ingo Molnar Subject: [PATCH v2 02/11] mm: convert track_pfn_insert() to pfnmap_setup_cachemode*() Date: Mon, 12 May 2025 14:34:15 +0200 Message-ID: <20250512123424.637989-3-david@redhat.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250512123424.637989-1-david@redhat.com> References: <20250512123424.637989-1-david@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-MFC-PROC-ID: qDz_VTVNn7AHDu6uCwOYi2AS7o955rMu8ZSPNwkv9vc_1747053272 X-Mimecast-Originator: redhat.com Content-Transfer-Encoding: 8bit content-type: text/plain; charset="US-ASCII"; x-default=true X-Stat-Signature: 4dbbf9odg5xzid9wi1micywoiz97ysfu X-Rspam-User: X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 1F711A000D X-HE-Tag: 1747053274-759358 X-HE-Meta: U2FsdGVkX198CA9kRBOXhga5mkAg+fVENFBM6ueo1i68bApuk/uLV1TlSDJ7v1yb2msjJ9vO2TQ2H4fJ8XpIVL/EgvHtbrVQPx5sN6CFZKbw+Jg0KFiB/FBdG+buc/MI27DgoC2X8Lgza5/awNM6czQEZdyzMmIfzSbmn2PkMVlm3rcwWyRGz8wD5I2Iwxze3mAC/QQ4bYV4G8WGDNO12AeelziBP2G4L5ggoFfgB4h8knaLWHdxxoHu9lytKSJLVByvZT22fNw6FG4NNiitReftYU8ZkU07jv7VuEq/d7c9GrH2PnrUj0hdSFc62lVIDEp2gvqINCXbxWlaq11aZvFgTSKsqEpbLW9kc1U2Zz1iCDzOim2BGXPUrk6rcFUp1p41zXdKorjia36zTRFcH/SbLWkvgKozHml8i1Ctx6tJXSzyayb/gpM/9FraYRk7U4L2/p/tadzcYYFjMaxeKYPB3yK7RotCQK+3FevZyoxAaGgrHK/OucxzZpwSQsTivipkDA5uYViOY4jUlBESFKq6kY5796t4lpIRHWbsMAZi9spug4msTd0ZhK/GKlCNAEli93IvAY4q423aB4fNQ9yWQC+lDM4riK3plsVwZkTB/vi/REDv39MP9SfjjKj1cDGtCZzB3tQVJggvFrpeVGw57TxrdQeFRD8pJprcm5ELQksr0VFGZfJGfmlLL4jU9OU0LQCT8c1WUPLZBRtEti+sYVIFA8cEqzTyD4h38MRDejNa4FTs0+KNnfcAk3XXMDO5+ewZ+P4+1g0qcrJpUCZVGHZrrInt++QqD3kN7t/ijotT+WfZOItdrxq7BoRKjisNhuczog5WvuGKLqMJ/cHiLftFxgtyT/AZc/OVLgQ5qmZByEDm+qJMqfv/deea5Zb0FDDt+DvzTTLujGk1gv6OupLj/mfeVoT90aN/ds5yPP81dVDAli73m/vmB4/Z0exB2B5pRgRHjaFYu2p 523bVBwn VhfwOn+eKFQiGGjVvs969BKyyStBTm+Mn36AD2xL/zi8S86qmRucabtnhSpcijuOOS3LgOV2wzyqeHnaA9Gz97qu4Y5loOJQQmxTa0GE4+hPjnQWK3c23GxEtl2sfthwn7NRNZ5Sr//4jlJ/xca3AQtzRZVPzCIiq8LOuDfU6lFamuyJLRZiMvvV1XH146OZgqtqLEITBGNG2rTSV090rlrg1FcwbUe9LSBuyTtkolZUf2z9U1LhZ/x0MDgbcreNAjMo+Wj9iYwVWGzYSNgvX96R1atFPLxVgWx81Bwqszxo9Ae8Qt8EKMpWQPjNLQxRxLZLJNHKUDIBqj3BZ2BVVXVLMaXgblmhzWD52Wbc9w2K8PqtvW6o8w9Co+fKcma8TCK8pu2y0p4fSrlFuXPqoKrcGQU2mz+E0c9r8woxkd/t97T0AIccENtgPanDf9GGb01NtjeRYi8fwkyOGS41B6kr6csWD/43VCg4F1WevPRafqEE39KMK7FSAEMIMZSPvHUwSBmakEHrw+YZ+adjV9nsfax+xkP8zW29bIi8w5y2Q8P6ReSlNVvODcdWRBhxn1MXFHJfo7wD9gsnxt2AjL/P6/hzPXHYqzE6w9ukpX+Qa//K9Lx0WyD/kx+/vrF3UHyhmyj/lofxCR76HEc3G3OLZKlzCS9H20vRutO1DL0wOhRHZMWq3B8f/vg== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: ... by factoring it out from track_pfn_remap() into pfnmap_setup_cachemode() and provide pfnmap_setup_cachemode_pfn() as a replacement for track_pfn_insert(). For PMDs/PUDs, we keep checking a single pfn only. Add some documentation, and also document why it is valid to not check the whole pfn range. We'll reuse pfnmap_setup_cachemode() from core MM next. 
Acked-by: Ingo Molnar # x86 bits
Signed-off-by: David Hildenbrand
---
 arch/x86/mm/pat/memtype.c | 24 ++++++------------
 include/linux/pgtable.h   | 52 +++++++++++++++++++++++++++++++++------
 mm/huge_memory.c          |  5 ++--
 mm/memory.c               |  4 +--
 4 files changed, 57 insertions(+), 28 deletions(-)

diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
index edec5859651d6..fa78facc6f633 100644
--- a/arch/x86/mm/pat/memtype.c
+++ b/arch/x86/mm/pat/memtype.c
@@ -1031,7 +1031,6 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
 		    unsigned long pfn, unsigned long addr, unsigned long size)
 {
 	resource_size_t paddr = (resource_size_t)pfn << PAGE_SHIFT;
-	enum page_cache_mode pcm;
 
 	/* reserve the whole chunk starting from paddr */
 	if (!vma || (addr == vma->vm_start
@@ -1044,13 +1043,17 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
 		return ret;
 	}
 
+	return pfnmap_setup_cachemode(pfn, size, prot);
+}
+
+int pfnmap_setup_cachemode(unsigned long pfn, unsigned long size, pgprot_t *prot)
+{
+	resource_size_t paddr = (resource_size_t)pfn << PAGE_SHIFT;
+	enum page_cache_mode pcm;
+
 	if (!pat_enabled())
 		return 0;
 
-	/*
-	 * For anything smaller than the vma size we set prot based on the
-	 * lookup.
-	 */
 	pcm = lookup_memtype(paddr);
 
 	/* Check memtype for the remaining pages */
@@ -1065,17 +1068,6 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
 	return 0;
 }
 
-void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, pfn_t pfn)
-{
-	enum page_cache_mode pcm;
-
-	if (!pat_enabled())
-		return;
-
-	pcm = lookup_memtype(pfn_t_to_phys(pfn));
-	pgprot_set_cachemode(prot, pcm);
-}
-
 /*
  * untrack_pfn is called while unmapping a pfnmap for a region.
  * untrack can be called for a specific region indicated by pfn and size or
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index f1e890b604609..be1745839871c 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1496,13 +1496,10 @@ static inline int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
 	return 0;
 }
 
-/*
- * track_pfn_insert is called when a _new_ single pfn is established
- * by vmf_insert_pfn().
- */
-static inline void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
-				    pfn_t pfn)
+static inline int pfnmap_setup_cachemode(unsigned long pfn, unsigned long size,
+		pgprot_t *prot)
 {
+	return 0;
 }
 
 /*
@@ -1552,8 +1549,32 @@ static inline void untrack_pfn_clear(struct vm_area_struct *vma)
 extern int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
 			   unsigned long pfn, unsigned long addr,
 			   unsigned long size);
-extern void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
-			     pfn_t pfn);
+
+/**
+ * pfnmap_setup_cachemode - setup the cachemode in the pgprot for a pfn range
+ * @pfn: the start of the pfn range
+ * @size: the size of the pfn range in bytes
+ * @prot: the pgprot to modify
+ *
+ * Lookup the cachemode for the pfn range starting at @pfn with the size
+ * @size and store it in @prot, leaving other data in @prot unchanged.
+ *
+ * This allows for a hardware implementation to have fine-grained control of
+ * memory cache behavior at page level granularity. Without a hardware
+ * implementation, this function does nothing.
+ *
+ * Currently there is only one implementation for this - x86 Page Attribute
+ * Table (PAT). See Documentation/arch/x86/pat.rst for more details.
+ *
+ * This function can fail if the pfn range spans pfns that require differing
+ * cachemodes. If the pfn range was previously verified to have a single
+ * cachemode, it is sufficient to query only a single pfn. The assumption is
+ * that this is the case for drivers using the vmf_insert_pfn*() interface.
+ *
+ * Returns 0 on success and -EINVAL on error.
+ */
+int pfnmap_setup_cachemode(unsigned long pfn, unsigned long size,
+		pgprot_t *prot);
 extern int track_pfn_copy(struct vm_area_struct *dst_vma,
 		struct vm_area_struct *src_vma, unsigned long *pfn);
 extern void untrack_pfn_copy(struct vm_area_struct *dst_vma,
@@ -1563,6 +1584,21 @@ extern void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
 extern void untrack_pfn_clear(struct vm_area_struct *vma);
 #endif
 
+/**
+ * pfnmap_setup_cachemode_pfn - setup the cachemode in the pgprot for a pfn
+ * @pfn: the pfn
+ * @prot: the pgprot to modify
+ *
+ * Lookup the cachemode for @pfn and store it in @prot, leaving other
+ * data in @prot unchanged.
+ *
+ * See pfnmap_setup_cachemode() for details.
+ */
+static inline void pfnmap_setup_cachemode_pfn(unsigned long pfn, pgprot_t *prot)
+{
+	pfnmap_setup_cachemode(pfn, PAGE_SIZE, prot);
+}
+
 #ifdef CONFIG_MMU
 #ifdef __HAVE_COLOR_ZERO_PAGE
 static inline int is_zero_pfn(unsigned long pfn)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2780a12b25f01..d3e66136e41a3 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1455,7 +1455,8 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
 			return VM_FAULT_OOM;
 	}
 
-	track_pfn_insert(vma, &pgprot, pfn);
+	pfnmap_setup_cachemode_pfn(pfn_t_to_pfn(pfn), &pgprot);
+
 	ptl = pmd_lock(vma->vm_mm, vmf->pmd);
 	error = insert_pfn_pmd(vma, addr, vmf->pmd, pfn, pgprot, write,
 			pgtable);
@@ -1577,7 +1578,7 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
 	if (addr < vma->vm_start || addr >= vma->vm_end)
 		return VM_FAULT_SIGBUS;
 
-	track_pfn_insert(vma, &pgprot, pfn);
+	pfnmap_setup_cachemode_pfn(pfn_t_to_pfn(pfn), &pgprot);
 
 	ptl = pud_lock(vma->vm_mm, vmf->pud);
 	insert_pfn_pud(vma, addr, vmf->pud, pfn, write);
diff --git a/mm/memory.c b/mm/memory.c
index 99af83434e7c5..064fc55d8eab9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2564,7 +2564,7 @@ vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
 	if (!pfn_modify_allowed(pfn, pgprot))
 		return VM_FAULT_SIGBUS;
 
-	track_pfn_insert(vma, &pgprot, __pfn_to_pfn_t(pfn, PFN_DEV));
+	pfnmap_setup_cachemode_pfn(pfn, &pgprot);
 
 	return insert_pfn(vma, addr, __pfn_to_pfn_t(pfn, PFN_DEV), pgprot,
 			false);
@@ -2627,7 +2627,7 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
 	if (addr < vma->vm_start || addr >= vma->vm_end)
 		return VM_FAULT_SIGBUS;
 
-	track_pfn_insert(vma, &pgprot, pfn);
+	pfnmap_setup_cachemode_pfn(pfn_t_to_pfn(pfn), &pgprot);
 
 	if (!pfn_modify_allowed(pfn_t_to_pfn(pfn), pgprot))
 		return VM_FAULT_SIGBUS;
-- 
2.49.0